by uninverted on 8/24/09, 2:28 AM with 28 comments
by iman on 8/24/09, 3:20 AM
Then there are languages that try to be even more powerful than Haskell, for example Epigram, which has dependent types.
Lazy evaluation is probably the most mainstream language feature that Lisp lacks. For those not familiar with the abstraction benefits of lazy evaluation, consider a library for dealing with prime numbers. In a language without lazy evaluation, you are going to need an API with a function for getting the n-th prime number, a function for testing whether a number is prime, a function for getting the smallest prime number larger than x, and a bunch of other functions.
In Haskell, the API only needs a single variable, called "primes", that is simply the infinite list of all the prime numbers:
primes :: [Integer] -- primes is of type list of Integer
Thanks to lazy evaluation, this single variable is all that is needed to support all of the above operations efficiently. For example, to get the n-th prime number you simply write (primes !! n).
Now, this may look cool, but what does it have to do with abstraction? Since "primes" is a regular list, I can use it with any list function and build more complex things out of it. I can also have a list of all the Catalan numbers in the same way, and all the functions I've written that do cool things with prime numbers can instantly be used on Catalan numbers.
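To make the point above concrete, here is a minimal sketch of what that single-variable API might look like. The trial-division definition and the helper names (nthPrime, isPrimeNumber, nextPrimeAfter) are illustrative assumptions, not any real library's API; the point is only that every operation falls out of one lazy list.

```haskell
-- A sketch of the "primes" API: one lazily evaluated infinite list.
-- Trial division is used for simplicity; it is not the fastest method.
primes :: [Integer]
primes = filter isPrime [2 ..]
  where
    -- A number is prime if no smaller prime up to its square root divides it.
    isPrime n = all (\p -> n `mod` p /= 0) (takeWhile (\p -> p * p <= n) primes)

-- The n-th prime (0-indexed), as described above:
nthPrime :: Int -> Integer
nthPrime n = primes !! n

-- Primality test: n is prime iff it appears in the list.
isPrimeNumber :: Integer -> Bool
isPrimeNumber n = n == head (dropWhile (< n) primes)

-- Smallest prime strictly larger than x:
nextPrimeAfter :: Integer -> Integer
nextPrimeAfter x = head (dropWhile (<= x) primes)

main :: IO ()
main = print (take 10 primes)
```

Because primes is an ordinary list, all three helpers are one-liners over standard list functions, which is exactly the abstraction win being claimed.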
You can manually do lazy evaluation in languages that don't natively have it, but unless you do it everywhere you won't get the abstraction benefits, and if you do do it everywhere then the compiler almost certainly will not be able to optimize it as well as a native implementation.
by jerf on 8/24/09, 4:30 AM
A good language should make the right thing easier than the wrong thing. You could cripple a language that has all those bullet-point features by putting hoops to jump through in the way of using macros or first-class functions: a clunky syntax, extra type information to be added manually at every macro invocation, extra-verbose S-expressions, etc.
A REPL could be crippled by making it less than a full REPL, such that there are things that you still have to build modules for. See Erlang's REPL, which is mostly nice, except you can't define new records or do a handful of other useful things.
The most likely way this could happen is a language that tries to be LISP while still looking as much like C(++/#) as possible, bringing over impedance-mismatched concepts better left in C(++/#). The second-most likely would be constraining the power in some way so as not to scare programmers, or to avoid some "trap"; for instance, see Java's dropping of multiple inheritance. Thus, even though Java has "OO", it is less powerful than a Java that had MI too. You might have a "first-class function" that is somehow limited to be less useful. (Perhaps you get first-class functions, but ones that are not themselves allowed to return functions, only "values".)
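For contrast, here is what the unrestricted feature looks like; a function that returns a function is completely ordinary. The name "adder" is made up for illustration.

```haskell
-- An unrestricted first-class function: adder returns a new function.
-- The hypothetical crippled language described above would forbid this.
adder :: Int -> (Int -> Int)
adder n = \x -> x + n

main :: IO ()
main = print (map (adder 10) [1, 2, 3])  -- [11,12,13]
```

Forbidding just this one case (functions as return values) quietly kills currying, partial application, and most combinator libraries, even though the "first-class functions" bullet point still technically holds.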
Do not underestimate the power of language implementors to cripple a language, both intentionally and otherwise.
by patio11 on 8/24/09, 2:57 AM
Who a language "caters to", who an "average" programmer is, and who a "hacker" is are all social constructs which have no relationship to the objective reality of what features are in a language. People who wish to assert their superiority are quite willing to do so regardless of the technical merits of the matter.
Thus, rather than wasting one's time with meaningless geek-on-geek pissing matches, you should probably just get back to writing software which solves problems for people. You can do that in most languages -- even in Lisp.
by gruseom on 8/24/09, 3:45 AM
Now obviously we can argue about how to define "powerful", and the whole discussion can easily become another pointless language flamewar. But let's not do that. Let's provisionally grant the essay its definition of "powerful" and the corollary that Lisp macros are currently the apex of that power. The question is, is a more powerful (in this sense) language possible? (You don't have to agree with that definition of "powerful" to find the question interesting, by the way. Just rephrase it as, could another language beat Lisp at its own game?)
In my mind I've always referred to the above argument rather pompously (i.e. half-jokingly) as "Graham's Thesis" (because it reminds me of what used to be called Church's Thesis when I studied logic - it's a non-provable-because-non-formal assertion that expresses an intuition about something - in that case computability, in this case programming languages). We can state Graham's Thesis as: Any programming language that's as powerful as Lisp is isomorphic to Lisp.
So, is this true? The critical thing is code=data. Is there a fundamentally non-Lispy way to represent code as data, that's as good or better for programming than sexps? (That qualifier is important, because there are definitely ways to represent code as data that make a lousy notation for programming, e.g. machine language.) If there is such a representation, then a language based on exposing it as notation could be as powerful as Lisp without being Lisp. But if there isn't, then variations of sexps are the only game in town, and those don't count as new languages. Replacing parentheses with different brackets, as was suggested here either trollingly or stupidly the other day, doesn't cut it.
As anyone who's seen a Lisp program and knows the first thing about a compiler knows, sexps are just the simplest notation for a syntax tree. That is, the thing that parsers turn programs in other languages into, Lisp programs just are. That makes sense, because parsers turn source code into data, and Lisp programs just are data. And any language in which you write programs as parse trees is Lisp (or will soon become Lisp as people add the obvious things you'd want in such a language).
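The claim that sexps are just the written form of a syntax tree can be sketched in a few lines; the SExp type and the example expression are illustrative, not taken from any particular compiler.

```haskell
-- The tree a parser for an infix language might build, and its printer.
data SExp = Atom String | List [SExp]
  deriving (Eq, Show)

-- What a parser would produce for the C-like expression "f(1, g(2))":
parsed :: SExp
parsed = List [Atom "f", Atom "1", List [Atom "g", Atom "2"]]

-- Printing the tree back out yields exactly the Lisp notation:
render :: SExp -> String
render (Atom s)  = s
render (List xs) = "(" ++ unwords (map render xs) ++ ")"

main :: IO ()
main = putStrLn (render parsed)  -- (f 1 (g 2))
```

The printer is the whole "parser" in reverse: the Lisp programmer writes the tree directly, so the parsing step that other languages need simply disappears.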
So Graham's Thesis (man I feel silly writing that) reduces to the following: the only representation of programs suitable as both a data structure and a notation for human programmers is the syntax tree.
I'd really like to know if this is true. (Apologies to anyone who read my comment on this here the other day, as I'm repeating myself.) What languages are there whose source code gets turned into something that is fundamentally not a syntax tree? And what would the simplest explicit notation for that structure look like? I think this would be the area in which to look for an answer to the question.
There's one thing that makes me think such a representation might not exist: sexps are really just function composition, and function composition is how humans have done math for a long time. But if one does exist, I'd like to see it. There are a lot of people here with much wider backgrounds in programming languages than mine, so perhaps someone can just answer this.
by raganwald on 8/24/09, 3:19 AM
by icey on 8/24/09, 2:30 AM
by mahmud on 8/24/09, 5:36 AM
The sort of applications being written with the language are a huge factor in making it attractive to other users. All the truly beautiful languages had operating systems or huge desktop applications written in them; you used the language to extend something already powerful. It rewards your programming. Clojure will most likely become a server-side programming language, with little user interaction.
by wglb on 8/24/09, 3:00 AM
by rikthevik on 8/24/09, 5:39 AM