
Why Lisp Failed

This article investigates why the Lisp programming language is no longer widely used. Long ago, this language was at the forefront of computer science research, and in particular artificial intelligence research. Now it is rarely used, if at all. The reason for this failure is not age: other languages of similar heritage are still widely used.

Some "old" languages are FORTRAN, COBOL, LISP, BASIC, and the ALGOL family. The primary difference between these languages is who they were designed for. FORTRAN was designed for scientists and engineers, for whom solving equations on computers was the primary task of programming. COBOL was designed for businesses, being especially expressive so that business people could take advantage of the computer age. LISP was designed by computer scientists, and was expressive enough for research into the fundamentals of computation. BASIC was designed for beginners to learn programming. Finally, the ALGOL language was modified by computer programmers, and evolved into a huge family of popular languages such as C, Pascal and Java.

Some of the above languages are no longer quite as popular as they once were. This will be the definition of "failure" used here. The question is why they failed. The first stand-out is COBOL. Unfortunately, its design goal of being readable by business people was its downfall. Businesses found that it was possible to hire programmers to look after their computers, and programmers gravitated to languages designed for them rather than for their managers. Thus, over time, more and more business functions were programmed in languages such as VB, C, C++ and Java. Now, only legacy software tends to still be written in the language.

BASIC suffered a different fate. It was the language of beginners: those just learning to program on microcomputers would start off with the built-in BASIC interpreter. As time progressed, microcomputers were replaced by personal computers running Microsoft operating systems, or Macintoshes running Apple's. The language evolved with time, becoming Visual Basic once the desktop paradigm arrived. Since it could be used by those with little programming skill, it replaced COBOL for a while. Why pay for an expensive compiler if a cheap interpreter that comes with your machine is all you need? More recently, Microsoft has moved to the .NET platform, leaving classic VB behind. Its replacement, C#, is an ALGOL family member closely related to Java.

FORTRAN usage has waxed and waned over the years. At one stage, nearly all scientific code was written in it. Its advantage was that the language had no pointers and disallowed recursion, which meant that the location of every data reference could be fixed at compile time. FORTRAN compilers could use this extra information to produce extremely fast programs. Unfortunately, as time progressed, fixed-size arrays became obsolete as data structures. Science now works with arbitrarily shaped grids, and even more complex representations of the real world, and this required the addition of pointers to the language. Around the time that happened, FORTRAN went into decline. Now it is relegated to high-performance computing workloads, where the parallel matrix and vector operations recently added to the language still give it a performance edge.

The ALGOL language family succeeded. The reason for this is that these were the languages written by programmers for programmers. As time progressed, they evolved into the system and application languages most commonly used today. Their advantage was that the more programmers used them, the more the languages improved, and the more programs were written in them. This created a virtuous cycle, in which more programmers were in turn hired to work on the programs that had been written. This is an example of the network effect: the "worth" of a system is proportional to the square of the number of its users, because the number of possible interactions between users scales at that rate.
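
As a rough sketch of that claim (the standard Metcalfe's-law argument rather than a precise model): a system of n users allows

    n(n-1)/2, which is roughly n^2/2

distinct pairs of users to interact, which is where the square comes from.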

So why did the Lisp programming language family end up on the failure side? Some say it is due to the syntax: Lisp is notorious for its parentheses. I do not believe that is the reason. Many users of Lisp say that consistent formatting quickly allows them to match bracket pairs. Also, soon after the invention of the language, super-brackets were created to quickly close off an arbitrary number of open brackets; this language feature is rarely used today. Finally, syntax-understanding editors have made most of the layout problems of Lisp nonexistent in this age.

Another common complaint against Lisp is that it is a functional language. Could this be the reason for its failure? Amongst all the early important languages, it alone is functional in nature. Unfortunately, I don't think reality is that simple. Lisp contains imperative features, and the ALGOL family can be used in a purely functional manner. If one wishes to code in a certain paradigm, certain languages may make that choice easier. However, modern languages are flexible enough to support many programming paradigms, and there is no reason a mostly imperative Lisp could not exist.

Perhaps the problem with Lisp was that it used a garbage collector? Again, amongst the early important languages, it alone had one. Garbage collection requires more memory and computational resources than manual memory management. Could the limited memory and low performance of early computers have held Lisp back enough to slow its adoption? Again, I do not think this was the case. The complex programs Lisp was used to create would have required something with the complexity of a garbage collector to be written anyway, had they been implemented in another language. The proverbial statement (Greenspun's Tenth Rule) that any sufficiently complex program eventually contains a poorly written implementation of Lisp does hold some weight after all.

The reason Lisp failed was that it was too successful at what it was designed for. Lisp, alone amongst the early languages, was flexible enough that the language itself could be remade into whatever the user required. Programming in the other early languages involved breaking a task into small sub-tasks that could be implemented directly, with the larger tasks then built in terms of the smaller ones. Lisp was different: because of its power, a programmer could design a domain-specific language that perfectly solved the task at hand, and due to the orthogonality of the language, the extensions written would work seamlessly with the core language.
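
As a rough analogy, sketched here in Python rather than Lisp since the point is about the style and not the macro machinery, even ordinary functions can be used to build a tiny embedded vocabulary that later code is then written in (the names below are invented purely for illustration). Lisp's macros take this much further, because the extensions become syntactically indistinguishable from the core language.

    # A tiny embedded "validation" vocabulary built from plain functions.
    # Each helper returns a predicate, and later code is written in terms
    # of the vocabulary rather than the raw checks.
    def required(field):
        return lambda record: record.get(field) is not None

    def longer_than(field, n):
        return lambda record: len(record.get(field, "")) > n

    def all_of(*rules):
        return lambda record: all(rule(record) for rule in rules)

    # A "program" written in the mini-language:
    valid_user = all_of(required("name"), longer_than("password", 7))

    print(valid_user({"name": "Ada", "password": "longenough"}))  # True
    print(valid_user({"name": "Ada", "password": "short"}))       # False

Every project that solves its problems this way ends up with its own small vocabulary, which a newcomer has to learn before the surrounding code makes sense.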

So what is the problem with creating domain-specific languages as a problem-solving technique? The results are very efficient, but the process causes Balkanization: it results in many sub-languages, all slightly different. This is the true reason why Lisp code is unreadable to others. In most other languages it is relatively simple to work out what a given line of code does. Lisp, with its extreme expressibility, causes problems as a given symbol could be a variable, function or operator, and a large amount of code may need to be read to find out which.

The reason Lisp failed was because it fragmented, and it fragmented because that was the nature of the language and its domain-specific solution style. The network effect worked in reverse. Less and less programmers ended up talking the same dialect, and thus the total "worth" ended up decreasing relative to the ALGOL family.

If one were designing a language now, how could this problem be prevented? If expressibility is the goal of the language, then it must somehow be moderated. The language must have deliberate limitations that allow for readability of code written in it. Python is a successful language where this has been done: some of these limitations are hard-coded into the language, and others exist by convention.
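
As a minimal sketch of what such limitations look like in practice, one enforced by the language and one purely conventional:

    # Indentation is part of the Python grammar: the visual block structure
    # below is the control flow, and a mis-indented line is a syntax error
    # rather than a silently different program.
    def mean(values):
        total = 0.0
        for v in values:
            total += v
        return total / len(values)

    print(mean([1.0, 2.0, 3.0]))  # prints 2.0

    # Naming and layout rules such as snake_case identifiers and one statement
    # per line are only conventions (PEP 8), but they are followed widely
    # enough that most Python code reads the same way to every Python reader.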

Unfortunately, so much time has passed, and so many Lisp variants have been created, that yet another new language based upon it is probably not the answer; there simply would not be enough users to make a difference. Perhaps the solution is to slowly add Lisp-like features to languages within the ALGOL family. Fortunately, this seems to be what is happening. The newer languages (C#, D, Python etc.) tend to have garbage collectors, and they also tend to be even more orthogonal than the older languages. The future may eventually contain a popular language that behaves much like Lisp.

Comments

reader said...
There is just too much BS in this article to write a counter-argument. GOD! Here's a penny, go get a better programming book.
Felix said...
I really enjoyed reading this article. This might be one of the sanest summaries of Lisp's downfall on the Internet.

Anyway, there is one thing you didn't include: while the popularity of Common Lisp has waned over the years, it is now fluctuating. The essays and books of Paul Graham and others have caused some adoption. Nowadays, a small but active Common Lisp community prospers and provides libraries, documentation and support for newcomers.

The Scheme language is still in use at many universities as a means of teaching "fundamentals of computation" to freshmen.

Very recently, the Clojure language has gained some traction among ALGOL-family programmers. Clojure departs from Common Lisp heritage quite a bit, but offers a rethought and modern Lisp-like language running on the Java virtual machine.
Sebastian said...
"This is the true reason why Lisp code is unreadable to others. In most other languages it is relatively simple to work out what a given line of code does. Lisp, with its extreme expressibility, causes problems as a given symbol could be a variable, function or operator, and a large amount of code may need to be read to find out which."

I could not agree more.

Lisp's strength seems to also be its weakness at the same time.

Sad, somehow...
anus boy said...
ONE WORD: FORCED PROCESSING OF THE LISTS. /THREAD
Morrie said...
Going to put this article to good use now.
Antonio Bonifati said...
You actually *can* tell variables from functions and operators in standard Lisp, why not? First, functions and operators are the same thing, as in mathematics. You do not need to make any syntactic distinction, so it's simpler and more flexible than in other languages. And variables are introduced by a few pre-established kinds of forms, e.g. defvar, defparameter, let, defun, defmacro, etc. Common Lisp has separate namespaces for functions and variables to make the distinction even clearer in those few tricky cases where you want to call a function with a variable name.

You can argue a lot about the pros and cons of domain-specific languages, but the fact is that you decide whether to use a DSL or not. Lisp just makes it easy to develop DSLs; it does not force you to do so all the time, for all aspects of a problem, etc.

I believe that Lisp failed for only one reason: ignorance! I personally did not know anything about Lisp; I had been taught and made to use other ALGOL-like languages for more than 12 years. I discovered it by chance and now I do not want to use the other, inferior languages any more. Why? I would just feel stupid, and there is no advantage: rather, they take more time and money to develop in. This is a common situation amongst people who are rediscovering Lisp.

My 1-cent piece of advice: learn Lisp well, use it for a project, compare the implementation with versions of the same program written in other languages you know, and then you will agree with me and thank me forever :-)
cs124 said...
nice article! :D
David A. Wheeler said...
No, it's the syntax. Lisp syntax is horribly unreadable, and as a result, most developers choose to NOT use Lisp.

"Super-brackets" don't help, as you noted, they're rarely used. But saying that "editors have made most of the layout problems of Lisp nonexistent" is nonsense. The problem isn't *layout* (which a pretty-printer can do), the problem is that a properly-indented programs is still painful to read. Its lack of infix notation - which even tiny BASICs supported - is absurd today. You need to have a *clear* notation when working with other people - Lisp is not that language.

I agree it can't be that it's functional, since it need not be used that way. And it can't be the garbage collector; nearly all modern languages have one. Supporting domain-specific languages isn't a problem; that's a strength, and should have made Lisps *more* popular.

The claim "Lisp failed was because it fragmented" may describe Scheme, but it doesn't describe Common Lisp at all. Common Lisp is quite standardized, and all the different implementations are highly compatible.

You later note that "The language must have deliberate limitations that allow for readability of code written in it." Well, no. You don't need limitations, you need a readable notation. Python is fantastically readable - indentation is enforced, infix is built-in, function call notation is just like math class.

An alternative is to add abbreviations to Lisp readers so that you can optionally use an easier-to-read notation for common conventions. Such abbreviations need to be general and homoiconic; past efforts failed because they didn't meet those criteria.

Please take a look at the "readable" project at:
  http://readable.sourceforge.net
You may find that it's possible to have an alternative and better Lisp notation.

Thanks for reading.
Scott said...
Good article, but it should be fewer and fewer, not less and less programmers.
NP said...
Sorry, FORTRAN is still very much in use today. Fortran 2008 with its coarrays, dynamic memory allocation, whole-array operations etc. is very well suited for scientific computing. I work in one of the world's top aerospace companies; we run jobs that take 1000+ cores and run for a month. All our big codes, with a few exceptions, are written in Fortran (the few others are in C), and this is true across the HPC industry (NASA, Boeing, ...).
Tom said...
Syntax can be learned.
What is harder to grasp is semantics.
In my opinion many programming languages have too many features and most programmers fail to understand how even a small fraction of those features is used properly.
Just my opinion...
puiu-bratu-nelu said...
Upgrade LISP to VirtualC+Lisp
http://lisp2arx.3xforum.ro/post/21/1/YouTube_Quickhelps/
Rahul said...
Clojure can be the solution.
puiu-bratu-nelu2 said...
A good solution is "VirtualC+LISP", a hybrid programming language between C and LISP. You write C/C++ code and LISP code, and the resulting code after the compilation process will be 100% LISP (all C+ lines will become LISP source lines).
www.youtube.com/watch?v=91mwgtNSIrE
HomePage http://lisp2arx.3xforum.ro


Lispy lisper of lisp said...
Lisp is truly a god-like language that remains awaiting the saviour to reclaim the territory that is the Earth. Total domination and subjugation of earthly programmers is the final result of man returning to a god-like super programming language like Lisp. When the devil of this earth is gone, then the people will kick out their silly, tiresome, so-called bosses and implement the one true pure language, and not have to put up with stupid marketing junk for other stupid languages made by marketing people, not technical programmers. So yeah, Lisp will eventually be the one and only language that the whole world uses. So there.
Sylwester said...
I'm not entirely sure Algol syntax is easier to read than Lisp if you have never seen either before. I'll test it out when my daughter gets a little older :)

Both Algol and Basic are derived from the Fortran family. I think the Algol family and the imperative way were easy for me to learn since I started out writing Basic on my C64, and within that family C was easy since I knew assembly for both x86 and 68000.

When you think in Algol, programming in a Lisp dialect would be very difficult. Though I bet Lisp is easier to learn for an Algol programmer than Haskell is.
said...
From my experience with Lisp, its unreadability is neither caused by too many parentheses, nor by the lack of distinction between operators, functions, and variables. In my eyes, what makes it unreadable is its lack of structural variety.

In the C family, for instance, we have blocks of statements in braces '{}', delimited by ';', and a convention to write one statement per line. We have a function call syntax that uses parentheses '()' and separates arguments with ',', and a convention not to split a function call across lines. We have a special syntax for accessing array elements '[]', even though 'a[b]' is equivalent to '*(a + b)'. We have a special syntax for loops with statements within parentheses 'for(;;)'. We have a special syntax for case labels 'case a:', ... I could go on like this for quite a while, but the point is that these diverse syntax elements convey a lot of meaning at one glance. A particular strength of C over languages like Pascal and Fortran is that this syntactic diversity is nevertheless very concise, allowing a programmer to grasp a lot of meaning from a small amount of text.

Lisp, on the other hand, has only one syntactic structure that's used over and over again: the list '(a b)'. The only syntactic variation is when a pair is explicitly used '(a . b)'. That means that whenever a programmer sees something listed within parentheses, he knows nothing about what the code is supposed to be doing. He even has to read the context to know whether this list is interpreted as data or as code. Perhaps it's first interpreted as data, modified further, and then executed as code. Even when he knows that it's actually executed as code, he still has to look at the first element of the list to see whether the code is supposed to implement a loop, a condition, call a function, define a function, declare some variables, ...

Of course, this deficiency can be somewhat mitigated by good formatting, but I for one believe that formatting becomes more effective with more diverse and expressive syntax, and that Lisp failed because it cannot easily convey the structure of the code to the reader.
PUMA said...
A nice article, and good ideas. I don't agree that limitations are the answer. I tend to agree with David Wheeler. I love everything about LISP except the syntax (lack of). And for that the immediate counter-reaction from any LISPy message board is outright vitriol. The lady doth protest too much, methinks.

The power of the LISP is great, but its syntax (lack of) renders it absolutely unreadable. Python is great and even C is *much* *more* readable. No, I shouldn't have to learn to "not see the parenthesis". They just shouldn't be there.

Add two things to the reason LISP failed:
- the explosion of open source, UNIX, and the web, all almost 100% using C. Before that other languages championed by a small or medium sized group might have had a chance.
- the smug superiority complex and unfriendly attitudes of a large number of anonymous, lower-echelon, would-be LISP advocates. It's framed as defense - after all, "LISP is the greatest, blah, blah, why are you hating on ()". Well, now that history has soundly *proved* otherwise, instead of getting defensive, let's just honestly ask ourselves "why?".

For that, asking why, this was refreshing to read! Thanks!
chelahmy said...
We read back our code because we want to make alterations or correct bugs. What if we never had to read back code? What if we just wrote some code and forgot about it? LISP failed because we want to use it like C, Java, or bla... bla... bla... Everybody knows that LISP is different and special. We are supposed to code in LISP once we already know exactly what the program should do, and to write it fast without structuring it with anything other than (). If LISP failed, then why is LISP still a fundamental language in major programming courses?
erik said...
>Of course, this deficiency can be somewhat mitigated by good formatting, but I for one believe that formatting becomes more effective with more diverse and expressive syntax, and that Lisp failed because it cannot easily convey the structure of the code to the reader.


But in Lisp, you are actually reading the AST that represents the code.
Yu said...
> From my experience with Lisp, its unreadability
> is neither caused by too many parentheses, nor
> by the lack of distinction between operators,
> functions, and variables. In my eyes, what makes
> it unreadable is its lack of structural variety.

I'd think this is mostly a training effect. My programming beyond lectures (both computer science lectures and "programming for scientists") has so far been mostly restricted to data analysis, but overall I've used some amount of Python and Java. Yet, later I got invested in Emacs, and starting from simple customizations of modes I began writing my own libraries. After some time I actually found the structural variety of other languages awfully distracting, even in Python.

Generally I found reading the Emacs Lisp libraries more feasible than reading another person's Python code. Surprisingly, the absence of a true module system generally helps readability, as it causes each function call or variable access to be clearly marked with the module prefix. It is kind of as if Python required fully qualifying the module name for each function call (except that if it did, module hierarchies would have evolved to be much flatter and module names as short as possible while remaining unique).
BAKL:U said...
All these comments were written by one person
another anon said...
I completely agree with "said..." (i.e. the man w/o a name :)). I started looking for a better language because C++ looked ugly, yet I needed something that could be fast enough and communicate with C/C++ code. Lisp seemed a good choice, especially because of its extension possibilities, small syntax, etc. Later, when I started to read books about Lisp, I thought the syntax was hard because it was very new to me, so I didn't pay much attention to that. Yet now, after reading some articles comparing Lisp with other languages, maybe even C++, C++ doesn't seem as alien to me as Lisp does: to read a small Lisp function I need a few minutes, while to read the same function in C++ I need no more than a minute (at least to get a general understanding of it). Also, no matter how I formatted the code, I couldn't read it as well as Algol-based languages.

Now, because of Lisp, I see Python as an ugly language (intermixed paradigms that demand synchronization when you need to change something small), but also I understand that that is an architectural compromise. In this world it's hard to be all nice and shiny.

Yet, I must admit I *really* want to use a Lisp-based language, even a functional-only one, e.g. Scheme, but it is very hard to do so and I'm afraid I won't be able to reach that "light at the end of the tunnel" in my lifespan.
another anon said...
Damn, it's always like this: I post an answer or question, and then almost immediately I come up with something better. Guess what, there *is* a way to solve the syntax problem, i.e. to make it readable. Know why? Because it's Lisp! Check these out: (1) https://www.youtube.com/watch?v=MHDmVRU4fqw, (2) http://readable.sourceforge.net/
Andy said...
First, C and C++ didn't spring from ALGOL; they are closer to PL/I. ALGOL was designed for mathematical algorithms. None of the current crop of programming languages implements call by name. As far as I know, the only widely used computers that provided that feature in hardware were the Burroughs ALGOL machines, the B5500 for example. The B5500 had a 48-bit memory word plus 3 flag bits, and the codes in the flag bits provided for indirect references and call by name on passed arguments. You could then use sum like

      Y = sum(x,1,4,sum(y,3,8,x+y));

In the above, the inner sum(y,3,8,x+y) would generate an unnamed function (a thunk) to pass to the outer sum call. The variables x and y are not passed by value but by name; for a simple variable this works out much like passing an address, as with a pointer or reference in C, and it gets a bit confusing when recursion is involved. Since the B5500 was a stack machine, loading an argument marked with the call-by-name flag would cause the compiler-generated unnamed function to be invoked, a plain variable became a simple indirect reference, and writing to a function was an error.

ALGOL died with the machines designed to run it. It didn't have a big user base to start with.

The only thing modern block structure languages have in common with ALGOL is their block structure. By the time C was being developed there were several block structured languages.

LISP was developed for AI research projects. It was tried for other functions and found lacking right from the start. APL is another failure and, like LISP, was unreadable. Those languages are easy to program in and near impossible to read.

There was a block-structured LISP, but nothing came of it. There certainly are applications where a list processing language is useful: AUTOLISP worked well in AUTOCAD.

One can implement a Lisp-style list class in C++. There are some specialized languages, like TREEMETA, that make use of lists.

The real reason that all the old programming languages are dying out is the personal computer. Very few of them would run on a small computer, and by the time personal computers evolved to the point where you could run mainframe languages on them, no one was interested. I think PASCAL lasted the longest. PASCAL was meant to be used for teaching programming and lacked many features needed for real-world projects. Extended PASCAL compilers fixed the problems in different ways and were eventually dropped in favor of C, which was free and available on most platforms. And along came Windows, and eventually object programming in C++.
