Much like the sibling comment, I have heard variations of this proposition before. But instead of wondering why more groups don't take this approach, I want to know what makes it possible in the first place. So, in your experience, or even just via some passed-down stories, what is it about Lisp that allows for this efficiency?
I have heard, although it is always just a quick statement with no explanation, that it is the macros. And one of the sibling's child comments seems to support the idea that Lisp programmers as a whole may be more experienced/better/more well-rounded developers in general, so maybe that's it. I am genuinely interested in people's thoughts on this matter and am not being flippant or sarcastic.
I can't speak from a professional level about this, but I'll tell you why I started using Common Lisp for my personal development.
One day I was thinking it would be awesome if someone created a really powerful language with great facilities for abstraction: first class functions, objects, meta objects, etc. It should use dynamic typing during development with an excellent REPL and a built in debugger available everywhere, but it should have the ability to add type declarations when the code got more settled.
Even better, this language should include ways of hinting to the compiler how to make a chunk of code faster, and if that wasn't enough, you should be able to replace chunks of code seamlessly with highly optimized C code.
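As a sketch of that "add types later" idea (a toy example of my own, not from any particular codebase): in Common Lisp you can start fully dynamic, then add declarations that implementations like SBCL use to generate faster native code and to warn about type mismatches.

```lisp
;; Toy example: a function that starts out dynamically typed.
;; The declarations tell the compiler the array element types and
;; ask for aggressive optimization; SBCL can then emit unboxed
;; double-float arithmetic.
(defun dot-product (a b)
  (declare (type (simple-array double-float (*)) a b)
           (optimize (speed 3) (safety 1)))
  (loop for x across a
        for y across b
        sum (* x y) of-type double-float))

;; (dot-product (make-array 2 :element-type 'double-float
;;                            :initial-contents '(1d0 2d0))
;;              (make-array 2 :element-type 'double-float
;;                            :initial-contents '(3d0 4d0)))
;; => 11.0d0
```

Remove the declarations and the same function still runs; they are hints you layer on once the code settles.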
I thought about how to implement a language like this for a while, but when I looked into it I discovered Common Lisp does most of this already. Its static typing isn't perfect, though now there is https://coalton-lang.github.io/ if that's your thing.
So that did it for me, but really this is only part of what you can do with Common Lisp. I think it still has a niche in places where the "right thing" isn't known yet.
For example, I read a Reddit post saying that Common Lisp would be a poor choice now for a web startup, and that Paul Graham overstates Common Lisp's case as the reason for Viaweb's success. This is probably true in 2022, but I think it's instructive to look at why Paul claimed Lisp was so helpful back in 1996. There was no Rails or Django, no React or single-page web apps. Viaweb built Scheme-style continuations onto Common Lisp to simulate an SPA before AJAX was a thing, and on top of that created a slick HTML templating language based on Lisp macros.
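To give a flavor of the templating idea, here is a toy sketch (nothing like Viaweb's actual design): a macro that compiles s-expressions into string-building code, so markup lives inside ordinary Lisp and can be mixed with any Lisp logic.

```lisp
;; Toy macro-based HTML templating: symbols become tags, strings
;; pass through, nesting recurses. Not Viaweb's real system, just
;; an illustration of the technique.
(defmacro html (form)
  (if (stringp form)
      form
      (destructuring-bind (tag &rest body) form
        `(concatenate 'string
                      ,(format nil "<~(~a~)>" tag)
                      ,@(mapcar (lambda (f) `(html ,f)) body)
                      ,(format nil "</~(~a~)>" tag)))))

;; (html (p "hello, " (b "world"))) => "<p>hello, <b>world</b></p>"
```

The expansion happens at compile time, so there's no template parsing at runtime; the "template" is just Lisp code.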
To do that in Perl or C would have required something like writing a Scheme interpreter, and would have taken much longer for a two-person team. Is that a 100x advantage? Hard to say.
Ultimately, I do think it still has advantages; you just have to find the areas where nothing else will really do the job.
Lisp projects also tend to keep the dependency list short.
In the best case scenario, once the foundation is laid and the project has momentum, you end up with what feels like a language finely tuned to express solutions to problems in your domain.
In other languages you may reuse useful routines, but it's not as powerful. And as dependencies are added, the interactions between them can become a real headache.
What's interesting to me is that for some problems Lisp feels like a cheat code and it goes exactly the way I just described. For other problems it feels like a weird old language with a lot of parens. Perhaps I'm just not good enough at Lisp though.
It's really not that much more efficient when you compare Lisps to modern languages that share features like interactive (REPL) development, garbage collection, large standard libraries, concurrency primitives and abstractions, object/functional programming styles, integrated package managers, etc.
When you compare modern languages, including Lisps, to something like C99, I think it's easier to see how you can be much more efficient/productive.
If you take away the similarities of Lisps and modern languages, you're mostly left with the syntax and how the syntax enables macros that set Lisps apart from other languages. But, most of your Lisp code looks very much like your other code where you're writing functions, defining structures, calling object methods, etc. so it seems a stretch that the syntax and macros alone make you much more efficient/productive.
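To make the macro point concrete, here's the classic illustration: a hand-rolled version of a built-in operator (names here are my own).

```lisp
;; my-or cannot be an ordinary function: a function would evaluate
;; both arguments before being called. The macro rewrites the call
;; site so the second form is only evaluated when the first is NIL.
(defmacro my-or (a b)
  (let ((tmp (gensym)))          ; gensym avoids variable capture
    `(let ((,tmp ,a))
       (if ,tmp ,tmp ,b))))

;; (my-or (gethash key cache) (expensive-recompute key))
;; only runs the recompute when the cache lookup returns NIL.
```

Whether this kind of control over evaluation, available to any user, adds up to a large productivity edge is exactly the question under debate.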
> It's really not that much more efficient when you compare Lisps to modern languages that share features like interactive (REPL) development, garbage collection, large standard libraries, concurrency primitives and abstractions, object/functional programming styles, integrated package managers, etc.
pick 2!
But seriously, what modern languages have all these features?
Java/JVM languages, C#/CLR languages, Python and Ruby (minus real threads, but compare to Racket with not-the-best real threading story), NodeJS (minus the "large" standard library, but compare to Scheme with a small standard library), can probably include Go
I don't think any of these have REPL-driven development. They might have a "REPL" but not interactive code reloading as a part of the development cycle as is meant by a Lisp REPL.
Python with a Jupyter notebook is a very solid REPL development environment:
- Prototype code in Jupyter. Write first as a few lines of code per cell. Then merge into functions. Focus is on 5-10 cells becoming a single function.
- When working, copy the functions back into the main Python file.
- Periodically reimport the main Python file back into Jupyter and recalc the cells currently in scope.
- Repeat...
- The main Python file ends up mostly like an API. And the Jupyter notebook as docs on how to call that API.
I agree that Python's REPL environment is nice. I worked with Python and Jupyter for much longer than with Common Lisp; due to my domain it is still my main language. However, I was taken aback when I first saw REPL development in Lisp: it is just far more seamless and stable. How often do you need to restart the Python kernel in Jupyter? What I found extremely surprising is the huge performance difference between SBCL and CPython; Common Lisp as implemented in SBCL produces some of the most performant computations out there. Writing code in s-expressions also turned out to be an unexpected killer feature for me. Finally, something you cannot do in Python that is built into Lisp is the ability to debug and live-edit a running (production) image.
While I continue to use Python, the same is no longer true for Jupyter. I think using org-babel in Emacs is a far superior experience, with the added benefit of having the whole development environment available to you.
How does 'REPL-driven development' work, exactly? Are you writing code in files and then testing it in the REPL? Or writing substantive code in the REPL and then copying it to files somehow?
It's how I've worked day-to-day for thirty years or so, so here's my take on it:
I generally work in Emacs. I'll have one or more source files open, plus at least one REPL buffer. The REPL is my communication channel with the running Lisp.
The object of the game is to teach the Lisp interactively how to be the program I want. It's already a working program; it just isn't the one I want to build. I'll use Lisp expressions to teach it, one expression at a time, how to be what I want.
The contents of the source files depend on the stage I'm at with a program. If I'm just starting then it's probably one file with a few snippets that I C-x C-e to send to the REPL to modify the dynamic environment of the running Lisp.
If it's a more mature project, then I have a bunch of source files that fit into an ASDF system (which defines sources to build and dependencies among them and any libraries they depend on). The system as a whole will be loaded into the memory of the running Lisp, converting it into a work-in-progress version of my app.
I'll have some subset of the project files open in buffers so that I can add or edit definitions and C-x C-e them into the running Lisp and see in the REPL whether my changes have the effect that I intend.
I continue this way indefinitely, teaching the running Lisp expression-by-expression the features that I intend for the finished product to have. When the difference between my idea and what the Lisp does shrinks to epsilon, the program is implemented. I run a build function and I have a delivered application.
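A tiny, hypothetical version of what that expression-at-a-time teaching looks like (the names are made up; each form is sent individually to the running Lisp, e.g. with C-x C-e in Emacs):

```lisp
;; Each form takes effect in the running image the moment it is
;; evaluated -- no restart, no rebuild.
(defparameter *greeting* "Hello")          ; state now exists

(defun greet (name)                        ; function now exists
  (format nil "~a, ~a!" *greeting* name))

;; (greet "world") => "Hello, world!"
;; Change your mind? Re-evaluate just the one form you changed:
(setf *greeting* "Greetings")
;; (greet "world") => "Greetings, world!"
```

Everything else in the image (open connections, loaded data, other definitions) stays exactly as it was.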
When something goes wrong during my work, I land in a breakloop, which is something like a backtrace, except live. Rather than a printout of an unwound stack, it's a live REPL on the dynamic environment of the stack at the moment the error occurred. I can use the breakloop to wander up and down the stack, inspect and edit variable, type, and function definitions, and resume execution at the moment of the error when I'm ready to see the effect of my changes. The Lisp then runs the same function from where it broke, except that the variables, types, and functions I modified now have the definitions I just gave them, rather than the ones they originally had.
If I change the definition of some class that already has live instances, then the Lisp reinitializes those instances with the new class definition and continues. If I neglected to show it how to reinitialize those classes, it drops me into another breakloop and offers me the opportunity to tell it how to do so, after which it will continue, as before.
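A sketch of that live class-redefinition machinery (toy names, standard CLOS): existing instances are updated lazily on next access, and `update-instance-for-redefined-class` is the hook where you teach the Lisp how to fill in new slots.

```lisp
;; Define a class and make a live instance.
(defclass point ()
  ((x :initarg :x :accessor pt-x)))

(defvar *p* (make-instance 'point :x 1))

;; Redefine the class while *p* is alive, adding a slot.
(defclass point ()
  ((x :initarg :x :accessor pt-x)
   (y :initarg :y :accessor pt-y :initform 0)))

;; Teach the Lisp how old instances should acquire the new slot.
(defmethod update-instance-for-redefined-class :after
    ((instance point) added deleted plist &key)
  (declare (ignore deleted plist))
  (when (member 'y added)
    (setf (slot-value instance 'y) (* 2 (slot-value instance 'x)))))

;; The first access to *p* triggers the update:
;; (pt-y *p*) => 2
```

Without that method, the new slot would just take its `:initform`; if updating signals an error, you land in a breakloop and can supply the method there, as described above.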
If I stumble across something curious and I want to show it to a collaborator in context, I can save a heap image of the running Lisp and give it to my colleague. That colleague can start the image on their machine and see the same dynamic environment that I saw. In some older Lisps, that dynamic state could include all of the windows on my screen, and their contents, and which one was the active window, and where the text-insertion point was.
I can deploy my work-in-progress apps to a staging machine with a repl server built in. I can let it run in a testing environment indefinitely, and use Emacs to connect to a live REPL that I can use to rummage around in the running app, inspect and edit variables, types, and functions, and generally do anything I would do if it were running in my normal dev environment. If I see something odd, I can again dump a heap image, copy it back to my development machine, and crawl around in its running state while I let the deployed test version go back to doing what it was doing.
There's quite a bit more to it, but with luck, that gives a general idea of what the workflow is like. Common Lisp environments are generally like this; some have richer sets of affordances than others. Smalltalks are like that, too, and so is Factor.
Most other languages and REPLs are not so much like that. ClojureScript with Figwheel gives you part of it, and the part it implements is pretty good.
> I can deploy my work-in-progress apps to a staging machine with a repl server built in. I can let it run in a testing environment indefinitely, and use Emacs to connect to a live REPL that I can use to rummage around in the running app, inspect and edit variables, types, and functions, and generally do anything I would do if it were running in my normal dev environment.
Not the GP. Thanks for the comment, I always find your comments about your experiences with Lisp insightful... But I'm curious: does/can this lead to situations where colleagues are stepping on each other's toes in the live environments?
If you're going to test that way, you want to have your own staging environment, distinct from other developers', just like you don't want two different people's unit tests writing to the same test data as they run their tests (unless that kind of race condition is exactly what you're testing, of course).
Very interesting, thanks! This is indeed how I tinker with my .emacs file, though I’ve only done relatively trivial stuff there.
What do you prefer about using C-x C-e on individual sexps rather than reloading everything and keeping the entire program synced between source files and the running environment?
Generally speaking, I work by building some data that represent my understanding of the model I'm working with, and some functions to operate on that model. My models start out simple and more or less wrong, and as I work, I build improved understanding. Mostly my understanding improves incrementally, and so does my code. I build up the app piece by piece, adding and changing things bit by bit. So most of the time, changing a small piece is all I need, and that's what evaluating one expression does.
Moreover, if I recompile everything, then I'm rebuilding my in-memory state from scratch. Mostly, that's not what I want. Mostly I work like a sculptor: I make a rough approximation and then refine it stepwise. I don't want to rebuild from scratch every time I discover something new. I want to keep most of what I've built and change just what needs to be changed.
That said, sometimes a discovery does call for a larger-scale reconstruction. That's when I reload a whole file or, in more extreme cases, reload an entire ASDF system of files. Once in a while, I change enough (or make enough mistakes) that it makes sense to blow the whole thing away--kill the Lisp, restart it, and load the system from scratch. But that's fairly rare.
Mostly I want to plop down my clay and tweak it a little at a time toward my goal, discovering and refining it a little at a time.
From what I've seen, the cycle is a bit like this:
- launch the program you want to run and have it running in the background
- text editor open
- bits of code in the editor loaded into the running program by selecting text and sending it to be loaded (the most common scenario is doing this in Emacs)
However my concern with this is... if you don't run the entire "code frozen" program at once, how can you guarantee the internal consistency of the program if over time you kind of haphazardly add bits of code to it? Maybe you loaded that function, maybe you didn't, did you add that dependency function in the correct order and at the correct "version" (where version is used loosely, every time you type in something, it's a new "version").
I'd really love for someone experienced with Lisp to describe their workflow and also maybe clarify how they handle this (in my opinion, huge) problem of possible program inconsistent states.
For Clojure there are libraries to refresh the program: these stop the app, reload and evaluate all the source code files, and restart the app to ensure you are working with a consistent state that reflects the source code.
Between such reloads you might end up in inconsistent states, but you benefit from fast iteration:
My workflow is to update functions in the source code, send them to the REPL, quickly test them there, and once done, save, commit, and push the code.
CI then runs the test suite from the source files.
I guess it just doesn't really come up that much? I personally aim to have whatever I'm not currently working on written to disk, and very rarely redefine things in the REPL once I've written them to source. Other than that every time you change anything it's automatically propagated, with a few known exceptions (macros mainly). I can say I've never wound up in an unknown state, and I'll keep a slime server running for days.
Well, not the same thing but Java can hot-reload classes (to a degree depending on implementation). But what truly makes repl-driven development in lisps so good is the more functional approach to development. Hot reload gets better the smaller the replaceable units get. Lisps have it good with basically paren-level hot reload.
GP didn't mention native code generation, which is a pretty compelling aspect of Common Lisp. I know there are some JITs for Ruby, but it seems MRI is still canonical, just like CPython for Python.
I have heard the same kind of claims about Ruby, Python, JavaScript, non-typed languages, functional languages, etc. as well. I still have seen zero evidence that any of those claims are true. I think those claims are often made when somebody feels super productive using language/technology X, and then jumps to conclusions about X based on that feeling. Seriously, if Lisp were indeed 10x more productive than competing languages, I would have switched to it a long time ago. It would be silly not to!
How many languages do you know? You might find some inspiration in answering your own question by comparing what makes your efficiency different in one or the other. If you find your efficiencies are roughly the same, I might suggest you aren't using each language to its fullest potential and are writing in a lowest-common-denominator style. But that really might not be the case, and perhaps the efficiency gains and tradeoffs have just been balanced well for you, so you'll need to decompose individual (in)efficiencies.

For example, in one language you might not need to wait for a lengthy compilation cycle before you can run the code after changes, but with a beefy enough computer (or a networked supercomputer constantly doing compilation for all developers) perhaps the other language's lengthy compilation cycle is not in fact so lengthy for you. Or in another language, you might have strings with built-in pattern-matching facilities that don't even need an import statement; if you need such things, you'll be at an advantage over languages without them -- until you discover that for some other language someone else has already written a nice regex library with syntax that isn't too annoying, even if it's not quite as elegant as something that's part of the language, and so the advantage narrows or disappears.

(In regard to the GP's comment, you might very well get by fine with a Java team of 10 just like the Lispers, because 90+ other Java programmers have already written what you need in the broader Java ecosystem, which the Lispers take for granted as part of the language, allowing you both to focus on the problem without distraction.)
I'm a fan of JRebel for improving efficiencies in Java development, in one of their marketing materials (https://www.jrebel.com/sites/rebel/files/pdfs/rw-mythbusters...) they claim that waiting on redeploys takes up about 20% of a Java developer's salaried time. So with naive assumptions if you had a team of 100 developers, you could fire 20 of them and maintain the same pace just by forcing the remaining 80 to use JRebel. Or just keep all 100 but now everyone's got a 20% efficiency boost and can do 20% more than before! (Yadda yadda real world caveats.)
For Common Lisp, this "no waiting for redeploys" is part of the language; you've been able to enjoy such a feature for decades without having to wait for JRebel to come into existence or figure out how to add it to a language yourself (some game studios accomplish approximations with their custom C++ engines). So it's a possible efficiency gain over languages that don't have a way to balance it. Other approaches come close to balancing it: JRebel itself; more disciplined unit testing that lets you confidently and accurately make changes without needing to redeploy the whole thing for verification; writing in a more purely functional style, where you don't additionally suffer the burden of having to rebuild tons of state after a redeploy; solving everything with an advanced type system; or just having simple problems, where you'd need integer-factor efficiency differences for them to matter for the duration you'll be dealing with the problem...
CL has a bunch of other stuff built-in that can lead to efficiency gains too; macros of course are a big part of that but not the only other part. I'll list a few examples of macros being useful. There's a macro that's been around in the ecosystem since the 90s that lets you write things in infix notation, so you can write your #I((-b + sqrt(b^^2 - 4*a*c))/(2*a)) and not have to live with (/ (+ (- b) (sqrt (- (expt b 2) (* 4 a c)))) (* 2 a)). (In any case negative sqrt will give you a complex number automatically, which is nice.) The built in "loop" macro for looping has some very advanced features on top of the more common for-i-.. style or for-each.. style of other langs, e.g. from Practical Common Lisp:
(defparameter *random* (loop repeat 100 collect (random 10000)))
(loop for i in *random*
      counting (evenp i) into evens
      counting (oddp i) into odds
      summing i into total
      maximizing i into max
      minimizing i into min
      finally (return (list min max total evens odds)))
(Symbols being first-class -- that is, "maximizing" or "minimizing" aren't function names or strings -- enables a lot of nice features apart from macros themselves.) It's not particularly hard to get such statistics from a list of numbers in another language, but having it be this easy and terse can give you a marginal efficiency boost if you happen to need to do something similar. The string formatting function "format" (not a macro) has similar flexibility, and any time I have to output a comma-delimited (except the last item) string of things in other-lang I get sad I don't have Lisp's format. Marginal efficiency boosts add up.

The final macro example: in Java we eventually got try-with-resources syntax (https://docs.oracle.com/javase/tutorial/essential/exceptions...), and in Python we eventually got the "with" statement; meanwhile in CL there has always been a style of "with-" macros that address the same issues. Some "with-" things are built in, but it's trivial to write your own, just like it's trivial in Java to implement AutoCloseable.

Macros in Lisp can be about as easy to write as C preprocessor macros, though instead of string substitution into a template you're technically doing expression substitution. But Lisp macros can also manipulate the incoming data and run any other Lisp code before producing an expansion, making them capable of far more when you need them to be. For example, the boolean operator "or" is implemented as a macro that expands into a conditional evaluation (the second and later operands won't be evaluated if the first is true). Historically this has helped Lisp programmers "steal" neat features from other languages without having to wait for an update to the Lisp implementation or standard. That has more to do with how Lisp has continued to be relevant, though.
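The comma-delimited case I mentioned, concretely:

```lisp
;; ~{...~} iterates over a list argument; ~^ exits the iteration
;; before printing the separator after the last element, so there
;; is no trailing ", ".
(format nil "~{~a~^, ~}" '("red" "green" "blue"))
;; => "red, green, blue"
```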
Hopefully I've given you an idea of how it's at least possible for there to be efficiency gains using Lisp, either in the past before other languages and their ecosystems caught up, or even now for certain classes of problems (e.g. if you believe a 20% efficiency gain is possible by eliminating waiting on redeploys while developing). Though I think marginal efficiencies that sum to 10x are probably pretty rare nowadays. Such a general claim was more tenable in the past, when even garbage collection wasn't really mainstream...