i am going to echo a lot of the comments here and say that this is an eye-opening essay. it got me to thinking things that i would not have otherwise. that is a very high bar to clear, which very rarely happens. for any random bit of text on the internet you might choose to read, you can't expect much more than that. in that respect, it is a wild success.
on the other hand ... i believe kragen has deep knowledge of this concept he's brought up, but he has failed to convey its specifics to people who don't already understand it. i acknowledge i may well be too stupid to grasp all this, but it would have been a nice bonus if i could have.
i often wonder: why does any reasonably worthwhile program, the type other people will pay for, always require at least 50,000 lines of code? is that really necessary? surely we can do better than that. but if there's a way, apparently nobody has found it yet. i hope people like kragen help push the state of the art forward, so we might someday find a way around that.
> i often wonder: why does any reasonably worthwhile program, the type other people will pay for, always require at least 50,000 lines of code? is that really necessary? surely we can do better than that. but if there's a way, apparently nobody has found it yet. i hope people like kragen help push the state of the art forward, so we might someday find a way around that.
It's unlikely you'll get many people to pay for automating a simple process, so "serious programs" will generally only be the ones that automate complicated processes. So from the start, the pure algorithm you're going to implement will usually be complex. Then, to actually implement that algorithm on a real machine, you'll have to deal with all the specifics of that machine as well, which adds significant boilerplate even when coding in Python or Haskell, let alone C++ or Rust. This boilerplate tends to grow far faster than the algorithm it supports, and grows again with the size of the data to be processed.
It's also important to remember that many "simple" operations, such as "print the date one hour from now", or "read the user's name" hide huge complexity encoded in human context and history.
> It's also important to remember that many "simple" operations, such as "print the date one hour from now", or "read the user's name" hide huge complexity encoded in human context and history.
obviously true. but are those operations more complex than, say, what a modern cpu does? think of the effectively infinite number of details of computation that the cpu i am using to type this message is hiding from me. i don't need to know. it's all taken care of. but for some reason, we can't embed "print the date one hour from now" into the operating system, so that particular bit of code only has to be written once and then reused infinitely, like my cpu's microcode?
programmers know why this is harder than it looks, of course. but is it really unsolvable? i hope not.