Except for this statement, beware of absolutism in statements of How To Do Software Development.
The specifics of this debate are kind of uninteresting because of the (general) lack of nuance from the various sides, even though each side is informed by its own lived experience.
OTOH, the recurrent reality of <insert topic> debate in our industry is very interesting.
I think it's some combination of:
* a bunch of problems are still unsolved
* software is so powerful that sub-optimal solutions are usually Good Enough
* industry amnesia, driven by developer/engineer turnover
* the relative infancy of the industry, especially as a function of the rate of change (I'm not sure how you would normalize for rate-of-change, social structure and communication speed, but it would be interesting to compare these debates to medieval guilds in Europe).
* ???
To take up the first two above:
Things are better than they used to be -- as late as the 90s, code reuse was still an unsolved problem.
Of course, code quality is still hard--we are reusing broken code, but at least we "only" have to fix it once.
I think it's hard to overestimate the importance of Good Enough as a factor in these recurring debates. Everyone can be right from the business's point of view--tons of money is still being saved. Once you get past the initial ramp of a company, structuring a team for continuing velocity and making headway in your chosen market(s) seems like a different optimization problem from the one that got you there (again, not a new topic!).
Just some partially formed thoughts...
Was that an implication that code reuse is solved? I'd say there are more options but their value is still a matter of debate. I have a hard time imagining that code written today is better than 30 years ago. (We do have much better source control tools so at least that's something.)
I doubt that code written today is better than 30 years ago. We have much better tools, including source control, and much better hardware, but that's enabled us to write vastly more code than was feasible 30 years ago. I think there's been a choice about whether to use that extra power to write either better code or more code, and we've veered sharply towards writing more code. Some of it is better, but most of it is probably worse.
30 years ago was right at the start of my 'career', learning to program my ZX-81 as a pre-teen. I learned to program it in Z80 machine code, because it only had 1K of RAM and you couldn't fit much BASIC code into that space. (Also, there was no assembler. You had to write the assembly code on paper, manually convert it to machine code, and type the bytes into a comment line in a BASIC program.) My experience with the ZX-81 isn't that much different from what the grownups had gone through over the previous 10-20 years with teletypes, mainframes, and the other early PCs.
The degree of low-level fiddling and knowledge needed to program computers back then led to much more careful analysis and understanding, I think. No one had to tell us to design before we sat down and started coding, because there was no other option. And not just design; we had to run the code in our heads and debug it before it was ever put into the computer. That's a skill that I've pretty much retained today, 30 years later, and occasionally I still use it. (e.g., during code reviews, or thinking about a problem I'm working on while in the shower.)
Younger developers, mostly, haven't gone through that kind of experience. I think it makes a difference. I'm really happy about the existence and popularity of the Arduino and similar open hardware platforms because they bring back that bare-metal level of software development. I'll bet that today and in the future, the best programmers are going to have a shared experience of working on devices like that when they were young.
I am around your age and started then as well, also on the Z80. In my area it was rather normal to just write and run hex in your head. I still know the whole instruction set in hex because I wrote so much code with it. I never saw the reason why anyone used an assembler for that; I could just read and write the hex codes directly, and did. Besides the time it took to recover after a crash, it did not take much more time to 'get shit done' then than it does now. There were just vastly fewer programmers, and thus less code was written. Unfortunately I don't have my 80s code anymore, but my output has not changed much; the code is better and does more because of the libraries we have now, but it's around the same amount. Running code in my head is great, and it was easier with something as limited as an MSX (or ZX, for that matter); I believe it still gives me a lot of advantage.
About code reuse - there is some disagreement about whether it is a good thing in the first place: 'I also must confess to a strong bias against the fashion for reusable code. To me, "re-editable code" is much, much better than an untouchable black box or toolkit. I could go on and on about this. If you’re totally convinced that reusable code is wonderful, I probably won’t be able to sway you anyway, but you’ll never convince me that reusable code isn’t mostly a menace.' This is from Donald Knuth: http://www.informit.com/articles/article.aspx?p=1193856
I have to wonder if he was thinking about, for example, a reusable implementation of a hash table. And if he was, why in the world wouldn't he want that? Running with the hash table example: I use them many, many times a day, and if I had to reimplement them every time, I'd be sunk. Just a few moments ago, I wrote a script to check for duplicate entries that looked something like this:
    seen = {}
    elements.each do |element|
      if seen[element]
        raise "Duplicate element: #{element.inspect}"
      end
      seen[element] = true
    end
It's quick and dirty and isn't a shining example of architecture. But it found my duplicates and let me move on with my day. But what if I didn't have a reusable hash implementation to lean on? Would I even have attempted to write that script? Or would I have done my duplicate checking manually, wasting about an hour?
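As a footnote to the reuse point, the quick-and-dirty script above could lean even harder on reusable code. A hedged sketch (the `elements` sample data is made up for illustration), using Ruby's built-in `Enumerable#tally` from the standard library, available since Ruby 2.7:

```ruby
# Hypothetical sample data standing in for `elements` above.
elements = ["apple", "banana", "apple", "cherry"]

# tally builds a frequency hash; select keeps anything seen more than once.
duplicates = elements.tally.select { |_, count| count > 1 }.keys

puts "Duplicates: #{duplicates.inspect}" unless duplicates.empty?
```

The point stands either way: whether via a hash literal or `tally`, it's the reusable hash machinery underneath that makes a thirty-second script possible at all.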
Andrew: A story states that you once entered a programming contest at Stanford (I believe) and you submitted the winning entry, which worked correctly after a single compilation. Is this story true? In that vein, today’s developers frequently build programs writing small code increments followed by immediate compilation and the creation and running of unit tests. What are your thoughts on this approach to software development?

Knuth: As to your real question, the idea of immediate compilation and "unit tests" appeals to me only rarely, when I’m feeling my way in a totally unknown environment and need feedback about what works and what doesn’t. Otherwise, lots of time is wasted on activities that I simply never need to perform or even think about. Nothing needs to be "mocked up."
Amusing given the link prompting this whole thread.
In the context where Knuth was writing, he was substantially correct. In languages where you can sufficiently encapsulate sufficiently abstract patterns, it ceases to be. When your abstractions don't leak, you don't need to touch the innards. As Rusky says, size also plays a role.
The very notion of a "non-leaky abstraction" is a tool that is only just beginning to be employed by programmers. At its most advanced level you're talking about bisimulation proofs over abstract data types.
We may get there one day, but for right now just spec'ing interfaces as combinations of consistent laws is a stretch.
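To make "spec'ing interfaces as combinations of consistent laws" concrete, here is a toy sketch (my own illustration, not from the thread): brute-force checking that string concatenation satisfies the monoid laws over a small sample set, in the spirit of property-based testing.

```ruby
# Small sample inputs for checking the laws (any strings would do).
samples = ["", "a", "ab", "xyz"]

# Associativity: (x + y) + z == x + (y + z) for every sampled triple.
samples.product(samples, samples).each do |x, y, z|
  raise "associativity fails" unless (x + y) + z == x + (y + z)
end

# Identity: "" is a left and right identity for +.
samples.each do |x|
  raise "identity fails" unless "" + x == x && x + "" == x
end

puts "monoid laws hold on the sample set"
```

Sampling triples is of course far weaker than a bisimulation proof, which is exactly the gap being pointed at: we can test laws cheaply, but proving them for all inputs is still a stretch in day-to-day work.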
I'm not sure that's true. It would certainly be an interesting anthropological undertaking to pin down the details, but (at substantial risk of being wrong) I feel like this was an assumption in early attempts at abstraction and we only feel the need to specify "non-leaky" because we have discovered important things that previous attempts have - in practice - tended to leak.
I think I'm in agreement with you here—we're slowly, as a community, discovering how to make non-leaky abstractions. They've been around for a while in places where "in practice" had a lot of legroom (pure mathematics). CS has a lot of legroom too, but it's taken a long time for us to think about it in such a way to know where we can place our weight.
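For a small-scale picture of what a non-leaky abstraction means in practice, here is a toy Ruby sketch (my own example, with invented names): the representation and its invariant are sealed behind the interface, so a caller can neither observe nor break the innards.

```ruby
# A counter whose invariant (never negative) lives entirely behind
# the interface; the instance variable is invisible to callers.
class Counter
  def initialize
    @count = 0
  end

  def increment
    @count += 1
    self  # return self so calls can be chained
  end

  def decrement
    raise "counter cannot go below zero" if @count.zero?
    @count -= 1
    self
  end

  def value
    @count
  end
end

c = Counter.new.increment.increment.decrement
puts c.value  # => 1
```

As long as callers only touch the public methods, the internal representation could be swapped (say, for a logged or persisted count) without any caller noticing, which is the "don't need to touch the innards" property in miniature.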
I think he's probably right about bigger, higher-level, or more domain-specific things. But there is a boundary -- tools, languages, simple libraries, etc. can be, should be, and are reused to great effect.
I believe these recurrent debates are people problems more than technical problems. Although news of DHH denouncing TDD has been very popular, the technical debate over the details is the same boring story as ever. Some people just can't keep themselves away from these kinds of causes.
I agree with your (likely more than) partially formed thoughts; there is still a lot we don't know. In my case, I often work with CSS, and TDD would not just add complexity, it conceptually does not make sense with my workflow. It is human nature to generalize solutions. But in our industry, we sometimes find ourselves generalizing 'development' when that is an enormous domain in and of itself.
From the article: "Besides, how do you know if the CSS is correct? Remember we are doing TDD. We are writing our tests first. How do you know, in advance, what the CSS should be?"
> as late as the 90s, code reuse was still an unsolved problem.
But that is about the social and business structures and economics around how computing matured. It has zero to do with technology. The technological aspects of code reuse were worked out long ago.