The sentence that I found the most interesting is this one: "The right architecture to support 1996-ebay isn't going to be the right architecture for 2006-ebay. The 1996 one won't handle 2006's load but the 2006 version is too complex to build, maintain, and evolve for the needs of 1996." This is something to keep in mind when you are bashing the old codebase and desperately want to rewrite it.
I think it's crucial here to identify the point where a particular architecture is breaking down and no longer serves the needs, because that tells you when you should rewrite. Oftentimes the bashing of an old codebase occurs not while it's still adequate, but rather when it's still in use long after its expiration date. What starts as a quick hack can survive remarkably long, but sooner or later technical debt makes things so complicated that thinking about a rewrite is probably not the worst thing to do ... if you have the time and money to do so (and if you don't, you'll just sink more and more time and money into an inadequate solution that breaks down catastrophically sooner or later).
The best thing probably would be to try to figure out whether you'll hit that point in the near future, so you can start work on the rewrite or overhaul ahead of time instead of being forced to make the new code a rush job as well because everything's breaking down around you.
* Release early, release often, evolve fast or even pivot. What is good for performance and availability is often bad for simplicity and flexibility. Build with an intention to rebuild when the time comes.
* It's like throw-away prototypes, only in production. When your business grows, you may have to throw away some or all of your previous code base (as eBay did, twice). This does not mean that the previous solutions were bad: not at all, they were adequate for the previous step.
* Modularity (e.g. microservices) is good, but it may add complexity. A monolith may be a fine sacrificial architecture, to be eventually replaced with something better (and more complex and expensive to build).
The last point is especially good, and from my experience I can say: leave the most complicated part of the system as a monolith until it's functional, and do detailed design of the easy parts. You can never really predict which dependencies will arise within the complex part.
I believe the "e.g." in the last point is especially important: Even if you don't use microservices, you can (and should!) still make your architecture modular.
- "performance is a feature" ... But any feature is something you have to choose versus other features.
- Good modularity is a vital part of a healthy code base, and modularity is usually a big help when replacing a system. Indeed one of the best things to do with an early version of a system is to explore what the best modular structure should be so that you can build on that knowledge for the replacement.
- Design a system for ten times its current needs, with the implication that if the needs grow beyond an order of magnitude, it's often better to throw the system away and replace it from scratch.
- Microservices imply distribution and asynchrony, which are both complexity boosters. I've already run into a couple of projects that took the microservice path without really needing to — seriously slowing down their feature pipeline as a result. So a monolith is often a good sacrificial architecture, with microservices introduced later to gradually pull it apart.
Microservices imply distribution and asynchrony, which are both complexity boosters.
In many cases you are right. While starting with a microservice architecture may be a premature optimization, microservices are a kind of pattern that fits well when the business logic will grow in complexity and in the number of entities/actors. At the same time, microservices give you a chance to scale a single service without touching any other service.
Rewriting from scratch when the old code is completely thrown away is a very expensive process. Joel is advocating a safer approach where you take the long route and refactor the system gradually. This is actually pretty close to what Martin is saying too where you gradually transform your legacy system into something new by refactoring the old architecture and, if/when necessary, splitting bits from the old monolith system into microservices (for example).
I've rewritten a couple of systems. I think sometimes a refactor is a lot more expensive. E.g., the old system was written in ActionScript/Flash, the old devs had left, and there were lots of little bugs. It was best to just rewrite it in HTML/JS that works on mobile and that you have full control over.
Another example: this dude had a giant WordPress site with a bazillion plugins. It worked, but when something went wrong he couldn't figure out jack shit, and he couldn't add more. Rewrote it with a CMS, and he got a 100x boost in how fast he could get things done.
Sometimes the old just needs to be replaced with the new.
Are you sure you rewrote a "system"? Or a program?
Fred Brooks makes a distinction between a program, a product, a system, and a "programming system product" (what I believe Joel was thinking about when he was talking about Netscape):
"A product (more useful than a program):
can be run, tested, repaired by anyone
- usable in many environments on many sets of data.
- must be tested
- documentation
Brooks estimates a 3x cost increase for this.
To be a component in a programming system (a collection of interacting programs, like an OS):
- input and output must conform in syntax and semantics to defined interfaces
- must operate within a resource budget
- must be tested with other components to check integration (very expensive, since the number of interactions grows exponentially with n)
Brooks estimates that this too costs 3x.
A combined programming system product is therefore 9x more costly than a program.
"Once you have gained traction, never ever allow yourself to be caught without ship-able product"
Also, old code indeed rusts. Frameworks, APIs, binaries, and libraries change, sometimes in very subtle ways, so a perfect piece of code that was written for PHP 3.2, Apache 1.1, and MySQL 3 and left untouched for a decade just refuses to work when PHP ticks from 5.3 to 5.4.
"The IRS says the costs of developing computer so closely resembles research and experimental expenses that it warrants similar accounting treatment. As a result, a taxpayer may use any of the following three methods for costs paid or incurred in developing software for a particular project, either for the taxpayer’s own use, or to be held by the taxpayer for sale or lease to others:
The costs may be consistently treated as current expenses and deducted in full.
The costs may be consistently treated as capital expenses that are amortized ratably over 60 months from the date of completion of the software development.
The costs may be consistently treated as capital expenses and amortized ratably over 36 months from the date the software is placed in service.
Under this method, the cost may also be eligible for a bonus first-year depreciation allowance."
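To make the difference between those three methods concrete, here is a rough, purely illustrative sketch. The $100,000 figure and the straight-line monthly proration are my own assumptions, and none of this is tax advice:

```python
# Rough illustration of the three cost-recovery methods quoted above.
# The 100_000 cost and straight-line monthly proration are assumptions.

def monthly_deductions(total_cost: float, months: int) -> list[float]:
    """Spread a capitalized cost evenly ("ratably") over a number of months."""
    return [total_cost / months] * months

cost = 100_000.0

expensed_now   = [cost]                        # method 1: deduct in full this year
over_60_months = monthly_deductions(cost, 60)  # method 2: from completion of development
over_36_months = monthly_deductions(cost, 36)  # method 3: from the in-service date

for name, schedule in [("expensed", expensed_now),
                       ("60 months", over_60_months),
                       ("36 months", over_36_months)]:
    print(name, len(schedule), "deduction(s), total", round(sum(schedule), 2))
```

Each schedule recovers the same total cost; the methods differ only in how quickly the deductions arrive.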
It is all iterative and relative to the current time: hardware, software, network speeds, access, mobile etc.
Just like the sexy new iPhone 6 looks like the thing now, in a few years it will look old and dated. Everything looks good in its time.
The same goes for software; the simple fundamentals do remain and grow stronger, though. Hopefully iteration does simplify things over time, but as a whole the result can look complex because it has to meet the needs of the current market. Everything we do now won't be used in the future, it does expire, but it is useful as a step: without this version there is no future version.
I think this can apply to some things and not others. I mean, imagine replacing GCC. Obviously replacing it would be better; I've heard it's one of the worst code bases to contribute to. Or look at OpenSSL even.
For some things you need compatibility, and that means that writing a replacement must be done extremely carefully and will require significant effort.
GCC will probably never be replaced. The current C++ standard is so monstrously complicated that implementing a full compiler would be an enormous task. But it doesn't matter, because eventually few or no new projects will be done in C++. Consider the fate of COBOL, which was once a shiny new language, a real step up from coding business application in assembler. It's now a tired old has-been, used only for patching up legacy systems.
As a system ages, it becomes more and more complicated because it is serving more and more purposes. Accordingly making upgrades while preserving backward compatibility becomes harder and harder. I remember hearing about an old AT&T switch with software so intricate that in the end it required one meeting for every line of code changed. And so progress grinds to a halt.
At some point, you either break backward compatibility or face stagnation. Careful modularization and upgrades can push back the boundary, but not forever. Even good practices, good people, and a good organization can only tame so much complexity.
LLVM would be an enormous task to write, indeed. Thankfully, it's done already :)
Your tired has-been COBOL is so infrequently used that on TIOBE, it outranks D, Erlang, Common Lisp, Haskell, Scala, Go, Lua, ...
In light of that, I'd be careful with predictions like "few or no new projects will be done in C++". It might be a long while before we reach that point.
As far as I'm aware, that's all maintenance and extension of existing projects. Is COBOL being used for new projects anywhere? That was my question, and the only thing relevant to the original claim.
You didn't answer his question, though. His question was whether COBOL was being used for new development. Is anyone, when writing a new program from scratch, doing so in COBOL? I hope not.
COBOL had a purpose, which was making programs more readable / understandable. We now have much better ways to do that, and it's a primary area of research.
C++ has a purpose, which is to maximize abstraction without imposing any unnecessary runtime cost. But we're not focused on doing better at this, since most applications don't really care about performance. C++ may become somewhat of a niche language only used to build the invisible low-level stuff, but it will never go the way of COBOL until something better at its purpose comes along (I've heard good things about Rust?).
You know, as much as I've heard some variation on that phrase, I've yet to see the benefits in practice, at least not from the last couple of revisions of the language.
I tried to hack on g++ as part of a project years ago, but really struggled to make any contribution. The codebase is indeed monstrously complex, but the core development team is also very protective. A secondary effect of the growing complexity in an open source project is the decreasing willingness of the core team to bring in new people, because of the time/effort cost involved and the potential risks. If the team naturally atrophies over time, you end up with increasing stagnation.
I've enjoyed much of Fowler's previous output, but this post didn't grab me. Assuming anyone building a modern web-oriented service is going to use at least some kind of service-oriented-style architecture (SOA), e.g. service-specific containers, this post is kind of toothless, since individual subsystems are by definition possible to retire, fail, scale, etc. without throwing the baby out with the bathwater.