Yeah, always switching tools basically resets your experience to zero, so you make the same mistakes over and over; no wonder it's hard to stay motivated. And the "senior" jobs have zero power, so you can't stop people from making mistakes, and trying to "influence" just makes the experience even more exhausting and frustrating when people have no reason to listen to you.
I wish there were real senior roles you could grow into, where your experience is actually valued and you gain real power to make decisions. But then the counterargument is that you can't hire juniors anymore because they think it's too uncool to have a boss.
> I wish there were real senior roles you could grow into, where your experience is actually valued and you gain real power to make decisions
In my experience, the only way to get some decision power is to move into management. Even team-lead roles don't count and don't give you any ownership over product direction.
In big tech, engineers are generally trusted more, but product ownership is still reserved for management.
The whole tech scene is changing fast, and ageism is real. In a world where most prior knowledge can be discarded, being a senior with 5, 10, or 20 years of experience doesn't change much. And since young people usually simp for the companies much more, out of naivety, they get a huge preference in the hiring process.
Tech is cutting knowledge off at the root, migrating from understanding the solution to quickly copy-pasting it from somewhere.
There's a huge difference between understanding tools/libraries/frameworks/programming languages/APIs etc vs understanding how to build a system on top of any of these things that scales well, is maintainable, is easy to collaborate with others on, that can be extended quickly, that doesn't cost that much time/money to build in the first place, etc.
Yes, the former changes every 1-5 years. Doing the latter well is much harder, no single tool can solve those problems, and I think years of experience really do help.
> Half life of knowledge in our profession is more or less a year and a half.
Only if you define your profession as knowing the latest front-end frameworks. In terms of concrete technical knowledge, that half-life lengthens as you go down the stack. But beyond knowledge of existing tools and frameworks, there is also the understanding of systems and how they interact with the real world. This is what you need to understand to really give yourself a life-long career. It can still be hard tech (distributed systems, scalability, etc.), or it can be a little softer (UX, maintainability, collaboration, etc.), but these skills will give you the ability to dwarf the actual business impact of the 25-year-old who has maximized knowledge of the latest tools.
I agree that years aren't necessarily a good metric for experience, although I have decades in IT - started when I was 19 and retirement is a real thing I have to consider.
Years do give you some experience that doesn't translate to the resume: after a while, you've seen through most of the tricks that management likes to try, which younger colleagues haven't yet learned to recognize. Having older folks around can spoil the surprise.
My personal theory of the roots of ageism has this as a pillar.
They try to make you commit to additional unofficial work/projects for which you get no extra time or resources. If you fail and burn out, that's just you being a bit "overambitious", and if you succeed, they will just make it official. That way, they avoid owning any failures and only take responsibility for the successes. So you have to be a bit careful in casual conversations with the boss about ideas and possible improvements.
On the other hand, AWS Lambda seems like CGI/FastCGI all over again, but with proper automation, so I have at least one data point on 20-year cycles (to confirm it, we need someone who has been in the profession for at least 40 years).
Developer here who has been programming over 40 years (since I was a teenager in the late 1970s).
I know I am stretching things a bit here, but IBM mainframes, multi-user Forth systems, and distributed QNX systems ranging from the 1970s to the 1980s -- not to mention UNIX systems -- could all support remote procedure calls or interprocess/interapplication scripting across standard APIs to some extent (for a loose sense of process or application, especially with Forth). Even Smalltalk back then could do that to an extent but mostly from a single-user perspective in the sense that Smalltalk is mostly about message-passing objects. Essentially, you could have a system that could talk to itself or other similar systems in standard ways.
Yeah, there have been so many cycles of forgetting and reinventing with new generations of programmers. Although it is true some things improve even as sometimes other things decay for a constantly changing kaleidoscope of opportunities and risks (a bit like host/parasite arms races in evolutionary cycles).
And also from the 1960s-1970s:
https://en.wikipedia.org/wiki/PLATO_%28computer_system%29
"Although PLATO was designed for computer-based education, perhaps its most enduring legacy is its place in the origins of online community. This was made possible by PLATO's groundbreaking communication and interface capabilities, features whose significance is only lately being recognized by computer historians. PLATO Notes, created by David R. Woolley in 1973, was among the world's first online message boards, and years later became the direct progenitor of Lotus Notes."
And from a different perspective, what is email but a standard way to do a remote procedure call to hopefully invoke some behavior -- even if a human may often be in the loop?
https://en.wikipedia.org/wiki/History_of_email
And from the 1930s and earlier, Paul Otlet invented the idea of using a standard 3x5 index card to store and transmit information (mainly metadata):
https://en.wikipedia.org/wiki/Paul_Otlet
"Otlet was responsible for the development of an early information retrieval tool, the "Repertoire Bibliographique Universel" (RBU) which utilized 3x5 inch index cards, used commonly in library catalogs around the world (now largely displaced by the advent of the online public access catalog (OPAC)). Otlet wrote numerous essays on how to collect and organize the world's knowledge, culminating in two books, the Traité de Documentation (1934) and Monde: Essai d'universalisme (1935)."
For another example of cycles, my current favorite UI technology is Mithril + HyperScript + Tachyons for JavaScript (although Elm is great too conceptually, and likely inspired Mithril and React in part). It is so easy to use from a developer-ergonomics point of view in part because (simplifying with a very broad brush) it re-invents the OpenGL video-game paradigm of redrawing everything (with behind-the-scenes VDOM optimizations) from essentially a global state tree whenever the UI is considered "dirty" because someone touched it. Mithril is so much easier to use than UI systems that are all about creating dependencies (like most Smalltalk systems), or that require storing and updating state in carefully managed components (like React), or similar constrained models. But sadly React + JSX + SCSS has so far won the mindshare war despite overall worse developer ergonomics. I hope that cycle turns again someday and the Mithril approach wins out (if maybe in some other implementation by then).
https://github.com/pdfernhout/choose-mithril
Frankly, it has been frustrating over the decades to see great ideas lose out for a time to lesser ones with better marketing, institutional advantages, or other non-technical edges (Forth vs. DOS, CP/M vs. DOS, Smalltalk vs. Java, Mithril vs. React), or to ones that better fit developers' familiarity with earlier systems (HyperScript vs. JSX, Lisp vs. C++). Yet I can also still be hopeful things may improve as social and technical dynamics change over time in various ways. As was said about JavaScript, which I mainly program in now: "It is better than we deserve..."
So if you had 6 of those, you could support 96 users. You could get expansion units too for the main bus:
https://www.reddit.com/r/retrobattlestations/comments/dpt47y...
"It takes up one ISA slot in the main PC, and then hauls the signal to the external box, where you can plug in up to 7 more cards, plus some RAM"
So, using 6 slots in the first box, and 6 slots in the next, and 16 port serial cards, that's in theory 192 users on RS-232 lines. Anyway, this is just a guess. I vaguely remember hearing of some actual (lesser) systems with lots of RS-232 ports, but don't recall exactly how they worked.
So, that's 12K to support 150 input buffers, plus probably at least 1K for each user dictionary on top of the shared dictionary (12K + 150K = 162K total). That is probably low -- if users might want 4K each, that's 600K. Throw in 28K for an extensive base system to round things off, and that is 640K for a great low-latency system supporting 150 users, all simultaneously having a command line, assembler, compiler, linker, and editor. And I'd guess probably a database too on a shared 10MB hard disk. And it might even feel more responsive than many modern single-user systems (granted, expectations were lower back then for what you could actually do with a computer). So, yes, "640K of memory should be enough for 150 anyones." :-)
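The whole memory budget fits in a few lines. All figures are the estimates from the comment above (in KB):

```javascript
// Memory budget for the hypothetical 150-user Forth system, in KB.
// All numbers are the rough estimates from the comment above.
const inputBuffers = 12;         // 12K for 150 input buffers
const users = 150;
const perUserDictLow = 1;        // 1K per user dictionary (low estimate)
const perUserDictHigh = 4;       // 4K per user dictionary (more comfortable)
const baseSystem = 28;           // extensive shared base system

const lowEstimate = inputBuffers + users * perUserDictLow;
const highEstimate = inputBuffers + users * perUserDictHigh + baseSystem;

console.log(lowEstimate, highEstimate); // 162 640
```

With the generous 4K-per-user estimate, the total lands exactly on the famous 640K figure.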
Related: "Why Modern Computers Struggle to Match the Input Latency of an Apple IIe" https://www.extremetech.com/computing/261148-modern-computer...
"Comparing the input latency of a modern PC with a system that’s 30-40 years old seems ridiculous on the face of it. Even if the computer on your desk or lap isn’t particularly new or very fast, it’s still clocked a thousand or more times faster than the cutting-edge technology of the 1980s, with multiple CPU cores, specialized decoder blocks, and support for video resolutions and detail levels on par with what science fiction of the era had dreamed up. In short, you’d think the comparison would be a one-sided blowout. In many cases, it is, but not with the winners you’d expect. ... The system with the lowest input latency — the amount of time between when you hit a key and that keystroke appears on the computer — is the Apple IIe, at 30ms ... This boils down to a single word: Complexity. For the purposes of this comparison, it doesn’t matter if you use macOS, Linux, or Windows. ..."
Seriously? Amending a trash fire with a mound of glowing embers (that can't all be extinguished because precious backwards compatibility -- e.g., `auto_ptr`) is "just reinvention"?
You can only hold that view if you don't understand C++11. :p You're more accurately complaining about "invention" (well, in the C++ world; in the Rust world, it's "C++ implemented our stuff").
> I wish there were real senior roles you could grow into, where your experience is actually valued and you gain real power to make decisions, but then the counterargument is that you can't hire juniors anymore because they think it's too uncool to have a boss.
Uhh... who are these junior people who don't like having a boss? I read the first part of this sentence and wondered why a lower-level colleague wouldn't want a senior helping them avoid potholes in the road...
> why a lower-level colleague wouldn't want a senior helping them avoid potholes in the road
Because they become much more influential in a shorter time if they stage a coup d'état by paving a completely new road; now they are the new road master. The old potholes are gone, so they don't have to worry about those, and the new potholes are still unknown and yet to be discovered.
> I wish there were real senior roles you could grow into, where your experience is actually valued and you gain real power to make decisions, but then the counterargument is that you can't hire juniors anymore because they think it's too uncool to have a boss.
It's really rigged for shorter careers.