The AS/400 was the most reliable server I have ever worked with.
They tended to just work month after month.
I am not saying it is the most reliable server in the world,
because there are a lot I have not worked with, but it was
far more reliable than anything else I have worked on.
IF something did go wrong, they would at times call IBM and ask for service
before the customer had any knowledge of an issue.
My mother became the AS/400 sysadmin at her company, and she
had no idea about servers or the technical side of computers.
She changed the backup tapes on a schedule and that was it.
In many places nobody knew where the box was, since nobody interacted
with it on a physical level. Green-screen terminals or terminal apps
on Windows were the norm.
Some businesses had them for inventory / sales / accounting
and so on.
There were a couple of simple games (not from IBM) you could download
and play, but it wasn't worth it.
There were some drawbacks, but I'd love to have one to play with.
It surely worked, but when it didn’t it took ages to resolve the issue. I remember back in the mid-nineties, IBM brought a whole team to troubleshoot an issue and they stayed at the company where I worked for almost a week to find out what went wrong.
Well, exactly the same thing happened in a project I worked on in the mid-2000s.
Except it was not AS400 or anything hardware... it was a problem with an Oracle product, and took a bit over one week with higher and higher gurus being parachuted in by Oracle Europe.
And in the early 90s I had another similar episode, this time with DEC/Vax. Not sure anymore but I think they finally had to patch the OS.
My point: nothing made by humans can guarantee 100% uptime. The main difference is how far any vendor will go to solve your problems.
From my experience, IBM will go to the far side of the moon and back in order to solve your problem. It will probably cost you in fees as much as buying a new car, but there's no lack of resources or resilience on their part. And I admire them for that. As much as we like to idolize the latest and greatest in this forum, the truth is that when it comes to mission-critical equipment you need someone that you know will fly an expert in from a different country in the middle of the night if they have to.
i would say, rather, that the main difference is how dependent you are on the goodwill of the vendor to solve your problems, and how much you can solve yourself (or by hiring people). which is why oracle has been mostly replaced with postgres, mysql, and sqlite; why as/400 has been mostly replaced with web servers running apache and nginx; why vaxen and the like have been mostly replaced with i386 and later amd64 hardware with buses like pci and pcie; and why vax/vms has been replaced by linux
when you have business-critical systems problems only your systems vendor can solve, over time, their pricing strategy tends to converge on the following:
1. how much money does your company make?
2. hand it over.
you may not know as much about linux as the gurus dec parachuted in to patch your vms installation knew about vms, but i can guarantee that you care more about your own problems than dec did, and that counts for more. you can either learn what you need to know or, if you have more money than time, hire somebody who already knows it—at a market rate, not a monopoly-rents rate. the perverse incentives enjoyed by vendors of closed systems slowly but surely destroyed almost all of them, leaving only a few standing
I am not sure of what point you are trying to make here.
I have been in the industry since the late 80s and (surely due to the kind of customers I worked with, so I am not implying that this applies to everyone and everywhere) I have never seen anything even remotely "mission-critical" replacing Oracle or Microsoft stacks with Linux/Postgres.
I am currently working on something based on ridiculously obsolete technology (Progress) which is running on Solaris, for example.
The company I am working with is concerned about the chance that Progress will just die, but replacing it with Postgres is the least of our problems.
The real problem is that a full rewrite would take maybe 5 years and nobody wants to start the ball rolling.
In the meantime, we are "more than happy" to pay whatever the licensing is, both for Progress and Oracle (for Solaris), and to have them "on call" in case something breaks.
i am sorry that my comment was unclear, and i would be happy to explain whatever it is that you are having trouble understanding
> I have never seen anything even remotely "mission-critical" replacing Oracle or Microsoft stacks with Linux/Postgres
yeah, this only rarely happens inside a single company. mostly the way this works is that companies that run their mission-critical systems on oracle and microsoft stacks get outcompeted in the market by companies running on linux and postgres, because they can solve their own problems instead of letting their vendors decide which problems are worth solving and what their profit margins should be allowed to be
you have, i am sure, noticed that many more companies today run their mission-critical systems on linux than on solaris, though microsoft is still in the running and even in the lead in some areas
Maybe the problem is that the company or companies you have in mind operate mostly in the service business and are relatively recent - i.e. dating back to the 90s at most (e.g. Airbnb).
I sincerely doubt that for automotive/manufacturing, utilities or travel (the three industries I have most experience with) linux (or any specific IT technology) has ever made the difference between surviving and being eaten alive by competitors.
In that universe IT services are a commodity. Just like electrical power: you cannot operate an automotive company without electrical power, but buying it from Utility Company A instead of Utility Company B does not matter at all.
For sure when an entire industry adopts a completely new technology (e.g.: IOT Shipping) they will go for whatever seems to provide the best solution for their specific case. But their bread and butter stuff (ERP, CRM, Finance...) will either be some custom legacy app (or more probably a hodge-podge of balkanized legacy apps) or will be some customized behemoth like SAP, Oracle Finance, Microsoft Dynamics.
Both SAP and Oracle products can run on Linux, granted. But in that case I think it will be a vendor-backed version of Linux, so the support will come from the same entity that provided the application stack.
it'll be interesting to see what happens, but my perspective is pretty different from yours, which is surprising to me since we're talking about an area you have intimate and deep knowledge of. i'll explain my perspective here, in the knowledge that probably the reason for our disagreement is largely things you do know and i don't, so i fully expect this to make me look very foolish. hopefully you'll be kind enough to correct my ignorance, impressive though it may be
information technology is pretty important to travel and to logistics in general. remember that ibm tpf was originally the 'airline control program', and before that, sabre, developed in collaboration with aa. and travel and logistics planning are of course among the most economically significant uses of mixed integer linear programming.
the key question is, i think, whether operational excellence in it operations is an economically significant competitive advantage. will customers switch airlines (or hotel chains, or car rental companies, or travel agencies) because of corporate incompetence at operating their data centers? how would they even know if their airline or hotel chain or travel agency sucks at resolving operational problems, has colossal security breaches due to utter incompetence, and struggles to roll out new it-enabled features?
well, we can start with travel agencies. most travel agencies have already gone out of business. orbitz, expedia, and hipmunk ate their lunch. priceline came in and obliterated the discount special ticket market, growing into booking.com. all of these so-called online travel agencies live and die by their computer systems, and large parts of their workforce consist of programmers
twice in 02022, british airways had a not-quite-systemwide outage due to it incompetence: https://www.datacenterdynamics.com/en/news/british-airways-i... and most ominously the reporting included the phrase, 'BA has a long history of outages that have impacted passengers.'
mckinsey has a list of recent airline problems due to it incompetence that have affected customers, including in some cases not only delayed flights but leaks of private data customers are legally required to provide to airlines: https://www.mckinsey.com/~/media/McKinsey/Industries/Travel%... (but unfortunately they have carefully purged it of details that would make each case independently verifiable)
so i would say that specifically in the case of travel, incompetence at information technology operations is a significant public relations problem already; it regularly makes national and world news. and we should expect this to get much worse before it gets better. today the saving grace is that no airline currently has reasonably competent information technology operations, so there's no competitor eating southwest's lunch. but when your incompetence at running your computer systems loses you a billion dollars here and a billion dollars there, sooner or later it starts to add up to real money
with respect to manufacturing, i think it's well known that jit supply chain management is extremely demanding on it operations, and since covid began, supply-chain disruption has been an enormous stress on the profitability of manufacturing operations—in particular, in the automotive sector
more broadly, though, increasing automation of manufacturing is a crucial competitive advantage; to take one example, today jlcpcb or pcbway will sell you custom pcb manufacturing services for fifty dollars (or two dollars as a promotional offer) that would cost you a thousand dollars and up from flextronics. and now they're moving into cnc machining, as well, and of course service bureaus like shapeways have been around for a while, gradually moving up the value chain. all of this is crucially dependent on the operational stability of their web sites and their resistance to data breaches and ransomware, but so far they're doing a lot better than the airlines
this is not just a big challenge for traditional job shops, who are at risk of being ghettoized into a constantly shrinking super-premium market as disruptive innovation eats them from below; it's also a huge opportunity for new kinds of products that depend on the low-cost availability of such highly automated custom manufacturing for their profitability
in a related development, many high-value, highly complex products manufactured using traditional techniques—most obviously mechanical wristwatches, cameras, and various kinds of on-orbit systems, but also timers, governors, carburetors, power transmission systems, internal combustion engines, and now even lightswitches and lightbulbs—are being replaced by mechanically much simpler systems which instead rely heavily on embedded software and, in some cases, centralized cloud services and much more highly automated manufacturing
having your car start reliably, and your steering and braking systems work reliably, and being able to turn on the heater without taking your eyes off the road, hasn't been much of a competitive advantage for decades; but, as much of the unreliability of those systems starts to stem from software incompetence instead of incompetence at hydraulics and gears, we should start to see automotive suppliers competing not just on how efficient their inventory management is but on how reliable and featureful and usable their firmware is. at the most prosaic level, if the bluetooth audio only works half the time you get into your friend's chevrolet, you're more likely to buy a hyundai
utilities, though, i agree. that's because utilities are regulated monopolies, so it doesn't matter how incompetent they are; relatively few companies and fewer people are going to move out of texas because ercot and its member utilities can't mitigate even the most glaring risks to system stability. pg&e might be an exception, because its sky-high energy prices are making california heavy industry as a whole economically uncompetitive on the world scene, but pg&e isn't going to go bankrupt; its customers are
fundamentally, though, in the 20th century companies were operated by managers, and in the 21st century, increasingly, they are operated by computers—as are physical products. outsourcing your computing makes as much sense as outsourcing your management
You make a long list of things that you probably had to google, but you still seem to lack any actual, hands-on experience of how things actually work and, most importantly, of why systems like SABRE take forever to be replaced. Trust me: it was not because of Linux or the lack of it.
Most importantly, though, you keep missing the forest for the trees: yes, modern cars have more and more electronics in them, and this most probably runs on customized, hardened Linux or some other modern OS. And they need constant communication with cloud-based services etc. etc.
This applies also to fridges, TVs and so on.
My point is that IN ORDER TO PRODUCE THE CAR, FRIDGE OR TV the manufacturing industry still uses for the most part legacy systems, and the cost to replace these with newly developed applications (that for some reason are either available only on Linux or "work better" on Linux) has never been appealing enough to justify the switchover.
[Also, please stop quoting buzzwords at me: "JIT supply chain management" is what I was working on when I encountered the aforementioned problem on DEC/VAX. The application was written in COBOL, but it was the first project where we adopted Oracle (Version 6, I believe) instead of a hierarchical DBMS. The year was 1991 or 1992, and the company I was working for was definitely a late adopter.]
When companies producing TVs switched to more modern, larger screens (let's say LCD/LED)... they just changed the BOM of the new models in their own heavily customized SAP instance. Maybe they had to add a couple of fields to a table because of new characteristics they needed to track and that were not present on older models.
They probably revamped their production lines, for sure. They had to hire younger engineers to develop or productize these new technologies.
This still did not make a dent in the large pile of legacy code that was used to manage the company as a whole.
Even when the PRODUCT is totally, radically new, the systems that manage its production (and sale, and post sale support) do not need to follow suit.
You still need to order components from suppliers, schedule the production, deliver the completed product to the customer or to the reseller, and possibly keep track of where something was sold/sent in case they need to recall a faulty batch or the police need to know about it (a common requirement for vehicles, for example).
The only exceptions to this simple rule are companies like Tesla, i.e. someone who creates a completely new manufacturing company from scratch.
Of course, these are outliers, but they are also the only ones that don't already have an enormous amount of resources invested in systems created 25+ years ago.
The other exceptions, of course, are companies who do not produce or handle anything physical (remember when I mentioned Airbnb?).
These, too, can start from scratch, and maybe for these the technology makes a difference. On the other hand, a startup cannot really afford to pay the exorbitant fees requested by Oracle, so they find it much more convenient to use Postgres or some other open-source product.
More power to them. All this does not change my original viewpoint:
IT for mature companies is exactly like any other utility.
If you need fuel for your steel mill furnace, changing provider every 3 months might help you save a few thousand bucks (because you use so much of the stuff that even a sub-cent decrease in price will be noticeable)... but by doing so you will probably have to pay 10 times that just to guarantee that the switchover to the new supplier goes off without a glitch (like having your gas supply cut off a day or two before the new one starts working).
If you need electrical power you may consider putting solar panels on the roof of your plants... just like you may want to put a REST interface in front of some of your stored procedures.
But you will still depend on the power company for 99% of your energy, and you will still have to maintain your stored procedures, because if you need to change something there, no amount of JSON will solve the problem.
This is the gist of my argument, if it still sounds unconvincing to you it may just be because I worked most of my career server side as a system integration expert for large, mature companies.
Maybe you have a different background - companies established after the web was born are surely a completely different beast, and my experience most probably would not apply, but as long as we are talking of anything founded before 1990 I am pretty sure that inertia is the prevalent force that shaped their IT landscape.
i can't find much to disagree with in your comment. i wish i could disagree with the snarky comments about my lack of expertise, but unfortunately those are on target too :)
but it sounds like you think some things about my previous comment are wrong; i just can't tell what any of them are
Sorry: I checked your CV and I can assure you that your skills as a developer are at least 10x what I had even at the top of my career.
I type mostly from my phone so I am often hasty (and make lots of typos).
I assure you that by what maybe looked like "lack of expertise" I meant "less experience dealing with the messy underbelly of large, mature companies".
while i appreciate the groundless flattery, no apologies are necessary; whatever effects the tone may have on my ego are less important than your generosity in sharing your experience
> it was a problem with an Oracle product, and took a bit over one week with higher and higher gurus being parachuted in by Oracle Europe.
From my understanding that was basically Oracle's business model back in the day (or the model of a lot of employees). Extensive consultant billable hours for maintenance and integration after you buy it.
I never saw that anywhere I was, but every system will have its share of issues.
for sure.
Something similar happened to a client a couple of years ago.
They wanted to set up their "devops pipeline" on AWS.
First they tried to solve it in house.
After two months they gave up and brought in an AWS specialist.
After two weeks, we had two more AWS specialists, including one
really expensive one.
1.5 months later we had the pipeline going, but fragile as hell.
I have not been able to piece together how it could have possibly
taken so long.
I did not work on this part of the project myself at all.
Given that the entire back-end was in C#, with VS project configurations
and the whole bit, it would have taken less than a week on Azure, if you
kept some of the presets.
But yeah, Azure CI/CD can suck as well, and it's less configurable.
Which is a curse or a benefit depending on your standpoint.
In the 90s I worked across the aisle from our AS/400 dev - after an office power outage we’d spend hours running fsck on our unixen and he’d take a long lunch. Every time.
As someone who works heavily in infra, months of uptime is absolutely the norm. If you find yourself having to reboot servers or they are crashing, you're doing something wrong.
Some of that reliability is just the homogeneous software ecosystem, where most applications developed for the AS/400 use roughly the same stack for "app server", "database", and so on. That closed ecosystem means core bugs get ironed out via customer feedback pretty quickly. There's a limited number of unique combinations of software parts and config.
Totally agree; we had one at the minesite I was admin at. Ran JD Edwards ERP on it. Was rock. solid. in a time when NT4 was king.
Also had a Domino mail server (for Lotus Notes).
It was way ahead of its time (at the time) and also super rock solid. Neither ever gave me any trouble... even when the Domino server ran out of disk space, it never died or got corrupted. It just let me know, and when I fixed it, it carried on like there was no issue.
> There were some drawbacks, but Id love to have one to play with.
One of the shops I worked at had an AS/400 and related equipment sitting on a pallet by the elevator bank. I could have had it for free.
Drawbacks? I had no way to transport a pallet of computer equipment. And IIRC it required 3 phase power so care and feeding would have been well beyond my means. (But I still have and occasionally use the IBM Model M that they were going to discard and another time I carried a retired Sun pizza box home on the train.)
> IF something did go wrong, they would at times call IBM and ask for service before the customer had any knowledge of an issue.
I always assumed their call home service was triggered by SNMP but may be wrong? Regardless, IBM exposed a lot of metrics via SNMP so it was always an easy way to query it for metrics and/or accept traps from their devices/OS’s for failures
You don't understand where the AS/400's real power comes from: the DB is integrated directly into the filesystem, so you optimize your work for how the whole system has been designed and get massive benefits and robustness.
The performance difference between naive junior code and heavily optimized complex queries, even on plain Oracle, can easily be 1000x; just ask any data-warehouse guy (I've seen it a few times myself, even though I don't do warehouses). For the AS/400 you can maybe add another zero in extreme cases.
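Much of that naive-vs-optimized gap comes down to access paths rather than raw CPU. A minimal, hypothetical sketch (SQLite standing in for any SQL engine; the table and data are invented for illustration) of how the same query goes from a full table scan to an index lookup:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(i, f"cust{i % 100}", i * 1.5) for i in range(10_000)])

query = "SELECT * FROM orders WHERE customer = 'cust42'"

# Without an index the planner has no choice but a full table scan.
plan_before = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][3]

# Give the engine the access path the workload needs and the plan changes.
con.execute("CREATE INDEX idx_customer ON orders(customer)")
plan_after = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][3]

print(plan_before)  # a SCAN of the whole table
print(plan_after)   # a SEARCH using idx_customer
```

The same principle is what the integrated DB on the AS/400 exploits: when the data layout matches how the system was designed to be used, the engine barely has to work.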
Yes, if you end up doing things wrong and expect that CPU to do some heavy math, then it will be dusted quickly. Otherwise, not so much.
But this goes against most modern 'SV principles', so I don't expect much love from the younger generations here. Businesses love it, though, and secretly wish all IT could be as reliable and predictable, although that ship sailed a long time ago. One of those 'they don't make them like they used to' things.
I was part of the project that moved Nintendo’s inventory management system from as/400 to SAP HANA in 2015.
We had massive queries that ran in seconds which HANA, with millions blown on consultants, just couldn't complete in time (nightly billing runs).
The solution in the end was that a lot of complexity and decades of cleverness in adjusting to specific patterns in customer behavior were thrown out and replaced by “the consultant will just adjust that every week by hand”.
Also - I was soooo much faster with a num pad and the terminal than clicking around the SAP GUI. Ultimately part of why I left there.
Microsoft completed a very successful SAP implementation, on SQL Server and Windows. After two failed attempts. I am probably one of the few people who worked on all 3 tries. And prior to that they had a few AS400s running the business.
Because you only hear of the failures. Visit the SAP homepage to get an idea of who is using it; all of those companies implemented it successfully at one point.
OP isn't saying the SAP implementation wasn't successful, just that it was expensive (no surprise) and some things didn't work as before (no surprise either).
I love their design architecture: a full OS that has been bytecode-based (aside from the kernel infrastructure) since 1988, while we keep hearing from those SV principles how WASM is going to change the world.
... and AoT-compiled at installation time! We didn't get this in mainstream software until Android Runtime started AoT-compiling Dalvik bytecode. As I've mentioned elsewhere, Android Runtime shows that AoT compilation doesn't prevent you from dynamically re-optimizing your binaries based on profiling feedback.
We kind of did in Windows/.NET with NGEN, and Windows Phone 8.x and 10, before Android started doing it.
Naturally NGEN has the caveat of only being good enough for faster startup, and of requiring strong-named binaries (aka signed DLLs), which meant not everyone adopted it.
Windows 8.x adopted Bartok from Singularity, where applications would be precompiled in the store and linked on-device at installation time.
Windows 10 moved full AOT compilation into the store when downloading into specific devices.
Note that full AOT on Android was only on versions 5 and 6; from version 7 onwards it is a mix of interpretation, JIT and AOT, which is nonetheless quite cool.
Why did Android go from full AOT in 5.0 back to a mix of the two? I remember when AOT was "hyped" (not really, but talked about) for Lollipop, but I never heard why they went for a mix instead.
I am very aware that an instance of db2 is deeply embedded into the OS, which is designed to be easily maintained. I deal with one regularly at another corporate site.
You might also find it interesting to consider that the CPU was likely made by Global Foundries (assuming that their arrangement with IBM is still in place).
I am still trying to get my coworkers excited about VMS on x86; our VAX users are completely uninterested.
I came into contact with z/OS, VSAM and the like, but I couldn't see how they did relational queries (joins); all I remember is that every file is row-based with structured columns that don't require parsing (and language-integrated in the case of COBOL). What am I missing?
ECC beyond SECDED, processor instruction retry, CRC on all fabrics, parts of which can fail with the system degrading to half-bandwidth. Hot-swap and/or redundant everything, including the LCD panel on the op panel.
I'm actually curious what type of RAS features EPYC and Xeon have, and hope somebody can link to the info.
I use AS/400 in 2024, though not that much anymore because we only have a few things left in there that haven't been migrated. What we have gone to is less reliable and getting data out of it is always a herculean effort that requires a bunch of service tickets and meetings.
One of the great features of AS/400 is I think shift+esc, which allows you to quickly view the list of tables (or files as they call them) that are being used to populate the current screen. This should be a standard function for Tableau/Power BI workbooks that have live database connections.
I have a love-hate relationship with it. It's like an old pickup with 1M miles that refuses to die.
How do you use that info? Validate that it’s using the correct data, or as info when writing a new app? I often wished I had the same when building something on top of sap/salesforce/any other app that would show the source of every field on a page, ideally also showing the api that would give that data.
When people say "I want the <insert metric> from G42" with G42 being a certain screen, it's helpful to be able to quickly see what all is populating that screen. It's not exact but over the course of 30 years a lot of tables can get created and 7-8 character limits on table and field names doesn't help.
I am not sure if it is a requirement when creating a table to have descriptions but they are all there in my org. So go to the screen to get the tables and then join it with a "give me the list of all tables, table descriptions, columns, column descriptions" SQL query. EZ.
After seeing how useful these descriptions are, I firmly believe that every organization should make them a requirement even if the RDB itself does not. Just a simple form of documentation at the time of creation that provides context while still giving you the ability to enforce rigid naming structures.
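On IBM i that "list all tables, descriptions, columns" query would go against catalog views in QSYS2 (SYSTABLES/SYSCOLUMNS, with text columns carrying the descriptions; the exact names may vary by release, so treat them as assumptions). Here is the same catalog-dump idea sketched against SQLite's catalog so it actually runs, with invented AS/400-style short file names:

```python
import sqlite3

# Invented tables standing in for terse 7-8 character AS/400 file names.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ORDHDR (ORDNO INTEGER, CUSTNO INTEGER, ORDDT TEXT)")
con.execute("CREATE TABLE ORDDTL (ORDNO INTEGER, LINENO INTEGER, ITEMNO TEXT)")

# "Give me the list of all tables and their columns" via the catalog,
# the SQLite analogue of joining SYSTABLES to SYSCOLUMNS.
catalog = []
for (table,) in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"):
    for _cid, col, decltype, *_ in con.execute(f"PRAGMA table_info({table})"):
        catalog.append((table, col, decltype))

for row in catalog:
    print(row)
```

Join in the description columns and you have exactly the at-creation documentation being argued for above.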
What I find most interesting from this clip is that, if you just swapped out some of the product names and acronyms, it would largely sound like a technical presentation that could be given today. “Here’s a database, we want to connect it with SQL to another product for visualizing the data, and we’re going to add some automation to take care of business process X.”
Surely things have gotten faster and (in some cases!) more efficient, but we are doing the same thing 30 years later that we were excited about in 1993. In a way that’s both comforting and darkly funny.
Choosing software, prioritizing the right things, and connecting it all is hard. We're not good at it and we have to re-connect the dots every-time we shuffle the deck chairs.
I think I've worked on the problem of "we want our sales guys to get an email whenever the sale they made ships"... an uncountable number of times. Without fail, the first and biggest hurdle is that the customer does not capture or connect their salespeople's email addresses in any kind of reliable way. I'm sure they do when it comes to paying bonuses, but not for sending the email when the thing ships. It makes no sense, it should be easy, but they don't. So we work on it on and on, and then with the next customer, and so on.
And honestly I'm not sure any of these sales folks read the dang email.
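At its core that recurring project is just a join against a mapping nobody maintains. A toy sketch (all names and data invented) of why the "email the rep when it ships" feature stalls on missing addresses rather than on anything technical:

```python
# Toy data standing in for two systems that were never reconciled:
# the order system knows the rep's name, HR knows (some of) the emails.
shipped_orders = [
    {"order": 1001, "rep": "alice"},
    {"order": 1002, "rep": "bob"},
    {"order": 1003, "rep": "carol"},
]
rep_emails = {"alice": "alice@example.com"}  # bob and carol never captured

notified, unmatched = [], []
for order in shipped_orders:
    email = rep_emails.get(order["rep"])
    if email:
        notified.append((order["order"], email))
    else:
        unmatched.append(order["order"])

print(f"would email {len(notified)} reps; "
      f"{len(unmatched)} orders have no address on file")
```

The fix is never the notification code; it's getting the address mapping populated and kept current in the first place.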
The fact that this is server side and data storage probably contributes to this. Consumer grade client devices in 1993 were large and bulky, didn't even do multitasking or memory protection very well. Now they're cheap, small, ubiquitous, have lots of very high level programming interfaces.
I work in the auto industry. Cars have changed so much but at the end of the day they're still just internal combustion engines that spin wheels. I suppose EVs represent true innovation but I'm still dubious on their future.
> The 1916 Type 53 was the first car to use the same control layout as modern automobiles- with the gear lever and hand brake in the middle of the front two seats, a key started ignition, and three pedals for the clutch, brake and throttle in the modern order.
I've seen a clip on Top Gear where they drove some early cars and the controls were completely different like levers for acceleration and braking.
How long would it take someone familiar with this 108 year old Cadillac to learn how to drive a modern car from point A to point B? 1 minute?
Driving a Tesla is much easier than driving the 1916 Cadillac. The biggest surprise to this ancient driver would be the aggressive regenerative braking.
Now that internal-combustion-powered cars are down to less than 20% of new car sales in quite a few countries, and have been for several years, I think we can safely say that the EV is not a fad and that fossil technology is going the same way as the steam engine did in the 60s. The modern electric motor is a superior technology in almost every way. The battery, however, is just good enough, and for the replacement to be complete, better battery technology will be needed. For those last 15-20%, the energy density of Li-ion is just not good enough.
Honestly, the world was simpler then (with fewer layers of abstraction separating the database from the presentation) and I suspect the majority of the time it was far more efficient then than now.
... speaking as someone who was in the job of enterprise BI where data sources were primarily Progress databases on AS400, back from around 2000-2010, when we finally migrated to a Linux / PostgreSQL stack.
I had a summer internship in high school with an AS/400 application development shop. I found the machine and its development environment annoying and unpleasant compared to Unix, but the support from IBM was incredible. Absurdly bureaucratic, but also so thorough and detailed and accessible.
I still have this memory that IBM sent out an Engineering Change for the AS/400 that consisted of a twist tie to help customers who had purchased an Ethernet card for it coil their Ethernet cables more reliably. (The twist tie of course then having a specific IBM part number.) I would love to be able to substantiate this memory.
IBM was also seemingly very open to supporting new technologies, both hardware and software, on the AS/400, including some that were invented decades after the machine was introduced. Usually for a fee, of course!
That twist-tie ECO process sounds aerospace-grade.
I saw things like that from IBM, just from their later Power AIX workstations.
For mission-critical computing companies, there was lots of process intended not to have anything slip through the cracks.
The first memory that comes to mind was actually DEC (or was it HP?), who, to send a single sheet of paper (e.g., for a license key), would routinely use an entire shipping box. Paper would be shrink-wrapped or poly-bagged, with a sheet of cardboard. Plus a packing list, itemizing not only that one sheet of paper, but also itemizing all the shipping supplies to be used.
Not very efficient in some sense, but if it avoided a single mission-critical incident for a customer, I suppose it was worthwhile.
I supported core banking software and did some RPG II development on the AS/400 in the early 1990s. That system lives on in banking and has since been rebranded to iSeries and then Power (and the OS/400 operating system rebranded to "i").
Chances are if you bank in a community or regional bank, the core banking software is run on Power hardware. Long live the AS/400!
It feels like few people stay at one company for more than a few years these days, but then conversely, it also feels like companies are set up in a way that makes most people replaceable.
Note that my take is biased: I've been a "consultant" for most of my career, which is a glorified temp, and you end up in projects and organizations that hire temps. I tried an old-fashioned product company once, but it wasn't for me (I had nothing in common with my colleagues, who were 20-30 years my senior, and they and the company were happy just plodding along until retirement).
When I moved to the PNW, it was for Amazon, and during my interview loop, I asked everyone, "How long have you been at the company?" They all had pretty much the same answer: three months, five months, eight months, and some that had been there for a few years.
But I decided to get out of Dodge and interviewed at Microsoft, and I asked everyone the same question. The responses were shocking to me: five years, ten years, 12 years, 22 years. One person even told me that he hadn't been with the company very long: only 18 years. And he wasn't being coy.
I've been at Microsoft for more than ten years, and I still feel like the new guy. I started at the tail end of the Ballmer days, and I'm sure it was a real grind back then, but I'm glad to see a company that -- in my experience -- treats people well enough that they'll stick around.
In complex technical environments, it often doesn’t make sense to have an in-depth expert assigned to every system. You get a consultant in to set it all up and get the gears moving, but you can often get less experienced, or more versatile, people in to keep it running.
Also, the demand for skills has been silly for the past 20-odd years, so there's less incentive to stick around, other than money. Loyalty rarely pays off, unless you're talking shares.
Microsoft is one of the few companies that seems to do a good job of letting employees rise through the ranks from the bottom to the top. Most of the Partners I have met were cultivated internally. Facebook is also like this, with the added bonus of allowing much faster progression for high performers than most other companies (I know someone who became an E6 Engineering Manager/Tech Lead three years out of college. I don't think it was necessarily for "bullshit" either; his work was very fundamental and important to the company).
But this is rather rare and most companies have a soft ceiling for growth internally. At Google for example, for years they have been filling most Director positions externally, and so most employees find it very hard to get there and progress past that. Progression is also often subject to norms that make the sheer number of promotions required to make it high up in the company impossible.
I believe that if a company treats their employees with respect (automatically give them cost-of-living raises without them having to ask for it, for example) and like humans instead of cogs in a machine, keep them intellectually challenged and fulfilled, and give them a sense of meaning and purpose, then they're likely to stay much longer. If you're doing that then they're giving your customers the same attention. So maybe you have a thriving business that employees will stay at for their entire career.
Contrasting that with my experience I can name one company in 30 years that did that. As is so often the case in our society, everybody is doing it backwards and wondering why everything is broken.
That seems: a) far too broad - if an HR assistant can't be replaced at a startup, why not? and b) a calculated risk - for certain things key people are just necessary to begin with, and if they leave it causes a massive issue. Startups aren't choosing that issue; they are just hoping it doesn't materialise until they have the normal way of doing things in place.
Also, the best strategy for the last 10+ years was to move around frequently: change jobs every year (or two, if you can change projects inside one job). As a result you end up with much more salary and experience.
But everyone will bullshit you into staying as long as possible with little salary rise.
With the current market, though, it is better to stay longer if you have an "okay" project.
I dunno exactly what a "technical marketing manager" is, but Nadella had a master's in computer science at that point, so I doubt he was just a sales guy.
> I dunno exactly what a "technical marketing manager" is
A TMM/TME is a hybrid support role consisting of functions of a Dev Advocate, PM, and PMM.
While PMM would own non-technical GTM and enablement, the TMM/TME would own technical GTM and enablement.
It's an older role common in 90s-era companies like Cisco, Palo Alto Networks, etc., though it has been renamed to Technical Product Management now in most orgs (e.g., Lacework, Netskope, etc.)
Technical chops make the solutioning and customer relationship part easier.
Whatever tech expertise Nadella has, his EQ and ability to navigate the toxic and vicious politics of Microsoft is his superpower. They aren’t as in your face hostile as Oracle, but they are cutthroat.
Having these kind of roles with a strong technical background is typical for Microsoft. They have a parallel management and technical hierarchy all the way to the top. Everyone should copy this, so much talent is lost by good people going into management where they often aren't even that good.
It sounds like what a police officer described in an AMA; it's possible to move from uniform to plainclothes detective, or stay in uniform on patrol, for one's entire career without penalty to salary. <https://np.reddit.com/r/SouthLAndTV/comments/1di5yq/i_am_cop...>
That's what struck me, too. Here's a guy who looks and talks like some rando young, really smart technical dude, like the hundreds I've met over my career. He doesn't have those Ivy League mannerisms; he's not some tweed-blazer-wearing, popped-collar McKinsey banking consultant of the kind you always expect to end up as your C-level execs. This guy doing the demo could have been you or me. Yet here I still am in my late 40s, still an IC worker bee at the bottom of the org chart, and here he is running the most valuable company in the world. How can you not believe in randomness?
In my opinion the most talented don't always succeed in the wildest ways but they do succeed to a pretty large extent. In an alternative timeline bill gates might not be the richest man heading up one of the largest companies but he would definitely have been a very successful multimillionaire banker, lawyer etc...
This is an insightful comment. I feel the same. An impossible level of success is most likely random, but a decent level of success is achievable with sustained effort in one's particular domain.
Personally I find it depressing. He could have retired years ago to work on a passion project. Why is he still dealing with the soul-sucking internal politics of a mega-corp? Unless he somehow enjoys walking on eggshells all day, it doesn't make sense to me.
I was surprised Sun/Oracle's JVM (or Apple, after their 3rd architecture migration) never took a page from AS/400's TIMI (Technology Independent Machine Interface) and compile an architecture-independent representation to native code at installation time.
As the Android Runtime later demonstrated, nothing prevents you from distributing an architecture-independent representation, AoT-compiling to native code at installation time, instrumenting the native code, and then re-optimizing at runtime and/or in a background batch process if your instrumentation statistics deviate substantially from what was available the last time you re-optimized.
Apart from the extra disk space for both bytecode and native representations, something like TIMI allows for the best of both worlds as far as AoT and JIT. (It's also nice that TIMI doesn't require a garbage collector to be running.)
I'm not aware of modern AS/400 dynamically re-optimizing binaries, but doing so wouldn't break anything. Given the conservative nature of mainframe users, I imagine dynamic re-compilation would need to be an opt-in feature.
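The install-time-AoT-plus-re-optimization loop described above can be sketched roughly as follows. This is a hypothetical toy model, not real ART or TIMI code; every name in it (`Package`, `aot_compile`, `maybe_reoptimize`, the drift threshold) is invented for illustration.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Package:
    name: str
    bytecode: bytes                 # architecture-independent representation
    native: bytes | None = None     # AoT-compiled native code
    profile: dict = field(default_factory=dict)  # stats used for last compile

def aot_compile(bytecode: bytes, profile: dict) -> bytes:
    # Stand-in for a real compiler; here we just tag the "native" output
    # with the profile it was optimized against.
    return bytecode + repr(sorted(profile.items())).encode()

def install(pkg: Package) -> None:
    # Compile once at installation time, before first run.
    pkg.native = aot_compile(pkg.bytecode, pkg.profile)

def maybe_reoptimize(pkg: Package, runtime_stats: dict,
                     threshold: float = 0.25) -> bool:
    # Re-compile only if observed hot-path counts deviate substantially
    # from the profile used for the last compilation.
    keys = set(pkg.profile) | set(runtime_stats)
    drift = sum(abs(pkg.profile.get(k, 0) - runtime_stats.get(k, 0))
                for k in keys)
    total = sum(runtime_stats.values()) or 1
    if drift / total > threshold:
        pkg.profile = dict(runtime_stats)
        pkg.native = aot_compile(pkg.bytecode, pkg.profile)
        return True
    return False

pkg = Package("demo", b"BYTECODE")
install(pkg)
print(maybe_reoptimize(pkg, {"hot_loop": 100}))  # True: big drift, recompiled
print(maybe_reoptimize(pkg, {"hot_loop": 100}))  # False: profile now matches
```

The point of the sketch is the shape of the pipeline, not the details: bytecode is the distribution format, native code is a cache derived from it, and a background check can always throw the cache away and recompile, which is why such a scheme would not break existing binaries.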
> I was surprised Sun/Oracle's JVM (or Apple, after their 3rd architecture migration) never took a page from AS/400's TIMI (Technology Independent Machine Interface) and compile an architecture-independent representation to native code at installation time.
But, I think "TIMI" has significantly less value today than it once did. Due to PASE (the AIX compatibility environment), there is ever more code on contemporary IBM i systems which runs outside of the MI bytecode layer. Shifting IBM i to something other than POWER would require recompiling all that code from source.
Beyond recompiling, wouldn't porting "i" to another architecture, say x86-64 or ARM64, require adding some sort of PowerPC AS tags-active mode - instructions to set the "this is a valid pointer" bits?
If I'm understanding correctly, Apple actually did do something similar with Bitcode. The developer would submit one app as serialized LLVM intermediate representation, then Apple would recompile it and serve the correct version for your architecture in the App Store.
If you're really into it, there are usually a few on eBay. A few years back I almost picked one up for my dad as a fun gift; he worked for IBM for 25 years and much of the later years were focused on the AS/400. He spoke quite fondly of it.
Fair comment. I was thinking of the original AS/400 hardware, but AS/400 is used to refer to the later PowerPC versions (which I guess Nadella would have been using here) and IBM i.
Single color polo, white undershirt, khaki pants, leather walking shoes. It was the same at a lot of tech places throughout the 90s and I remember it well. I’m not sure how that style started or where, it might have come out of Microsoft and Seattle.
I'm always so impressed with how long MS is able to keep employees. I see so many 10-20+ year vets. I know a ton of people who have left and gone back as well.
I think it's a very millennial thing, but I've barely gone past three years anywhere. For half our careers our salaries were shaped by 2008, and you had to job hop to get more money.
Microsoft had a lot of AS/400s when I worked there in 1993. I believe most of the core infrastructure ran on it.
On my last day there before returning to Canada in August 1993, I vividly remember that Bill Gates had a general meeting where he was talking about how IBM was doing poorly and that he was thinking of buying it, but not the whole company, just the AS/400 division (obviously, that didn't happen).
I've never worked directly with an AS/400. I do remember doing a school internship when I was 15 or so with our town's local IT shop and we went to a customer once and replaced the backup tapes. Still stuck in my head as something that was impressive at the time.
I don’t have a lot of AS/400 experience but I have done my share of swearing at Excel. I’m curious to see the reverse demonstration. Move data from Excel to the AS/400 and see all the options that then become available.
We are replacing our 400 with Azure. We are moving to D365 F&O from the ERP that we have been running for the last thirty years. It is in the best interest of the company, but I have to admit that I will be a little sad the day we power it down for the last time.
> Fair comment. I was thinking of the original AS/400 hardware, but AS/400 is used to refer to the later PowerPC versions (which I guess Nadella would have been using here) and IBM i.
When I was there (part of the project), it all went to SAP R/3 on SQL Server and Windows. But I left in 2008 and I'm sure they have changed a few things since then :)
I'm not sure I ever saw a height adjustable monitor till LCD screens started appearing.
My first desktop LCD was a Sony and did not height adjust.
Some people had furniture that raised the monitor up. Other people had the monitor stacked on top of the computer itself, and the "stack of books" thing was around.
In the 1980s my system "of choice" was an Atari ST, and my school's computer was a VAX 11/730 running VMS 4.x ... later two of them in a DECnet - had some fun with them :))
But a friend of mine was an admin for an IBM AS/400, and out of curiosity I visited him from time to time and "looked over his shoulder", as we say in German, during updates and similar tasks ...
It was a nice and very stable system back then ...
It had a small HDD - can't remember its size - and the updates came on 8-inch floppies, if I remember correctly ... 4 or 5 of them at a time, around 120 K each ... laughable by today's standards, but AFAIK pretty common for a small to medium-sized business back then - a travel agency.
In his memoir, Making It So, Patrick Stewart (Jean-Luc Picard, Professor X, etc.) said that he was noticeably balding at 17 and was more or less completely bald by 19. Stewart may be an extreme case, but a sizeable fraction of men lose a substantial amount of their hair before 26.
yeah it's kinda weird that we associate baldness with age, most of the people that get it genetically seem to start balding at a young age. I have a friend that started losing it in his teens and had lost enough to constantly wear hats by college.
I've met a few people who had a clearly visible bald spot at that age (including my father, fortunately I got my hair from my mother). Not many, but it happens.