DevOps Isn't Dead, but It's Not in Great Health Either (thenewstack.io)
67 points by milkglass on July 6, 2024 | hide | past | favorite | 72 comments


I like DevOps, but I don't like being on call 24x7, so I tell my clients I am a software developer, not an ops person. DevOps was sold as a way to stop dev and ops teams blaming each other for not shipping software. It was true at the time that devs did not understand hardware and networking, and ops did not particularly understand software. We got nice things like Docker and k8s, but the core issues have not gone away: devs still don't understand hardware and networking, and ops are an almost extinct species. We are re-learning the hard way that certain performance problems are not solved by throwing more hardware at them, and that devs will always write Helm charts for convenience, not security. We are also learning that the complexity of networking and backend design has not gone away; we just got more modern tools to describe it. One other unintended negative consequence of misunderstanding DevOps is the proliferation of "full-stack engineers", which is basically an ops person, a DBA, a secops, a backend dev, a frontend dev, and a website designer rolled into one. What we have today is a situation pretty much the same as it was before the arrival of DevOps... devs thinking ops are useless, ops thinking devs are no good, and... both thinking the others have no sense of style.


> ops are an almost extinct species.

This is infuriating at times. We ingest a lot of data from our clients' 3rd-party systems for things like KPI analysis. The devs¹ provided various APIs for submitting data, and we can read from various others, but good old reliable dumping-CSV-files-to-us-via-SFTP is still what most clients want to do.

I have no problem with this², BUT the number of times I end up talking to clients (sometimes large banking organisations) that don't have anyone available who actually understands SFTP/SSH amazes me. We often resort to password-based auth because they simply don't get private/public keys (or claim their other infrastructure just doesn't work with them).

----

[1] I used to be a full-stacker myself, but I've let large parts of my skillset rot, so except when working on legacy issues I'm a more specialised database/data-processing/infrastructure fellow these days

[2] other than, of course, dealing with bad CSV output implementations⁴, or just all the variations in how to encode things like quotes, EOLs in long strings, and so forth. JSON³ is starting to get some traction as an alternative, which is handy because, being more modern, it is more rigidly designed compared to the mess that CSV is after decades of implementations being slapped together in an it'll-do-until-it-doesn't manner.

[3] Don't expect structured data though: the clients & 3rd parties I deal with send the same simple tabular data they would send as CSV, with multiple files for us to join together later when there is more complex structure, just using the more bloated format⁵ for the gain in reduced encoding confusion.

[4] not quoting values containing the separator character or otherwise escaping it, not dealing with existing quotes when values do get quoted, etc. - if you'd be surprised to find CSV is far from a solved problem today, then prepare to be very surprised!

[5] Don't get me wrong when I mention the format bloat: this _is_ a good trade for easing the other issues. Bandwidth & storage (or compute resource for compression) are more than cheap & fast enough to make it so.
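Footnote 4's failure modes are easy to demonstrate. A minimal Python sketch (the data values are made up) of what naive comma-joining does, versus a writer that actually quotes and escapes:

```python
import csv
import io

# Naive writers that just join fields with commas produce ambiguous rows
# when a value contains the separator.
naive = "id,comment\r\n1,hello, world\r\n"
rows = list(csv.reader(io.StringIO(naive)))
print(rows[1])   # ['1', 'hello', ' world'] -- the embedded comma split the value

# A real CSV writer quotes the value and doubles any embedded quotes.
buf = io.StringIO()
csv.writer(buf).writerow(["1", 'hello, "world"'])
print(buf.getvalue())   # 1,"hello, ""world"""
```

The second form is unambiguous for any compliant reader; the first silently produces an extra column.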


Why even use SFTP?

Just have them upload blobs to S3 or Azure Storage.

The security is better and both are serverless solutions that cost basically nothing at normal scale.

I seriously don’t understand why people in 2024 insist on reproducing these core services, but badly and at great expense.

PS: Azure Storage is multi-protocol and supports SFTP! https://learn.microsoft.com/en-us/azure/storage/blobs/secure...


> Just have them upload blobs to S3 or Azure Storage.

Not everyone can store their data in cloud services, most likely.

That said, self-hosted S3 compatible solutions like MinIO might be a good choice: https://github.com/minio/minio or maybe SeaweedFS for something with a more permissive license: https://github.com/seaweedfs/seaweedfs


> Not everyone can store their data in cloud

This is a nearly verbatim conversation I’ve had three or four times in the last twelve months:

“Why not just use blob storage for the data ingest?”

“We have security standards for sensitive data; we have to use private links and validated protocols and systems.”

“Okay, what’s step two? Where does the data end up? Where do end users consume it from?”

“Power BI!”

“The local app?”

“No, the online version.”

“You just said you can’t use cloud services like blob storage!”

“We can’t!”

“You’re using Power BI!”

“That’s different!”

“How?”

“Err…”


Similar, though only for one client ATM, and our clients are fine with cloud storage these days as long as everything is in an account properly locked down (bring your own key, etc.) so only their application instances can access it.

At least the data doesn't rest (as far as we know…) in PowerBI.


> That said, self-hosted S3 compatible solutions like MinIO might be a good choice

But then you would have people who can't figure out SSH trying to figure out private S3-compatible infrastructure.


Having walked people through both I’m going to say setting up a client for an S3 clone is easier.


You're simply missing the point.

> I seriously don’t understand why people in 2024 insist in reproducing these core services but badly and at great expense.

Because those users have a flow that they like and don't want to change, regardless of all the pitfalls in that flow. "It works, there is just this one problem. Why don't you invent a solution for this one problem for us? I don't want to use S3, or Azure, or whatever. Just give me an SFTP endpoint I can upload files to." Oh, CSV doesn't have a standardized way of handling escapes? Why don't you support a `# escape=\` or `# doublequote` or `# singlequote` or `# ignore-new-lines-in-the-middle-of-a-string` directive and we will use that.

Heck, it's 2024, and we have a fully automated application deployment service, integrated with source control (GitHub, GitLab, Bitbucket, etc.) and with S3/Azure Storage/GCP Buckets, Dockerfile- and Makefile-aware, and we still have a non-trivial number of users asking "Why can't I just drag and drop my .php or .py files over FileZilla to deploy my code? I don't want to use git or docker or S3." Oh, you can't version our code updates when we just drag 100 files over SFTP? Why don't we update a `version.txt` file when we're done updating, and then you count that as a version?


> Because users have a flow that they like

https://www.bbc.com/news/articles/cx82407j1v3o

Some of these solutions were never good, but built up a momentum of “we’ve always done it this way”.


> Why even use SFTP?

Because that is the infrastructure they have, and what their current arrangements support. We can offer a number of other methods, but if their other systems don't support them and they don't want to pay for upgrades, SFTP it is. Not that I have anything against gobs of data arriving via SFTP for cleaning and importing¹ – it is tried & tested, reliable, and if both sides have a clue² it is easy & pain-free. Much the same set of reasons explains why they still want to send us CSV-formatted data, often Win1252-encoded rather than UTF8, instead of JSON or something else better defined.

> Just have them upload blobs to S3 or Azure Storage.

The data is ending up in Azure storage in our case. But any transfer method still has identification and authentication matters to deal with, and the clients need to understand, and their other software needs to support, the method chosen. This is why SFTP often wins by default: it is a relatively proven technique that people see as secure, and it is commonly supported.

> The security is better and both are serverless solutions that cost basically nothing at normal scale.

The cost to the client is only zero if their other systems already support it. They are very much against anything that implies extra development costs or the cost & effort of assessing and switching to newer systems.

> PS: Azure Storage is multi-protocol and supports SFTP!

SFTP support in Azure is a pretty recent addition, some time in the last year IIRC. We (well, I) had to put together our own relay arrangement some years ago: OpenSSH, appropriately locked down, with blob containers in storage accounts mounted via blobfuse. It works well, in nice cheap VMs/containers.
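For reference, the "appropriately locked down" part of such an OpenSSH relay typically boils down to a handful of sshd_config directives. This is a sketch with hypothetical paths and a hypothetical group name, not the actual setup described above:

```
Match Group sftp-clients
    ChrootDirectory /srv/sftp/%u    # jail each client in their own tree
    ForceCommand internal-sftp      # SFTP only: no shell, no command execution
    AllowTcpForwarding no
    X11Forwarding no
```

The chroot directory must be root-owned and not writable by the client, per sshd's own requirements.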

SFTP support in Azure works out rather expensive for our use case, or any reasonable use case IMO. It needs the hierarchical namespace enabled, which is “only” an additional $27/month per storage account, but then you need to pay for SFTP support by the hour!³⁴ Unless we constrain the clients to sending data at specific times, that comes to >$200 per month, per account. We can't get away with charging our smaller clients, or the bigger ones for that matter, an extra $2,400/yr, nor can we afford to eat that ourselves. See https://azure.microsoft.com/en-gb/pricing/calculator/ for reference.
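A quick back-of-the-envelope check on those figures, using the hourly rate from the footnoted pricing page (treat the numbers as illustrative, not a quote):

```python
# SFTP endpoint billed per enabled hour, plus the hierarchical-namespace add-on.
sftp_rate = 0.30              # $/hour while the SFTP endpoint is enabled
hns_addon = 27                # $/month per storage account

always_on = sftp_rate * 24 * 30 + hns_addon   # 24/7 availability, 30-day month
print(round(always_on))       # 243 -- comfortably over $200/month per account
```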

We could work around the per-account part by stuffing all the data into multiple containers in one account – but that would not fly with some of the requirements financial sector companies understandably have for data at rest, so we would never pass an audit if we did that. Many even want their live and UAT/training/other environments to have completely separate storage accounts so that is potentially more than one lot of up to $2,400 per year per client needing SFTP. And even on one account, $2,400/year to enable 24/7 availability of SFTP transfers is absolutely ridiculous.

----

[1] I'd rather it be cleaned first, but that is a separate issue

[2] This part is, of course, the problem…

[3] $0.30/hour: https://azure.microsoft.com/en-gb/pricing/details/storage/bl...

[4] No shit… https://learn.microsoft.com/en-gb/azure/storage/blobs/secure....


SFTP? That's too modern. We had 3 or 4 vendors that only do FTP via VPN. Also, ops and networking/VPN are in different departments.

Re [1]: I am currently heading down a similar path.


> clients (sometimes large banking organisations) that don't have anyone available who actually understands SFTP/SSH

Oh man, I've lost track of the number of times I was infuriated by this. I especially love the guys pretending to take security seriously only to show up with some obsolete ssh config that doesn't work out of the box on modern systems.


Would it be possible for you to take a step back and describe the problem you are infuriated by again?


Sitting on the phone with people whose job it is to know these things for their organisation, having to explain to them that no, they shouldn't send me the private key, not even (especially not even) if they send the passphrase too, and many other problems.
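For anyone following along, the flow being described is short enough to fit in two commands (the filename is illustrative):

```shell
# Generate a keypair locally; the private half never leaves this machine.
ssh-keygen -t ed25519 -f ./client_key -N '' -q

# Share ONLY the .pub half with the other party -- that is the whole point.
cat ./client_key.pub
```

Sending the private key (with or without its passphrase) defeats the scheme entirely.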

There seems to be a dearth of people for the ops jobs that need doing, hence OP's “ops are an almost extinct species”, so the tasks are thrown around and have to be done by the first person who doesn't duck fast enough. Those of us on the other side of the company interaction boundary end up having to help them with what should be the basics. I'm not their infrastructure team, I don't have access to be able to answer some of the questions they ask about their own infrastructure. I shouldn't even be client facing never mind teaching them how to do their jobs.

The infuriating part (perhaps “frightening” is a better word here) is that there are people managing the secure transmission of data (sometimes sensitive personal data) who do not understand what is going on.


Thank you for taking the time to reply. FWIW, my comment was added a few minutes before your edits, so at the time your post only contained the [1] footnote, and I may have missed other edits.

I guess I was missing the "why" where your post focused on the "what".

Of course when you are working with sensitive data like that it makes a lot of sense to be really careful about security.

I am well aware of the problems with the CSV format. But a lot of the time you cannot really expect clients to be able to download and decode a parquet file, when all they want is to submit a password, download the data, and look at it in Excel.


> download the data and look at it in excel

Oh, that is a whole extra rant! As much as I actually like Excel as a tool for numerous tasks, it gets things annoyingly wrong too.

We've had clients who manually merge or otherwise tinker with data in Excel to feed to us¹, which can cause numerous problems: leading zeros get cut off identifiers that aren't actually numbers but get interpreted as such, and the age-old joke about Excel being a bit of an inv-cel that mistakes things for dates is still very relevant.

Also, Excel (like a number of MS tools) requires UTF8 files to have the initial BOM, which is actually not recommended per the relevant standards²; otherwise it will assume Win1252, with all the text corruption that implies if your data contains accented characters, currency symbols, or anything else outside the 7-bit ASCII character set. Sometimes we see data where this has happened at some point to some of it, but not all, so the file has a mix of correct UTF8 and corrupted UTF8 re-encoded as UTF8. Funfunfun.
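Both failure modes are reproducible in a few lines of Python; "café €10" here is an invented sample string:

```python
original = "café €10"

# Decode UTF-8 bytes as if they were Win1252: the classic mojibake.
garbled = original.encode("utf-8").decode("cp1252")
print(garbled)   # cafÃ© â‚¬10

# Excel wants a BOM on UTF-8 files; Python's utf-8-sig codec adds (or strips) it.
assert original.encode("utf-8-sig") == b"\xef\xbb\xbf" + original.encode("utf-8")
```

Re-encoding the garbled string as UTF-8 again produces exactly the "corrupted UTF8 then encoded as UTF8" mix described above.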

----

[1] in one case because their requirements changed, and they didn't want to pay anyone (us, the company responsible for the system the data was output from, or their own internal IT/processing/other teams) to automate the new data manipulation…

[2] https://en.wikipedia.org/wiki/Byte_order_mark#cite_note-5:~:...


Yeah. I rarely work with Excel. When we moved to google sheets, most of the problems you listed mysteriously disappeared :)


Tell me about it. I'm in the healthcare industry, and hospital systems are way too complex and/or they don't have the proper people to manage them. Getting them to understand SFTP is hard enough, let alone public-key authentication. Additionally, their systems are often so old and out of date that they don't support anything newer than SFTP. On top of that, the regulations they have to follow are overwhelming and completely out of date.

Honestly, I kinda feel sorry for the poor DevOps people in hospitals. It's got to be torture.


DevOps is dying? As in the job market for DevOps is actually shrinking?


I'd rather think that the need for Ops is growing much faster than the supply.

It is primarily a knowledge-intensive profession where you can't just hire untrained people and let them hack together an app, or cannibalize data analysts and make them build some backend.


That implies the reverse! That DevOps is hotter than ever.


No. Ops is back. DevOps has just not worked in practice over the last decade, as clients let go of ops and dumped all the work onto devs. We will be seeing an increase in demand for ops + cloud + k8s skills, which is the right direction to go in. We will also be seeing a significant move away from the cloud, and you need old-school ops skills for that. Not necessarily for deploying, but definitely for configuring servers and networks and debugging performance issues.


Wait. Not every company needs to work like Google or Netflix? (Sorry, couldn't resist.) Ops knowledge is so much in demand that anybody with an actual "DevOps" role has done ops nearly exclusively.

In all seriousness, this is what I am seeing already. Often I see a platform team providing an OpenShift for developers to rule as they want. There is cooperation on some levels, and a technical architect who sets standards.

And most companies don't want to pay the price tag that comes with an actually good DevOps guy anyway.


I don't necessarily think this is a bad thing.

I "enjoy" doing my own ops part of the job because it often feels like I have fewer roadblocks in my way. I build my stuff (or fix my stuff), test it, build some deployable artifact, and deploy it. I do it all on my own schedule, immediately when I'm ready to do each step, and I don't have to wait on someone else.

But if I'm being honest, I have spent so so so many hours dealing with "ops bullshit", that I probably would have been more productive overall (perhaps worse latency, but better throughput) if I could have just focused on the build/fix and test parts, and kicked the final result over to another team to deal with deployment concerns.

I'm still not sure which is better. Maybe neither is objectively better, and it depends on the person and organization. Maybe have separate teams, but make your devs do a tour of duty through the ops team every now and then so they understand why the ops people get mad at them from time to time. And vice versa.

Anyway, I don't buy the CYA/hand-wavy reasons in the article guessing why DevOps is supposedly dying. I suspect it's for the same reason Agile turned out to be a drag on so many developers: people cargo cult and don't really understand why they're doing a particular thing, so they never adapt it properly to their environment.

So the managers bring in consultants, and the consultants just roll out their cookie cutter solution, with their favorite issue tracker, CI/CD product, deployment system, etc., without really understanding the org's needs.

Then the managers get upset, and we get articles like this.


Exactly, dunno what to call it ... but every new concept starts out good, gets crazier and crazier and sometimes it may become good again, but most of the time it just devolves into more crazy.

In one of the orgs I worked in, there was a team which built the basic DevOps framework which the application teams could then use to configure their resources. It was a very sane approach.


Nah, it works like this:

- we get a new methodology from remarkable people who can make it work really well

- some managers/consultants decide to modify it and twist it, reasoning that "it's hard to do well"

- the initial thing is lost in translation, twisted and bent to sell better by consultants or to fit dysfunctional corps

Agile, DevOps, Software Craftsmanship, and testing methodologies all shared the same fate.

The issue is that remarkable people are few and far between. Mediocre and subpar people are plenty.


Yup, it's a cycle. Take "agile", for example.

Kent Beck (of "Extreme Programming" aka "XP" fame) is a joy to read. He thinks really deeply about software, and takes things to interesting extremes. But he's very pragmatic about it.

Then XP becomes the Agile Manifesto. At this point, we're getting a bit silly but it's still an interesting idea.

Then we start talking about paid "agile" consultants teaching big companies. Maybe it's still better than the alternative. And some of the best ideas, like unit testing, seep into the culture. (80s and 90s project planning and testing could be terrible in ways any undergrad could fix today.)

And then, eventually, we have so-called "Scrum" being run by someone who learned about it third hand through a game of telephone. Every truly successful revolution lives long enough to become a gross and broken status quo. And so the cycle repeats.

That said, Kent Beck is still writing, and he's still delightful to read. I knew his favorite themes 20 years ago, but sometimes I pick up his stuff and find a clever new insight.


Disagree on point 2. It goes straight from "successful for one team" to "lost in translation". It's not managers' fault that it gets lost; it's just what happens when teams try to adopt something that worked well for one team without the implicit context.


DevOps has different meanings in different organizations, and just by skimming the comments here the problem becomes obvious. For some it is basically a platform integration team that assists product teams; others confuse it with a modern sysadmin role; and others, like product developers, confuse being on call "24x7" with some kind of DevOps principle. It's a heavily abused term; even though we have a common industry definition grounded in scientific methods, all of the above are still common misconceptions about DevOps.


> a modern sysadmin role

As a sysadmin, I feel attacked. DevOps is not a modernized version of the sysadmin; it's a different shade, with different expertise applied to a slightly different field than what a sysadmin does. I've seen devoperators who don't know how to manage a server, and I've seen sysadmins who don't know how to write scripts. One is not more modern or superior than the other.


I started as a developer, and became a sysadmin by accident/necessity.

Over the past 20ish years I've been: "sysadmin", "sre", "devops engineer", "cloud engineer", and "platform support engineer". Probably more titles, and roles, than I can remember to be honest.

I write scripts, I automate systems (in the past with perl, then cfengine, puppet, etc.) via the use of AMIs and standard templates. I use strace to debug problems, and diagnose networking issues with tcpdump/wireshark. Nowadays I mostly use EKS/K8S, but at heart I remember debugging broken deployments with GDB and patching binaries with emacs, and I like those simpler times.

I want to call myself a sysadmin, but people hiring for stuff want devops/cloud everywhere. It's all a mess. Still I like big networks and lots of systems, and it seems there's no shortage of things to improve no matter the size of the company.


Ten thousand organizations the world over have ten thousand problems. One person with a shiny résumé and big-corp names on it will come out and publish some manifesto about how they figured one problem out at one very particular company. Because that particular company becomes one of the top 10 worldwide, ten thousand companies will claim they do it too by throwing some job titles around. People realize it's bunk and search for the next silver bullet.

Brooks warned us about seeking simple solutions to complex endeavors' problems nearly 40 years ago.


> even as we have a common industry definition which is grounded on scientific methods

there's hardly anything scientific about devops, it's all about culture and people


The point about multiple deployments per day is well taken. Even those of us who wish to get there, have to contend with decades of process to make headway.

In my F500, we have a regular release cadence of every two weeks. We would like that to be daily or more, but we are probably one to two years of maturity away from making that a reality.

Not that we do DevOps in the way it was envisioned. We have a "DevOps" team that is responsible for platforms, processes, and operations, and a developer team that is responsible for the application code. The DevOps team works closely with devs to plan and architect the designs to make sure we're using the right services, scale, tuning, etc.

Within our org, the only real difference between DevOps and classic Ops is that the skill level within the DevOps org is dramatically higher than what you used to find in the sysadmin crowd. Most of our DevOps people are cloud-native, but a few have deep systems knowledge and decades of hardware, Linux, networking, and security experience. All of our DevOps team is very comfortable with Python, which ends up being our default for glue work. We even have a few former devs who write custom tooling for our pipelines and other QoL improvements.

So for us, developers deliver application code, devops does everything else.


I ended up at a place like this too, and in our project we, the DevOps people, also develop and support the z/OS data integrations. I generally tell non-IT people it's like being a sysadmin, but not the Windows kind they interact with: you won't roll out fixes in 2 hours from testing to prod (including the process stuff) at that scale (PB of data warehouse, some 1000 B2B customers on different versions) if all you know is clicking buttons in Windows Server.


So, looks like you have the right people. Why are you 2 years away from reaching one deploy per day?

PS: consider that the people who sell you "we deploy N times per hour" count, rightly so, feature-flag activation as a deploy.


We're still in the process of automating the promotion process. We're very conservative. We have dev, qa, stg, uat, and prod, all fully duplicated and 100% IaC after dev.

The "app" we deploy every two weeks is three major components made up of multiple microservices each, across eight dev teams.

And yeah, feature flags are under evaluation right now.


Performance in the report is defined by how fast you can make a change to production, and how fast you can restore service.

That’s fine I guess.

What’s not taken into account is the time wasted on incidental complexity with devops tools.

For example a highly competent engineer I know is burning hours trying to get ssl working on k8s/gcp with a specific configuration.

Conceptually SSL is all straightforward, but there's some arcane knowledge baked into k8s and GCP, because they were designed for way more complex use cases... and they've also pushed out the simpler tools.
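For a sense of scale: the vanilla Kubernetes side of TLS termination is just an Ingress referencing a Secret, before any of the GCP-specific layers (managed certificates, load-balancer annotations, cert provisioning) come into play. All names here are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress            # hypothetical names throughout
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls      # Secret of type kubernetes.io/tls (tls.crt / tls.key)
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 8080
```

The arcane part is everything around this block: which controller actually honors `spec.tls`, how the certificate gets into that Secret, and what the cloud load balancer does with it.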


We're hitting this paradox a lot with containerisation.

We spent a good bit of effort building everything into a fully containerised stack, thinking that once we got through the hard parts, everyone would be more productive and new devs would get up to speed fast. But it's not like that. We just have a whole different set of complexities, mainly at all the container boundaries, where ports, volumes, user ids, file permissions, file paths, etc. now create issues where before they didn't.

I think we probably came out with a modest win, but I'm not sure it beat out what we might have achieved with the same effort going to other productivity measures.


I can’t speak to your experience, but often I’ve run into this with development teams where there was not a strong level of systems knowledge, or not a very strong understanding of exactly what an application needs to run.

Perhaps one of the more amusing issues I’ve seen was someone’s surprise that a temp directory creation put the files in /tmp instead of the application’s working directory. It’s not something a lot of people would think about in development/testing I suppose.

More often than not, containerization has seemed to force developers to better understand the impacts of their code on the system.


And then there's choosing the cloud platform badly: the one that looked good at the start but later becomes a complete pain, always finding new amazing ways to break things... (looking at you, Azure Container Instances).

In the end, some light workloads could have just been thrown onto VMs already running for other reasons, and it would have saved a lot of time and effort... And they could still have been deployed with some level of DevOps...


What would you use instead of ACI?


Not sure; in this case, for a workload with very low load, a VM I think...


What I’ve found is that pure “DevOps” roles have become a way for recruiters to sell people on being the ops person without them realizing it.

I once worked at a place that hired me in as a software dev working in DevOps and the position quickly became consumed by the on-call workload.

The system we were managing constantly had fires that needed to be put out, but we could never fix the underlying problems because someone needed to answer the page, which oftentimes was a false alarm.

So we’d quickly slap band-aids on top of band-aids, and management gave lip service to fixing the underlying problems with on-call. The job became about reading the Jenkins build logs, because the “devs” thought that was our job. They even forgot how to build and test their own software locally in some cases…

Needless to say I left pretty quickly and it made me not want to do a pure DevOps job again.


> What I’ve found is that pure “DevOps” roles have become a way for recruiters to sell people on being the ops person without them realizing it.

Way waaaay back when I first heard of DevOps, it was described along the lines of "bringing developer best practices like automation and testing to ops". What you're seeing could be a remnant of that.


It isn't dead; companies just refuse to implement DevOps philosophies because they assume it adds too much friction. When things run smoothly and it appears DevOps isn't doing anything, they fire the operational engineers. Months later things go to shit, because they only thought those engineers were not doing anything. Operational engineers and Site Reliability Engineers are essential because they absorb toil work that would otherwise distract a developer from developing. No one else wants to solve pipelines and delivery methods and ensure stability and deliverability of code. Pipelines need as much continual development as the profitable applications developers build.


The report’s authors speculate that “It may be that the ubiquity of DevOps practices has allowed developers and organizations to increase the complexity of projects they are involved in, counteracting the benefits to development velocity. In other words, DevOps practices have likely made the development velocity of complex projects comparable to simpler projects without DevOps practices.”

So.. in other words - DevOps is in great health.

As a nit, that would not be "counteracting the benefits to development velocity"; it would be increasing them for those complex use cases. It might be lowering the poorly chosen statistical average, but that is more of a 'you' problem stemming from exceptionally poor statistical practices.

I always find these broad "something is not dead yet, but dying" articles to be of very low overall content value. They far too often depend on either highly anecdotal information or, even worse, conclusions based on widespread data averages. Even worse than that: self-reported AND self-selected data.


Companies do what they always do. It isn't the first time an organization sees something trendy, says it has adopted it, renames its old processes to match what is trendy, and doesn't change one damn thing. It happened with Agile and it is happening with DevOps. These poor performers are most likely doing ops the traditional way; they just adopted the vernacular of modern practices and now claim it doesn't work.

The core of DevOps and agile is essentially tightening the feedback loops and using the feedback to adjust and become more efficient. Saying "oh, we are failing deploys" without figuring out why and adjusting isn't DevOps. Not figuring out why you don't deliver what you promise isn't agile either.



I think this [1] is mistaking some post-devops developers for "low performers".

The survey questions in slide deck p 13 (https://cdfound.lfprojects.linuxfoundation.org/wp-content/up...) could all be "no" for someone who deploys to Firebase or a low code platform. Even if they are doing continuous deployment.

[1] "[...] while 83% of developers are actively engaged in DevOps, there’s been a troubling increase in the proportion of low performers in deployment metrics."


Senior DevOps here. The ultimate goal of DevOps is to remove itself from the equation: once the CI/CD is in a good place, we switch focus to other areas, like observability.


As someone who, besides being a developer, has had build engineer, systems engineer, and DevOps as a 2nd title, this has always been a placeholder for everything else we do besides coding activities.

The guy on the team who bothers with setting up computers, physical or virtual, configuring Jenkins, Bamboo, TeamCity, ..., writing the RPM build scripts, and whatever else the other devs would rather not do.

Just like Agile, DevOps and such, they all end up being ways to have conferences, books, consulting services,...


From an outside perspective: I've been in hundreds of meetings around different companies (medium to large size), and these efforts have been deployed (and interpreted) in sooo many ways that it becomes tough to single out a common pattern.

What usually works across companies are somewhat standard systems and flows. But based on what I've seen in the past 2 decades, there's a bunch of different solutions to a similar problem and most have very different ways of solving challenges...


Docker/containers, k8s, and all the bent bloated habits eventually take a toll.

Sure, it lets you ship that app with postgres, clickhouse, redis, rabbitmq, task runners, workers, the jvm, 5 different versions of nodejs wedged in... But is this really a good thing?

Its like vacuum sealing poop in unlabeled bags and putting them into someone's refrigerator. Yay, surprise.


Containers and what you put in them aren't really the same thing, though. I can't imagine doing deployment without containers ever again, but I'm not sure why you would put a database in one. Well, except for local development. You do have a point of course, because people do put all those silly things into containers even when they don't need any of it.

For me the real issue is that we still live in a world where many developers don't have anyone to hand that container off to. Even if your cloud helps you, you're likely still going to have to configure things like networking, and while I've done that myself because nobody else could, it really shouldn't have been something I did. The first time I did it, every application/service had 250 IPs despite using 3 or 4, and I'm fairly certain my vnet ate up like 50,000 IPs, which I think you call a block? And as you can see, I still shouldn't ever configure networking, but I'm still the best at it, and nobody in management seems to want to change that.
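For what it's worth, the mismatch described above (hundreds of reserved IPs for a service that uses 3 or 4) falls out of CIDR arithmetic: cloud subnets are carved out in power-of-two blocks, so the reservation is the block size, not the usage. A minimal sketch with Python's stdlib `ipaddress`, using hypothetical address ranges:

```python
import ipaddress

# Hypothetical ranges purely for illustration: a /16 vnet reserves
# 65,536 addresses up front, and each /24 subnet inside it reserves
# 256, regardless of whether the service uses 3 or 4 of them.
vnet = ipaddress.ip_network("10.0.0.0/16")
subnet = ipaddress.ip_network("10.0.1.0/24")

print(vnet.num_addresses)    # 65536
print(subnet.num_addresses)  # 256
```

So "my vnet ate 50,000 IPs" is plausible: one default-sized /16 already accounts for more than that on its own.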


We have a "DevOps" team, which is ops.


DevOps, like Agile, was a beautiful idea with a coherent, well-reasoned underlying philosophy.

DevOps, like Agile, was then hacked to pieces and had its corpse paraded around by sleazy consultants and idiotic management who were told "you can save all this money on infrastructure people by making your developers do all the ops stuff too." And who doesn't like saving money on those useless crusty old system administrators who don't seem to do anything except whine for "more disk space"? If they were so smart, they would've prevented that outage last week where the logging volume filled up.

Finally, the toolmakers sold the idea that DevOps was something you could buy. Just like the vendors convinced management that by purchasing Jira you were now agile, now if you have a CI/CD pipeline you are doing DevOps. But you still can't release without getting approval from the release committee, and you can't actually provision a new host in the cloud until you have filled out the requisite approval forms. But hey, for some reason we uprooted our monolith and threw it on Kubernetes, so we are doing DevOps.

It really feels like reliving the Agile cycle all over again in every detail. I just wish I could figure out the next big fad so I could get in on this money printer.

PS. If you use the term DevSecOps or DevSecFinOps unironically, you're living proof of the Dunning-Kruger effect.


It's always been the people, not the process. I've basically been the same sysadmin with a CS degree I've been since before I got a degree. I've been called a developer, a sysadmin, pool boy, IT, site reliability... It's all the same job - keep things scaling, running and watch the budget. Move the product forward by enabling the people who make the product. Occasionally be the adult in the room who knows how the sausage is made from the metal to the frontend.

People try and put a label on it to market it so they can sell it back to you.

My clients rarely know specifically what I'm working on, because I'm a technical grazer who just monitors the world looking for interesting problems, but they continue to pay me and I believe they extract a fairly positive value from me.

If I ever stuck to a single job title, they'd probably expect me to stop grazing - but that's what makes me happy and productive.


To me, agile is simply

People over process

Results over bluster

Flexibility over rigidity


Which is painfully ironic, because as soon as somewhere starts 'doing agile' it's all about the (scrum) process, bluster abounds, and everything is in pretty rigid sprints, scope changes to be avoided, etc.


And there is still a contract wrecking nearly everything, including the business relationship.


Management just slaps agile onto everything nowadays. No time is given for coming up with better solutions or reducing tech debt. Most companies are devolving into feature factories now. I call this methodology FRAGILE®


WaterScrumFall™

Funding process, requirements, planning: Waterfall

Development: Scrum

Manual testing, change approval boards, release management: Waterfall

Waterfall on the outside, scrum in the middle


I call it WAGILE® aka waterfall+agile, blended with a pinch of spikes :D


Until that director comes

Expecting results

Yesterday


Cowboys who can shoot with both hands. Got it.


how strange!

for me it has

always been

whatever the

highest paid person

wanted: "Agile"


Adopting a DevOps culture is usually very difficult at sales-driven orgs, most of which fallaciously call themselves product-driven orgs. Mostly because PMs see it as a cost rather than an investment, and their sole goal is for devs to ship an incremental ball of crap.

The only times I've seen DevOps succeed was when it was supported by very senior engineers who were allowed to ignore product people's antics and implement it anyway, and when engineers keen on DevOps inflated their estimates to make room for that kind of work, particularly the initial research and yak shaving.

Imo, DevOps isn't in great health because the industry isn't in great health, and most non-wealthy tech companies are doing panic-driven and FOMO-driven development.


Indeed the culture you describe does not sound healthy! In theory, Engineering does not report to Product, it is a partner with an important perspective, and can negotiate prioritizing certain projects without subterfuge.

In low trust environments, all engineering-led projects that Product/"the business" don't understand are assumed to be low value. Sometimes this is unfair, but sometimes it is warranted. Poor engineering leadership will green light all sorts of counterproductive "silver bullet" projects.


Winner!


We will soon have AIOps instead.



