The vast majority of web software is comically over-engineered. The deployment process has so many features and components that you need a full-time staff of specialists to configure and monitor the build system.
And this holds even for something as straightforward as a marketplace website, or moderately complex like some kinds of SaaS. The web application could probably be compiled into a single binary or container, and its database could run in RAM. And yet shipping a single change takes not one developer but a whole team, plus eye-watering enterprise contracts with several vendors, and it's still slow as molasses.
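To make the "database could run in RAM" point concrete, here is a minimal sketch (the table and data are my own invention, not from any real marketplace): SQLite can hold an entire working dataset in a single process's memory, no deployment pipeline required.

```python
import sqlite3

# A toy in-memory database: the whole "data layer" lives inside
# one process, which is the kind of simplicity the comment argues
# many mid-size products could get away with.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE listings (id INTEGER PRIMARY KEY, title TEXT, price_cents INTEGER)"
)
conn.executemany(
    "INSERT INTO listings (title, price_cents) VALUES (?, ?)",
    [("used bike", 12000), ("desk lamp", 1500)],
)
rows = conn.execute(
    "SELECT title FROM listings WHERE price_cents < 5000"
).fetchall()
print(rows)  # [('desk lamp',)]
```

Obviously this trades away durability and horizontal scale, which is exactly the requirements question the comment says nobody asks.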
There’s a time and a place for config as code and container orchestration. The problem is that it’s used for every single project regardless of requirements.
Show me the difference between these 3 pictures (the third being DevOps Engineering). Go ahead, I'll wait.
They're all the same role, where people are all doing a mixed bag of things that a company's developers decided they didn't want to do (or sometimes didn't have the experience to do). I'm a Senior DevOps Engineer, and you could give me any one of the 3 titles, and they would all be correct characterizations of what I do.
A Eulogy for DevOps - https://news.ycombinator.com/item?id=40826236 - June 2024 (94 comments)
Also discussed a bit at the time (of the OP):
DevOps: The Funeral - https://news.ycombinator.com/item?id=36482646 - June 2023 (4 comments)
I work at a medium-size publicly traded company, and our SOX compliance controls would take literal months to generate and/or prove to auditors without our CI/CD pipelines. It's just an extract from GH Actions with a report of who modified, who approved, and who actually pushed to main. All of these actions must be siloed (if you can commit to the repo, you cannot push to main).
Potentially this is a consequence of microservice infra; my team alone manages nearly 25 separate git repositories.
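The silo rule described above can be sketched as a check over per-change audit records. This is not the commenter's actual GH Actions extract; the record shape and names here are hypothetical, standing in for whatever the real report contains.

```python
# Hedged sketch: given records of who committed, who approved, and who
# merged each change to main, flag violations of the silo rule that a
# committer may never be the one who pushes their own change to main.
def audit(changes):
    violations = []
    for c in changes:
        if c["merged_by"] in c["committers"]:
            violations.append(c["pr"])
    return violations

changes = [
    {"pr": 101, "committers": {"alice"}, "approved_by": "bob", "merged_by": "carol"},
    {"pr": 102, "committers": {"dave"}, "approved_by": "erin", "merged_by": "dave"},
]
print(audit(changes))  # [102]
```

The auditors' report is then just this kind of extract at scale, which is why the pipelines save months.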
However, the first conversation (after finishing up the contract) with the sysadmins went like: "Yeah, ingresses are fine and such, but we'll have our loadbalancer in front of that which will inspect the url paths to know which ingress it has to go to, we don't want some 'developer' to open up some new endpoint without us approving it! Those guys never know what they're doing."
At this point I'm not sure if he's pulling my leg or something, because before we wrote up the contract, the whole idea was to give their dedicated platform team and developers more self-service. In one fell swoop, bam, negated.
DevOps, imo, is about clear responsibility; it just doesn't work if boundaries are not communicated explicitly. In the above scenario, once everything is set up, developers will not gain any velocity, and the sysadmins are still manually adding URL routing to their load balancers.
The real issue, though, is not these models but the modern software stack, and the story dates back decades. When Unix was born, its authors thought they could separate the "system" (bootloader, kernel, basic userland) from "the rest", allowing cheap end-user programming via scripts assembling system functions through IPC. They quickly discovered this approach was fast but fairly limited, and they added GUIs, violating their own principles, since their GUIs can't be assembled with scripts and have only cut & paste and drag & drop as their sole IPC. A broader and more precise analysis can be found in the Unix-Haters Handbook. But as Unix gained ground quickly, so did "modern" GUIs, and we quickly evolved from "the internet as a network of flexible, user-programmable desktops" to "the internet as a network of hosts serving some limited desktop" to "the internet as a network of hosts who own everything, plus some modern dumb terminals named endpoints". That's the present: desktops are very complex browser bootloaders, the browsers are themselves very complex virtual machines, and "the real intelligence" in code is now some third-party services offering APIs that modern devs use to assemble their very complex and unmaintainable scripts named webapps. DevOps was an intermediary step in this layered mess.
We need to come back to an OS-is-a-single-application model where everything is integrated. Reading email would mean a network mount of a remote share, provided by the system, and a stat/read of the few files in there, which are the messages; sending mail would be just mounting the receiver's mailserver share and creating a new file there, if allowed; reading a website would be just the same, and so on. Oh, that's the Plan 9 model. And such systems are like Smalltalk workstations or Lisp Machines, where everything is a function that can be called, modified, and combined with any other in a few lines of code. This model is hard to evolve system-side because there are MANY interdependencies to take into account, BUT it makes modern hyper-big projects simple and manageable codebases where there is no need for CI/CD, because updating is just patching in a version control system, as is rollback, and third-party services offer just what they want without specific APIs to support.
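The everything-is-a-file idea above can be illustrated in a few lines. This is only a sketch: the directory layout is invented for illustration, and an ordinary temp directory stands in for what Plan 9 would expose as a remote mount.

```python
import tempfile
from pathlib import Path

# If a mailbox is just a mounted directory, then "sending" is creating
# a file in the recipient's share, and "reading" is listing and reading
# the files that are the messages.
with tempfile.TemporaryDirectory() as root:
    inbox = Path(root) / "mail" / "bob" / "inbox"  # hypothetical layout
    inbox.mkdir(parents=True)

    # send: create a new file in the receiver's share (if allowed)
    (inbox / "msg-001").write_text("From: alice\n\nLunch at noon?")

    # read: stat/read the few files in there, the messages
    messages = [p.read_text() for p in sorted(inbox.iterdir())]
    print(len(messages))           # 1
    print(messages[0].splitlines()[0])  # From: alice
```

No mail client, no API, no deployment: the filesystem is the protocol, which is the whole argument for the Plan 9 model.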
I know that, described like this in my limited English, it's a bit confusing, but if you try reasoning "from the outside" you'll see the big picture more easily.