- Work at a hedge fund
- Every evening, the whole firm "cycles" to start the next trading day
- Step 7 of 18 fails
- I document Step 7 and then show it to a bunch of folks
- I end up having a meeting where I say: "Two things are true: 1. You all agree that Step 7 is incorrectly documented. 2. You all DISAGREE on what Step 7 should be doing"
I love this story as it highlights that JUST WRITING DOWN what's happening can be a giant leap forward in terms of getting people to agree on what the process actually IS. If you don't write it down, everyone may go on basing decisions on an incorrect understanding of the system.
A related story:
"As I was writing the documentation on our market data system, multiple people told me, 'You don't need to do that, it's not that complicated.' Then they read the final document and said, 'Oh, I guess it is pretty complicated.'"
In the world of Business IT, we get seduced by the shiny new toy. Right now, that toy is Artificial Intelligence. Boardrooms are buzzing with buzzwords like LLMs, agentic workflows, and generative reasoning. Executives are frantically asking, "What is our AI strategy?"
But here is the hard truth:
There is no such thing as an AI strategy. There is only Business Process Optimization (BPO).
This is well-expressed, and almost certainly true for an overwhelming majority of companies.
On the other hand, I have seen process stifle above-average people, or so-called "rockstars". The thing is, the bigger your reliance on process, the more you need these people to swoop in and fill in the cracks, save the day when things go horribly wrong, and otherwise be the glue that keeps things running (or perhaps oil for the machine is the better metaphor).
I know it’s not “fair”, and certainly not without risk, but the best way I have personally seen it work is where the above-average people get special permissions, such as global admin or an exemption from the change management process, to remove some of the friction process brings. These people like to move fast and stay focused, and don’t like being bogged down by petty paperwork or sitting on a bridge asking permission to do this or that. Even as a manager, I don’t blame them at all, and all things being equal, so long as they aren’t causing problems, I think the business would prefer they operate as they do.
In light of those observations, I have been wrestling a lot with what it says about process itself. Still undecided.
>Processes that rely on unstructured data are usually unstructured processes.
I appreciate someone succinctly summing up this idea.
Leaders think <buzzy-technique> is a good way to save money, but <buzzy-technique> is actually something that requires deeper investment to realize greater returns; it is not a money saver.
I have seen a smattering of instances along the way where the act of defining requirements forced companies to define processes better. Usually, though, companies are unwilling to do this and instead will insist on adding flexibility to the automation tooling, to the point where the tool is of no help.
I have learned to be careful of "too much process", but I find that the need for structure never disappears.
AI deals well with structure. You can adjust your structure to accept less-structured data, but you still need the structure for everything that comes after.
Just maybe not too much structure[0].
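The point about keeping a fixed structure while tolerating messy input can be sketched as a lenient front end feeding a rigid schema. This is a hypothetical example (the `Invoice` schema, field names, and regexes are all illustrative, not from the original comments):

```python
from dataclasses import dataclass
import re

# Downstream steps still depend on a fixed schema, even when the
# input arrives as free-form text. Hypothetical invoice example.
@dataclass
class Invoice:
    vendor: str
    amount_cents: int

def parse_invoice(text: str) -> Invoice:
    """Lenient front end: accept messy input, emit structured data."""
    vendor = re.search(r"(?:from|vendor)[:\s]+([A-Za-z ]+)", text, re.I)
    amount = re.search(r"\$?\s*(\d+)(?:\.(\d{2}))?", text)
    if not (vendor and amount):
        raise ValueError("could not extract required fields")
    cents = int(amount.group(1)) * 100 + int(amount.group(2) or 0)
    return Invoice(vendor=vendor.group(1).strip(), amount_cents=cents)

print(parse_invoice("vendor: Acme Corp\namount: $120.50"))
```

The parser can get more forgiving (or be swapped for an LLM extraction step) without the downstream process losing its structure.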
I'm now in the process of trying to hand off chunks of the work I do to run my business to AI (partly to save time, partly as a very broad, practical eval). It really is all about documentation. I buy small e-commerce brands, and they're simple enough that current SOTA models have more than enough intelligence to take a first pass at listings + financials to determine whether I should take a call with the seller. To make that work, though, I've got a prompt that's currently at six pages, which is just every single thing I look at when evaluating a business, codified.
Using that has really convinced me that people are overrating the importance of intelligence in LLMs in terms of driving real economic value. Most work is like my evaluations: it requires intelligence, but there's a ceiling to how much you need. Someone with an IQ of 150 wouldn't do any better at this task than someone with an IQ of 100.
Instead, I think what's going to drive actual change is the scaffolding that lets LLMs take on increasing numbers of tasks. My big issue right now is that I have to go to the listing page for a business that's for sale, screenshot the page, download the files, upload that all to ChatGPT and then give it the prompt. I'm still waiting for a web browsing agent that can handle all of that for me, so I can automate the full flow and just get an analysis of each listing sent to me without having to do anything.
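The fetch-then-evaluate flow described above can be sketched as a small pipeline. This is a hypothetical skeleton: the step implementations are injected (in practice a browser agent like Playwright for `fetch` and a model API for `ask_llm`), and all names and data are illustrative:

```python
# Hypothetical sketch of the flow: fetch a listing's artifacts, bundle
# them with a fixed evaluation prompt, and get back an analysis. The
# fetch and LLM steps are injected so the orchestration is testable.
from dataclasses import dataclass
from typing import Callable

EVAL_PROMPT = "Evaluate this e-commerce listing..."  # stands in for the six-page prompt

@dataclass
class Listing:
    url: str
    screenshot: bytes
    financials: dict

def analyze_listing(
    url: str,
    fetch: Callable[[str], Listing],          # e.g. a browser-agent wrapper
    ask_llm: Callable[[str, Listing], str],   # e.g. a model API call
) -> str:
    listing = fetch(url)
    return ask_llm(EVAL_PROMPT, listing)

# Stub wiring, just to show the shape of the automation:
fake_fetch = lambda url: Listing(url, b"\x89PNG", {"revenue": 250_000})
fake_llm = lambda prompt, l: f"take a call: revenue={l.financials['revenue']}"
print(analyze_listing("https://example.com/listing/1", fake_fetch, fake_llm))
```

Once a web-browsing agent can implement `fetch` reliably, each new listing becomes a single function call instead of a manual screenshot/download/upload loop.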
The useful framing is not “where can we bolt on AI” but “what does the system look like if AI is a first-class component.” That requires mapping the workflow, identifying the decision points, and separating deterministic steps from judgment calls.
Most teams try to apply AI inside existing org boundaries.
That assumes the current structure is optimal. The better approach is to model the business as a set of subsystems, pick the one with the highest operational cost or latency, and simulate what happens if that subsystem becomes an order of magnitude more efficient. The rest of the architecture tends to reconfigure from that starting point.
For example, in insurance (just an illustration, not a claim about any specific firm), underwriting, sales, and support dominate cost. If underwriting throughput improves by an order of magnitude, the downstream constraints shift: pricing cycles compress, risk models refresh faster, and the human-in-the-loop boundary moves. That’s the level where AI changes the system shape and acts beyond the local workflow.
This lens seems more productive than incremental insertion into existing silos.
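The "make one subsystem an order of magnitude more efficient and watch the constraint move" exercise can be run as a toy simulation. The numbers below are purely illustrative, echoing the insurance example above:

```python
# Toy model of the subsystem exercise: pick the dominant stage, make it
# 10x more efficient, and observe that the bottleneck shifts elsewhere.
# Latencies are illustrative (in days), not from any real firm.
stages = {"underwriting": 10.0, "sales": 4.0, "support": 3.0}

def bottleneck(latencies: dict) -> str:
    """The stage that currently constrains the whole pipeline."""
    return max(latencies, key=latencies.get)

print("before:", bottleneck(stages))   # underwriting dominates
stages["underwriting"] /= 10           # order-of-magnitude improvement
print("after: ", bottleneck(stages))   # constraint shifts to sales
```

Even this trivial model makes the point: after the improvement, optimizing underwriting further is wasted effort, and the interesting design questions move downstream.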
One example among many: in our SDLC process, we now have test cases and documentation that never existed before (coming from a startup).
But I don't blame them. Process optimization is hard. If a new tool promises more speed without changing the process, they are ready to pour money into it.
Here’s your AI strategy: every few months, re-evaluate agent fitness and start switching over. Remember backstops and canaries.
Details:
Businesses usually assign responsibilities to somewhat flaky employees, with the understanding that there will be a percentage of errors. This works ok so long as errors don’t fluctuate wildly and don’t amplify through the system. Most business processes are a mess, and that works ok.
Once agents become less flaky and there are enough backstops to contain occasional damage business will start switching.
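The "backstops and canaries" idea above can be sketched as a router that sends a small canary share of tasks to the agent and trips a backstop if the agent's error rate drifts past what the process already tolerates from flaky humans. The class, thresholds, and failure pattern below are all illustrative:

```python
# Hedged sketch of backstops + canaries. All thresholds are illustrative.
import random

class CanaryRouter:
    def __init__(self, canary_share=0.1, max_error_rate=0.05, min_samples=50):
        self.canary_share = canary_share      # fraction of tasks sent to the agent
        self.max_error_rate = max_error_rate  # tolerance before the backstop trips
        self.min_samples = min_samples        # don't judge on tiny samples
        self.agent_done = 0
        self.agent_errors = 0
        self.tripped = False  # once tripped, everything goes back to humans

    def route(self) -> str:
        if self.tripped or random.random() > self.canary_share:
            return "human"
        return "agent"

    def record_agent_result(self, ok: bool):
        self.agent_done += 1
        self.agent_errors += not ok
        if (self.agent_done >= self.min_samples
                and self.agent_errors / self.agent_done > self.max_error_rate):
            self.tripped = True

router = CanaryRouter()
for i in range(60):
    router.record_agent_result(i % 5 != 0)  # deterministic 20% error rate
print("backstop tripped:", router.tripped)
```

The periodic "re-evaluate agent fitness" step then amounts to resetting the canary with a larger share once a model generation stops tripping the backstop.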
> The intelligence (knowing what a "risk" actually means) still requires human governance.
Less and less. Why do you trust a human who has considered 5000 assessments to better understand “risks” and process the next 50 better than an LLM that has internalized untold millions of assessments?
What's the prompt for that one? ;)
> There is only Business Process Optimization (BPO).

Exactly, that's the fundamental truth. The shiny tool of the day doesn't change it at all.
What does it bring?