I once participated in implementing a system as a monolith, and later on handled the rewrite to microservices, to 'future-proof' the system.
The nice thing is that I have the Jira tickets for both projects, so I have actual hard proof: the microservice version absolutely didn't go smoother, take less time, or use fewer dev hours.
You can really match up a lot of it feature-by-feature, and it's plainly visible that the microservice version of each feature took longer and had more bugs.
And IMO this is the best-case scenario for microservices. The 'good thing' about microservices is that once you have the interfaces, you can start coding. This makes these projects look more productive, at least initially.
But the issue is that, more often than not, the quality of the specs ranges from not great to awful. I've seen projects where Team A and Team B coded their services against wildly different interfaces, and it was only discovered in the final stretch that these two parts did not meet.
100% true in retrospect.
Then the engine would find the best way to resolve the graph and fetch the results. You could still add your imperative logic on top of the fetched results, but you don't concern yourself with the minutiae of resilience patterns and how to traverse the dependency graph.
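The engine idea above can be sketched in a few lines. This is a toy illustration (the `resolve` function and the example graph are hypothetical, not any particular product's API): you declare what each result depends on, and the engine works out the execution order instead of you hand-wiring the calls.

```python
# Minimal sketch of a declarative fetch engine: declare dependencies,
# let the engine order and execute the fetches.
from graphlib import TopologicalSorter

def resolve(graph):
    """graph maps name -> (deps, fn); fn receives resolved dep values."""
    order = TopologicalSorter({k: deps for k, (deps, _) in graph.items()}).static_order()
    results = {}
    for name in order:
        deps, fn = graph[name]
        results[name] = fn(*(results[d] for d in deps))
    return results

# Declarative spec: 'report' needs 'user' and 'orders'; the engine
# figures out that those must be fetched first.
graph = {
    "user":   ((), lambda: {"id": 1, "name": "Ada"}),
    "orders": (("user",), lambda u: [f"order-for-{u['id']}"]),
    "report": (("user", "orders"), lambda u, o: f"{u['name']}: {len(o)} orders"),
}
print(resolve(graph)["report"])  # -> "Ada: 1 orders"
```

A real engine would also fold in the resilience minutiae (retries, timeouts, batching) at the edges of the graph, which is exactly the stuff you stop hand-writing.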
Microservices are just a slightly more reliable version of that, since you can hassle the author as a coworker instead of via a harried FCWSNEGW support mouse.
Secondly, if you are not doing event sourcing from the get-go, doing distributed systems is stupid beyond imagination.
When you do event sourcing, you can do CQRS and therefore have zero need for some humongous database that scales ad infinitum and costs an arm and a leg.
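To make the event sourcing + CQRS pairing concrete, here is a toy sketch (all names are hypothetical): the write side only appends immutable events, and the read side is a projection folded from the log, so it can live in a cheap, denormalized store instead of one giant database.

```python
# Toy event sourcing + CQRS sketch. The append-only log is the single
# source of truth; the read model is derived from it on demand.
events = []  # append-only event log

def deposit(account, amount):
    events.append({"type": "deposited", "account": account, "amount": amount})

def withdraw(account, amount):
    events.append({"type": "withdrew", "account": account, "amount": amount})

def project_balances(log):
    """Read model: fold the event log into current balances."""
    balances = {}
    for e in log:
        sign = 1 if e["type"] == "deposited" else -1
        balances[e["account"]] = balances.get(e["account"], 0) + sign * e["amount"]
    return balances

deposit("acct-1", 100)
withdraw("acct-1", 30)
print(project_balances(events))  # -> {'acct-1': 70}
```

Because the projection is just a fold over the log, you can rebuild it anywhere, shard it, or keep several differently-shaped read models without touching the write path.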
While this is true, for efficiency reasons it's in fact often better to treat even local dispatch as if it were "network" -- chasing pointers and doing things one at a time in a loop is far less efficient on a modern architecture than doing things in bulk and vectorized.
Non-uniform memory hierarchies, caches, branch predictors, SIMD, and now GPUs all tend to reward working with data in batches.
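The "treat local like network" point boils down to something like this sketch (the store and both APIs are hypothetical): instead of dispatching one lookup per item in a loop, you issue one bulk request over the whole batch, which is the shape both a network service and a cache-friendly local implementation want.

```python
# One-at-a-time dispatch vs. bulk dispatch over a batch of keys.
STORE = {i: i * i for i in range(1000)}

def fetch_one(key):
    # Pointer-chasing style: one dispatch per item.
    return STORE[key]

def fetch_many(keys):
    # Bulk style: one dispatch for the whole batch; a real backend
    # could vectorize this or turn it into a single query.
    return [STORE[k] for k in keys]

keys = [3, 7, 42]
slow = [fetch_one(k) for k in keys]  # N dispatches
fast = fetch_many(keys)              # 1 dispatch
assert slow == fast == [9, 49, 1764]
```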
If I were to think of a "pure" model of computation that unified remote and local it would be to treat the entire machine in terms of the relational data model, not objects. To treat all data manipulation and decisions like a query.
And ideally, to have the same concept of a query optimizer/planner that a DBMS has, one that can decide how to proceed based on the cost of the storage model, the indexes, etc., because it has a bigger picture of what the programmer is trying to accomplish.
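A minimal sketch of that planner idea (everything here is hypothetical and the cost model is deliberately crude): the programmer states *what* they want, and the planner picks an access path -- index lookup vs. full scan -- from cost estimates, instead of the traversal being hard-coded.

```python
# Toy cost-based planner over an in-memory "relation".
rows = [{"id": i, "color": "red" if i % 3 == 0 else "blue"} for i in range(90)]
index = {}  # secondary index on "color"
for r in rows:
    index.setdefault(r["color"], []).append(r)

def plan_and_run(column, value):
    # Crude cost model: an index lookup touches only matching rows,
    # a full scan touches every row. Pick the cheaper estimate.
    index_cost = len(index.get(value, []))
    scan_cost = len(rows)
    if index_cost < scan_cost:
        return "index", index.get(value, [])
    return "scan", [r for r in rows if r[column] == value]

plan, result = plan_and_run("color", "red")
print(plan, len(result))  # -> index 30
```

A real DBMS planner does the same thing with vastly better statistics, and that's exactly the "bigger picture" a local object graph never gives the runtime.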
I still thoroughly want to see capnproto or capnweb ship third-party handoff, so we can do distributed systems where we tell microservice B to use the results from microservice A to run its compute, without needing to proxy those results through ourselves. Oh, to dream.
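The shape of third-party handoff can be sketched conceptually like this (this is NOT the actual Cap'n Proto API; every class and method here is made up for illustration): the client hands service B a *reference* to A's pending result, and B redeems it directly with A, so the payload never proxies through the client.

```python
# Conceptual three-party handoff: the client holds only a ticket
# (a capability/reference), never the data itself.
class ServiceA:
    def __init__(self):
        self._results = {}
    def start_job(self):
        ticket = "ticket-1"                # reference, not data
        self._results[ticket] = [1, 2, 3]  # the (possibly large) payload
        return ticket
    def redeem(self, ticket):
        return self._results[ticket]

class ServiceB:
    def compute(self, a, ticket):
        data = a.redeem(ticket)  # B pulls from A directly
        return sum(data)

a, b = ServiceA(), ServiceB()
ticket = a.start_job()       # the client only ever sees the ticket
print(b.compute(a, ticket))  # -> 6
```

In a real protocol the ticket would be an unforgeable capability and the redeem step a direct A-to-B connection, which is the part the comment is wishing for.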