But a lot of software engineering goes into building tools, libraries, frameworks, and systems, and even "application" code may be put to uses very distant from the originally envisioned one. And in these contexts, performance relative to the "speed of light" - the highest possible performance for a single operation - can be a very useful concept. Something "slow" that is 100x off the speed of light may be more than fast enough in some circumstances but a huge problem in others. Something "very fast" that is 1.01x the speed of light is very unlikely to be a big problem in any application. And this is true whether the speed of light for the operation in question is 1ns or 1min.
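As a toy illustration of why the ratio, not the absolute time, is the useful number, here's a small Go sketch; every figure in it is invented, and the "speed of light" values are just assumed theoretical floors for each operation:

    package main

    import "fmt"

    func main() {
        // Invented figures: measured cost of an operation next to an
        // assumed theoretical floor ("speed of light") for that operation.
        cases := []struct {
            name             string
            measuredNs, slNs float64
        }{
            {"lookup (100x off)", 500, 5},       // maybe fine, maybe a huge problem
            {"batch job (100x off)", 6e10, 6e8}, // same ratio at a 1-minute scale
            {"wrapper (1.01x off)", 1010, 1000}, // almost nothing left to gain
        }
        for _, c := range cases {
            fmt.Printf("%-22s %.2fx the speed of light\n", c.name, c.measuredNs/c.slNs)
        }
    }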
I disagree with that: the choice of framework doesn't impact just the request/response lifecycle, it is crucial to the overall efficiency of the system, because it leads the user down a more or less performant path. Frameworks are not just HTTP servers.
Choosing a web framework also marries you to a language, so the performance ceiling of your application will be tied to how performant that language is. Taking the article's example: as your application grows and more and more code ends up in the hot path, you can very easily get to a point where requests that took 50ms now take 500ms.
People typically live only once, so I want to make the best use of my time. Thus I would prefer to write (prototype) in Ruby or Python before considering a move to a faster language, though often it is not worth it: at home, if a Java executable takes 0.2 seconds to delete 1000 files and the Ruby script takes 2.3 seconds, I really don't care, all the more since I may be multitasking with tons of tabs open in KDE Konsole anyway. For a company doing business, though, speed may matter much more.
It is a great skill to be able to optimize for speed, and ideally I'd love to have that in the same language. But I haven't found one that really manages to bridge the "scripting" world and the compiled world; every time someone tries, the design of the language turns out awful. I am beginning to think it is just not possible.
This in a way highlights the knowledge gap that exists in American manufacturing. Physical parts are designed in terms of cycles, and those cycles can span anywhere from milliseconds to decades. Engines in particular need to work on both of those timescales at once, and other vehicle parts such as airbags, pumps, and steering and suspension components likewise need to be designed across many orders of magnitude.
Pretty often you have a hot path that looks like: a matmul routine that does X FMAs, a physics step that takes Y matmuls, a simulation that takes Z physics steps, and an optimizer that does K simulations. Estimating performance across 10 orders of magnitude is then just adding the logs of four numbers, which works out to “count up the digits in X, Y, Z, and K; don’t get to 10”, and that is perfectly manageable to intuit.
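To make that concrete, here's a tiny Go sketch of the digit-counting estimate; all the loop counts (the values of X, Y, Z, and K) below are made up:

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        // Hypothetical sizes for each level of the nested hot path.
        x := 512.0   // FMAs per matmul
        y := 100.0   // matmuls per physics step
        z := 1000.0  // physics steps per simulation
        k := 50.0    // simulations per optimizer run

        // log10 of a product is the sum of the logs, i.e. "add up the digits".
        digits := math.Log10(x) + math.Log10(y) + math.Log10(z) + math.Log10(k)
        fmt.Printf("total work ~ 10^%.1f FMAs\n", digits) // ~10^9.4, just under 10
    }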
I'm unlikely to get bottlenecked on well-written and idiomatic code in a slower framework. But I'm much more likely to accidentally do something very inefficient in such a framework and then hit a bottleneck.
I also think the differences in ergonomics and abstraction are not that huge between "slow" and "fast" frameworks. I don't think ASP.NET Core, for example, is significantly less productive than the web frameworks in dynamic languages if you know it.
I honestly don't know if async makes this easier or harder. It makes it easier to write sections of code that may have to wait for several things. But it seems to make it less natural to write code that kicks off several things that can then be acted on independently as they arrive.
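For contrast, here's a minimal Go sketch of that second pattern: start several independent operations at once, then handle each result in whatever order it completes. The operation names and delays are invented stand-ins for real I/O:

    package main

    import (
        "fmt"
        "time"
    )

    // fetch simulates an independent operation (e.g. a network call).
    func fetch(name string, d time.Duration, out chan<- string) {
        time.Sleep(d)
        out <- name
    }

    func main() {
        out := make(chan string, 3)

        // Kick off three things at once...
        go fetch("users", 30*time.Millisecond, out)
        go fetch("orders", 10*time.Millisecond, out)
        go fetch("prices", 20*time.Millisecond, out)

        // ...and act on each result as it arrives, in completion order.
        for i := 0; i < 3; i++ {
            fmt.Println("got:", <-out)
        }
    }

In async/await style you can of course do the equivalent, but the fire-everything-then-react shape tends to be a deliberate choice rather than the path of least resistance.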
Yes, because there's usually context. To use his cgo example, cgo is slow compared to C->C and Go->Go function calls.
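As a rough illustration (not a real benchmark), you can see that gap by timing a trivial C function called through cgo against a plain Go call; the per-call figures in the comments are only ballpark:

    package main

    /*
    static int c_add(int a, int b) { return a + b; }
    */
    import "C"

    import (
        "fmt"
        "time"
    )

    //go:noinline
    func goAdd(a, b int) int { return a + b }

    func main() {
        const n = 1000000

        start := time.Now()
        for i := 0; i < n; i++ {
            goAdd(i, i) // Go->Go: roughly a nanosecond or less per call
        }
        goPer := time.Since(start) / n

        start = time.Now()
        for i := 0; i < n; i++ {
            C.c_add(C.int(i), C.int(i)) // Go->C: tens of nanoseconds per call
        }
        cgoPer := time.Since(start) / n

        fmt.Println("Go->Go per call:", goPer)
        fmt.Println("cgo    per call:", cgoPer)
    }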