I'm not sure what the author expects the program to do when there's an internal logic error that has no known cause and no definite recovery path. Further down the article, the author suggests bubbling up the error with a result type, but you can only bubble it up so far before you have to get rid of it one way or another. Unless you bubble everything all the way to the top, but then you've just reinvented unchecked exceptions.
At some level, the simplest thing to do is to give up and crash if things are no longer sane. After all, there's no guarantee that 'unreachable' recovery paths won't introduce further bugs or vulnerabilities. Logging can typically be done just fine within a top-level exception handler or panic handler in many languages.
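To make that concrete, here is a minimal Rust-flavored sketch (all names made up): bubble with Result while some caller can plausibly react, and let anything beyond that hit a top-level panic hook that logs once and lets the process die.

```rust
use std::collections::HashMap;
use std::panic;

#[derive(Debug)]
enum ConfigError {
    Missing(&'static str),
    Invalid(&'static str),
}

// Bubble the recoverable part up with Result; the caller decides what to do.
fn read_port(cfg: &HashMap<String, String>) -> Result<u16, ConfigError> {
    let raw = cfg.get("port").ok_or(ConfigError::Missing("port"))?;
    raw.parse().map_err(|_| ConfigError::Invalid("port"))
}

fn main() {
    // Top-level panic hook: log once, then let the process die.
    panic::set_hook(Box::new(|info| {
        eprintln!("fatal: {info}"); // real code would write to a proper logger
    }));

    let cfg = HashMap::new();
    // At the top there is nowhere left to bubble to: give up and crash.
    let port = read_port(&cfg).unwrap_or_else(|e| panic!("bad config: {e:?}"));
    println!("listening on {port}");
}
```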
Does anyone have any good resources on how to get better at doing "functional core imperative shell" style design? I've heard a lot about it, contrived examples make it seem like something I'd want, but I often find it's much more difficult in real-world cases.
Random example from my codebase: I have a function that periodically sends out reminders for usage-based billing customers. It pulls customer metadata, checks the customer type, and then based on that it computes their latest usage charges, and then based on that it may trigger automatic balance top-ups or subscription overage emails (again, depending on the customer type). The code feels very messy and procedural, with business logic mixed with side effects, but I'm not sure where a natural separation point would be -- there's no way to "fetch all the data" up front.
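The closest I've gotten to the pattern is pushing the decisions into a pure function that returns "what should happen" as plain data, and keeping the fetches and effects in a thin loop around it. A rough sketch, with all names hypothetical rather than my real code:

```rust
// Hypothetical billing domain types, just to show the shape of the split.
enum CustomerType { Prepaid, Postpaid }

struct Customer { id: u64, kind: CustomerType, balance_cents: i64 }
struct Usage { charges_cents: i64 }

// What the shell should go and do, described as data.
enum Action {
    TopUpBalance { customer_id: u64, amount_cents: i64 },
    SendOverageEmail { customer_id: u64, charges_cents: i64 },
    Nothing,
}

// Functional core: no I/O, trivially unit-testable.
fn decide(customer: &Customer, usage: &Usage) -> Action {
    match customer.kind {
        CustomerType::Prepaid if usage.charges_cents > customer.balance_cents => Action::TopUpBalance {
            customer_id: customer.id,
            amount_cents: usage.charges_cents - customer.balance_cents,
        },
        CustomerType::Postpaid if usage.charges_cents > 0 => Action::SendOverageEmail {
            customer_id: customer.id,
            charges_cents: usage.charges_cents,
        },
        _ => Action::Nothing,
    }
}

// Imperative shell: fetches and effects live here, interleaved with pure calls.
fn run_reminders(customers: Vec<Customer>) {
    for customer in customers {
        let usage = fetch_usage(customer.id); // side effect
        match decide(&customer, &usage) {     // pure decision
            Action::TopUpBalance { customer_id, amount_cents } => top_up(customer_id, amount_cents),
            Action::SendOverageEmail { customer_id, charges_cents } => email_overage(customer_id, charges_cents),
            Action::Nothing => {}
        }
    }
}

// Stubs standing in for the billing API / mail service.
fn fetch_usage(_id: u64) -> Usage { Usage { charges_cents: 0 } }
fn top_up(_id: u64, _cents: i64) {}
fn email_overage(_id: u64, _cents: i64) {}

fn main() { run_reminders(Vec::new()); }
```

It still feels like the shell ends up carrying real logic once later fetches depend on earlier decisions, which is exactly where the contrived examples stop helping.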
The worst file I ever inherited to work on was the ObjC class for Instagram’s User Profile page. It looked like it’d been written by a JavaScript fan. There were no types in the whole file, everything was an ‘id’ (aka void*) and there were ‘isKindOfClass’ and null checks all over the place. I wanted to quit when I saw it. (I soon did).
My code is peppered with `assert(0)` for cases that should never happen. When one trips, I figure out why it happened and fix it.
This is a basic programming technique.
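In Rust terms it would look roughly like this (my actual code uses C-style `assert(0)`; this is just the equivalent shape):

```rust
fn day_name(day: u8) -> &'static str {
    match day {
        1 => "Mon", 2 => "Tue", 3 => "Wed", 4 => "Thu",
        5 => "Fri", 6 => "Sat", 7 => "Sun",
        // Callers promise 1..=7. If this ever trips, crash loudly,
        // then go figure out why it happened and fix it.
        _ => unreachable!("invalid day {day}"),
    }
}

fn main() {
    println!("{}", day_name(3));
}
```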
You can write in the type-heavy language, with the nullable type and the carefully-thought-through logic. Or you can use the dynamic language and accept the likelihood that it will crash. The issue is not “you are a bad coder, and should be guilty” but that there is a cost to a crash and a cost to moving wholesale to Haskell, or perhaps more realistically to typed Python, and those costs are quantifiable; and perhaps sometimes the throwaway code that has made it to production is on the right side of the cost curve.
Minor nit: this should be mutable state and lifetimes. I worked with Rust for two years before recently working with Zig, and I have to say opt-in explicit lifetimes without XOR mutability requirements would be a nice combo.
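For anyone who hasn't run into the term: the XOR-mutability rule is "any number of shared borrows, or exactly one mutable borrow, but never both at once". A minimal example of what that combo would relax:

```rust
fn main() {
    let mut items = vec![1, 2, 3];
    let first = &items[0]; // shared borrow is still live below...
    // items.push(4);      // ...so this mutable borrow would be rejected by the compiler
    println!("{first}");
}
```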
[1]: https://learn.microsoft.com/en-us/dotnet/csharp/nullable-ref...
Functional languages like ML, Haskell, and Lisp dialects have had no lies built in for decades, and it's good to see mainstream languages (Java, TS, C++, etc.) catching up as well.
There are also cute benefits to having strong schemas for your API -- for example, such an endpoint can be exposed as an MCP tool for LLMs more or less automatically.
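For illustration, "strong schema" here can be as simple as a request type the wire format is derived from (hypothetical names, assuming serde with the derive feature):

```rust
use serde::{Deserialize, Serialize};

// Hypothetical endpoint payload: the schema is the struct definition itself,
// so docs, clients, or MCP wrappers can be generated from one place.
#[derive(Serialize, Deserialize)]
struct CreateInvoiceRequest {
    customer_id: u64,
    amount_cents: i64,
    memo: Option<String>, // optional fields are explicit, not accidental nulls
}
```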
But it doesn't always make sense -- e.g. a language for large-scale linear algebra, or a language for web GUIs, might not be the best language to compile itself.
> The compiler is always angry. It's always yelling at us for no good reason. It's only happy when we surrender to it and do what it tells us to do. Why do we agree to such an abusive relationship?
Programming languages are a formal notation for the execution steps of a computing machine. A formal system is always built around rules, and not following the rules is an error, in this case a malformed statement/expression. It's like writing: afjdla lkwcn oqbcn. Yes, they are characters, but they're not English words.
Apart from the syntax, which is a formal system on its own, the compiler may have additional rules (like a type system). And you can add even more rules with a static analysis tool (linter). Even though there may be false positives, failing one of those usually means that what you wrote is meaningless in some way. It may run, but it can have unexpected behavior.
Natural languages have a lot of tolerance for ambiguous statements (which people may not even be aware of if they share the same metaphor set). But a computer has none: you either follow the rules, or you do not and you get an error.
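A tiny example of following the grammar while breaking the extra rules: this parses fine, yet the type checker rejects it, because as written it is meaningless:

```rust
fn main() {
    let n: u32 = "forty-two"; // well-formed syntax, but a &str is not a u32
    println!("{n}");
}
```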
“Learn to stop worrying and love the bomb” was definitely a process I had to go through moving from JavaScript to Typescript, but I do mostly agree with the author here wrt convention. Some things, like using type names as additional levels of context - UserUUID and ItemUUID each alias UUID, which in turn is just an alias for String - have occurred to me naturally, even.
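Sketched in Rust rather than TypeScript, the same trick is just a chain of plain aliases; the extra names are context for the reader, since the compiler still sees one String underneath (a newtype wrapper would be the next step if mixing them up should be a compile error):

```rust
// Plain aliases: extra context for the reader, though the compiler still
// treats all three as the same String type.
type Uuid = String;
type UserUuid = Uuid;
type ItemUuid = Uuid;

fn charge(user: UserUuid, item: ItemUuid) {
    println!("charging user {user} for item {item}");
}

fn main() {
    charge("u-123".to_string(), "i-456".to_string());
}
```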
Zig is one. For that matter, standard C has no exceptions.
Experience showed that normal users will not shy away from using the loophole, but rather enthusiastically grab on to it as a wonderful feature that they use wherever possible. This is particularly so if manuals caution against its use.
[...]
The presence of a loophole facility usually points to a deficiency in the language proper, revealing that certain things could not be expressed.
Wirth's use of loophole most closely aligns with the unchecked casts that the article uses. I don't think exceptions amount to lying to the compiler. They amount more to assuming for the sake of contradiction, which is not quite lying (e.g., AFSOC is a valid proof technique, but proofs can be wrong). Null as a form of lying is not the fault of the programmer; that's more the fault of the language, so again it doesn't feel like lying.
In C++, memory management has not been a pain point for many years, and you basically don't need to do it at all if you don't want to. The standard library takes care of it well enough - with owning containers and smart pointers.
> And Rust is famous for its optimizations in the style of "zero cost abstractions".
No, it isn't that famous for those. The safety and no-UB constraints prevent a lot of that.
By the way, C++, which is more famous for them, still struggles in some cases. For example, ABI restrictions prevent passing unique_ptr's via single registers, see: https://stackoverflow.com/q/58339165/1593077
I had an issue on my local computer system yesterday; manjaro would not boot with a new kernel I compiled from source. It would freeze at the boot menu, which had never happened before. Anyway. I installed linuxmint today and went on to actually compile a multitude of things from source. I finally finished compiling mesa, xorg-server, ffmpeg, mpv, gtk3 + gtk4 - and the prior dependencies (llvm etc...). So I am finally almost finished.
I had to invest quite a lot of time hunting for dependencies. The most recent one was glad2 for libplacebo. It turns out "pip install glad2" suffices here, but getting there wasn't so trivial. The project page on the pip website was virtually useless; at first I ran "pip install glad" instead, which was too old, and it took me perhaps a full minute or more to realise it.
I am tapping into the LFS and BLFS webpages (Linux From Scratch), which help a lot but are not perfect. So much information is not described, and people have to know what they are doing. You can say this is fair, as this is more for advanced users. Ok. The problem is ... so many things that compilers do are not well described, or at least you cannot easily find high-quality documentation. Google search is almost useless now; AI often hallucinates and flat-out lies to you, or tells you trivia you already know. We kind of lose quality here. It's as if everything got dumbed down.
Meanwhile more and more software is required to build other software. Take mesa. Now I need not only LLVM but also the whole spirv stack. And shaderc. And lots more. And also rust - why is rust suddenly such a huge dependency? Why is there such a proliferation of programming languages? Ok, perhaps C and C++ are no longer the best languages, but WHY is the whole stack constantly expanding?
We worship complexity. The compilers also become bigger and bigger.
About two days ago I cloned gcc from https://github.com/gcc-mirror/gcc. The .tar.xz sits at 3.8 GB. Granted, regular tarball releases are much smaller, e.g. the 15.1.0 tar.xz is 97 MB (at https://ftp.gnu.org/gnu/gcc/?C=M;O=D). But still. These things keep getting bigger. gcc-7.2.0.tar.xz from 9 years ago was 59 MB; almost twice the size now in less than 10 years. And that's really just like all the other software too. We ended up worshipping more and more bloat. Nobody cares about size. Now one can say "this is just static code", but this is expanded and it just keeps on getting bigger. Look at LLVM. How to compile this beast: https://www.linuxfromscratch.org/blfs/view/svn/general/llvm.... - and this will only get bigger and bigger and bigger.
So, back to "are compilers your best friend"? I am not sure. We seem to have the problem of more and more complexity creeping in at the same time, and everyone seems to think this is no issue. I believe there are issues. Take slackware: it was basically maintained by one person. This may not be the primary reason, but slackware slowed down a lot in the last few years; perhaps maintaining all of that now requires a team of people. Older engineers cared about size due to constraints. Now that the constraints matter less, bloat became the default.