First of all, the fixed points are LITERALLY NOT FIXED POINTS. They're decimal floats. Fixed-point numbers are just integers with an implicit scale factor that gets re-applied when multiplying or dividing. There is no exponent field, no nothing. The author seems to have taken the idea that "fixed point allows for precise calculations of monetary values" to mean that fixed point must be decimal. It isn't. That section of the book contradicts itself constantly, and the code is wrong too.
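For reference, real fixed point is nothing more than an integer with an agreed-upon scale, and the scale only has to be compensated for on multiplication and division. A minimal sketch, assuming a decimal scale of 100 (two fractional digits); this is my illustration, not the book's code:

    #include <stdint.h>
    #include <stdio.h>

    /* Fixed point with an implicit scale of 100 (two decimal digits).
       There is no exponent field: the scale is a convention, not data.
       Overflow handling is omitted to keep the sketch short. */
    typedef int64_t fix100;
    #define FIX100_SCALE 100

    static fix100 fix100_mul(fix100 a, fix100 b) {
        /* a and b each carry one factor of 100, the product carries two,
           so divide one back out. */
        return (a * b) / FIX100_SCALE;
    }

    static fix100 fix100_div(fix100 a, fix100 b) {
        /* Pre-scale the dividend so the quotient keeps its factor of 100. */
        return (a * FIX100_SCALE) / b;
    }

    int main(void) {
        fix100 price = 1999;   /* 19.99 */
        fix100 qty   = 300;    /*  3.00 */
        fix100 total = fix100_mul(price, qty);
        printf("%lld.%02lld\n",
               (long long)(total / FIX100_SCALE),
               (long long)(total % FIX100_SCALE));  /* prints 59.97 */
        return 0;
    }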
Also, an ordered vector is used to implement a map/set, because:
> Most people would likely instinctively reach for hash tables, and typically spend the next few months researching optimal hash algorithms and table designs.
> A binary searched vector is as simple as it gets and performs pretty well while being more predictable.
A basic hash table or hash set[1] is both simpler and faster than this solution. And I don't see what's stopping someone from spending the next few months researching optimal dynamic-array growth and search algorithms instead. This line of reasoning just doesn't make any sense.
And "Once nice advantage is that since they don't need any infrastructure, they're comparably cheap to create." What? It needs a dynamic array!
    #define hc_task_yield(task)       \
        do {                          \
            task->state = __LINE__;   \
            return;                   \
            case __LINE__:;           \
        } while (0)
That's just diabolical. I would not have thought to write "case __LINE__". In a macro, using __LINE__ twice expands to the same value, the line where the macro is invoked, even if the macro definition spans multiple lines. It makes sense, but TIL.

Same thing people said about other people not compiling by hand lol.
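To make the __LINE__ trick concrete, here is roughly how such a yield macro gets used: the function body is wrapped in a switch on the saved state, so returning and re-entering jumps back to the case label recorded by the last yield. The struct and function names below are my guesses, not the article's code:

    #include <stdio.h>

    struct hc_task { int state; };

    #define hc_task_yield(task)        \
        do {                           \
            (task)->state = __LINE__;  \
            return;                    \
            case __LINE__:;            \
        } while (0)

    static void step(struct hc_task *task) {
        switch (task->state) {
        case 0:
            puts("step one");
            hc_task_yield(task);  /* saves this line, returns; next call resumes here */
            puts("step two");
            hc_task_yield(task);
            puts("step three");
        }
    }

    int main(void) {
        struct hc_task t = { 0 };
        step(&t);  /* prints "step one"   */
        step(&t);  /* prints "step two"   */
        step(&t);  /* prints "step three" */
        return 0;
    }

Each invocation of the macro sits on its own source line, so each yield gets a distinct case label, and both __LINE__ uses inside one expansion agree because they expand at the invocation site.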
I love C because it doesn't make my life very inconvenient to protect me from stubbing my toe in it. I hate C when I stub my toe in it.
When your computer is a PDP-11; otherwise it is a high-level systems language like any other.
Besides, the operations are all wrong and only work for trivial values of the exponents, like 0, 1 and 2.
First of all, those languages do not "help" with "reducing" some classes of bugs. They often remove them entirely.
Then, even assuming that a safe language with unsafe regions (Rust, C#, etc.) would not give you comparable flexibility at a fraction of the risk... if your flexible, effortless solution contains entire classes of bugs, then there is no point in comparing "effort". You should at least account for the effort of shipping software with high confidence that those bugs are not there.
Is this still true? MSVC is pretty good at compiling C++ nowadays.
> The truth is that any reasonably complicated software system created by humans will have bugs, regardless of what technology was used to create it.
"Drivers wearing seatbelts still die in car accidents and in some cases seatbelts prevent drivers from getting out of the wreckage so we're better off without them." This is cope.
> Using a stricter language helps with reducing some classes of bugs, at the cost of reduced flexibility in expressing a solution and increased effort creating the software.
...and a much smaller effort debugging the software. A logic error is much easier to reason about than memory corruption or a race condition on shared memory. The time you spend designing your system and handling errors up front pays dividends later, when the inevitable errors show up.
I'm not saying that all software should be rewritten in memory-safe languages, but I'd rather those who choose to use the only language where this kind of error regularly happens be honest about it.