For many shops it's too much effort for the payoff. Unless you work on medical devices or aerospace, your situation will warrant only some of these things, or doing them to a lesser degree. Part of our role is to decide and recommend what's appropriate.
This is a common misconception.
> Limit function length: Keep functions concise, ideally under 70 lines. Shorter functions are easier to understand, test, and debug. They promote single responsibility, where each function does one thing well, leading to a more modular and maintainable codebase.
Say you have a process that is single-threaded and does a lot of stuff that has to happen step by step. A new dev comes in and starts splitting everything it does into 12 functions, because _a function should do one thing!_ Even better, they start putting stuff in various files because the files are getting too long.
Now you have 12 functions, scattered over multiple packages, and the order of things is all confused; you have to debug through to see where it goes. They're used exactly once, and only as part of one long process. You've just increased the cognitive load of dealing with your product by a factor of 12. It's downright malignant.
Code should be split so that state is isolated, and business processes (intellectual property) are also self-contained and testable. But don't buy into this "70 lines" rule. It makes no sense. 70 lines of Python isn't the same as 70 lines of C, for starters. If code is sequential, always running in that order, and reads like a long script, that's because it is one!
Focus on separating pure code from stateful code; that's the key to large, maintainable software! And choose composition over inheritance. These things weren't clear to me in my first 10 years, but after 30 years I've come to those conclusions. I hope other old-timers can chime in on this.
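A minimal sketch of that pure/stateful split (all names here are hypothetical, not from any real codebase): the pricing math is a pure function you can test without mocks, while a thin stateful shell owns the mutable bits and composes the pure core rather than inheriting anything.

```python
def apply_discount(price_cents: int, percent: int) -> int:
    """Pure: same inputs always give the same output, no side effects."""
    return price_cents - (price_cents * percent) // 100

class CheckoutService:
    """Stateful shell: holds the mutable state, delegates math to pure code."""
    def __init__(self, db: dict):
        self.db = db  # injected dependency: composition, not inheritance

    def checkout(self, item_id: str, percent: int) -> int:
        price = self.db[item_id]                # stateful read
        total = apply_discount(price, percent)  # pure core
        self.db["last_total"] = total           # stateful write
        return total
```

The pure core can be tested exhaustively on its own; the shell only needs a smoke test with a plain dict standing in for the real store.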
The length of functions in terms of line count has absolutely nothing to do with "a more modular and maintainable codebase", as explained in the manifesto.
Just like "I committed 3,000 lines of code yesterday" has nothing to do with productivity. And a red car doesn't go faster.
A robust design is one that is not only correct, but also preserves functionality even when boundary conditions deviate from the ideal. It's a mix of stability, predictability, and fault tolerance. "Reliable" can probably be used as a synonym.
At the same time, in every industry except CS, "safety" has the specific meaning of not causing injuries to the user.
In the design of a drill, for example, if the motor is guaranteed to spin at the intended rpm independently of orientation, temperature and state of charge of the battery, that's a robust design. You'll hear the word "safe" only if it has two triggers to ensure both hands are on the handles during operation.
The most important advice one can give to programmers is to
1. Know your problem domain.
2. Think deeply about a conceptual model that captures the relevant aspects of your problem domain.
3. Be anal about naming your concepts. Thinking about naming oftentimes feeds back into (1) and (2), forming a loop.
4. Use a language and type system powerful enough to implement the previous points.

I work on financial data processing, where you genuinely have 15 sequential steps that must run in exact order: parse the statement, normalize dates, detect duplicates, match patterns, calculate VAT, validate totals, etc. Each step modifies state that the next step needs.
Splitting these into separate functions creates two problems: (1) you end up passing huge context objects between them, and (2) the "what happens next" logic gets scattered across files. Reading the code becomes an archaeology exercise.
What I've found works better: keep the orchestration in one longer function but extract genuinely reusable logic (date parsing, pattern matching algorithms) into helpers. The main function reads like a recipe - you can see the full flow without jumping around, but the complex bits are tucked away.
70 lines is probably fine for CRUD apps. But domains with inherently sequential multi-step processes sometimes just need longer functions.
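A sketch of that "recipe" shape (the `Statement` fields, step names, and VAT rate are all hypothetical, just to show the structure): one longer orchestrator you can read top to bottom, with the genuinely reusable bits pulled into helpers.

```python
from dataclasses import dataclass, field

@dataclass
class Statement:
    raw_lines: list
    entries: list = field(default_factory=list)
    vat_cents: int = 0
    total_cents: int = 0

def parse_lines(stmt: Statement) -> None:
    """Genuinely reusable helper: raw text -> integer amounts."""
    stmt.entries = [int(line) for line in stmt.raw_lines]

def calculate_vat(stmt: Statement, rate_percent: int = 20) -> None:
    """Another reusable helper, testable on its own."""
    stmt.vat_cents = sum(stmt.entries) * rate_percent // 100

def process_statement(raw_lines: list) -> Statement:
    """One longer orchestrator: the full flow reads top to bottom."""
    stmt = Statement(raw_lines)
    parse_lines(stmt)                   # 1. parse statement
    stmt.entries.sort()                 # 2. normalize (inline: used once)
    calculate_vat(stmt)                 # 3. VAT via reusable helper
    stmt.total_cents = sum(stmt.entries) + stmt.vat_cents  # 4. compute totals
    return stmt
```

The "what happens next" logic lives in one place, and no huge context object gets threaded across files.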
I don't know how this philosophy is applied at TigerBeetle. When I establish engineering guidelines I try to frame them as exactly that: guidelines. The purpose is to spawn defensible reasoning, and to trigger reflection.
For example, I might say this:
We use a heuristic of 70 lines not as a hard limit, but as a "tripwire." If you cross it, you are not 'wrong,' but you are asked to pause and consider if you're introducing unintentional complexity. If you can justify it, keep it—there's no need to code golf.
"Style," "philosophy," "guides": they're all well-meaning and often well-informed, but as the developer you should be in command of the decision, and not forget your own expertise around cohesion, cognitive load, or any functional necessities. There are staunch believers in gating deploys based solely on LOC, I'm sure... I like the idea of finding ways to transparently trigger cognitive provocations so that everyone steers towards better code without absolutes.
Do you not use word wrap? The downside of this rule is that vertical scrolling increases (yes, scrolling is easier, but with wrap you can make that decision locally) and accessibility is reduced (monitors are wide, not tall). That's especially an issue when this style is applied to comments: you can't see all the code on a single screen because of multiple lines of comments written in that long, formal, grammatically correct style.
Similarly, with
> Limit function length: Keep functions concise, ideally under 70 lines.
> and move non-branching logic to helper functions.
you break the accessibility of the logic: instead of reading linearly what's going on, you have to jump around (though popups could help a bit). Whereas if the code stays inline, you can use block collapse to hide those blocks without losing their locality, and expand only the one you need.
If you're risking money and time, can you really justify this?
- 'writing code that works in all situations'
- 'commitment to zero technical debt'
- 'design for performance early'
As a whole, this is not just idealist, it's privileged.
In languages with TCO (e.g. Haskell, Scheme, OCaml), the compiler can rewrite tail calls into a loop.
Some algorithms are conceptually recursive, and even though you can rewrite them, the iterative version would be unreadable: backtracking solvers, tree parsing, quicksort's partition and subproblems, divide-and-conquer, tree manipulation, compilers, etc.
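To make the TCO point concrete, here's a sketch in Python (which notably has no TCO, so the recursive version blows the stack for large `n`): the loop is exactly the mechanical rewrite a TCO-capable compiler performs for you.

```python
def sum_to_rec(n: int, acc: int = 0) -> int:
    """Tail-recursive: the recursive call is the very last thing done."""
    if n == 0:
        return acc
    return sum_to_rec(n - 1, acc + n)  # tail call: nothing happens after it

def sum_to_loop(n: int, acc: int = 0) -> int:
    """The same function after the rewrite a TCO compiler would apply."""
    while n != 0:                      # the parameters become loop variables
        n, acc = n - 1, acc + n
    return acc
```

The rewrite is trivial precisely because the call is in tail position; the genuinely recursive algorithms mentioned above (backtracking, divide-and-conquer) don't have that shape, which is why their iterative versions get ugly.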
If this topic floats your boat, go look up the NASA coding standards. For a few projects, I tried to follow a lot of their flow control recommendations, and will still reach for: `while ... && LIMIT > 0` in some situations.
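A sketch of that bounded-loop pattern (function and constant names are hypothetical): every potentially unbounded loop gets a hard iteration cap, so a condition that never flips can't hang the program.

```python
MAX_POLLS = 5

def poll_until_ready(is_ready, limit: int = MAX_POLLS) -> bool:
    """Bounded loop: the Python equivalent of `while !ready && limit > 0`."""
    for _ in range(limit):
        if is_ready():
            return True
    return False  # hit the safety cap without success, caller must handle it
```

Returning a success flag (rather than looping forever) forces the caller to decide what hitting the cap means.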
Still a huge fan of including some type info in the variable name. E.g. `duration_s` and `limit_ms` make it extremely clear that you shouldn't mix math on those integers.
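A tiny sketch of that unit-suffix convention (the values are arbitrary): the names force the conversion to be written out, so mixed-unit math jumps out in review.

```python
duration_s = 90          # seconds
limit_ms = 120_000       # milliseconds

# Wrong math is visible at a glance: `duration_s > limit_ms` would
# compare seconds to milliseconds.
duration_ms = duration_s * 1000          # convert once, at the boundary
within_limit = duration_ms <= limit_ms
```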
https://github.com/tigerbeetle/tigerbeetle/blob/main/docs/TI...
1. 100% code coverage
2. 100% branch coverage
3. 100% lint (without noqa)
4. 100% type-check pass (for Python/JS)
5. 100% documentation coverage
6. All functions with complexity less than 5
7. All functions shorter than 70 lines
8. All files shorter than 1,000 lines

These make code high quality, and quality of life is directly proportional to the quality of your code.
Doing good design is of course important, but on the other hand software design is often iterative because of unknown unknowns. Sometimes it can be better to create quick prototype(s) to see which direction is best before actually "doing it right", instead of spending effort designing something that in the end won't be built.
Why?
> “Rugged” describes software development organizations that have a culture of rapidly evolving their ability to create available, survivable, defensible, secure, and resilient software.
https://github.com/rugged-software/rugged-software.github.io
The usual BS... yes, shorter functions are easier to understand by themselves, but what matters, especially when debugging, is how the whole system works.
Edit: care to refute? Several decades of experience has shown me what happens. I'm surprised this crap is still being peddled.
Any recommendations for other coding philosophies or "first principle" guides? I know of "extreme programming" but not much else.
> Do it right the first time
So easy, why didn't I think of that!? /s
Reminds me of the mental health meme of telling depressed people to just be happier instead.
I don't necessarily disagree with a lot in this philosophy, but much of it is puffery if not accompanied by practical positive and negative examples. If a junior with little experience reads this, I'm not sure whether they'll be better or worse off.
For example, "Design for performance early" is dangerous if it leads to premature optimization. But that's not mentioned. Practical positive and negative examples that illustrate the balance between these two concerns would make the advice actionable.