For instance, the article itself suggests using early/premature returns, which are sometimes compared to "goto" because they make the control flow less obvious/predictable (as paxcoder mentioned here). Intermediate variables, just like small functions, can easily complicate reading the code: in the example from the article, one would have to look up what "isSecure" means, while "(condition4 && !condition5)" shows it at once, and an "is secure" comment could assist skimming. As for HTTP codes, those are standardized and independent of the content, unlike custom JSON codes: most developers working with HTTP would recognize them without additional documentation. And it goes on and on: people view different things as good practices and as simpler, depending (at least in part) on their backgrounds. If one considers simplicity, perhaps it is best to also treat it as subjective, taking into account to whom it is supposed to look simple. I think we sometimes try to view "simple" as something more objective than "easy", but unless it is actually measured with something like Kolmogorov complexity, the objectivity does not seem to be there.
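(To make the contrast concrete, a minimal sketch; the condition names are the article's placeholders, declared here as hypothetical booleans:)

```
declare const condition4: boolean, condition5: boolean;
declare function proceed(): void;

// Extracted into a named intermediate: easy to skim, but the reader
// must jump to the definition to learn what "secure" actually checks.
const isSecure = condition4 && !condition5;
if (isSecure) proceed();

// Inlined with a comment: the whole predicate is visible in place.
if (condition4 && !condition5 /* is secure */) proceed();
```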
Basically, you should aim to minimise complexity in software design, where, importantly, complexity is defined as "how difficult it is to make changes". And "how difficult" is largely determined by the amount of cognitive load necessary to understand the system.
I'm both bothered and intrigued by the industry returning to what I call "pile-of-if-statements architecture". It's really easy to think it's simple, really easy to think you understand it, and really easy to close your assigned Jira tickets; so I understand why people like it.
People get assigned a task; they look around, find a few places they think are related, and add some if-statements to the pile. Then they test; if the tests fail, they add a few more if-statements. Eventually they send it to QA; if QA finds a problem, another quick if-statement will solve it. It's released to production, and it works for a high enough percentage of cases that the failures don't come to your attention. There's approximately 0% chance the code is actually correct. You just add if-statements until you asymptotically approach correctness. If you accidentally leak the personal data of millions of people, you won't be held responsible, and the cognitive load stays low the whole time.
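(A hypothetical sketch of the accretion process; every name here is invented for illustration:)

```
interface Order {
  total: number;
  hasCoupon: boolean;
  isGiftCard: boolean;
  customerId: string;
}

// Each `if` below was added in response to a failing test, a QA ticket,
// or a production incident; none of them comes from a model of the domain.
function applyDiscount(order: Order): number {
  if (order.isGiftCard) return 0;                          // prod incident
  let discount = order.total * 0.1;
  if (order.hasCoupon) discount = Math.min(discount, 20);  // QA finding
  if (order.customerId === 'ACME') discount *= 2;          // salesman's promise
  return discount;
}
```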
But the thing is... I'm not sure there's a better alternative.
You can create a fancy abstraction and use a fancy architecture, but I'm not sure this actually increases the odds of the code being correct.
Especially in corporate environments--you cannot build a beautiful abstraction in most corporate environments because the owners of the business logic do not treat the business logic with enough care.
"A single order ships to a single address, keep it simple, build it, oh actually, a salesman promised a big customer, so now we need to make it so a single order can ship to multiple addresses"--you've heard something like this before, haven't you?
You can't build careful bug-free abstractions in corporate environments.
So, is pile-of-if-statements the best we can do for business software?
Microsoft had three personas for software engineers that were eventually retired for a much more complex persona framework called people in context (the irony in relation to this article isn’t lost on me).
But those original personas still stick with me and have been incredibly valuable in my career for understanding and working effectively with other engineers.
Mort - the pragmatic engineer who cares most about the business outcome. If a "pile of if statements" gets the job done quickly and meets the requirements, Mort ships it. Mort became a pejorative term at Microsoft, unfortunately. VB developers were often Morts; Access developers were often Morts.
Elvis - the rockstar engineer who cares most about doing something new and exciting: being the first to use the latest framework or technology, getting visibility and accolades for innovation. The code might be a little unstable, but move fast and break things, right? Elvis also cares a lot about the perceived brilliance of their code. 4 layers of abstraction? That must take a genius to understand, and Elvis understands it because they wrote it, so now everyone will know they are a genius. For many engineers at Microsoft (especially early in career) the assumption was (and largely still is) that Elvis gets promoted, because Elvis gets visibility and is always innovating.
Einstein - the engineer who cares about the algorithm. Einstein wants to write the most performant, the most elegant, the most technically correct code possible. Einstein cares more if they are writing “pythonic” code than if the output actually solves the business problem. Einstein will refactor 200 lines of code to add a single new conditional to keep the codebase consistent. Einsteins love love love functional languages.
None of these personas represent a real engineer - every engineer is a mix, and a human with complex motivations and perspectives - but I can usually pin one of these 3 as the primary within a few days of PRs and a single design review.
The second component, frequency of change, is equally important: when faced with tradeoffs, we can push high cognitive load into components edited less frequently (e.g. lower down the stack) in exchange for lower cognitive load in the most frequently edited components.
It's also why I urge junior engineers not to rely on AI so much: even though it makes writing code much faster, it prevents them from learning the quirks of the codebase, and eventually they'll lose the ability to write code on their own.
So in software development there may be an argument for always structuring projects the same way. Standards are good, even when they're bad, because one of their main benefits is familiarity.
Reducing cognitive load comes from the code that you don't have to read. Boundaries between components with strong guarantees let you reason about a large amount of code without ever reading it. Making a change (which the article uses as a benchmark) is done in terms of these clear APIs instead of with all the degrees of freedom available in the codebase.
If you are using small crisp API boundaries to break up the system, "smart developer quirks" don't really matter very much. They are visible in the volume, but not in the surface area.
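(A minimal sketch of that distinction, assuming a hypothetical rate-limiter module: the exported function is the surface area, and everything below it is volume that no caller ever reads:)

```
// rateLimiter.ts -- surface area: one function with a strong guarantee.
export function isAllowed(clientId: string, nowMs: number): boolean {
  const bucket = bucketFor(clientId, nowMs);
  if (bucket.tokens === 0) return false;
  bucket.tokens -= 1;
  return true;
}

// Volume: quirky or clever as it may be, it can be rewritten entirely
// without any caller reading a line of it.
const CAPACITY = 10;
const buckets = new Map<string, { tokens: number; refilledAtMs: number }>();

function bucketFor(clientId: string, nowMs: number) {
  const b = buckets.get(clientId) ?? { tokens: CAPACITY, refilledAtMs: nowMs };
  const refill = Math.floor((nowMs - b.refilledAtMs) / 1000); // 1 token/sec
  if (refill > 0) {
    b.tokens = Math.min(CAPACITY, b.tokens + refill);
    b.refilledAtMs = nowMs;
  }
  buckets.set(clientId, b);
  return b;
}
```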
I have a hard time separating the why and the what so I document both.
The biggest offender of "documenting the what" is:
```
x = 4 // assign 4 to x
```
Yeah, don't do that. Don't mix a lot of comments into the code; it makes it ugly to read, and the context switching between code and comments is hard. Instead, do something like:
```
// I'm going to do a thing. The code does the thing.
// We need to do the thing, because the business
// needs a widget and stuff.
setup();
t = setupThing();
t.useThing(42);
t.theWidget({ need: true });
t.alsoOtherStuff();
etc();
etc();
```
Keep the code and comments separate, but stating the what is better than no comments at all, and it does help reduce cognitive load.

"A single page on Doordash can make upward of 1000 gRPC calls (see the interview). For many engineers, upward of a thousand network calls nicely illustrate the chaos and inefficiency unleashed by microservices. Engineers implicitly diff 1000+ gRPC calls with the orders of magnitude fewer calls made by a system designed by an architect looking at the problem afresh today. A 1000+ gRPC calls also seem like a perfect recipe for blowing up latency. There are more items in the debit column. Microservices can also increase the costs of monitoring, debugging, and deployment (and hence cause greater downtime and worse performance)."
For those asking why the author doesn't come up with their own new rules that can then be followed: that would just be trading a problem for the same problem, absentmindedly following rules. Writing accessible code, past a few basic guidelines, becomes tacit knowledge. If you write and read code, you'll learn to love some and hate some. You'll also develop a feel for heavy-handedness. The author said it best:
> It's not imagined, it's there and we can feel it.
We can feel it. Yes, having to make decisions while coding is an uncomfortable freedom. It requires you to be present. But you can get used to it if you try.
It completely removes the stress of doing things repeatedly. I recently had to do something I hadn't done in 2 years. Yep, the checklist/doc on it was 95% correct, but it was no problem fixing the 5%.
If you try to do it algorithmically, you arguably won't find a simple expression. It's often glossed over how readability along one axis can drive complexity along another: when code is composed into bite-size readable chunks, the actual logic easily gets smeared across many (sometimes dozens of) functions, making it very hard to figure out what the code actually does, even though every function checks all the boxes for readability, single responsibility, etc.
E.g. userAuthorized(request) is true, but why is it true? Well, because usernamePresent(request) is true and passwordCorrect(user) is true, both of which also decompose into multiple functions and conditions. It's often a smaller cognitive load to just have all that logic in one place; even if that isn't the local optimum of readability, it may be the global one, because constantly skipping between methods or modules to figure out what is happening is also incredibly taxing.
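(Roughly, the contrast in code; the helper names come from the comment, the bodies are invented:)

```
interface User { name: string; passwordHash: string }
interface Req  { user?: User; password: string }
declare function hash(s: string): string; // hypothetical helper

// Decomposed: each function is small and single-purpose, but answering
// "why is userAuthorized(req) true?" means hopping through all of them
// (and whatever *they* decompose into).
const usernamePresent = (req: Req) => req.user !== undefined && req.user.name !== '';
const passwordCorrect = (req: Req) =>
  req.user !== undefined && hash(req.password) === req.user.passwordHash;
const userAuthorized = (req: Req) => usernamePresent(req) && passwordCorrect(req);

// Inlined: one screen holds the whole answer.
function userAuthorizedInline(req: Req): boolean {
  return req.user !== undefined
      && req.user.name !== ''
      && hash(req.password) === req.user.passwordHash;
}
```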
The issue with this stance is that it's not a zero-sum game. There's no arriving at a point where there isn't any cognitive load on the task you're doing; there will always be some sort of load. Pushing things off so that you reduce your load is how social security databases end up on S3.
Confusion comes from complexity, not from high cognitive load. You can have a high load and still know how it all works. I would word it as: cognitive load increases stress, because you have more things to wrestle with in your head. It doesn't add or remove confusion (unless that's the kind of person you are); it just adds or removes complexity.
An example of a highly complex thing with little to no cognitive load, thanks to conditioning: driving an automobile. A not-complex thing that imparts a huge cognitive load: golf.
I could be adding a new feature six months later, or debugging a customer reported issue a week later. Especially in the latter case, where the pressure is greater and available time more constrained, I love that earlier/younger me was thoughtful enough to take the extra time to make things clear.
That this might help others is lagniappe.
There is a simplifying force: the engineers on the project who care about long-term productivity. Work to simplify the code is rarely tracked or rewarded, which is a problem across our industry. Most codebases I've worked in had some large low-hanging-fruit for increasing team productivity, but it's hard to show the impact of that work so it never gets done.
We need an objective metric of codebase cognitive complexity. Then folks can take credit for making the number go down.
Junior programmers too often make the mistake of thinking the code they write is intended for consumption by the machine.
Coding is an exercise in communication. Either to your future self, or some other schmuck down the line who inherits your work.
When I practice the craft, I want to make sure that years down the line, when I inevitably need to crack the code back open again, I'll understand what's going on. When you solve a problem, create a framework, build a system... there's context you construct in your head as you work the project or needle out the shape of the solution. Strive to clearly convey intent (with a minimum of cognitive load), and where things get more complicated, make it as painless as possible for the next person to take the context that was in your head and reconstruct it in their own. Taking the patterns in your brain and recreating them in someone else's brain is in fact the essence of communication. In practice, this could mean meaningful inline comments or accompanying documentation (e.g. an approach summary, drawings, flowcharts, state-change diagrams, etc.). Use whatever means you have to efficiently achieve that aim. If it helps, think of yourself as a teacher trying to teach a student how your invention works.
On the other end of the spectrum you hear sentences starting with: "It would help me to understand this more easily, if ...".
Guess what happens over time in these teams?
To be fair, the HTTP status line allows for arbitrary informational text, so something like “HTTP/1.1 401 JWT token expired” would be perfectly allowable.
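(For instance, with Node's built-in http module, whose writeHead takes an optional reason phrase; note that HTTP/2 dropped reason phrases entirely, so this is an HTTP/1.1-only nicety:)

```
import { createServer } from 'node:http';

createServer((_req, res) => {
  // The optional second argument to writeHead sets the reason phrase.
  res.writeHead(401, 'JWT token expired', { 'content-type': 'application/json' });
  res.end(JSON.stringify({ error: 'jwt_expired' }));
}).listen(8080);
```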
It is always worth it for a programmer to dwell on what complexity is according to Ousterhout; it is worth it to reason about what Uncle Bob thinks is "clean" code, etc. I don't benefit from either by applying what they say dogmatically, but I improve my taste in what good software is to me by discovering and trying many approaches. Without reading them I might never even have thought of a particular solution, or a particular frame of mind.
In particular, these variables need to be extremely well named; otherwise people reading the code will still need to remember what exactly was abstracted, whenever the wording doesn't exactly fit their vision. E.g.
> isSecure = condition4 && !condition5
More often than not the real proper name would be "shouldBeSecureBecauseWeAlsoCheckedCondition3Before"
To a point, avoiding the abstraction and putting a comment there instead can give better readability. The author's "smart" code could just as well be
```
if (val > someConstant // is valid
&& (condition2 || condition3) // is allowed
&& (condition4 && !condition5) // is secure
) {
...
}
```
Not quite. The human mind has evolved to interpret sensory data and produce the necessary action. Some of that interpretation uses memory to correlate the perceived data with remembered data. That's pretty much it.
Overloading human memory with tons of data unrelated to the context in which the person lives can cause negative effects. I suspect it can also cause faster aging. New experiences and new information are like scales on a tree trunk: as you accumulate more of them, you age more.
https://news.ycombinator.com/item?id=42489645 (721 comments)
I was surprised to find an anti-framework, anti-layering perspective here. The author makes good points: it’s costly to learn a framework, costly to break out of its established patterns, and costly when we tightly couple to a framework’s internals.
But the opposite is also true. Learning a framework may speed up development overall, with developers leaning on previous work. Well-designed frameworks make things easy to migrate, if they are expressive enough and well abstracted. Frameworks prevent bad and non-idiomatic design choices and make things clear to any new coder who is familiar with the framework. They prevent a lot of glue, bad abstractions, cleverness, and non-performant code.
Layering has an indirection cost which did not appeal to me at all as a less experienced developer, but I’ve learnt to appreciate a little layering because it helps make predictable where to look to find the source of a bug. I find it saves time because the system has predictable places for business logic, serialisation, data models, etc.
```
Status code: 200
{ "success": false, "error": "..." }
```
Yeah, if you go through this article and replace most of the places where it mentions "cognitive load" with "complexity," it still makes sense.
To be clear, this isn't a criticism of the article. In fact, there are important differences, like its greater focus on what the developer experiences while handling the complications of the system. But those really interested in this concept may want to learn about complexity too, as there is a lot of great material on it.
Scientists, mathematicians, and software engineers are all really doing similar things: they want to understand something, be it a physical system, an abstract mathematical object, or a computer program. Then, they use some sort of language to describe that understanding, be it casual speech, formal mathematical rigor, scientific jargon -- or even code.
In fact, thinking about it, the code specifying a program is just a human-readable description (or "theory", perhaps) of the behavior of that program, precise and rigorous enough that a computer can convert the understanding embodied in that code into that actual behavior. But, crucially, it's human readable: the reason we don't program in machine code is to maximize our and other people's understanding of what exactly the program (or system) does.
From this perspective, when we write code, articles, etc., we should be highly focused on whether our intended audience would even understand what we are writing (at least, in the way that we, the writer, seem to). Thinking about cognitive load seems to be good, because it recognizes this ultimate objective. On the other hand, principles like DRY -- at least when divorced from their original context -- don't seem to implicitly recognize this goal, which is why they can seem unsatisfactory (to me at least). Why shouldn't I repeat myself? Sometimes it is better to repeat myself!? When should I repeat myself??
If you want to see an example of a fabulous mathematician expressing the same ideas in his field (with much better understanding and clarity than I could ever hope to achieve), I highly recommend Bill Thurston's article "On proof and progress in mathematics" <https://arxiv.org/abs/math/9404236>.
Case in point: Forth. It generally has a heavy cognitive load. However, Forth also enables a radical kind of simplicity. You need to be able to handle the load to access it.
The mind can train to a high cognitive load. It's a nice "muscle" to train.
Should we care about cognitive load? Absolutely. It's a finite budget. But I also think that there are legitimate reasons to accept a high cognitive load for a piece of code.
One might ask "what if you need to onboard mediocre developers into your project?". Hum, yeah, sure. In that case, this article is correct. But being forced to onboard mediocre developers highlights an organizational problem.
Simplicity comes from practice: writing and refactoring large amounts of code thousands of times. People with limited or shallow experience may think they are good at this, but only when they isolate themselves to known patterns of comfort or some giant framework. There is a lot of insecurity there.
Super experienced people, that is, people with lots of practice writing large original applications, don't think like the pretenders. Simplicity is built in, like muscle memory. They just solve the fucking problem and go drink a beer. There is no memorized-pattern nonsense.
The super experienced developers see the pretenders for what they are while the pretenders either can’t see the distinction or just feel hostility at the deviation far outside a memorized convention.
The fact is, despite all the process and pipelines and rituals we've invented to guide how software is made, the best thing leadership can do is to communicate incremental, unambiguous requirements and provide time and space for your engineers to solve the problem. If you don't do that, none of the other meetings and systems and processes and tools will matter.
I'd only lean towards intermediate variables if a) there's lots of smaller conditionals being aggregated up into bigger conditionals which makes line-by-line comments insufficient or b) I'm reusing the same conditional a lot (this is mostly to draw the reader's attention to the fact that the condition is being re-used).
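(A small sketch of case (b), with hypothetical names:)

```
declare const attempt: number;
declare const maxAttempts: number;
declare const lastError: Error;
declare function isFatal(err: Error): boolean;
declare function scheduleRetry(): void;
declare function alertOncall(): void;
declare function logOutcome(o: { canRetry: boolean }): void;

// The same predicate gates several branches; naming it once draws the
// reader's attention to the reuse and keeps all the sites in sync.
const canRetry = attempt < maxAttempts && !isFatal(lastError);

if (canRetry) scheduleRetry();
else alertOncall();
logOutcome({ canRetry });
```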
While the logic behind it sounds reasonable, REST does the exact opposite with the same goal: simplicity and ease of learning, i.e. reduced mental load. I know there are other reasons for REST/SOAP/GraphQL, etc. Still, this makes mental load a somewhat subjective matter to me.
Hopefully someone can learn from this before they spin a complex web that becomes a huge effort to untangle.
I literally relaxed in my body when I read this. It was like a deep sigh of cool relief in my soul.
"Cognitive Load" is a buzzword which is abstract.
Cognitive Load is just one factor of projects, and not the main one.
Focus on solving problems, not Cognitive Load, or other abstract concepts.
Use the simple, direct, and effective method to solve problems.
Cognitive Load is relative, it is a high Cognitive Load for one person, but low cognitive load for another person for the same thing.
The Programmer's Brain: What every programmer needs to know about cognition, by Felienne Hermans
Finding flow while coding is a juggling act to keep things in the Goldilocks zone: not too hard, not too easy.
This is tricky on an individual level and even trickier for a team / project.
Coding is communicating how to solve a problem to yourself, your team, stakeholders and lastly the computer.
The Empathic Programmer?
Balancing a cup on a tray isn't too hard. The skill comes in when you can balance 10 cups, and a tray on top of them, and then ten more cups, and another tray, and a vase on that... each step isn't difficult, but maintaining the structure is. It's like that, but with ideas.
There are separate contexts involved here: the coder, the compiler, the runtime, a person trying to understand the code (context of this article), etc. What's better for one context may not be better for another, and programming languages favor certain contexts over others.
In this case, since programming languages primarily favor making things easier for the compiler and have barely improved their design and usability in 50 years, both coders and readers should employ third-party tools to assist them. AI can help the reader understand the code and help the coder generate clearer documentation and labels, on top of linters, test-driven development, literate documentation practices, etc.
How real is this use case? Unless you switch projects really often, this is like a week per two years.
Perhaps we should focus on solving problems that are hard by nature, not by experience of a developer or other external factors.
I think I'm not smart enough for it. I can't really take anything new away from it, mainly just a message of "we're smart people, and trust us when we say smart things are bad. All the smart sounding stuff you learned about how to program from smart sounding people like us? Lol, that's all wrong now."
Okay, I get the cognitive load is bad, so what's the solution?
"Just do simple dumb stuff, duh." Oh, right... Useful.
The problem is never just the code, or the architecture, or the business, or the cognitive load. It's the mismatch of those things against the people expected to work with them.
Walk into a team full of not-simple engineers, and tell them all what they've been doing is wrong, and they need to just write simple code, some of them will fail, some will walk out, and you'll be no closer to a solution.
I wish I had known the tech world of 20+ years ago, when technical roles were long and stable enough for teams to build their own understanding of a suitable level of complexity. Without that, churn means we all have to aim for the lowest common denominator.
However, if new information needs to be learned, there is also germane cognitive load in the game [0]. It is a nice theory; in practice, however, there is unfortunately no easy way to separate the types of load and look at them totally independently.
[0] https://mcdreeamiemusings.com/blog/2019/10/15/the-good-the-b...
I like what others would call complexity; I always have, and I have been mindful of that from very early on, to a fault, I think, since I no longer trust my intuition.
Is it good to try to turn wizards into bricklayers? Is there no other option?
Certainly gives me some pause for thought in my own work.
Reducing cognitive load doesn't happen in a vacuum where simple language constructs trump abstraction/smart language constructs. Writing code, documents, comments, choosing the right design all depend upon who you think is going to interact with those artifacts, and being able to understand what their likely state of mind is when they interact with those artifacts i.e. theory of mind.
What is high cognitive load is very different, for e.g. a mixed junior-senior-principal high-churn engineering team versus a homogenous team who have worked in the same codebase and team for 10+ years.
I'd argue the examples from the article are not high-cognitive-load abstractions, but the wrong abstractions, which resulted in high cognitive load because they didn't make things simpler to reason about. There's a reason all modern standard libraries ship with standard list/array/set/hashmap/string/date constructs: so we don't have to manually reimplement them. They also give a team using the language (a framework in its own way) a common vocabulary to talk about the nouns and verbs related to those constructs. In essence, they reduce cognitive load once the initial learning phase of the language is done.
Reading through the examples in the article, what likely went wrong is that the abstraction/layering/framework was chosen not because of observation or emergent behavior, but because "it sounds cool", aka cargo-cult programming or resume-driven programming.
If you notice a group of people fumble over the same things over and over again, then try to introduce a new concept (abstraction/framework/function), and notice that it doesn't improve things, or makes them harder to understand even after the initial learning period, then stop doing it! I know, the sunk cost fallacy makes that difficult after you've spent 3 months convincing your PM/EM/CTO that a new framework might help, but then you have bigger problems than high cognitive load / wrong abstractions ;)
In particular, when the shit hits the fan, your max cognitive load tanks. Something people who grumble at the amount of foolproofing I prefer often only discover in a crisis. Because they’re used to looking at something the way they look at it while sipping their second coffee of the day. Not when the servers are down and customers are calling angry.
You’ll note that we only see how the control room at NASA functions in movies and TV when there’s a massive crisis going on, or intrigue. Because the rest of the time it’s so fucking boring nobody would watch it.
Let's take a recipe:
Ingredients:
large bowl
2 eggs
200 grams sugar
500 grams flour
1/2 tsp soda
Steps:
Crack the eggs into a bowl. Add sugar and whisk. Sift the flour. Add the soda.
When following the instructions, you always have to refer back to the ingredients list and search for the quantity, which massively burdens you with "cognitive load". However, if you inline things: Crack 2 eggs into a large bowl. Add 200g of sugar and whisk. Sift in 500g of flour. Add 1/2 tsp of soda.
Much easier to follow!

Also, many developers are suffering from severe cognitive load that is incurred by technology and tooling tribalism. Every day on HN I see complaints about things like 5 RPS scrapers crippling my web app, error handling, et al., and all I can think about is how smooth my experience is from my particular ivory tower. We solved (i.e., completely and permanently) 95% of the problems HN complains about decades ago, and you can find a nearly perfect vertical of these solutions from 2-3 vendors right now. Your ten-man startup not using Microsoft or Oracle or IBM isn't going to make a single fucking difference to these companies. The only thing you win is a whole universe of new problems that you have to solve from scratch again.
The author makes valid points, but they are vacuous and come with no concrete alternatives.
Many engineering articles disappoint me in this way, I get hyped by all the “don’t dos”, but the “do dos” never come.
What is this bug in software people's brains that keeps thinking "I can come up with a perfect idea that is never wrong" ? Can a psychologist explain this to me please?
Like, scientists know this is dumb. The only way something can be perceived as right, scientifically, is if lots of people independently test an idea, over and over and over and over again, and get the same result. And even then, they just say it's true so far.
But software people over here like "If I spend 15 minutes thinking about an idea, I can come up with a fundamental principle of everything that is always true forever." And sadly the whole "fundamental principle" is based in ignorance. Somebody heard an interesting-sounding term, never actually learned what it meant, but decided to make up their own meaning for it, and find anything else in their sphere (software) that backs up their theory.
If they'd at least quoted any of the academic research on cognitive load from the past 35 years, maybe I'd be blowing this out of proportion. But nope. This is literally just a clickbait rant, based on vibes, backed up by quotes from blogs. The author doesn't seem to understand cognitive load at all, and their descriptions of what it is, and what you should do in relation to it, are all wrong. The article doesn't even mention all three types of cognitive load. And one of the latest papers on the subject (Orru G., Longo L. (2019)) basically came to the conclusion that 1) the whole thing is very complex, and 2) all the previous research might be bunk, or at least need brand-new measurement methods. So why is anyone taking all this as if it were fact?
But I'm not really bothered by the ignorance. It's the ego that kills me. The idea that these random people who know nothing about a subject are rushing to debate this, as if this idea, or these people's contributions, have merit, just because they think they're really smart.
Anyway, I happen to belong in the latter category according to most: the longer I have been doing this, the more I lean into the purely functional style, almost mathematical rigor, because I have learned how little margin for error there is once you have actual _variables_ that may change freely, which start to encourage other habits that in the end detract from correctness, readable or not.
Now, you may blame people like me, and I cannot blame you for not having the cognitive load capacity to understand some of the code I write "succinctly", but my point is that for all the merit of the article (yes, I agree code is read much more often than it is written, lending value to the "readability" argument), it doesn't acknowledge that readability and correctness are, in practice, often mutually exclusive. Like, in the field. Because I wager the tendency is to approach a more mathematical expression style as one becomes better at designing software, with adversarial conditions manifesting as bugs hiding in mutable state and in large, if "simple", bodies of functions and classes (whose methods you cannot guarantee not to mutate the object's state).
We need to find means of writing code that is readable but not at the expense of other factors; mutability, too, has been shown to compromise correctness. What good is readable software that never manages to escape the vortex of issues driving the perpetually busy industry of "fixing bugs"?
At my place of work I obviously see both kinds of the "mutually exclusive", and I can tell you, without undue pride and yet with good confidence: people who write readable code, aliasing otherwise complex expressions with eloquently named variables (or sometimes even "constants", bless their heart) and designing clumsy class hierarchies, spend a lot of subsequent effort never quite being able to be "done" with the code. And I don't mean just because requirements keep changing, no: they sit and essentially "fixup commit" the code they write, in perpetuity, seemingly. We also have a select few who write code-bases with as few variables as possible and a lot of pure functions, what I referred to as "mathematical programming" in a way, and I rarely hear from them offering "PRs" to fix their earlier mishaps. The message that sends me is pretty clear.
So yeah, by all means, let's find ways to write code our fellow man can understand, but the article glosses over a factor that is at least as important: all the mutability and care for "cognitive load" capacity (which _may be_ lower for the current generation of software engineers vs earlier ones) may be keeping us in the rotating vortex of bugs we so "proudly" crouch over as we pretend to be "busy". I, for one, prefer to write code that works right from the get-go, so that I don't have to come back to it unless the requirements that made me write it that way change. On rare occasions, admittedly, I have to sacrifice readability for correctness, not because it's inherently one or the other, but because I too haven't yet found the perfect means to always have both; and yet correctness is at the absolute top of my list, and I advocate that it should be at the top of yours as well, dare I say. But that is me; perhaps I set the bar too high?
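(A small, hypothetical sketch of the stylistic divide described above:)

```
interface Invoice { amount: number; paid: boolean }

// Mutable style: perfectly readable, but every `let` is a place
// where state can go subtly wrong as the function grows.
function totalOwedMutable(invoices: Invoice[]): number {
  let total = 0;
  for (const inv of invoices) {
    if (!inv.paid) total += inv.amount;
  }
  return total;
}

// Expression style: no variables to track, nothing mutates mid-flight.
const totalOwed = (invoices: Invoice[]): number =>
  invoices.filter(inv => !inv.paid).reduce((sum, inv) => sum + inv.amount, 0);
```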
More coders are needed than just those to whom these things are "simple", I understand. But if you struggle with them, I would definitely try pivoting to something else, like a managerial position, especially with AI upon us. Of course, if you are fine being an "organic robot", then that's fine too, but you'll never really get why this profession is awesome. You'll never have the leverage.