struct PizzaOrder {
    size: PizzaSize,
    toppings: Vec<Topping>,
    crust_type: CrustType,
    ordered_at: SystemTime,
}
The problem they want to address is partial equality: comparing orders while ignoring the ordered_at timestamp. To me, the real problem is throwing too many unrelated concerns into one struct. Instead of using destructuring to compare only the specific fields you care about, ideally you'd decompose this into two structs:

#[derive(PartialEq, Eq)]
struct PizzaDetails {
    size: PizzaSize,
    toppings: Vec<Topping>,
    crust_type: CrustType,
    // … additional fields
}

#[derive(Eq)]
struct PizzaOrder {
    details: PizzaDetails,
    ordered_at: SystemTime,
}

impl PartialEq for PizzaOrder {
    fn eq(&self, rhs: &Self) -> bool {
        self.details == rhs.details
    }
}
I get that this is a toy example meant to illustrate the point; there are certainly more complex cases where there's no clean boundary to split your struct across. But this should be the first tool you reach for.

"Defensive programming" has multiple meanings. To the extent it means "avoid using _ as a catch-all pattern so that the compiler nags you if someone adds an enum arm you need to care about", "defensive" programming is good.
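For concreteness, a minimal sketch of that exhaustive-match point; the CrustType variants and the bake_minutes function are invented for illustration:

enum CrustType {
    Thin,
    Thick,
    Stuffed,
}

fn bake_minutes(crust: CrustType) -> u32 {
    // No `_` arm: if a new CrustType variant is added later, this match
    // becomes a compile error instead of silently falling into a default.
    match crust {
        CrustType::Thin => 8,
        CrustType::Thick => 12,
        CrustType::Stuffed => 15,
    }
}

fn main() {
    for crust in [CrustType::Thin, CrustType::Thick, CrustType::Stuffed] {
        println!("{} minutes", bake_minutes(crust));
    }
}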
That said, I wouldn't use the word "defensive" to describe it. The term lacks precision. The above good practice ends up getting mixed up with the bad "defensive" practices of converting contract violations to runtime errors or just ignoring them entirely: the infamous pattern in Java codebases of scrawling the following line of graffiti all over the clean lines of your codebase:
if (someArgument == null) {
    throw new NullPointerException("someArgument cannot be null");
}
That's just noise. If someArgument can't be null, let the program crash.

Needed file not found? Just return ""; instead.
Negative number where the input is contractually non-negative? Clamp to zero.
Program crashing because a method doesn't exist? if not hasattr(self, "blah"): return None
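To make the first item concrete in this thread's language of choice, a small sketch; the file name and function names are made up:

use std::fs;

// The "return a dummy value" anti-pattern from the list above, transplanted
// to Rust: a missing file silently becomes an empty config.
fn read_config_lossy(path: &str) -> String {
    fs::read_to_string(path).unwrap_or_default()
}

// The alternative being argued for: surface the failure to the caller.
fn read_config(path: &str) -> std::io::Result<String> {
    fs::read_to_string(path)
}

fn main() {
    println!("{}", read_config_lossy("missing.toml").is_empty()); // true, error hidden
    println!("{}", read_config("missing.toml").is_err());         // true, error visible
}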
People use the term "defensive" to refer to code like the above: programs that "defend" against crashes by misbehaving. These programs end up being flakier and harder to debug than programs that are "defensive" in the sense that they continually validate their assumptions and crash if they detect a situation that should be impossible.
The term "defensive programming" has been buzzing around social media the past few weeks and it's essential that we be precise that
1) constraint verification (preferably at compile time) is good; and
2) avoiding runtime crashes at all costs after an error has already occurred is harmful.
The same day Cloudflare had its unwrap fiasco, I found a bug in my code because of a slice that in certain cases went past the end of a vector. Switched it to use iterators and will definitely be more careful with slices and array indexes in the future.
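Not the commenter's actual bug, but a generic sketch of why that switch helps; the windowing task and function names are invented:

// Slicing with computed indexes can run past the end and panic.
fn sum_window_sliced(data: &[u32], start: usize, len: usize) -> u32 {
    data[start..start + len].iter().sum() // panics if start + len > data.len()
}

// The iterator version simply yields fewer items near the end.
fn sum_window_iter(data: &[u32], start: usize, len: usize) -> u32 {
    data.iter().skip(start).take(len).sum()
}

fn main() {
    let v = vec![1, 2, 3];
    println!("{}", sum_window_iter(&v, 2, 5));   // 3, no panic
    println!("{}", sum_window_sliced(&v, 0, 3)); // 6; (&v, 2, 5) would panic
}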
If your function gets ownership of, or an exclusive reference to an object, then you know for sure that this reference, for as long as it exists, is the only one in the entire program that can access this object (across all threads, 3rd party libraries, recursion, async, whatever).
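A tiny sketch of that guarantee; the bump function is made up:

fn bump(counter: &mut u64) {
    // For as long as this &mut borrow exists, it is the only way to reach
    // `*counter` anywhere in the program, so no other code can observe or
    // interleave with this read-modify-write.
    *counter += 1;
}

fn main() {
    let mut hits = 0;
    bump(&mut hits);
    println!("{}", hits); // 1
    // let r = &hits; bump(&mut hits); println!("{}", r); // would not compile:
    // cannot borrow `hits` as mutable while `r` is still in use
}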
References can't be null. Smart pointers can't be null. Not merely "can't" meaning not allowed and may throw or have a dummy value, but just can't. Wherever such type exists, it's already checked (often by construction) that it's valid and can't be null.
If your object's getter lends an immutable reference to its field, then you know the field won't be mutated by the caller (unless you've intentionally allowed mutable "holes" in specific places by explicitly wrapping them in a type that grants such access in a controlled way).
If your object's getter lends a reference, then you know the caller won't keep the reference for longer than the object's lifetime. If the type is not copyable/cloneable, then you know it won't even get copied.
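One small sketch, using an invented Order type, that shows the last three points together: a borrowed getter that is never null, grants no mutation, and cannot outlive its owner.

struct Order {
    toppings: Vec<String>,
}

impl Order {
    // Lends an immutable view; the borrow is never null and cannot outlive `self`.
    fn toppings(&self) -> &[String] {
        &self.toppings
    }
}

fn main() {
    let order = Order { toppings: vec!["basil".into()] };
    let view = order.toppings(); // always points at real data, no null check needed
    println!("{:?}", view);
    // view[0].push_str("!"); // would not compile: no mutable access through `&[String]`
    // drop(order); println!("{:?}", view); // would not compile: `order` is still borrowed by `view`
}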
If you make a method that takes ownership of `self`, then you know for sure that the caller won't be able to call any more methods on this object (e.g. `connection.close(); connection.send()` won't compile, `future.then(next)` only needs to support one listener, not an arbitrary number).
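A sketch of the connection example from that sentence; the Connection type here is a stand-in, not a real API:

struct Connection;

impl Connection {
    fn send(&self, msg: &str) {
        println!("sending: {msg}");
    }

    // Takes `self` by value, so the connection cannot be used after closing.
    fn close(self) {
        println!("closed");
    }
}

fn main() {
    let conn = Connection;
    conn.send("hi");
    conn.close();
    // conn.send("bye"); // would not compile: `conn` was moved by `close`
}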
If you have a type marked as non-thread safe, then its instances won't be allowed in any thread-spawning functions, and won't be possible to send through channels that cross threads, etc. This is verified globally, across all code including 3rd party libraries and dynamic callbacks, at compile time.
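A minimal sketch of that compile-time check, using Rc as the classic non-Send type:

use std::rc::Rc;
use std::thread;

fn main() {
    let shared = Rc::new(42); // Rc is not Send: its refcount is not thread safe

    // thread::spawn(move || println!("{}", shared)); // would not compile:
    // `Rc<i32>` cannot be sent between threads safely

    let owned = 42;
    thread::spawn(move || println!("{}", owned)).join().unwrap(); // fine: i32 is Send

    println!("{}", shared);
}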
So many Rust articles focus on people doing dark sorcery with "unsafe", but this is just normal, everyday API design, which is far more practical for most people.
One question about avoiding boolean parameters: I’ve just been using structs wrapping bools, but you can’t treat them like bools… you have to index into them like wrapper.0.
Is there a way to treat the enum-style replacement for bools like normal bools, or is it just done with matches! or match statements?
It’s probably not too important but if we could treat them like normal bools it’d feel nicer.
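Not an authoritative answer, but a sketch of the usual options, with an invented ExtraCheese flag: use matches!/match where the domain type suffices, and implement From<YourFlag> for bool (or a small helper method) for the places that genuinely need a bool:

#[derive(Clone, Copy)]
enum ExtraCheese {
    Yes,
    No,
}

impl From<ExtraCheese> for bool {
    fn from(value: ExtraCheese) -> bool {
        matches!(value, ExtraCheese::Yes)
    }
}

fn price(extra_cheese: ExtraCheese) -> u32 {
    // Option 1: match / matches! directly, keeping the domain type throughout.
    if matches!(extra_cheese, ExtraCheese::Yes) { 12 } else { 10 }
}

fn main() {
    for flag in [ExtraCheese::Yes, ExtraCheese::No] {
        // Option 2: an explicit conversion where a plain bool is truly wanted.
        let as_bool: bool = flag.into();
        println!("{} {}", price(flag), as_bool);
    }
}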
Question: how to encourage such patterns within a team? I often find it difficult to do during code reviews, and it leads to unproductive arguments about "code style" and "preferences".
Funnily, these arguments do not happen when a linter pops a warning instead...
Like whenever I read posts like this, they're always fairly anecdotal. Sometimes there will even be posts about how large refactor x unlocked new capability y. But the rationale always reads somewhat retconned (or again, anecdotal*). It seems to me that maybe such continuous meta-analysis of one's own codebases would have great potential utility?
I'd imagine automated code smell checking tools can only cover so much at least.
* I hammer on about anecdotes, but I do recognize that sentiment matters. For example, if you're planning work, if something just sounds like a lot of work, that's already going to be impactful, even if that judgement is incorrect (since that misjudgment may never come to light).
Actually the From trait documentation is now extremely clear about when to implement it (https://doc.rust-lang.org/std/convert/trait.From.html#when-t...)
Rust encourages that behavior. Sometimes rightly, but it does build a habit.
I spoke previously about how the Rust book uses the external rand crate as a key example and how that sets the tone for developers. I'm changing that stance somewhat, since it was a decent strategic choice to have crypto packages plug-and-play. But it still builds a habit.
The 'defensive' nature refers to the mindset of the programmer (like how guilty people get defensive when asked a simple question): they aren't sure of anything in the code at any point, so they constantly check every invariant.
Enterprise code is full of it, and it can quickly lead to the program becoming like 50% error handling by volume, many of the errors being impossible to trigger because the app logic is validating a condition already checked in the validation layer.
Its presence usually betrays a lack of understanding of the code structure, or even worse, a faulty or often bypassed validation layer, which makes error checking in multiple places actually necessary.
One example is validating every parameter in every call layer, as if the act of passing things around has the ability to degrade information.
I would have guessed linters would complain about what's being suggested there. Is there something special about the var: _ thing that avoids it?