I’ve always thought of guidance like “joins are expensive” in terms of cases like this: here’s a query in your relational database, with multi-table joins and more complex filters on top of them (especially if there are subqueries and/or …), the statistics are stale-ish, cardinality estimation goes out the window, and the join ordering problem kills you. It’s especially bad when the same query worked with no problem yesterday.
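The join ordering part is easy to underestimate: the plan space blows up combinatorially with the number of tables, which is why the optimizer leans so hard on those (possibly stale) cardinality estimates to prune it. A back-of-the-envelope sketch, using the standard counting formulas for left-deep and bushy join trees (nothing here is specific to any one database):

```python
from math import factorial

def left_deep_orders(n: int) -> int:
    """Number of left-deep join orders for n tables: n!."""
    return factorial(n)

def bushy_trees(n: int) -> int:
    """Number of bushy join trees over n tables (ordered leaves):
    (2(n-1))! / (n-1)! -- the classic counting result."""
    return factorial(2 * (n - 1)) // factorial(n - 1)

for n in (3, 5, 10):
    # even at 10 tables the bushy plan space is ~1.7e10 trees
    print(n, left_deep_orders(n), bushy_trees(n))
```

With ten tables there are already billions of candidate join trees, and the optimizer is ranking them all with estimates that, per the quote below from the Leis et al. paper, can be off by orders of magnitude.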
And this is the point where people usually, in a hurry, start studying the query-hints section of their server of choice (for PostgreSQL, the pg_hint_plan extension).
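For a concrete flavor of what that looks like: pg_hint_plan reads hints from a `/*+ ... */` comment at the start of the statement. The table names and the particular hints below are invented for illustration; the Python wrapper is just a convenient way to show the SQL:

```python
# Hypothetical illustration of a pg_hint_plan-hinted query.
# The tables (orders, customers) and the chosen hints are made up;
# Leading(...) pins the join order, HashJoin(...) pins the join method.

HINTS = "/*+ Leading(o c) HashJoin(o c) */"

QUERY = f"""{HINTS}
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at > now() - interval '1 day';"""

print(QUERY)
```

The appeal and the danger are the same thing: the hint freezes one plan in place, which rescues you from a bad estimate today and quietly pins you to a stale plan as the data changes.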
And, as usual, a quote from https://www.vldb.org/pvldb/vol9/p204-leis.pdf:
> … For all systems we routinely observe misestimates by a factor of 1000 or more. Furthermore, as witnessed by the increasing height of the box plots, the errors grow exponentially (note the logarithmic scale) as the number of joins increases [21]. For PostgreSQL 16% of the estimates for 1 join are wrong by a factor of 10 or more. This percentage increases to 32% with 2 joins, and to 52% with 3 joins.
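The “errors grow exponentially” observation has an intuitive mechanism: per-join misestimates compound multiplicatively up the join tree. A toy model of that compounding (the individual error factors below are invented; only the multiplication is the point):

```python
# Toy model: each join's misestimation factor multiplies into the
# running estimate for the subtree above it. The factors are made up.
per_join_error = [3.0, 4.0, 5.0]  # join 1 off by 3x, join 2 by 4x, join 3 by 5x

compound = 1.0
for i, f in enumerate(per_join_error, start=1):
    compound *= f
    print(f"after {i} join(s): estimate off by ~{compound:g}x")
```

Three individually modest per-join errors already yield a 60x misestimate, which is how the factor-1000 errors the paper reports show up in deeper plans.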
It’s much easier for people to use, especially non-technical stakeholders, who tend not to understand joins at all.
It doesn’t require anything from the query optimizer… and until you’ve had a complex query randomly go from “runs instantly” to “spins for hours” overnight, you can’t appreciate the value of a simple execution plan.
A bonus benefit is that it scares off the architecture astronauts!