But how about we flip the original statement around:
Many problems thought to require giant software packages/libraries are very solvable locally if you know what you are doing
This is why LC is actually meaningful - imagine if you faced the coin challenge problem IRL in prod and decided to pull in a constraint solver: what would've been a 25-line function is now a giant Python library dragging around 30MB of native dependencies.
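For reference, that 25-line function really is small. Here's a rough dynamic-programming sketch in Python (function name is mine; it handles arbitrary denominations, not just ones where greedy happens to be optimal):

def min_coins(denominations, amount):
    # dp[a] = minimum number of coins needed to make amount a
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for coin in denominations:
            if coin <= a and dp[a - coin] + 1 < dp[a]:
                dp[a] = dp[a - coin] + 1
    return dp[amount] if dp[amount] != INF else None

# min_coins([25, 10, 5, 1], 37) -> 4  (quarter, dime, 2 pennies)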
These solutions are often lacking in many ways, since mentally offloading some of the work means you don't have to understand the problem in depth or how the designated tool solves it, which can lead to nasty surprises in production.
We see this everywhere - people pulling in questionable npm packages to save themselves 30 minutes of thinking or 20 minutes of reading docs, people convinced that you do need that huge 3D framework that comes with an editor just to put a WebGL widget on your landing page, etc.
You also become more willing to accept other people's bloat, like apps that pull in Electron because their authors were too afraid to learn how native UI works.
People need to be more curious, and strive to be more knowledgeable.
> It's easy to do in O(n^2) time, or if you are clever, you can do it in O(n). Or you could be not clever at all and just write it as a constraint problem
This nails it. The point of these problems is to test your cleverness. That's it. Presenting the not-clever constraint-solver solution shows that you have experience and great breadth of knowledge; it doesn't show any cleverness.
If my wife's blood sugar is high, she takes insulin. If you need to solve a constraint problem, use a constraint solver.
If your company doesn't make and sell constraint solving software, why do you need me to presume that software doesn't exist and invent it from scratch?
Because of this, I've just started rejecting leetcode/AI interview steps outright... I'll do homework, shared screen, 1:1, etc., but won't do the above. I tend to fail them about half the time. It feels even worse because I wouldn't mind studying on leetcode-type sites if they actually had decent explainers for the questions and working answers to go through. I know this kind of defeats the challenge aspect, but learning is about 10x harder without it.
It's not a matter of skill; it's just that the way I take in certain types of problems doesn't work well in that format. Without any chance to ask for additional info or clarifying questions, it's literally a setup to fail.
edit: I'm mostly referring to the use of AI/Automated leetcode type questions as a pre-interview screening. If you haven't seen this type of thing, good for you. I've seen too much of it. I'm fine with relatively hard questions in an actual interview with a real, live person you can talk to and ask clarifying questions.
This completely undermines the author's main point. Constraint solvers don't solve hard leetcode problems if they can't solve large instances quickly enough.
Many hard leetcode problems can be solved fairly simply with more lax runtime requirements -- coming up with an efficient solution is a large part of the challenge.
> Given a list of stock prices through the day, find maximum profit you can get by buying one stock and selling one stock later.
It was funny to see this, because I give that question in our interviews. If someone suggested a constraint solver... I don't know what I'd have done before reading this post (since I had only vaguely even heard of a constraint solver), but after reading it...
Yeah, I would still expect them to be able to produce a basic algorithm, but even if their solution was O(n^2) I would take it as a strong sign we should hire them, since I know there are several different use cases for our product that require generalized constraint solving (though I know it by other names) and having a diverse toolset on hand is more important in our domain than writing always-optimal code.
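For anyone curious, the "clever" O(n) version of that stock question is a single pass; a sketch in Python (names mine):

def max_profit(prices):
    # Track the cheapest buy price seen so far and the best profit
    # achievable by selling at the current price.
    lowest = float("inf")
    best = 0
    for price in prices:
        lowest = min(lowest, price)
        best = max(best, price - lowest)
    return best

# max_profit([7, 1, 5, 3, 6, 4]) -> 5  (buy at 1, sell at 6)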
He wanted to make an app to help sports club owners schedule players for the day based on a couple simple rules. I thought this was going to be easy, and failed after not realizing what I was up against. At the time I didn't even know what I didn't know.
I often look back on that as a lesson of my own hubris. And it's helped me a lot when discussing estimates and timelines and expectations.
(I think it's almost impossible to convince your interviewer into constraint solvers, while the concept itself is great)
I played it for a while when interest rates were really low and used the thing for my own rainy-day savings (I did get tired of changing accounts all the time).
% Given a set of coin denominations,
% find the minimum number of coins
% required to make change.
% IE for USA coinage and 37 cents,
% the minimum number is four
% (quarter, dime, 2 pennies).
num(0). num(1). num(2).
num(3). num(4). num(5).
?- num(Q), num(D), num(P),
   37 is Q * 25 + D * 10 + P.
You can just paste it into [1] to execute in the browser. Using 60 as the target sum is more interesting, as you can enumerate two solutions. (Posting again here what I already posted two days ago [2].)
[1]: https://quantumprolog.sgml.net/browser-demo/browser-demo.htm...
Well said. One of the big benefits of general constraint solvers is their adaptability to requirements changes. Something I learned well when doing datacenter optimization for Google.
[1] Logic, Optimization, and Constraint Programming: A Fruitful Collaboration - John Hooker - CMU (2023) [video]:
https://www.youtube.com/live/TknN8fCQvRk
[2] "We Really Don't Know How to Compute!" - Gerald Sussman - MIT (2011) [video]:
https://youtube.com/watch?v=HB5TrK7A4pI
[3] Google OR-Tools:
https://developers.google.com/optimization
[4] MiniZinc:
https://www.minizinc.org/
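To make [3] concrete, here's roughly what the coin-change question looks like as a CP-SAT model using OR-Tools' Python interface (a sketch from memory; variable names and bounds are mine):

from ortools.sat.python import cp_model

def min_coins_cpsat(denominations, amount):
    model = cp_model.CpModel()
    # One integer variable per denomination: how many of that coin to use.
    counts = [model.NewIntVar(0, amount, f"n_{d}") for d in denominations]
    # The chosen coins must add up to the target amount.
    model.Add(sum(c * d for c, d in zip(counts, denominations)) == amount)
    # Objective: use as few coins as possible.
    model.Minimize(sum(counts))
    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        return {d: solver.Value(c) for d, c in zip(denominations, counts)}
    return None

# min_coins_cpsat([25, 10, 5, 1], 37) -> {25: 1, 10: 1, 5: 0, 1: 2}

The adaptability mentioned above is the nice part: a new rule like "no more than two of any coin" is one extra constraint on the model, not a rewrite of the algorithm.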
The mature, state-of-the-art software companies do not give me leetcode problems to solve. They give me interesting & challenging problems that force me to both a) apply best practices of varying kinds and yet b) be creative in some aspects of the solution. And these problems are very amenable to “talking through” what I’m doing, how I’m approaching the solution, etc. Overall, I feel like they are effective and give the company a good sense of how I develop software as an engineer. I have yet to “fail” one of these.
It is the smaller, less mature companies that give me stupid leetcode problems. These companies usually bluntly tell me that their monolithic codebase (always in a not-statically-typed language) is a total mess and they are "working on domain boundaries".
I fail about 50% of these leetcode things because I don’t know the one “trick” to yield the right answer. As a seasoned developer, I often push back on the framing and tell them how I would do a better solution by changing one of the constraints, where the change would actually better match the real world problem they’re modeling.
And they don’t seem to care at all. I wonder if they realize that their bullshit interviewing process has both a false positive and a false negative problem.
The false negatives exclude folks like myself who could actually help to improve their codebase with proper, incremental refactorings.
The false positives are the people who have memorized all the leetcode problems. They are hired and write more shitty monolithic hairball code.
Their interviewing process reinforces the shittiness of their codebase. It’s a spiral they might never get out of.
The next time I get one of these, I think I’m going to YOLO it, pull the ripcord early and politely tell them why they’re fucked.
https://pierre-flener.github.io/research/NordConsNet/NordCon...
The reason was that about 70% of candidates couldn't write a simple loop -- the question was there to filter those out. The actual solution didn't matter much; it was just a binary decision. The actual conversation matters more.
This doesn't mean they can't provide a constraint solver solution, but if they do, they'd better be prepared to address the obvious follow-ups. If they're prepared to give an efficient solution afterward in the time left, then more power to them.
> We can solve this with a constraint solver
Ok, using your favorite constraint solver, please write a solution for this.
> [half an hour later]
Ok, now how would you solve it if there were more than 100 data points? E.g. 10^12?
But how do they work? What is the complexity of the solution? For example, for the stock prices problem, is it O(n^2)?
It's about signaling. That's all it is. At least it's not finance, where it's all dictated by whether you were born into the right family that got you into the elite boarding schools for high school, etc. I would never have made it into finance unless I'd done a math PhD and become a quant.
It's interesting how powerful constraint solvers are (I've never used one).
But actually all of these problems are fairly simple if we allow brute force solutions. They just become stacked loops.
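E.g. for the coin question, the brute force really is just nested loops (a naive sketch in Python, US coinage assumed):

def min_coins_brute(amount):
    # Try every count of quarters/dimes/nickels up to the obvious bound,
    # fill the rest with pennies, and keep the smallest total.
    best = None
    for q in range(amount // 25 + 1):
        for d in range(amount // 10 + 1):
            for n in range(amount // 5 + 1):
                p = amount - (q * 25 + d * 10 + n * 5)
                if p >= 0 and (best is None or q + d + n + p < best):
                    best = q + d + n + p
    return best

# min_coins_brute(37) -> 4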
Really? This kind of interview needs to go away.
However, coding interviews are useful. It's just that "knowing the trick" shouldn't be the point. The point is whether the candidate knows how to code (without AI), can explain themselves and walk through the problem, explain their thought processes, etc. If they do a good enough reasoning job but fail to solve the problem (they run out of time, or they go on an interesting tangent that ultimately proves fruitless) it's still a "passed the test" situation for me.
Failure would mean: "cannot code anything at all, not even a suboptimal solution. Cannot reason about the problem at all. Cannot describe a single pitfall. When told about a pitfall, doesn't understand it nor its implications. Cannot communicate their thoughts."
An interview shouldn't be a university exam.
If not, how can you claim you have solved the problem?
Leetcode requires a very different set of skills from software engineering. Software engineering isn't so much about solving puzzles as it is about making good decisions. It's about knowing what's important and knowing where the boundaries are. It's about anticipating problems in their broadest form; creating just the right amount of flexibility and allowing the solution to solidify as your understanding of the problem deepens.
Greedy algorithms tell you nearly nothing about the candidate's ability to code. What are you going to see? A single loop, some comparison, and an equality check. Nearly every problem that can be solved with a greedy algorithm is largely a math problem disguised as programming. The entire question hinges on the candidate finding the right comparison to conduct.
The author himself finds that these are largely math problems:
> Lots of similar interview questions are this kind of mathematical optimization problem
So we're not optimizing to find good coders, we're optimizing to find mathematicians who have 5 minutes of coding experience.
At the risk of self-promotion, I'm fairly opinionated on this subject. I have a podcast episode where I discuss exactly this problem (including greedy algorithms) and make some suggestions for where we could go as an industry to avoid these kinds of bad-signal interviews:
https://socialengineering.fm/episodes/the-problem-with-techn...
That's why so many industries prefer to hire engineers and OR grads and teach them Python, rather than hire SWEs and teach them modeling.
That's a bad algorithm, then, not a greedy algorithm. Wouldn't a properly-implemented greedy algorithm use as many coins as possible of a given large denomination before dropping back to the next-lower denomination?
If a candidate's only options are to either use a constraint solver or to implement a naïver-than-usual greedy algorithm, well, sorry, but that's a no-hire.
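A properly-implemented greedy is only a few lines; a sketch in Python (names mine), with the caveat that greedy is only guaranteed optimal for "canonical" coin systems like US denominations:

def greedy_change(denominations, amount):
    # Largest denomination first, take as many as will fit each time.
    counts = {}
    for d in sorted(denominations, reverse=True):
        counts[d], amount = divmod(amount, d)
    return counts

# greedy_change([25, 10, 5, 1], 37) -> {25: 1, 10: 1, 5: 0, 1: 2}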
They aren't testing if you can write a solver. They are testing if you can use bricks that solvers are built out of because other software when it gets interesting is built out of the same stuff.
Maybe it's my graphics programmer brain firing on all cylinders, but isn't this just a linear scan, maintaining a list of open rectangles?
Interviewer: You can't use a constraint solver
The interviewers were clueless so after 10 minutes of trying to explain to them I quit and fell back to just writing the freaking algo they were expecting to see.
Second, it's a covert test for culture fit. Are you young (and thus still OK with grinding for tests)? Are you following industry trends? Are you in tune with the Silicon Valley culture? For the most part, a terrible thing to test, but also something that a lot of "young and dynamic" companies want to select for without saying so publicly. An AI startup doesn't want people who have family life and want to take 6 weeks off in the summer. You can't put that in a job req, but you can come up with a test regime that drives such people away.
It has very little to do with testing the skills you need for the job, because quite frankly, probably fewer than 1% of the SWE workforce is solving theoretical CS problems for a living. Even if that's you, that task is more about knowing where to look for answers or what experiments to try, rather than being able to rattle off some obscure algorithm.
let coins = [100, 50, 25, 10, 5, 1];
let change = 1234;
let result = [0, 0, 0, 0, 0, 0];
for (let i = 0; i < coins.length; i++) {
  while (change >= coins[i]) { // >= so exact denominations are spent down to zero
    result[i]++;
    change -= coins[i];
  }
}
// result: [12, 0, 1, 0, 1, 4]
Couldn't help myself, sorry (if you have enough time).