by MeetingsBrowser
9 subcomments
- The craziest thing about AI is you can just try it yourself and check if the claims are true.
I use Claude code and codex daily. They have become an integral part of my workflow.
There is no task that takes me a day that they can complete in five minutes.
Even with the lightning fast progress being made, it looks like LLMs are a decade or more away from being that good.
If AI can do your job for you, you should be the first to know. Just try it and see!
by OldSchool
2 subcomments
- More "bad news", and from the man who helped create and then promote Agile to dilute the value of software developers, forcing software development out of the state where it started, a control freak's nightmare: seemingly esoteric and non-understandable by management. Agile made sure the next generation of developers knew their place; that's its insidious purpose as far as I am concerned.
As for AI-written code, I wouldn't fly on a plane controlled by AI-designed and AI-tested code, but much of development is busy work, not problem solving or design. AI excels at turning a protocol spec into a parser, for example; I'll take that any day. AI also excels at finding stuff, particularly non-code: thesis-level ideas for algorithms and, at about the same level, what has been shown not to work when solving a non-deterministic problem.
If we're lucky, AI will fill in after exposing who is only doing busy work and who is creating.
by doginasuit
1 subcomment
- There are probably some respectable workflows that involve an LLM writing most of the code, but AI is still terrible at understanding some critical parts of the problem. You still have to tell it what to write and how it should work or there are high odds that you'll get a hot mess. And there still needs to be a human that understands everything there and how to debug it. For me, the most enjoyable path there is to write it myself, because I would rather be involved in writing the code than only involved in reading it. It might not be the fastest path there, but it gets the job done for the foreseeable future. I could end up like the Amish who choose not to use technology that was developed after a certain point, from what I can tell they do alright.
- Over on Reddit, and over here as well, people seem to be reacting to the title of the video, the first 5 seconds, or just the author. On the original X post [1], however, the top replies are about the subject matter, which is having agents write tests and refactor code.
And speaking of agents writing tests, I have an ask. The tests agents love to write are, in a lot of ways, like human-written tests: perfunctory and smelly. They exist to satisfy a coverage number or a prompt checkbox, but they barely stress the system under test. I often find the tests are faking and mocking so many inputs, methods, and side effects that they aren't testing anything at all. Asking the agent to write the tests first, so that the underlying implementation comes out more testable, has yielded no results.
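To illustrate the smell (a hypothetical example; the names and logic are mine, not from any agent transcript):

```python
from unittest.mock import Mock

def apply_discount(order, pricing_service):
    """Code under test: price an order with a per-customer discount."""
    rate = pricing_service.discount_rate(order["customer"])
    return round(order["total"] * (1 - rate), 2)

def test_discount_smelly():
    # The kind of test agents love: the collaborator is a mock and the
    # only assertion is that the mock was called. Break the pricing
    # math and this still passes.
    svc = Mock()
    svc.discount_rate.return_value = 0.0
    apply_discount({"customer": "a", "total": 100}, svc)
    svc.discount_rate.assert_called_once()

def test_discount_stressed():
    # A test that actually exercises the arithmetic and the rounding.
    svc = Mock()
    svc.discount_rate.return_value = 0.25
    assert apply_discount({"customer": "a", "total": 99.99}, svc) == 74.99

test_discount_smelly()
test_discount_stressed()
```

The first test survives almost any mutation of the formula; only the second one pins down behavior.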
What has worked for people to get agents to write more testable implementations and better tests?
PS. Reacting to Uncle Bob: I found metric-driven agentic refactors just push complexity outside the scope of the metric. I am finding I need to actively guide the agents for the refactors to actually improve things without increasing the entropy of the codebase.
[1] https://x.com/unclebobmartin/status/2046206145597972849
- I’m not so sure. I had a recent experience where Kiro was convinced there was a defect in the testing library when I asked it to refactor some existing project code.
However this conclusion made no sense as we had similar scenarios across our project that worked flawlessly. After intervening I determined the root cause was a combination of an async issue with the production code and some incorrect mocking that was covering up the async issue.
It never occurred to the AI agent to do some simple cross-examination before essentially throwing in the towel?
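A hypothetical minimal repro (in Python, not the parent's actual stack) of how over-eager mocking can conceal exactly this kind of async defect:

```python
import asyncio
from unittest.mock import Mock

async def fetch_total():
    # Stand-in for a real async call (database, HTTP, ...).
    await asyncio.sleep(0)
    return 42

def report():
    # BUG: the coroutine is never awaited, so `total` is a coroutine
    # object, not 42 - and string formatting happily swallows it.
    total = fetch_total()
    return f"total={total}"

# Incorrect mocking hides the bug: replace the async function with a
# mock returning a plain value, and the un-awaited call looks fine.
_real = fetch_total
fetch_total = Mock(return_value=42)
assert report() == "total=42"   # passes - the async bug is concealed

# Restore the real function and the defect surfaces immediately.
fetch_total = _real
assert "coroutine" in report()
```

The mocked run is exactly the situation described above: the test suite is green while the production path silently returns the wrong thing.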
- Yes I get the irony but also let's not forget that it's over for the Code that Uncle Bob likes. Which is bad, verbose, dogmatic, unreadable, elitist code [1] with "discipline" [2] and a dash of sexism. And that has luckily been over for a _long_ time before LLMs.
[1] https://qntm.org/clean
[2] https://blog.cleancoder.com/uncle-bob/2017/10/04/CodeIsNotTh...
- I don't have a lot of patience for Bob. That being said I have to agree with him on test coverage (that's as far as I made it through his monologue). IMHO, that is something that I 100% am okay letting the LLM tooling write and manage. I used to argue about whether or not we needed a test that verified that the value of a constant didn't change, and if 100% coverage was really that important. Now I don't care, I just let Claude write the test and keep it up-to-date.
- Kind of a great video! I enjoyed it. His point about testing coverage and generating mutations to ensure the tests fail resonated. I get concerned sometimes that the AI is writing tests not to ensure the logic is correct, but to ensure the tests pass against the code it already wrote. Any other ideas on this? Is there a code review step or CI checkpoint that would decrease the likelihood of that?
by recursivedoubts
1 subcomment
- "it can chop up all your functions into tiny functions..."
And now you just played yourself by creating a morass of tiny functions: well tested (the CRAP score says so!) and impossible to understand in terms of how they compose together.
AI will happily return the next token and ruin your codebase, if you ask it.
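For context on the metric the parent invokes: the CRAP score (Change Risk Anti-Patterns, due to Savoia and Copeland) combines per-method cyclomatic complexity with test coverage. A sketch of the formula as I recall it, showing why chopping code into tiny covered functions games it:

```python
def crap_score(complexity: int, coverage_pct: float) -> float:
    # CRAP(m) = comp(m)^2 * (1 - cov(m))^3 + comp(m)
    uncovered = 1.0 - coverage_pct / 100.0
    return complexity ** 2 * uncovered ** 3 + complexity

# One complex, untested function screams:
assert crap_score(10, 0.0) == 110.0
# ...but split it into ten trivial, fully covered functions and each
# scores the minimum, however badly they compose together:
assert crap_score(1, 100.0) == 1.0
```

The metric is per-method, so the inter-function morass the parent describes is invisible to it.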
- It’s hard to give up, but likely necessary. That doesn’t mean quality has to suffer, we can still gate with deterministic quality tooling where it matters. But yeah, at some scale it stops mattering how human readable the code is, as long as AI can effectively and efficiently (token-wise) make edits or add features.
by relativeadv
0 subcomments
- "Forty years later, in September of 2018, I started working on this version of Space War. It's an animated GUI driven system with a frame rate of 30fps. It is written entirely in Clojure and uses the Quil shim for the Processing GUI framework." - Robert Martin
https://blog.cleancoder.com/uncle-bob/2021/11/28/Spacewar.ht...
- That's gotta be a joke, right? It's like running agents to write agent orchestrators, to write orchestrators for orchestrators, just for clean code.
- That was a bizarre performance.
- Gives me a whole new perspective to the phrase clean code.
- I'm an AI skeptic, but I do think that _he_ will be out-coded by AI, no problem.
- English is the new programming language.
by RobRivera
3 subcomments
- That's just, like, his opinion man
- For all LLMs' flaws, if they kill the whole Agile/SCRUM/whatever grift, it will have been worth it. The damage these guys have done to the software industry at large is unfathomable.
- I tend to agree with his point.
But I found myself laughing at the style; just ranting about software like a cartoon villain in his bathrobe. No fucks given.
- spent a 30-year illustrious SWE career avoiding reading or listening to anything this dude says, probably among the smartest things I've ever done
- I fully believe AI can write better code faster than Robert C. Martin.
by mrcartmeneses
0 subcomments
- Uncle Bob full of shit? Colour me purple!
- "It is unavoidable. It is your destiny. You, like your father, are now mine."
by HumblyTossed
0 subcomments
- He helped enshittify the industry, empowering middlings to cry about "clean code" instead of actually learning to produce a great product. No thanks, Bob.
- Clean Architecture and Uncle Bob can take a hike.
by abbadadda
1 subcomment
- I thought this was about Uncle Bob being “canceled.”