Show HN: I built a tool to help AI agents know when a PR is good to go
35 points by dsifry
by rootnod3
9 subcomments
Sorry, so the tool is now even circumventing human review? Is that the goal?
So the agent can now merge shit by itself?
Just let the damn thing push into prod by itself at this point.
by philipp-gayret
1 subcomment
Very interesting! This has a gem in the documentation: using the tool itself as a CI check. I hadn't considered treating unresolved comments, whether from a person or from CodeRabbit or a similar tool, as a CI status failure. That's an excellent idea for AI-driven PRs.
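A rough sketch of what such a check could look like (just an illustration, not anything from the project's docs; the PR_NUMBER variable is a placeholder the workflow would have to set), using GitHub's GraphQL API to fail the job whenever any review thread is unresolved:

```python
import os
import sys
import requests

# Placeholders: in a real workflow these come from the CI environment.
TOKEN = os.environ["GITHUB_TOKEN"]
OWNER, REPO = os.environ["GITHUB_REPOSITORY"].split("/")  # e.g. "someorg/somerepo"
PR_NUMBER = int(os.environ["PR_NUMBER"])                  # hypothetical variable set by the workflow

QUERY = """
query($owner: String!, $name: String!, $number: Int!) {
  repository(owner: $owner, name: $name) {
    pullRequest(number: $number) {
      reviewThreads(first: 100) {
        nodes { isResolved }
      }
    }
  }
}
"""

resp = requests.post(
    "https://api.github.com/graphql",
    json={"query": QUERY, "variables": {"owner": OWNER, "name": REPO, "number": PR_NUMBER}},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

threads = resp.json()["data"]["repository"]["pullRequest"]["reviewThreads"]["nodes"]
unresolved = [t for t in threads if not t["isResolved"]]

if unresolved:
    print(f"{len(unresolved)} unresolved review thread(s); failing the check.")
    sys.exit(1)
print("All review threads resolved.")
```

Registered as a required status check, that alone keeps a PR red until every comment is dealt with, whoever (or whatever) left it.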
On a personal note: I hate LLM output being used to advertise a project. If you have something to share, have the decency to type it out yourself, or at least redact the nonsense from it.
by furyofantares
0 subcomments
Then you had the LLM write the blog post as well as your post on HN.
by joshribakoff
0 subcomment
I dislike the idea of coupling my workflow to SaaS platforms like GitHub or CodeRabbit. The fact that you still have to create local tools is a selling point for just doing it all “locally”.
by nyc1983
1 subcomment
I don’t understand how this provides anything beyond using GitHub status checks and branch protections to require conversations to be resolved before merging. Combined with the GitHub CLI, that gives agents everything they need to achieve the same result. More AI slop on top of AI slop. At this point, when I see these kinds of posts, I feel like Edward Norton in front of the copy machine.
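For example, something along these lines (a sketch only; "123" is a placeholder PR number) would let an agent ask GitHub whether a PR is actually mergeable, with no extra tooling:

```python
import json
import subprocess
import sys

# Ask the GitHub CLI for the PR's review decision and merge state.
out = subprocess.run(
    ["gh", "pr", "view", "123", "--json", "reviewDecision,mergeStateStatus"],
    capture_output=True, text=True, check=True,
).stdout

pr = json.loads(out)

# With the "require conversation resolution" branch protection enabled,
# the merge state should only report CLEAN once conversations are
# resolved and required checks have passed.
good_to_go = pr.get("reviewDecision") == "APPROVED" and pr.get("mergeStateStatus") == "CLEAN"
sys.exit(0 if good_to_go else 1)
```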
by joshuanapoli
1 subcomment
This looks nice! I like the idea of providing more deterministic feedback and more or less forcing the assistant to follow a particular development process. Do you have evidence that gtg improves the overall workflow? I think there is a trade-off between the risk of getting stuck (iterating without ever reaching gtg-green) and the benefit of reaching perfect 100% completion.
by mcolley
1 subcomment
Super interesting. Any particular reason you didn't try to solve these before pushing, using hooks and subagents?