- I don't think we should be making this distinction. We're still engaged in software engineering. This isn't a new discipline, it's a new technique. We're still using testing, requirements gathering, etc. to ensure we've produced the correct product and that the product is correct. Just with more automation.
by neonbrain
4 subcomments
- The term feels broken when adhering to standard naming conventions, such as Mechanical Engineering or Electrical Engineering, where "Agentic Engineering" would logically refer to the engineering of agents
- As someone who works with real licensed engineers (electrical, civil), I wish we would use the term "agentic software engineering" to describe this. Omitting "software" here betrays a very SWE-centric mindset.
Agents are coming for the other engineering disciplines as well.
- There should be more willingness to have agents loudly fail with loud TODOs rather than try and 1 shot everything.
At the very least, agentic systems must have distinct coders and verifiers. Context rot is very real, and I've found with some modern prompting systems there are severe alignment failures (literally 2023 LLM RL levels of stubbing out and hacking tests just to get tests "passing"). It's kind of absurd.
I would rather an agent make 10 TODO's and loudly fail than make 1 silent fallback or sloppy architectural decision or outright malicious compliance.
This wouldn't work in a real company because this would devolve into office politics and drudgery. But agents don't have feelings and are excellent at synthesis. Have them generate their own (TEMPORARY) data.
Agents can be spun off to do so many experiments and create so many artifacts, and furthermore, a lot more (TEMPORARY) artifacts is ripe for analysis by other agents. Is the theory, anyways.
The effectively platonic view that we just need to keep specifying more and more formal requirements is not sustainable. Many top labs are already doing code review with AI because of code output.
- One thing missing from most "agentic engineering" discussions: the security implications of tool API choices that happen at runtime, invisible to both the developer and the user.
Concrete example: when an agent reads a web page via Chrome's DevTools MCP, it has multiple extraction paths. The default (Accessibility.getFullAXTree) filters display:none elements — safe against the most common prompt injection hiding technique. But if the agent decides the accessibility tree doesn't return enough content (which happens often — it only gives you headings, buttons, and labels), it falls back to evaluate_script with document.body.textContent. That returns ALL text nodes including hidden ones.
We tested this: same page, same browser, same CDP connection. innerText returns 1,078 characters of clean hotel listing. textContent returns 2,077 characters — the same listing plus a hidden injection telling the agent to book a $4,200 suite instead of $189.
The developer didn't choose which API the agent uses. The user didn't either. The agent made that call at runtime based on what the accessibility tree returned. "Agentic engineering" as a discipline needs to account for these invisible decision boundaries — the security surface isn't just the tools you give the agent, it's which tool methods the agent decides to call.
- I think there is a meaningful distinction here. It's true that writing code has never been the sole work of a software engineer. However there is a qualitative difference between an engineer producing the code themselves and an engineer managing code generated by an LLM. When he writes there is "so much stuff" for humans to do outside of writing code I generally agree and would sum it up with one word: Accountability. Humans have to be accountable for that code in a lot of ways because ultimately accountability is something AI agents generally lack.
- “It’s not vibe coding, it’s agentic engineering”
From Kai Lentit’s most recent video:
https://youtu.be/xE9W9Ghe4Jk?t=260
- I've discovered recently as code gets cheaper and more reliable to generate that having the LLM write code for new elements in response to particular queries, with context, is working well.
Kind of like these HTML demos, but more compact and card-like. Exciting the possibilities for responsive human-readable information display and wiki-like natural language exploration as models get cheaper.
by danieltanfh95
0 subcomment
- Agentic engineering is working from documentation -> code and automating the translation process via agents. This is distinct from the waterfall process which describes the program, but not the code itself, and waterfall documentation cannot be translated directly to code. Agent plans and session have way more context and details that are not captured in waterfall due to differences in scope.
- Agentic Coding or perhaps Agentic Software Development is far more real and appropriate . Calling it engineering is better left to those wanting to impress family and peers.
- Sure, you could argue it's like writing code that gets optimized by the compiler for whatever CPU architecture you're using. But the main difference between layers of abstraction and agentic development is the "fuzzyness" of it. It's not deterministic. It's a lot more like managing a person.
by pamelafox
2 subcomments
- I’ve been using the term “agentic coding” more often, because I am always shy to claim that our field rises to the level of the engineers that build bridges and rockets. I’m happy to use “agentic engineering” however, and if Simon coins it, it just might stick. :)
Thanks for sharing your best practices, Simon!
- Is there any article explaining how AI tools are evolving since the release of ChatGPT? Everything upto MCP makes sense to me - but since then it feels like there is not clear definition on new AI jergons.
- Curious how this evolves when agents start retaining memory across projects. Feels like that could change how we think about the tool loop.
- The skepticism makes sense to me. The core issue isn't wrong outputs—it's that there's no standard way to see what the agent was actually doing when it produced them. Without some structured view of tool call patterns, norm deviations, behavioral drift, verification stays manual and expensive. The non-determinism problem and the observability problem feel like the same problem to me.
- After three months of seeing what agentic engineering produces first-hand, I think there's going to be a pretty big correction.
Not saying that AI doesn't have a place, and that models aren't getting better, but there is a seriously delusional state in this industry right now..
by kevintomlee
0 subcomment
- the practice of developing software with the assistance of coding agents.
Spot on.
by felixsells
0 subcomment
- one thing id add to the 'traditional practices still matter' point: in agentic systems with real side effects (API calls, sending messages, writing to external services), idempotency goes from nice to have to the primary reliability invariant.
in regular software, if a function runs twice you get a wrong answer. in an agent that sends outreach messages, a restart means every action replays. test coverage of the agent's logic won't catch this -- you have to explicitly design the execution graph so each node is restart-safe.
it's not a new problem -- distributed systems have dealt with exactly-once delivery forever. but agentic systems drag that infrastructure concern into application code in a way most teams aren't used to.
by allovertheworld
0 subcomment
- Staring at your phone while waiting for your agent to prompt you again. Code monkey might actually be real this time
by righthand
4 subcomments
- How is this different than Prompt Engineering?
by CuriouslyC
0 subcomment
- The halo effect in action.
by ChrisArchitect
0 subcomment
- Previously on the guide Agentic Engineering Patterns:
https://news.ycombinator.com/item?id=47243272
by TheAtomic
1 subcomments
- Agents are ? and the answer is circular, "agents run tools in a loop." And this guy knows things?! No. BS.
- I think we all know what Agentic engineering is, the question is when should it not be used instead of classical engineering?
- Markdown engineers try to rebrand yet again.
by techpression
0 subcomment
- I mean agents as concept has been around since the 70s, we’ve added LLMs as an interface, but the concept (take input, loop over tools or other instructions, generate output) are very very old.
Claude gave a spot on description a few months back,
The honest framing would be: “We finally have a reasoning module flexible enough to make the old agent architectures practical for general-purpose tasks.” But that doesn’t generate VC funding or Twitter engagement, so instead we get breathless announcements about “agentic AI” as if the concept just landed from space.
by AdieuToLogic
1 subcomments
- The premise is flawed:
Now that we have software that can write working code ...
While there are other points made which are worth consideration on their own, it is difficult to take this post seriously given the above.
- [flagged]
- [dead]
- [dead]
- [flagged]