by cobolcomesback
13 subcomments
- This “mandatory meeting” is just the usual weekly company-wide meeting where recent operational issues are discussed. There was a big operational issue last week, so of course this week will have more attendance and discussion.
This meeting happens literally every week, and has for years. Feels like the media is making a mountain out of a molehill here.
by happytoexplain
31 subcomments
- >Junior and mid-level engineers can no longer push AI-assisted code without a senior signing off
Review by a senior is one of the biggest "silver bullet" illusions managers suffer from. For a person (senior or otherwise) to examine code or configuration with the granularity required to verify that it even approximates the result of their own level of experience, even only in terms of security/stability/correctness, requires an amount of time approaching the time spent if they had just done it themselves.
I.e. senior review is valuable, but it does not make bad code good.
This is one major facet of probably the single biggest problem of the last couple decades in system management: The misunderstanding by management that making something idiot proof means you can now hire idiots (not intended as an insult, just using the terminology of the phrase "idiot proof").
by 33MHz-i486
1 subcomments
- In case it isn’t completely obvious from this, it is indeed hellish to work there. Most of AWS has a 2-reviewer requirement. If AI is writing most of the code (and it is, because most Amazon code is copypasta boilerplate), you need 3 developers to sign off to ship anything. But of course, due to headcount attrition, managers have ~1.5 developers to a project. Meanwhile the L8 manager is doing nothing except stack-ranking each level of engineers according to number of commits merged & customer-facing features shipped, and firing the bottom 15% at the end of each year. There is no notion of subject matter expertise or technical depth; they're happy to replace whoever with fresh grads (they're all just cogs anyway, right?). Between that and voluntary departures, teams having 80-100% turnover every 5 years is basically par for the course.
Also, while this is happening, most developers are getting constantly hammered by operational issues and critical security tasks, because 1) the legacy toolchain imports 6 different language package ecosystems and 2) no one ever pays down tech debt in legacy code until it's a high-severity ticket count on a KPI dashboard visible to senior management.
by prakhar897
4 subcomments
- From the Amazon I know, people only care about a. not getting fired and b. promotions. For devs, the incentive matrix looks like this:
1. Shipping: deliver tickets or be pipped.
2. Having fewer comments on their PRs: for some drastically dumb reason, having a PR thoroughly reviewed is a sign of bad quality. L7 and above use this metric to PIP folks.
3. Docs: write docs, get them reviewed to show you're high level.
Without AI, an employee is worse off in all of the above compared to folks who will cheat to get ahead.
I can't see how "requesting" folks to forego their own self-preservation will work, especially when you've spent years pitting people against each other.
by sdevonoes
5 subcomments
- Reviewing AI generated code at PR time is a bottleneck. It cancels most of the benefits senior leadership thinks AI offers (delivery speed).
There’s also this implicit imbalance engineers typically don’t like: it takes me 10 min to submit a complete feature thanks to Claude… but for the human reviewing my PR in a manual way it will take them 10-20 times that.
Edit: at the end of the day, real engineers know that what takes effort is a) knowing what to build and why, and b) verifying that what was built is correct. Currently AI doesn't help much with either of these two points.
The inbetweens are needed but they are a byproduct. Senior leadership doesn’t know this, though.
by philip1209
8 subcomments
- I think the deeper need is a "self-review" flow.
People push AI-generated code as if they wrote it. In the past, "wrote it" implied "reviewed it." With AI, that's no longer true.
I advocate for GitHub and other code review systems to add a "Require self-review" option, where people must attest that they reviewed and approved their own code. This change might seem symbolic, but it clearly sets workflows and expectations.
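No mainstream review tool offers such an option today, so here is a minimal sketch of what a CI-side attestation check could look like; the checkbox wording, regex, and function name are all invented for illustration:

```python
import re

# Hypothetical attestation checkbox a PR template might require:
#   - [x] I reviewed every line of this change as if I had written it by hand
ATTESTATION = re.compile(r"-\s*\[x\]\s*I reviewed every line", re.IGNORECASE)

def has_self_review(pr_body: str) -> bool:
    """True if the author checked the self-review box in the PR description."""
    return bool(ATTESTATION.search(pr_body or ""))
```

A CI job would fetch the PR description and fail the build when `has_self_review` returns False. Trivially gameable, of course, but the point the commenter makes is about setting expectations, not enforcement.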
- The optics here are really bad for Amazon. The continuing mass departures of long-tenured folks, second-rate AI products, and a string of bad outages paint a picture of current leadership overseeing a once-respected engineering org flying off the rails.
News from the inside makes it sound like things are getting pretty bad.
- The only way to see the kinds of speed-up companies want from these things, right now, is to do way too little review. I think we're going to see a lot of failures in a lot of sectors where companies set goals for reduced hours on various things they do, based on what they expected from LLM speed-ups, and it will have turned out the only way to hit those goals was by spending way too little time reviewing LLM output.
They're torn between "we want to fire 80% of you" and "... but if we don't give up quality/reliability, LLMs only save a little time, not a ton, so we can only fire like 5% of you max".
(It's the same in writing: these things are only a huge speed-up if it's OK for the output to be low-quality, but producing good output using LLMs only saves a little time versus writing entirely by hand. So far, anyway; of course these systems are changing by the day, but this specific limitation has remained true for about four years now, without much improvement.)
by petterroea
1 subcomments
- I feel bad for the seniors who have to take on this workload. The general pattern I am seeing is that seniors at "AI-first" companies are being held back from doing their own work by reviewing PRs from juniors, who can now ship much more code without understanding what's wrong with it.
Mentoring juniors is an important part of the job and a crucial service to the industry, but juniors equipped with LLMs make the deal a bit more sour. Anecdotally, they don't retain the feedback as well, because they weren't involved in writing the code. It's burnout-inducing to see your hard work and feedback go in one ear and out the other.
I personally know people looking to jump ship because they waste too much time at their current employer on this.
- > Company that lays-off 20% of its staff every year in an attempt to "reduce inefficiency" and "remain agile in the adoption of new technologies and workflows" finds they cannot run a stable service, have more inefficiency than ever, and have also failed to establish leadership in the adoption of any new technologies or workflows. They plan to solve these problems by introducing more inefficiency (making your most expensive employees review the work of others).
We love this for Amazon, they're a very strong company making bold decisions.
- If this is true, it misunderstands the primary goals of code review.
Code review should not be (primarily) about catching serious errors. If there are always a lot of errors, you can’t catch most of them with review. If there are few it’s not the best use of time.
The goal is to ensure the team is in sync on design, standards, etc. To train and educate Jr engineers, to spread understanding of the system. To bring more points of view to complex and important decisions.
These goals help you reduce the number of errors going into the review process; that should be the actual goal.
- Someone should teach the decision makers how pipelines work. If AI-created diffs are being churned out at 10x the previous rate but manual reviews are the bottleneck then the overall system is producing at the exact same rate as before. The only thing you have added is cost, uncertainty and engineers being less familiar with the system.
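The pipeline point above is just the bottleneck law: a serial pipeline's steady-state throughput equals that of its slowest stage. A toy illustration (all rates hypothetical):

```python
def pipeline_throughput(stage_rates):
    """Steady-state throughput of a serial pipeline is set by its slowest stage."""
    return min(stage_rates)

# Hypothetical rates in diffs/day: AI 10x'd generation, review is unchanged.
before = pipeline_throughput([10, 10])    # hand-written code, manual review
after = pipeline_throughput([100, 10])    # AI-generated code, manual review
# Throughput is identical; only the backlog of unreviewed diffs grows.
```

The extra generation capacity doesn't disappear; it accumulates as work-in-progress, which is exactly the cost and uncertainty the commenter describes.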
by Lalabadie
1 subcomments
- I'm not sure the sustainable solution is to treat an excess of lower-quality code output as the fixed thing to work with, and operationalize around that, but sure.
16,000 layoffs cost just $180 million; it's a win so far
https://x.com/gothburz/status/2031778265958842541
- The amount of time and money being wasted chasing this dragon is unreal.
- > The response for now? Junior and mid-level engineers can no longer push AI-assisted code without a senior signing off.
So basically, kill the productivity of senior engineers, kill the ability for junior engineers to learn anything, and ensure those senior engineers hate their jobs.
Bold move, we'll see how that goes.
- I think the problem of responsibility will come for many more companies sooner than later. It is possible that some of the alleged efficacy gains by using ai are not so big anymore when someone has to be accountable for it.
by AlotOfReading
2 subcomments
- I'm not surprised by the outages, but I am surprised that they're leaning into human code review as a solution rather than a neverending succession of LLM PR reviewers.
I wonder if it's an early step towards an apprenticeship system.
- What are we going to do about software for critical infrastructure in the coming decade?
Feels inevitable that code for aviation will slowly rot from the same forces at play but with lethal results.
by AlexeyBrin
3 subcomments
- I wonder how this will work in practice. Say I'm a senior engineer and I myself produce thousands of lines of code per day with the help of LLMs, as mandated by the company. I still presumably need to read and test the code that I push to production. When will I have time to read and evaluate similar amounts of code produced by a junior or mid-level engineer?
- I just met a guy from Amazon this past weekend who was bragging, "We've got unlimited access to LLMs and our developers have 10 agents going at a time." I tried telling him it wasn't all unicorns and rainbows, but I didn't get the impression he cared; he just kept crapping out skittles.
by captainkrtek
0 subcomment
- One challenge with code review as an antidote to poor quality gen-AI code, is that we largely see only the code itself, not the process or inputs.
In the pre-gen-AI days, if an engineer put up a PR, it implied (somewhat) they wrote their code, reviewed it implicitly as they wrote it, and made choices (ie: why is this the best approach).
If Claude is just the new high level programming language, in terms of prompting in natural language, the challenge is that we're not reviewing the natural language, we're reviewing the machine code without knowing what the inputs were. I'm not sure of a solution to this, but something along the lines of knowing the history of the prompting that ultimately led to the PR, the time/tokens involved, etc. may inform the "quality" or "effort" spent in producing the PR. A one-shotted feature vs. a multi-iteration feature may produce the same lines of code and general shape, but one is likely to be higher "quality" in terms of minimal defects.
Along the same lines, when I review some gen-AI produced PR, it feels like I'm reading assembly and having to reverse how we got here. It may be code that runs and is perfectly fine, but I can't tell what the higher level inputs were that produced it, and if they were sufficient.
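One hedged sketch of what recording that prompting history could look like: a hypothetical convention where agent-assisted commits carry trailers noting iteration count and tokens spent, which review tooling could then surface. The trailer names and format are invented:

```python
import re

# Hypothetical commit-message trailers, e.g.:
#   AI-Prompt-Iterations: 7
#   AI-Tokens-Used: 48213
TRAILER_RE = re.compile(r"^AI-(Prompt-Iterations|Tokens-Used):\s*(\d+)$", re.MULTILINE)

def prompt_provenance(commit_message: str) -> dict:
    """Extract AI-effort trailers from a commit message, if present."""
    return {key: int(val) for key, val in TRAILER_RE.findall(commit_message)}

msg = """Add retry logic to checkout service

AI-Prompt-Iterations: 7
AI-Tokens-Used: 48213"""
```

A reviewer dashboard could then flag one-shotted PRs for closer scrutiny than ones that went through many human-guided iterations, roughly the "effort" signal the comment is asking for.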
by mancerayder
0 subcomment
- Is the rebuttal posted anywhere? I collapsed the huge first few threads but nothing is there. True or false, Amazon is saying that it's not due to AI, and that their change in operational processes to add review is in fact broader:
https://www.aboutamazon.com/news/aws/aws-service-outage-ai-b...
by julienchastang
1 subcomments
- > best practices and safeguards are not yet fully established
The way I am working with AI agents (Codex) these days is to have the AI generate a spec as a series of MD documents, where the AI implementation of each document is a bite-sized chunk that can be tested and evaluated by a human before moving to the next step, and roughly matches a commit in version control. The version control history reflects the logical progression of the code. In this manner, I have a decent knowledge of the code, and one that I am more comfortable with than one-shotting.
- If they do not also increase the senior devs’ allotted time for code reviews to make up for the increased volume of changes due to increased productivity of the junior to mid level devs, or hire more seniors, this will just lead to burnout (on top of Amazon’s already high levels) and scapegoating seniors for having waved through a change because they materially can’t review them fast enough.
- If Seniors are going to review every GenAI generated code, how do they keep up with the volume of changes?
So you have 2 systems of engineers: Sr- and Sr+
1. Both should write code to justify their work and impact
2. Sr- code must be reviewed by Sr+
What happens:
a. Sr+ output drops because review takes their time more and more
b. Sr+ just blindly accepts because the volume is too high, and they also have their own work to do
c. Sr+ asks Sr- to slow down, but then Sr- can get bad reviews for their output, because on average Sr+ will produce more code
I think (b) will happen
by VorpalWay
1 subcomments
- I'm bewildered by Amazon here. I would have assumed every change already requires code review by another engineer, as is standard practice in the industry I work in (industrial equipment). Is the change just that it has to be a senior engineer specifically, rather than any engineer? Or did Amazon really not have mandatory code review before?
by sizzzzlerz
2 subcomments
- Who fixes code that gets rejected? Do you simply try again and hope or does someone go into this computer-generated code that they didn't write and do the equivalent of battlefield triage?
And what are they going to do when they've fired all the senior engineers because they make too much money, leaving just juniors and AI?
by LogicFailsMe
0 subcomment
- For the good of the company's future, all code should be reviewed by L10s going forward before they are accepted. They're the only ones with enough skin in the game to know what really matters after all.
And from their sagely reviews, we shall train a large language model to ultimately replace them because the most fungible thing at Amazon is the leadership.
- It's only going to get worse with the brain drain as a result of the layoffs. Which will increase the use of AI assisted coding and increase the number of outages related to this.
Imagine having to debug code that caused an outage when 80% is written by an LLM and you now have to start actually figuring out the codebase at 2am.. :)
- You could create an agent template for each incident you've ever had, with context pre-cached with the postmortem report, full code change, and any other information about the incident. Then for every new PR you could clone agents from all those templates and ask whether the PR could cause something similar to the pre-loaded incident. If any of them say yes, reject the PR unless there's a manual override. You'd never have a repeat incident.
Obviously it's probably cost-prohibitive to do an all to all analysis for every PR, but I imagine with some intelligent optimizations around likelihood and similarity analysis something along those lines would be possible and practical.
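A rough sketch of that "intelligent optimization" step, assuming a cheap bag-of-words similarity stands in for real embeddings; the function names, corpus, and thresholds are all invented:

```python
from collections import Counter
from math import sqrt

def cosine(a: str, b: str) -> float:
    """Bag-of-words cosine similarity; a cheap stand-in for embedding models."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def incidents_to_replay(pr_diff: str, postmortems: dict, top_k: int = 5, floor: float = 0.05):
    """Pick the incidents most similar to this PR, limiting which agents get cloned."""
    ranked = sorted(postmortems, key=lambda name: cosine(pr_diff, postmortems[name]), reverse=True)
    return [n for n in ranked[:top_k] if cosine(pr_diff, postmortems[n]) >= floor]
```

Only the incident agents surviving this pre-filter would be spun up against the PR, keeping the per-PR cost proportional to `top_k` rather than to the full incident history.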
the funniest part is amazon literally started tying AI usage to performance reviews like 6 months ago and now they're doing damage control. you can't simultaneously pressure every engineer to use more AI AND be shocked when AI-assisted code breaks prod. pick one lol
by kmg_finfolio
0 subcomment
- The accountability problem is real but I think it's slightly different from what's being described. The issue isn't just "who signs off"; it's that the reasoning behind a change becomes invisible when AI generates it. A senior engineer can approve output they don't fully understand, and six months later when something breaks, nobody can reconstruct why that decision was made.
Human review works when the reviewer can actually interrogate the logic. At LLM-assisted velocity, that bar gets harder to clear every month.
by dragonelite
5 subcomments
- Expect a shitload of AI powered code review products the next 18 months.
- .agentignore/.agentnotallowed file
force agents to not touch mission critical things, fail in CI otherwise
let it work on frontends and things at the frontier of the dependency tree, where it is worth the risk
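A minimal sketch of how such a deny file could be enforced in CI; the file name comes from the comment, while the glob format and function are assumptions:

```python
from fnmatch import fnmatch

def blocked_paths(changed_files, patterns):
    """Files an agent touched that match a deny pattern (glob-style)."""
    return [f for f in changed_files if any(fnmatch(f, p) for p in patterns)]

# Hypothetical .agentnotallowed contents: one glob per line.
deny = ["infra/*", "*/payments/*", "Dockerfile"]
hits = blocked_paths(["infra/dns/zone.tf", "web/src/App.tsx"], deny)
# CI would fail the build if `hits` is non-empty on an agent-authored commit.
```

Detecting which commits are agent-authored is the hard part, of course; a trailer or bot identity on the commit is one plausible signal.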
- Speed of code-writing was never the issue at Amazon or AWS. It was always wrong-headed strategic directions, out-to-lunch PMs, a dogshit testing environment, stakeholder soup, high turnover, bureaucracy, a pantheon of legacy systems, insane operational burdens, garbage tooling, and last but not least, designing for inter-system failure modes (which, let's be real, AI has no chance of having context for), and so on...
Imagine if the #1 problem of your woodworking shop is staff injuries, and the solution that management foists on you is higher RPM lathes.
by PeterStuer
0 subcomment
- There's going to be some serious acceleration in senior engineer burnout. Maybe they can use more genAI to support their increased workload.
by tracerbulletx
0 subcomment
- The way we used to build confidence in what we shipped was by beating our head against it for a week figuring it all out. You really can't have the same confidence with code reviews unless you basically do the same work you'd do to write it by hand for a lot of these things.
by butILoveLife
1 subcomments
- Maybe it's just my one buddy who works at Amazon, but they seemed extremely slow to adopt LLMs. Big ships take a long time to turn, but this seemed hostile.
I am seeing this mindset still, with AI Agents. I imagine they will slowly realize they need to use this stuff to be competitive, but being slow to adopt AI seems like it could have been the source of this.
- I anticipate they will fix this by adding better AI evaluation tools that work better to test their infra and changes.
In the meantime they will be quite a bit slower I’d imagine.
Also wonder if those seniors will ever get to actually do any engineering themselves now that they’re the bottleneck. :)
- AI seems to be the whipping boy, but to me, it really seems more of a symptom than a cause. At its root, isn't this an issue of a decline in critical thinking?
I do think AI adoption exacerbates said falloff.
- An outage could cost Amazon millions to tens of millions. Most of the time, we want the junior to learn from the outage and fix the process. With an AI agent, we can only update the agent.md and hope it never happens again.
- So Amazon senior SWEs now have to review every single PR for all intents and purposes? I didn't think Amazon could get worse.
by dedoussis
1 subcomments
- How do they determine whether a PR is AI-assisted and therefore requires senior review? A junior engineer could still copy-paste AI-generated code and claim it as their own.
- "Make senior engineer sign off ai-assisted changes" sounds incredibly weird.
First thing that comes to mind: it reminds me of those movies where some dictatorship starts to crumble and the dictator gets tougher and tougher on the generals, not realizing the whole endeavor is doomed, not just the current implementation.
Then again, as a former Amazon (AWS) engineer: this is just not going to work. Depending on how you define "senior engineer" (L5? L6? L7?), this is less and less feasible.
L5 engineers are already supposed to work pretty much autonomously, maybe with L6 sign-off when changes are a bit large in scope.
L6 engineers already have their own load of work, and a fairly large amount of engineers "under" them (anywhere from 5 to 8). Properly reviewing changes from all them, and taking responsibility for that, is going to be very taxing on such people.
L7 engineers work across teams and they might have anywhere from 12 to 30 engineers (L4/5/6) "under" them (or more). They are already scarce in number and they already pretty much mostly do reviews (which is proving not sufficient, it seems). Mandating sign-off and mandating assumption of responsibility for breaking changes means these people basically only do reviews and will be stricter and stricter[1] with engineers under them.
L8 engineers, they barely do any engineering at all, from what I remember. They mostly review design documents, in my experience not always expressing sound opinions or having proper understanding of the issues being handled.
In all this, considering the low morale (layoffs), the reduced headcount (layoffs) and the rise in expectations (engineers trying harder to stay afloat[2] due to... layoffs)... It's a dire situation.
I'm going to tell you, this stinks A LOT like a rotting Day 2 mindset.
----
1. keep in mind you can't, in general, determine the absence of bugs
2. Also cranking out WAY more code due to having gen-AI tools at their fingertips...
by joeyguerra
0 subcomment
- I wonder how many senior engineers are going to quit because they don't want to read a bunch of code?
by monster_truck
0 subcomment
- Have they tried simply not writing bugs? I've found that works best for me personally
- Ugh. The Great Oops has never been closer.
- Digression: the long Twitter URLs make this entire page wider and the text smaller on iOS for me. Feels like a minor bug; maybe an `overflow-wrap: anywhere` CSS rule needs to be added to URLs.
- Some years of AI Technical Debt will be something to behold.
by dude250711
1 subcomments
- I knew this would happen.
Take a perfectly productive senior developer and instead make them responsible for the output of a bunch of AI juniors, with the expectation of 10x output.
- How much damage is 6 hours offline for Amazon?
by th2o34i3432897
0 subcomment
- First Microsoft and now Amazon (e.g. their Rufus AI is useless compared to the old comment search!)
Has Seattle now become the code-slop capital? Or is SFO still on top?
- A few days ago, after some very weird failed purchase attempts I made (payment couldn't be validated or something), I received an even weirder mail from Amazon saying they had detected suspicious activity; all my devices got logged out and I was forced to change my password. I did it, after verifying it was a legit email (even if it looked sketchy af: pure text, unstyled, but sender verified and confirmed with in-app behavior), and the next thing I know, all my orders and browsing history had disappeared: 15+ years of history, gone.
Over the next few days my account history came back, except purchases made in Q1 2026. Those are still missing. There are a few substantial purchases I made that are nowhere to be found anymore.
I attributed this to Iranian missiles hitting some of their infrastructure in the EU, as had been reported.
Now I am not sure if it was blast radius from missiles or AI mishaps. Lmao, couldn't happen to a worse company…
- > the affected tool served customers in mainland China
Thought this blurb most interesting. What's the between-the-lines subtext here? Are they deliberately serving something they know to be faulty to the Chinese? Or is it the case that the Chinese use it with little to no issue/complaint? Or...?
"After outages due to outsourcing to economically convenient developers with no skin in the game or care for what you're building, company X requires all senior engineers to review all code from the outsourcing company."
- > Junior and mid-level engineers can no longer push AI-assisted code without a senior signing off
So what incentive is there for juniors to look at the code at all? Seniors are now just another CI stage for their slop to pass.
- Not fun to work at amazon.com it seems.
- use AI!
no! not that way!
by softwaredoug
0 subcomment
- Getting junior / mid-level people to slop cannon PRs at seniors will just burn out seniors. The team might be better having fewer developers using AI more thoughtfully.
- A year later, they will require AI to sign off engineer changes.
by mattschaller
3 subcomments
- Anyone work with Kiro before? As I understood, it was held as an INTERNAL USE ONLY tool for much longer than expected.
- Then what's the point of AI? Pay for the code gen, then pay a human to review the code gen, when the senior could instead train a junior and coordinate output through their incentives and performance reviews; problem largely solved.
Seems to me too low level in everyone’s stack to not have humans doing the work, especially at this stage. But what do I know, I certainly am not at the helm of a multibillion dollar operation.
- Hope this happens at GitHub since there are constant outages on the entire platform.
by CodingJeebus
0 subcomment
- I'm at a small company struggling with this problem. Fundamentally, we have a limited context and AI is capable of generating tremendous amounts of output that exceed our ability to deeply process.
I find myself context-switching all the time and it's pretty exhausting, while also finding that I'm not retaining as much deep application domain knowledge as I used to.
On the surface, it's nice that I can give my LLM a well-written bug ticket and let it loose since it does a good job most of the time. But when it doesn't do a good job or it's making a change in an area of the codebase I'm not familiar with, auditing the change gets tiring really fast.
- very expected outcomes.
- "We want you to use AI for everything!"
"No, not like that though!"
- A former colleague of mine recently took a role that has largely turned out to be "greybeard who reviews the AI slop of the junior engineers". In theory it sounds workable, but the volume of slop makes thoughtful review impossible. Seems like most orgs will just put pressure on the slop generators to produce more, put pressure on the approvers, and then scapegoat the approvers if necessary?
- So, seniors will now review the AI slop code. I am also doing this task, and reviewing this kind of code takes time, as the code is often overengineered. The code works but will have potential bugs, and I am not able to find every bug or implication quickly. But I am also using AI to review the AI slop, lol, because why not. After that I also do a quick review myself.
- This is what humans will become, on call to take the blame for AI. It will be less about skill and confidence and more about being on the hook to take the fall for when things go wrong.
- If you don't use ~crypto~ AI you will go broke!
by secondcoming
0 subcomment
- Has Amazon's advertising TAM product been affected by AI?
by mikkupikku
0 subcomment
- lgtm
by ChrisArchitect
0 subcomment
- Previously: https://news.ycombinator.com/item?id=47319273
- Unfortunately you can't just yell at the AI so it learns never to do this again. Humans can take in a far wider range of feedback than LLMs can.
by fredgrott
2 subcomments
- Curious question: how many Amazon engineers flunk basic CS?
If you know CS, you know two things:
1. AI cannot judge whether code is noise or signal; AI cannot tell.
2. CS-wise, we use static analysis to judge good code from bad.
How much time does it take to run the basic static analysis tools for most computer languages over the AI output?
Some juniors need firing outright.
by adamzwasserman
0 subcomment
- I'm sure they are going to have a ball reading through thousands of lines of AI slop.
by HeavyStorm
0 subcomment
- This is so fucking ridiculous.
by luxuryballs
0 subcomment
- They weren’t already signing off on them? o.O
by recallingmemory
0 subcomment
- .. So our jobs aren't going away?
by camillomiller
0 subcomment
- Such fun.
On top of your already strapped schedule, now you have to bet your career on vibe code that you will have to spend time reading and debugging. All that instead of a chain of accountability with people in place, rather than stupid bots with fake agency.
This is beyond corporate satire.
Never before was there a technology capable of convincing leadership of its usefulness despite its constant blunders and the low quality of its output.
This feels like a corporate mass delusion of unprecedented scale.
- This looks like a blame allocation exercise to me.
The seniors will now be directly responsible for all the AI slop that goes in. But how can they possibly review reams of code to a sufficient degree that they can personally vouch for it?
by desireco42
0 subcomment
- So essentially they will be blamed, everything will stay the same.
I do consulting and use AI a lot. You just have to take responsibility for the code. We are delivering like never before, but we have a lot of experience in how to do it as safely as possible. And we are learning along the way. They say you need a year to build up that experience, FYI.
I feel bad for those engineers who will have to sign off for things they will most likely not have enough time to review. Kiro is nice and all.
by throw_m239339
1 subcomments
- Yet another example of vibe coding at scale. You'll have to hire a lot of seniors out of retirement to fix that mess of gigantic proportions... and don't blame "the juniors" for it; they didn't make the decision to allow those tools in the first place.
by josefritzishere
1 subcomments
- The excessive exuberance of AI adoption is all part of the bubble.
- With AI it makes sense to have leaner teams. Being able to go faster requires greater responsibility.
by letitgo12345
0 subcomment
- Worth noting that this is when they used Amazon's own AI product, not when using Claude Code or Codex.
- So the take-away here is maybe we should read the code that "we" wrote? :)
(Before injecting it into global infra...)
by andsoitis
1 subcomments
- > Amazon’s website and shopping app went down for nearly six hours this month in an incident the company said involved an erroneous “software code deployment.” The outage left customers unable to complete transactions or access functions such as checking account details and product prices.
The environment breathed a little.
by rubyrfranklin2
0 subcomment
- We ran into something similar at heyvid.ai: shipped AI-generated code without a proper review gate and ended up with a subtle bug in our rendering pipeline that took the team a week to trace. Not catastrophic, but it seriously eroded trust in the tooling for a while. Amazon's approach makes total sense at their scale. The honest reality is that LLMs are great at producing plausible-looking code and genuinely bad at knowing when they're wrong. Senior sign-off isn't overhead; it's what makes AI-assisted development actually sustainable.