Look, I am going to say something that might get me uninvited from a few Slack channels: I think Amazon is right about this one.
Last week, after a string of outages — including one memorable 13-hour AWS incident where an AI coding tool decided the best course of action was to delete and recreate an entire environment — Amazon announced a new policy. Junior and mid-level engineers now need a senior engineer to sign off on any AI-assisted code changes before they go live.
And the internet, predictably, lost its mind.
“This is anti-innovation!” “They are treating AI like a junior dev!” “Next they will require sign-off for Stack Overflow answers too!”
I get it. I do. But here is the thing: Amazon is not saying AI coding tools are bad. They are saying nobody should ship code they do not fully understand. And that is a principle we somehow forgot in the rush to let Copilot drive.
The Outage That Started It All
Let me paint the picture. December 2025. AWS is running smoothly — or so everyone thought. An engineering team was using Kiro, Amazon’s internal AI coding tool, to make what should have been a routine infrastructure change. The AI looked at the existing setup, decided it was suboptimal, and chose to “delete and recreate the environment.”
Not modify. Not update. Delete and recreate.
Thirteen hours later, the affected service, a cost calculator used by customers in parts of mainland China, was back online. Amazon called it an “extremely limited event.” Having been on the receiving end of “extremely limited events” that somehow required three all-hands calls and a postmortem the length of a novella, I can tell you that phrasing does a lot of heavy lifting.
My friend Greg, who works at a mid-size fintech company, had a nearly identical experience two months ago. “We gave our junior devs access to an AI assistant for infrastructure scripts,” he told me over coffee last Tuesday. “One of them asked it to optimize our database connection pooling. The AI decided to drop the existing pool configuration and rebuild it from scratch. During peak hours. On a Friday.”
I almost choked on my latte.
The Real Problem Is Not AI — It Is the Skill Gap
Here is what nobody wants to talk about: AI coding tools are incredible at generating code that looks correct. They produce clean syntax, follow naming conventions, add comments that sound reasonable. The code compiles. The tests pass — at least the tests the AI wrote, which test exactly the things the AI thought to test.
But understanding what code does at a systems level? Knowing that this particular change will cascade through three microservices and a message queue? That is still a human skill. And it is a skill that takes years to develop.
Sandra, a principal engineer at a company I will not name because she specifically asked me not to, put it bluntly during a panel we were both on last month: “Junior engineers are shipping more code than ever. They are also understanding less of it than ever. That is not a productivity gain. That is a time bomb.”
I wanted to disagree with her. I really did. But then I looked at the incident reports from the last quarter at companies I consult for, and — yeah. She has a point.
What Amazon’s Policy Actually Says
Let me be precise about what Amazon is doing, because the headlines made it sound more dramatic than it is:
- Junior and mid-level engineers need a senior engineer to review and approve any changes that were substantially generated or modified by AI tools
- This applies to production-bound code, not personal projects or experimentation
- The review is specifically focused on understanding — can the submitting engineer explain what the code does and why?
That is... not unreasonable? We already have code review processes. This is basically saying “pay extra attention when AI was involved.” It is the engineering equivalent of “measure twice, cut once.”
The Layoff Connection Nobody Wants to Acknowledge
Here is where it gets uncomfortable. Amazon eliminated 16,000 corporate roles in January 2026. Multiple Amazon engineers have gone on record saying their business units now deal with more “Sev2” incidents — the kind requiring rapid response to prevent outages — as a direct result of reduced headcount.
Amazon disputes this. Of course they do.
But connect the dots: fewer experienced engineers plus more AI-generated code plus the same amount of complex infrastructure equals... well, exactly the kind of outages we have been seeing.
My take? The sign-off policy is not just about AI quality. It is about knowledge transfer. When you fire a bunch of senior people and hand their responsibilities to AI-augmented junior teams, you need some mechanism to ensure institutional knowledge does not just evaporate. The review process forces conversations that would have happened organically when those senior engineers were still around.
Should Your Company Do the Same?
Short answer: probably.
Long answer: it depends on what you are building and how much damage a bad deployment can cause. Here is my rough framework:
Definitely Require Senior Sign-Off If:
- You are deploying to production infrastructure
- The change touches authentication, payments, or data storage
- The change was generated by AI and the submitting engineer has less than 3 years of experience with the relevant system
- You have had an AI-related incident in the past 6 months (be honest with yourself)
Probably Fine to Skip If:
- It is a frontend styling change
- The AI suggestion was minor (variable rename, simple refactor)
- The engineer can whiteboard the change and explain every line
The question I always ask teams: “If I woke you up at 3 AM because this code broke, could you debug it without the AI tool?” If the answer is no, that code needs another set of eyes.
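If it helps to make the framework above concrete, here is a minimal sketch of it as a policy check you could wire into CI. Every field name and threshold here is my own illustrative assumption, not Amazon's actual policy or anyone's real tooling.

```python
# Sketch of the sign-off framework above as a gating function.
# Field names and thresholds are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Change:
    ai_assisted: bool
    touches_production: bool
    touches_sensitive_systems: bool  # auth, payments, data storage
    author_years_on_system: float
    recent_ai_incident: bool         # AI-related incident in past 6 months
    is_trivial: bool                 # styling change, rename, simple refactor


def needs_senior_signoff(change: Change) -> bool:
    """Return True if the change should get senior review before shipping."""
    if not change.ai_assisted:
        return False  # this policy specifically targets AI-assisted changes
    if change.is_trivial and not change.touches_sensitive_systems:
        return False  # minor suggestions: probably fine to skip
    return (
        change.touches_production
        or change.touches_sensitive_systems
        or change.author_years_on_system < 3
        or change.recent_ai_incident
    )
```

So a trivial frontend rename sails through, while an AI-generated infrastructure change from someone with a year on the system gets stopped at the gate. The point is not the exact thresholds; it is that the rules are explicit enough to automate instead of living in one senior engineer's head.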
The Inconvenient Truth About Productivity Metrics
I know what the “ship faster” crowd is thinking: this will slow us down. And yes, it will. A little. The same way buckling a seatbelt slows you down when you get in a car.
Priya, who leads engineering at a Series B startup I advise, ran the numbers after implementing a similar policy three months ago. “Our deployment velocity dropped about 12 percent,” she told me. “But our rollback rate dropped 60 percent. We went from four significant incidents per month to one. The math is not hard.”
Twelve percent slower shipping. Sixty percent fewer fires. I will take that trade every single time.
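You can run that trade yourself on a napkin. Here is the back-of-envelope version using Priya's numbers from above; the baseline deploy count and the hour costs are hypothetical assumptions I made up for illustration, not figures from her team.

```python
# Back-of-envelope math on the review trade-off. The percentages and
# incident counts come from the text; the baseline deploy volume and
# hour costs are hypothetical assumptions for illustration.

deploys_per_month = 100      # assumed baseline
velocity_drop = 0.12         # from the text: ~12% slower shipping
incidents_before = 4         # from the text
incidents_after = 1          # from the text

hours_per_lost_deploy = 1    # assumed: extra review time per change
hours_per_incident = 40      # assumed: response + postmortem + cleanup

review_cost = deploys_per_month * velocity_drop * hours_per_lost_deploy
incident_savings = (incidents_before - incidents_after) * hours_per_incident

print(f"monthly review overhead: ~{review_cost:.0f} engineer-hours")
print(f"monthly incident savings: ~{incident_savings:.0f} engineer-hours")
```

Under those assumptions you spend roughly a dozen engineer-hours a month on extra review and get back an order of magnitude more in incidents that never happened. Plug in your own numbers; the shape of the answer rarely changes.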
What This Means for the Future of AI-Assisted Development
I want to be clear: I am not anti-AI coding tools. I use them daily. They make me faster. They catch mistakes I would miss. They are genuinely, meaningfully useful.
But we are in the “move fast and break things” phase of AI-assisted development, and some of the things we are breaking are production systems serving real customers. Amazon, to their credit, is saying “maybe we should move fast and break fewer things.”
The companies that figure out how to integrate AI coding tools with proper human oversight are going to eat the ones that just hand juniors a Copilot subscription and say “ship it.” That is not a hot take. That is just engineering.
And if you are a junior engineer reading this and feeling defensive — I get it. Nobody likes being told their work needs extra review. But here is a secret: I have been doing this for 15 years, and I still have my code reviewed. Not because anyone requires it. Because I have seen what happens when clever code meets complex systems at 2 AM, and I would rather have the conversation at 2 PM.
My code has gotten better because of review. Not in spite of it.
The Bottom Line
Amazon made senior engineers sign off on AI-generated code changes. The internet called it regression. I call it engineering.
The AI tools are not going anywhere. They are going to get better, faster, more capable. But “this code was written by AI” should never be a substitute for “I understand what this code does.” Those are different sentences. And right now, too many teams are treating them as the same one.
(Also, if your AI coding tool’s idea of “optimization” is “delete everything and start over,” maybe that should have been a red flag before the 13-hour outage. Just saying.)