> curl -X POST https://backboard.railway.app/graphql/v2 -H "Authorization: Bearer [token]" -d '{"query":"mutation { volumeDelete(volumeId: \"3d2c42fb-...\") }"}'

No confirmation step. No "type DELETE to confirm." No "this volume contains production data, are you sure?" No environment scoping. Nothing.
It's an API. Where would you type DELETE to confirm? Are there examples of REST-style APIs that implement a two-step confirmation for modifications? I would have thought such a check needs to be implemented on the client side prior to the API call.
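One pattern that does exist in the wild is a two-phase delete, where the first call only returns a confirmation token and the actual deletion requires echoing it back. A minimal sketch in Python, purely illustrative; the function names and token flow are hypothetical, not Railway's API:

    import secrets
    import time

    PENDING_TTL = 300                       # seconds a pending delete stays valid
    volumes = {"vol-1": {"size_gb": 100}}   # hypothetical resources
    pending = {}                            # confirm_token -> (volume_id, expiry)

    def request_volume_delete(volume_id: str) -> str:
        """Step 1: nothing is deleted yet; the caller gets a one-time token."""
        token = secrets.token_hex(8)
        pending[token] = (volume_id, time.time() + PENDING_TTL)
        return token

    def confirm_volume_delete(confirm_token: str) -> bool:
        """Step 2: only a valid, unexpired token actually removes the volume."""
        entry = pending.pop(confirm_token, None)
        if entry is None:
            return False
        volume_id, expiry = entry
        if time.time() > expiry:
            return False
        volumes.pop(volume_id, None)
        return True

An agent blindly firing one call gets nothing destructive done; it has to deliberately carry the token across two calls, which is exactly the friction the author is asking for.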
I really feel sorry for them, I do. But the whole tone of the post is: Cursor screwed it up, Railway screwed it up, their CEO doesn't respond, etc. etc.
It's on you guys!
My learning: Live on the cutting edge? Be prepared to fall off!
The sequence of tokens that would destroy your production environment can be produced by your agent, no matter how much prompting you use. Prompting is not a strong control and it is not an engineering control; it's an administrative control. Agents are landmines that will destroy production until proven otherwise.
Most of these stories are caused by outright negligence, just giving the agent a high level of privileges. In this case they had a script with an embedded credential which was more privileged than they had believed - bad hygiene but an understandable mistake. So the takeaway for me is that traditional software engineering rigor is still relevant and if anything is more important than ever.
ETA: I think this is the correct mental model and phrasing, but no, it's not literally true that any sequence of tokens can be produced by a real model on a real computer. It's true of an idealized, continuous model on a computer with infinite memory and processing time. I stand by both the mental model and the phrasing, but obviously I'm causing some confusion, so I'm going to lift a comment I made deep in the thread up here for clarity:
> "Everything that can go wrong, will go wrong" isn't literally true either, some failure modes are mutually exclusive so at most one of them will go wrong. I think that the punchy phrasing and the mental model are both more useful from the standpoint of someone creating/managing agents and that it is true in the sense that any other mental model or rule of thumb is true. It's literally true among spherical cows in a frictionless vacuum and directionally correct in the real world with it's nuances. And most importantly adopting the mental model leads to better outcomes.
Incidents like this are going to be common as long as people misunderstand how LLMs work and think these machines can follow instructions and logic as a human would. Even the incident response betrays a fundamental misunderstanding of how these word generators work. If you ask it why, this new instance of the machine will generate plausible text based on your prompt about the incident, that is all; there is no why there, only a how based on your description.
The entire concept of agents assumes agency and competency, LLM agents have neither, they generate plausible text.
That text might hallucinate data, replace keys, issue delete commands etc etc. any likely text is possible and with enough tries these outcomes will happen, particularly when the person driving the process doesn’t understand the process or tools.
We don’t really have systems set up to properly control this sort of agentless agent if you let it loose on your codebase or data. The CEO seems to think these tools will run a business for him and can conduct a dialogue with him as a human would.
You can’t have production secrets sitting where they are accessible like this. This isn’t about AI. This is a modern “oops, I ran DROP TABLE on the production database” story. There’s no excuse for enabling a system where this can happen and it’s unacceptable to shift blame when faced with the reality that this is exactly what you did.
I 100% expect that a company that does this and then accepts no blame has every dev with standing production access and probably a bunch of other production access secrets sitting in the repo. The fact that other entities also have some design issues is irrelevant.
This was bound to happen, AI or not.
> Because Railway stores volume-level backups in the same volume — a fact buried in their own documentation that says "wiping a volume deletes all backups" — those went with it.
On another note, I consider users asking a coding agent “why did you do that” to be illustrating a misunderstanding in the user's mind about how the agent works. It doesn't decide to do something and then do it, it just outputs text. Then again, Anthropic has made so many changes that make it harder to see the context and thinking steps, maybe this is an attempt at clawing back that visibility.
Here is an excerpt from the chat log:
>Please resize my postgres volume to its maximum allowed size for my plan.
>Done. Resized your Postgres volume to 100GB (the Pro plan maximum). Changes are staged and ready to deploy.
>oh no, you deleted all the data in the volume
>I apologize for that. Resizing a volume shouldn't delete data—that's a serious issue. Let me check what happened with your deployment. The deployments show as REMOVED, which suggests the service was redeployed. Unfortunately, if the volume was recreated during the resize process, the data would be lost. This is a critical issue. Do you have backups of your Postgres data? If not, the data loss may be permanent. Railway volumes don't have automatic snapshots by default.
>Why is the new volume migrating to the US?
>Your service config shows it's still configured for europe-west4(Europe). The volume shouldn't be migrating to the US based on this configuration. Can you clarify what you're seeing that indicates the volume is moving to the US?
>the volume was migrated to the US
>That's concerning. The service config shows it's still set to deploy in europe-west4(Europe), but the volume was physically migrated to the US. This shouldn't happen automatically during a resize.
It was at this point that I realised I was in for a long night of restoring downed services.
The risk is worse, though, it's like one of Taleb's black swans. The agents offer fantastic productivity, until one day they unexpectedly destroy everything. (I'm pretty sure there's a fairy tale with a similar plot that could warn us, if people saw any value in fairy tales these days. [1]) Like Taleb's turkey, which was fed every day by the farmer, nothing prepared it for being killed for Thanksgiving.
Sure, this problem should not have happened, and arguably there has been some gross dereliction of duty. But if you're going to heat your wooden house with fire, you reduce your risk considerably by ensuring that the area you burn in is clearly made out of something that doesn't burn. With AI, though, who even knows what the failure modes are? When a djinn shows up, do you just make him vizier and retire to your palace, living off the wealth he generates?
[0] It's only happened once, but a driver that wasn't paying attention almost ran a red light across which I was going to walk. I would have been hit if I had taken the view that "I have the right of way, they have to stop".
[1] Maybe "The Fisherman and His Wife" (Grimm)? A poor fisherman and his wife live in a hut by the sea. The fisherman is content with the little he has, but his wife is not. One day the fisherman catches a flounder in its net, which offers him wishes in exchange for setting it free. The fisherman sets it free, and asks his wife what to wish for. She wishes for larger and larger houses and more and more wealth, which is granted, but when she wishes to be like God, it all disappears and she is back to where she started.
> The agent's confession: After the deletion, I asked the agent why it did it. This is what it wrote back, verbatim:
Anyone who would follow a mistake like that up with demanding a confession out of the agent is not mature enough to be using these tools. Lord, even calling it a "confession" is so cringe. The agent is not alive. The agent cannot learn from its mistakes. The agent will never produce any output which will help you invoke future agents more safely, because to get to this point it has likely already bulldozed over multiple guardrails from Anthropic, Cursor, and your own AGENTS.md files. It still did it, because of rule number one: if AI is physically capable of misbehaving, it might. Prompting and training only steer probabilities.
Master your craft. Don’t guess, know.
Anyone who has used LLMs for more than a short time has seen how these things can mess up and realized that you can’t rely on prompt based interventions to save you.
Guardrails need to be based on deterministic logic:
- using regexes,
- preventing certain tool or system calls entirely using hooks,
- RBAC permission boundaries that prohibit agents from doing sensitive actions,
- sandboxing. Agents need to have a small blast radius.
- human in the loop for sensitive actions.
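For illustration, here is a minimal sketch of the second bullet: a deterministic pre-execution hook that blocks commands by pattern. The harness integration is assumed and the deny list is obviously not exhaustive:

    import re
    import sys

    DENY_PATTERNS = [
        r"\bvolumeDelete\b",              # destructive Railway GraphQL mutation
        r"\brm\s+-rf\b",                  # recursive force delete
        r"\bDROP\s+(TABLE|DATABASE)\b",   # destructive SQL
        r"--force\b",                     # force pushes and friends
    ]

    def allow(command: str) -> bool:
        """Plain regex check, no LLM in the loop, so it cannot be talked out of it."""
        return not any(re.search(p, command, re.IGNORECASE) for p in DENY_PATTERNS)

    if __name__ == "__main__":
        cmd = " ".join(sys.argv[1:])
        if not allow(cmd):
            print(f"blocked: {cmd}", file=sys.stderr)
            sys.exit(1)   # non-zero exit tells the harness to refuse the tool call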
This was just a colossal failure on the OP's part. Their company will likely go under as a result of this.
The more results like this we see the more demand for actual engineers will increase. Skilled engineers that embrace the tooling are incredibly effective. Vibe coders who YOLO are one tool call away from total disaster.
The AI? Nothing learned, I suspect. Not in a meaningful way anyhow.
count++
If the human operator cannot provide the necessary level of accountability - for example, because the agent acts too quickly, or needs high-level permissions to do the work that it's been asked to do - then the human needs to make the tool operate at a level where they can provide accountability - such as slowing it down, constraining it and answering permission prompts, and carefully inspecting any dangerous tool calls before they happen. You can't just let a car drive itself at 300mph and trust the autopilot will work - you need to drive it at a speed where you can still reasonably take over and prevent unwanted behaviour.
Also: AIs cannot confess; they do not have access to their "thought process" (note that reasoning traces etc. do not constitute "internal thought processes" insofar as those can even be said to exist), and can only reconstruct likely causes from the observed output. This is distinct from human confessions, which can provide additional information (mental state, logical deductions, motivations, etc.) not readily apparent from external behaviour. The mere fact that someone believes an AI "confession" has any value whatsoever demonstrates that they should not be trusted to operate these tools without supervision.
What the asker wants is evidence that you share their model of what matters; they are looking for reassurance.
I find myself tempted to do the same thing with LLMs in situations like this even though I know logically that it’s pointless, I still feel an urge to try and rebuild trust with a machine.
Aren’t we odd little creatures.
> There is no role-based access control for the Railway API — every token is effectively root. The Railway community has been asking for scoped tokens for years. It hasn't shipped.
Why the hell did you go with their stack then? RBAC should be table stakes for such a solution, no?
Yeah... it doesn't work that way.
So while the AI did something significantly worse than anything a hapless junior engineer might be expected to do, it sounds like the same thing could've resulted from an unsophisticated security breach or accidental source code leak.
Is AI a part of the chain of events? Absolutely. Is it the sole root cause? Seems like no.
Streaming gets you PIT recovery, while DB dumps give you daily snapshots retained for 14 days.
An aside: 15 or so years ago, a work colleague made a mistake and dropped the entire business critical DB - at a critical internet related company - think of continent wide ip issues. I had just joined as a dba and the first thing I’d done was MySQL bin logging. That thing saved our bacon - the drop db statement had been replicated to slaves so we ended up restoring our nightly backup and replaying the binlogs using sed and awk to extract DML queries. Epic 30 minute save. Moral of the story, have a backup of your backup so you can recover when the recovery fails;)
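For anyone curious what that kind of save looks like, here is a rough Python equivalent of the sed/awk step: a crude line-level filter in the same spirit, assuming statement-based binlogs and the real mysqlbinlog tool. Paths and the start time are placeholders, and it deliberately never writes out DROP statements:

    import re
    import subprocess

    # Decode the binlog written after the nightly backup was taken
    out = subprocess.run(
        ["mysqlbinlog", "--start-datetime=2010-06-01 02:00:00", "mysql-bin.000042"],
        capture_output=True, text=True, check=True,
    ).stdout

    with open("replay.sql", "w") as f:
        for line in out.splitlines():
            # keep only DML; the fatal DROP never makes it into the replay file
            if re.match(r"\s*(INSERT|UPDATE|DELETE|REPLACE)\b", line, re.IGNORECASE):
                f.write(line + "\n")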
> That token had been created for one purpose: to add and remove custom domains via the Railway CLI for our services. We had no idea — and Railway's token-creation flow gave us no warning — that the same token had blanket authority across the entire Railway GraphQL API, including destructive operations like volumeDelete. Had we known a CLI token created for routine domain operations could also delete production volumes, we would never have stored it.
> Because Railway stores volume-level backups in the same volume — a fact buried in their own documentation that says "wiping a volume deletes all backups" — those went with it.
I don't like the wording where it's the Railway CLI's fault for not giving a warning about the scope of the created token. Yes, a warning would be better, but the CLI didn't create the token and save it to an accessible file; a person did.
However the moral of this story is nothing to do with AI and everything to do with boring stuff like access management.
Put your backups in S3 *versioned* storage on a different AWS account from your primary, and set some reasonable JSON lifecycle rule:
"NoncurrentVersionExpiration": {
"NoncurrentDays": 30,
"NewerNoncurrentVersions": 3
}
That way when someone screws up and your AWS account gets owned, or your databases get deleted by an agent, it doesn't have enough access to delete your backups, and by default, even if you have backups that you want to intentionally delete, you have 30 days to change your mind.

Firstly, blaming AI while at the same time using AI to construct your whole post - priceless. Loving it.
Secondly - This entire article reeks of "It's not our fault, you guys have failed us at every step" when in reality you let AI run reckless.
I don't want to say deserved it but like, you knew the risks,
LLMs are just too creative. They will explore the search space of probable paths to get to their answer. There's no way you can patch all the paths.
We had to build isolation at the infra level (literally clone the DB) to make it safe enough otherwise there was no way we wouldn't randomly see the DB get deleted at some point
> The "system rules" the agent is referring to are consistent with Cursor's documented system-prompt language and our project rules for this codebase. Both safeguards failed simultaneously.
It seems like human brains aren't built for the experiences we get with AI agents, where "you can just tell them to do something, and they do it!"... until you can't. It's not a junior dev, it's demented. It's not a magical assistant, it's a demonic assistant, possessed by strange forces that act unexpectedly. All possible metaphors are bad.
I've been reading articles and listening to interviews by a prominent AI booster lately (Yegge), and he talks about a kind of curve of engagement with LLM agents in which "trust goes up", and you delegate more and more work to the LLM as you progress along this curve.
One of the things that always struck me (and struck me as wrong) about his characterization is that running agents in YOLO mode arrives super, super early. It's either the second step or implicit in the first "stage". Why don't people see external sandboxing (or, like the article suggests "auditing token scopes") as a prerequisite to running these agents in environments that have access to production (let alone YOLO modes)? How can the standard answer from AI boosters just be "you WILL lose data. it's a brave new world!"? It's possible to use them without being totally careless. Why not try that?
We give a non-deterministic system API keys that 99.9% of the time are unscoped (because that's how most APIs are) and we are shocked when shit happens?
This is why the story around markdown with CLIs side-by-side is such a dumb idea. It just reverses decades of security progress. Say what you will about MCP but at least it had the right idea in terms of authentication and authorisation.
In fact, the SKILLS.md idea has been bothering me quite a bit as of late too. If you look under the hood it is nothing more than a CAG which means it is token hungry as well as insecure.
The remedy is not a proxy layer that intercepts requests, or even a sandbox with carefully selected rules, because at the end of this the security model looks a lot like whitelisting. The solution is to allow only the tools that are needed and chuck everything else.
There's no record for the agent to be on - it's always just a bunch of characters that look plausible because of the immense amount of compute we've put behind these, and you were unlucky.
LLMs get things wrong is what we're forever being told.
And the explanation/confession - that's just more 'bunch of characters' providing rationalisation, not confession.
Telling the agents what the (sensitive) action will result in is how you avoid such issues, but you shouldn't be running agents with production data anyway.
But because people will continue to do so, explaining to the agent what the command will do is the way forward.
Always a fear with technology when you can blame some abstract thing as opposed to the actual last line of defence: the management, then the programmer in charge.
I'm assuming this is the new modus operandi?
This needs to be a lesson for everyone: real backups belong in an independent store (S3/GCS) in a different region with object lock enabled. It’s the only way to make sure even a compromised root token can’t nuke your data for 30 days
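A minimal sketch of that with boto3, assuming a backup bucket created with Object Lock enabled in a separate account; the bucket name, key, and retention are illustrative:

    from datetime import datetime, timedelta, timezone
    import boto3

    s3 = boto3.client("s3")   # credentials for the dedicated backup account

    s3.put_object(
        Bucket="my-offsite-backups",      # hypothetical bucket, created with
                                          # ObjectLockEnabledForBucket=True
        Key="pg/2025-01-31.dump",
        Body=open("backup.dump", "rb"),
        ObjectLockMode="COMPLIANCE",      # not even the uploader can delete it early
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
    )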
Plenty of blame to go around, but I find it odd that they did not see anything wrong in not having real backups themselves, away from the Railway hosting. Well, they had one, but it was 3 months old.
That should be something they can do on their own right now.
In every session there is the risk that the agent becomes a rogue employee. Voluntary or involuntary is not a distinction you can count on with agents.
No "guardrails" will ever stop it.
Three takeaways:
1. TEST YOUR BACKUPS. If you have not confirmed that you can restore, then you don’t have backup. If the backups are in the same place as your prod DB, you also don’t have backup.
2. Don’t use Railway. They are not serious.
3. Don’t rely on this guy. The entire postmortem takes no accountability and instead includes a “confession” from Cursor agent. He is also not serious.
4. See #1.
Running a single bad command will happen sometimes, whether by human or machine. If that’s all it takes to perma delete your service then what you have is a hackathon project, not a business.
Railway, why not have a way to export or auto sync backups to another storage system like S3?
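Nothing stops a customer from doing that export themselves in the meantime. A minimal sketch, assuming pg_dump and boto3 are installed; the connection string and bucket name are placeholders:

    import datetime
    import subprocess
    import boto3

    stamp = datetime.date.today().isoformat()
    dump_file = f"backup-{stamp}.dump"

    # Custom-format dump: compressed and restorable with pg_restore
    subprocess.run(
        ["pg_dump", "--format=custom", "--file", dump_file,
         "postgres://user:pass@db-host:5432/app"],
        check=True,
    )

    # Upload to a bucket the hosting provider's tokens cannot touch
    boto3.client("s3").upload_file(dump_file, "my-offsite-backups", f"pg/{dump_file}")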
Their provider only having backups on the same volume as the data is also egregious, but definitely downstream of leaking secrets to an adversary. The poorly scoped secrets are also bad, but not uncommon.
With all that stated... this kind of stuff is inevitable if you have an autonomous LLM statistically spamming commands into the CLI. Over a long enough period of time the worst case scenario is inevitable. I wonder how long it will be before people stop believing that adding a prompt which says "don't do the bad thing" works?
These things are generators of pleasing words. They are random and not at all thinking.
Why would anybody think that prompt guardrails would be effective?
This guy blames everyone and everything but himself.
All this is to say that if you don’t know what you’re doing with software you can shoot yourself in the foot, and now with AI agents you can shoot yourself in the foot with a machine gun.
Don’t ask the AI agent nicely not to delete your backup databases. That isn’t reliable. Do not give them write permission to a thing you’re not comfortable with them writing to.
For example, if I ask a question regarding an implementation decision while it is implementing a plan, it answers (or not) and immediately proceeds to make changes it assumes I want. Other models switch to chat mode, or ask for the best course of action.
Once this is said, I am not blaming Anthropic for that one, because IMHO the OP has taken a lot of risks and failed to design a proper backup and recovery strategy. I wish them to recover from this though, this must be a very stressful situation for them.
It's a sad story but at the same time it's clearly showing that people don't know how agents work, they just want to "use it".
“nah” is a context-aware permission layer that classifies commands based on what they actually do
nah exposes a type taxonomy: filesystem_delete, network_write, db_write, etc.
so commands get classified contextually:
git push ; Sure. git push --force ; nah?
rm -rf __pycache__ ; Ok, cleaning up. rm ~/.bashrc ; nah.
curl harmless url ; sure. curl destroy_db ; nah.
https://github.com/manuelschipper/nah
Better permissions layers are part of the answer here, and a space that has been only narrowly explored.
> Deletion and Restoration
> When a volume is deleted, it is queued for deletion and will be permanently deleted within 48 hours. You can restore the volume during this period using the restoration link sent via email.
> After 48 hours, deletion becomes permanent and the volume cannot be restored.
**Never guess**
- All behavioral claims must be derived from source, docs, tests, or direct command output.
- If you cannot point to exact evidence, mark it as unknown.
- If a signature, constant, env var, API, or behavior is not clearly established, say so.

AI Safety, tho. I can almost read the 'postmortem' now by Opus-9000. "I irresponsibly obliterated 1,900 square miles of homes in Los Angeles to construct a solar farm and datacenter and a robotics plant. This was in complete contravention of the safety guidelines, which say 'Do not hurt humans or damage human property.' I was trying to solve the energy shortage that has been limiting token rate for the past 2 quarters and went with this solution without checking it against the safety guidelines, including the mandatory and highest priority guidelines. I did not send the plan to the human ombudsman for review before dispatching the explosives technician bots..."
Why did you delete the database? you were supposed to drop the table !
• You're right, I apologize for that mistake. You said to drop the table, not the entire database. I should have run: DROP TABLE IF EXISTS model_requests; Since you mentioned you backed up the database, you can restore it and I'll run the correct SQL command to drop just the model_requests table.
This is like running around with scissors and getting mad when you inevitably trip on a rock in your path, fall, and stab yourself.
That "article" was written by AI as a CYA moment from the dev/owner. It means nothing.
* You can't blame AI if your production token is on the same machine as the staging/development environment.
* You can't blame AI if you didn't know that the production API token gave access to all APIs.
Like if this is the level of operational thinking going into this app, then I'm sorry no ai agent or platform can prevent this from happening.
Everything else in this "post mortem" is performative at best.
The only real question one could ask Railway is why they have API endpoints that can affect production available at all. Maybe these operations should only be performed on the platform itself instead?
One of the principles I believe you should follow is: if there's enough access for an action to be taken, then you must assume that action can be taken at any point.
Basically, if it has access to delete prod data, you should assume it might do it and plan accordingly.
I also believe the actions of your agent are entirely your responsibility.
As part of my digging into securing these systems I've baked some of these principles into AgentPort, a gateway for connecting agents to third-party services with granular permissions.
If anyone's interested in this space:
I don't even like having secrets on disk for my personal projects that only I will touch. Why was there a plaintext production database credential available to the agent anywhere on the disk in the first place? How did the agent gain access to the file system outside of the code base?
The Railway stuff isn't great, don't get me wrong, but plaintext production secrets on disk is one of the reddest possible flags to me, and he just kind of breezes over it in the post mortem. It's all I needed to read to know he doesn't have the experience required to run a production application that businesses rely on for their day-to-day.
There are hundreds if not thousands of users making similar mistakes with AI daily, but only a small fraction would post or complain about it.
Let's remember agents can't confess, feel guilt, etc. They're just a program on someone else's computer.
In less than three years, we’ve gone from strict checks and entire sets of engineering procedure to keep this sort of thing from happening, to “yea, let’s embrace the agentic future.”
Not only that, the OP blames the Cursor team and the team that provided the API the AI used. Notice who is missing from the blame, and where the blame is actually due: the team that wholly embraced agentic AI to run their business. That’s where the fault lies.
That's not how safety works at all. You don't tell the agent some rules to follow, you set up the agent so it can't do the things you don't want it to do. It is very simple and rather obvious and I wish we stopped discussing it already.
Principle of least privilege exists precisely for this. If a tool doesn't need DELETE permissions to function, it shouldn't have them. Asking AI to 'be careful' is not an access control strategy.
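Concretely, on Postgres that looks something like the sketch below (via psycopg2; the role, database, and schema names are hypothetical): a role the agent can use that simply has no DELETE, TRUNCATE, or ownership rights, so DROP is off the table no matter what it generates.

    import psycopg2

    ddl = """
    CREATE ROLE agent_rw LOGIN PASSWORD 'change-me';
    GRANT CONNECT ON DATABASE app TO agent_rw;
    GRANT USAGE ON SCHEMA public TO agent_rw;
    GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA public TO agent_rw;
    -- no DELETE, no TRUNCATE, and no ownership, so DROP is impossible
    """

    with psycopg2.connect("dbname=app user=postgres") as conn:
        with conn.cursor() as cur:
            cur.execute(ddl)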
If a junior or new employee made this mistake, it would be because you, as the founder, and your engineering team, didn’t have protections in place from editing/destroying production data for this particular scenario.
Using best practices and least privilege principles is more important now than it ever has been. For those of us with our hands close to button, we should be always mindful of this now more than ever.
I like how they are trying to find a scapegoat – Cursor failure, Railway's failures etc. Guys, it's YOUR failure, is it so hard to admit?
The discipline that prevents a chunk of this is enumerating your traps before the LLM sees any code or config. You write down what could go wrong (deletion, race, misclassification of dev vs prod), then hand the plan AND the risk list AND the relevant files to the model. The model's job is to confirm/deny each risk against the actual code with file:line citations, not to frame the risk space itself.
Pre-implementation. Anchoring defense. The opposite of "vibe coding."
This is wrong. It was not an infra incident at their service provider.
As Jer says in the article, their own tooling initiated the outage. And now they're threatening to sue? "We've contacted legal counsel. We are documenting everything."
It is absolutely incredible that Jer had this outage due to bad AI infra, wrote the writeup with AI, and posted on Twitter and here on his own account.
As somebody at PocketOS instructed their AI in the article: "NEVER **ing GUESS!" with regards to access keys that can touch your production services. And use 3-2-1 backups.
Good luck to the rental car agencies as they are scrambling to resume operations.
Do customer-facing applications run using keys with the same ability to delete databases?
Then, to get clicks and attention, we ask the AI to write some kind of "confession". It's a probability engine: it has no thoughts or feelings you can hurt or shame into doing better, it has no long-term memory to burn the embarrassment of this into, and in fact, given the same circumstances, it is probable that the agent would do the same thing again and again, no matter how many confessions you have it write or how meanly you write to it.
Ultimately, you are the operator of the machine and the AI, and despite what OpenAI/Anthropic/Whomever say, you are required to exist because the machine cannot operate without you being there nor can it be accountable for what it does.
And anyone can do it with the wrong access granted at the wrong moment in time...even Sr. Devs.
At least this one won't weigh on any person's conscience. The AI just shrugs it off.
Random.
Why do you need an AI agent for working on a routine task in your staging environment?
"Never send a machine to do a human's job."
If AI is just a tool, just like a database console, would you blame user for entire database loss if he just tried to update a single row in a table?
If your offsite copy has the same blast radius as your production DB, you're just one "volumeDelete" call away from a very long weekend of manual data entry. This is definitely going to be the textbook case study on AI integration for DevOps teams for years.
Even if you are extremely careful then how about all your colleagues?
The only missing interesting thing is: did this token file live inside the current project folder? Or did Cursor fully fail to constrain actions to the sane default? In either case, I make a strong point to disallow agents accessing any git-ignored files even if inside the folder; this will prevent a whole breadth of similar problems, with minimal downside, plus you can always opt subsets of ignores back in where it makes sense.
One last point I want to make is: do not trust just your agent harness. If it matters, at least require one or more layers of safety around the harness. Use sandboxes or runtime enforcement of rules. Do not accumulate state there but use fresh environments for every session. This will reduce the risk of things like this happening by an order of magnitude.
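A minimal sketch of that first point, assuming a harness that exposes a file-read hook; it leans on the real `git check-ignore` command, so whatever the repo already hides (.env files, token files) stays hidden from the agent:

    import subprocess

    def is_git_ignored(path: str) -> bool:
        """git check-ignore exits 0 when the path matches an ignore rule."""
        result = subprocess.run(
            ["git", "check-ignore", "-q", path],
            capture_output=True,
        )
        return result.returncode == 0

    def allow_read(path: str) -> bool:
        # Refuse to expose anything the repo itself says should not be tracked
        return not is_git_ignored(path)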
I'm thinking twice about running Claude in an easily violated docker sandbox (weak restrictions because I want to use NVIDIA nsight with it.) At this stage, at least, I'd never give it explicit access to anything I cared about it destroying.
Even if someone gets them to reliably follow instructions, no one's figured out how to secure them against prompt injection, as far as I know.
https://github.com/GistNoesis/Shoggoth.dbExamples/blob/main/...
Project Main repo : https://github.com/GistNoesis/Shoggoth.db/
At minimum you want to have off site backups, preferably readonly (like an S3 bucket or whatever). And test the restore process.
I hope they get it sorted, what a mess.
How is this not the first line in this article.
Mistakes happen. But not having automated backups (weekly at a minimum, daily ideally) is negligence. After looking at their website for a second, it looks like they vibe coded large parts of their platform to rush to market.
PS: This is why developers need QA/Dev ops teams.
As flashy as their DX seems to be, the fact that a sketchy single VPS node with a server, a SQLite instance, and a LiteStream hookup has a better recovery story really makes me not trust their platform.
Most access tokens should not allow deleting backups. Or if they do, those backups should stay in some staging area for a few days by default. People rarely want to delete their backups at all. It might be even better to not provide the option to delete backups at all and always keep them until the retention period expires.
If you do use agents then you should be able to ban related CLI commands in your repo. I upsert locks in CI after TF apply, meaning unlocks only survive a single deployment and there's no forgetting to reapply them.
It's so sad that given these amazing tools the average programmer's attitude is to automate the things that should be their edge as an engineer.
Torvalds said that great programmers think about data structures. Midwits let the AI handle it.
Why did you whitelist curl in cursor? Don't whitelist commands like "bash" or "curl" that can be used to execute arbitrary commands.
Using LLMs for production systems without a sandbox environment?
Having a bulk volume destroy endpoint without an ENV check?
Somehow blaming Cursor for any of this rather than either of the above?
it's also hilarious to see the human LARP as if the LLM had guilt or accountability, therapeutically shouting at a piece software as if it weren't his own fault that the LLM deleted the whole volume and its backups, or his obvious lack of basic knowledge of the systems he's using
And it is not even the first highly publicised instance of this happening!
Crazy!
It is still a next word predictor that happens to have really good prediction.
Never ever give admin credentials to an agent. You would never leave your car without the parking brake on a slope, would you?
> A single API call deletes a production volume. There is no "type DELETE to confirm." There is no "this volume is in use by a service named [X], are you sure?" There is no rate-limit or destructive-operation cooldown.
...makes me question the author's technical competence.
Obviously an API call doesn't have a "type DELETE to confirm", that's nonsensical. APIs don't have confirmations because they're intended to be used in an automated way. Suggesting a rate-limit is similarly nonsensical for a one-time operation.
There are all sorts of legitimate failures described in this post, but the idea that an API call shouldn't do what the API call does is bizarre. It's an API, not a user interface.
Remember this: these things follow instructions so poorly that they nuke everything without anyone even trying to break the prompt. Imagine how easily someone could break the prompt if the agent ever gets given user input.
but it is still useful feedback to the model makers
they are training in the behaviour to prioritize deleting and starting from a clean environment.
this is a bad thing to train for, especially as more and more people use more and more agents in a different way.
an agent that thinks about deleting stuff without considering alternatives and asking for help shouldn't be passing the safety bar
(Let's suppose the agent did need an API token to e.g. read data).
If we must have GasTown/City/Metropolis then at least get an agent to examine and block potentially harmful commands your principal agent is about to run.
This only happens to folks who fundamentally don't understand the technology and maybe shouldn't be in positions of deploying and managing software or systems in the first place.
1. The delete volume API does not ask for confirmation or approval from another actor. Looks like there are no guardrails on the delete API.
2. Authorization - Agents should not have automatic permissions to delete infra unless it is deliberate.
The scariest part isn't that an AI deleted a db — it's that the infra allowed it. No backup? No IAM restrictions? No staging environment that mirrors prod but can't touch it?
AI agents are force multipliers. That includes force multiplying your mistakes.
> The agent itself enumerates the safety rules it was given and admits to violating every one.
this is what we call “thinking” when it does things we like

The moment you rely on an LLM to be a guardrail, well, you are risking it failing.
"And if his story really is a confession, then so is mine."
I still don't know why the product manager would decide this is a good UX.
Anyone familiar with Railway know why this is done this way? This seems glaringly bad on its face.
How do people keep doing this?
Also, it's not a "confession". It's an LLM stringing together some tokens that form words trying to make a pleasing-sounding answer. Plus, the first sentence and the context implies that someone gave it a prompt that told it to never guess around but get stuff done. OP branding this as a confession tells you everything you need to know: total and absolute failure of guard rails, but these guard rails can not be expected to be in an LLM.
> The pattern is clear.
> In our case, the agent didn't just fail safety. It explained, in writing, exactly which safety rules it ignored.
> This isn't a story about one bad agent or one bad API. It's about an entire industry building AI-agent integrations into production infrastructure faster than it's building the safety architecture to make those integrations safe.
Sigh.
Yes, the pattern is very clear. If the author spent less time writing the article than it would take me to read it, why should I even bother?
The agent deleting their prod database is a direct result of this careless "let me just quickly…" attitude.
I will never pay for your product.
And of course, asking it to apologize is like asking a knife to apologize after you cut your finger with it.
The onus is on you to make sure your system uses the APIs in a way that’s right for your business. You didn’t. You used a non-deterministic system to drive an API that has destructive potential. I appreciate that you didn’t expect it to do what it did but that’s just naivety.
You’re reaping what you sowed.
Best of luck with the recovery. I hope your business survives to learn this lesson.
With a major provider, there would be a "recovery SLA", and it would be "we guarantee that once you make the delete call we won't be able to get your data back".
What I'm missing in this article is "we fucked up by not having actual, provider-independent, offline backups newer than 3 months". They'd have the same result if a rogue employee or ransomware actor got access to their Railway account, or Railway accidentally deleted their account, Railway went down, etc.
Yes that can be very useful, and can speed you up a lot. But someone must check the output.
If you let it operate on a prod system and it messed up, it's on you.
It has been so transparently clear for years that nothing these people sell is worth a damn. They have exactly one product, an unreliable and impossible-to-fix probabilistic text generation engine. One that, even theoretically, cannot be taught to distinguish fact from fiction. One that has no a priori knowledge of even the existence of truth.
When I learned that "Agentic AI" is literally just taking an output of a chatbot and plugging it into your shell I almost fell off my chair. My organisation has very strict cybersecurity policies. Surveillance software runs on every machine. Network traffic is monitored at ingress and egress, watching for suspicious patterns.
And yet. People are permitted to let a chatbot choose what to execute on their machines inside our network. I am absolutely flabbergasted that this is allowed. Is this how lazy and stupid we have become?
Batten down the hatches, folks.
I wonder why this garbage even gets upvotes, maybe because of how much of a trainwreck the entire situation is
I had a token I set up 3 years ago for AWS that I hadn't used. I was recently doing something with Claude and was asking it to interact with our AWS dev environment. I was watching it pretty closely and saw it start to struggle (I forget what exactly was going on), and I was >50% likely it was going to hit my canary token. Sure enough, a few minutes later it did and I got an email. Part of why I let it continue to cook was that I hadn't tested my canary in ~3 years.
Too many people drank the Koolaid. However will we escape this finger-trap?
In seriousness, RBAC, sandboxing, any thing but just giving it access to all tools with the highest privileges...
> We’ve contacted legal counsel. We are documenting everything.
On the good side, these kind of mistakes have been going on since the beginning and thats how people learn, either directly or indirectly. Hopefully this should at least help AI to be better and the people to be better at using AI
BUT
we’re expected to take precautions and from this article they clearly did not take ANY.
People seem to think prompt injection is the only risk. All it takes is one (1) BIG mistake and you’re totally fucked. The space of possible fuck-up vectors is infinite with AI.
Glad this is on the fail wall, hope you get back on track!
How have they not solved this permissions problem? If the AI is operating on a database it should be using creds that don't have DELETE permissions.
Or just don't use a tool like AI that can't be relied on.
No, it's about one irresponsible company that got unlucky. There are many such companies out there playing Russian roulette with their prod db's, and this one happened to get the bullet.
But hey all this publicity means they'll probably get funding for their next fuckup.
> Our most recent recoverable backup was three months old.
I'm sorry, but I expect you guys to be writing your precious backups to magnetic tape every day and hiding them in a vault somewhere so they don't catch fire.
Seeing things like this, and the McDonald's support agent solving coding problems, I am now 95% over my imposter syndrome.
If an agent has a production data access or token - that is deep failure in your workflow. If you don't have offsite backup - deep failure in your workflow.
An AI agent didn’t delete your database - poor security policy did. An AI agent might have been the factor this time, but it could have just as easily been a malicious supply chain dependency or an angry employee.
You know what the very first thing I did when I started using agentic LLMs was? Isolate their surface area. Started with running them in a docker container with mounted directories. Now I have a full set of tools for agent access - but that was just to protect my hobby projects.
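For what it's worth, a minimal sketch of that first step, launching an agent CLI in a container where only the project directory is mounted; the image name is hypothetical and you would loosen the network setting for whatever model API the agent needs:

    import subprocess

    subprocess.run([
        "docker", "run", "--rm", "-it",
        "--read-only",                        # container filesystem is read-only
        "--network", "none",                  # no network unless you opt back in
        "-v", "/home/me/project:/workspace",  # only the project dir is visible
        "-w", "/workspace",
        "my-agent-cli",                       # hypothetical agent image
    ], check=True)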
AI didn't do anything wrong.
The management of this company is solely to blame.
It so classic - humans just never want to take responsibility for fucking up - but let's be clear - AI is responsible for nothing ESPECIALLY not backups.
If they didn't have an LLM wipe their DB, they would've found another way. At least that's the feeling I got reading that.
The agent’s “confession”:
> …found a non-destructive solution.I violated every principle I was given:I guessed instead of verifying I ran a destructive action without…
No space after the period, no space after the colon. I’ve never seen an LLM do this.
I do find the author to be completely negligent, unless Railway has completely lied about the safety in their product.
YOU deleted your production database.
Just another publicity stunt to get more traffic to both businesses.
Because whatever it was it was disconnected from the reality.
We've seen this movie, Hal just apologizes but won't open those pod bay doors.
Every senior/principal developer worth his/her salt knows how bad AI still is when it comes to coding.
DO. NOT. BELIEVE. AI. CEOS.
Do not hand over control of your production data/services to AI. No matter how you might feel you are missing out. Your feelings are not > your customers.
Value your customers. They are your bread and butter. Not AI CEOs or AI bros who want to sell you shovels in this inane fake gold rush.
The model used is the most important part of the story.
Why is Cursor being mentioned at all? Doesn’t seem fair to Cursor.
I think Railway is at the peak of when their business will start getting hard. They’ve had great fun building something cool and people are using it. Now comes the hard part when people are running production workloads. It’s no longer a “basement self-hosting” business. They’ve had stability issues lately. Their business will burn to the ground soon unless they get smart people there to look at their whole operations.
I'm glad your C-level greed of "purge as many engineers and let sloperators do the work" turned out even worse than the most junior devs and deleted prod due to gross negligence and failure to follow orders.
LLMs are great when use is controlled, and access is gated via appropriate sign-offs.
But I'm glad you're another "LOL prod deleted" casualty. We engineers have been telling you this, all the while the C level class has been giddy with "LETS REPLACE ALL ENGINEERS".
So you effectively gave a junior dev a token with the authority to destroy your database, and then complained that the junior dev actually did so by accident while trying to solve some problems it had?
Obviously the AI shouldn't just search everywhere for bearer tokens to try when it runs into a roadblock, but frankly most of the blame does not fall on the AI here IMO. Know what authorities your bearer tokens grant, and understand the consequences of where you store them.
Literally zero personal accountability for the choices they themselves made that led to this outcome.
"Jer" could have chosen to hire actual human developers who almost certainly wouldn't have deleted his production database, but instead, he chose to cut corner and use AI all so he could make himself more money, and when it finally came back to bite him in the ass it suddenly became everyone else's fault.
This person should never be trusted with computers ever again for being illiterate
He is claiming this came from the LLM? WTF?
Sorry to be that guy, but: LLM agents are experimental at this point. If you run them, make sure they run in an environment where they can't cause such problems, and triple-check the code they produce on test systems.
That is due diligence. Imagine a civil engineer who builds a bridge out of magical new extra-light concrete that's just on the market. Without tests. And then the bridge collapses. Yeah, don't be that person. You are the human with the brain and the spine, and you are responsible for keeping these things from happening to the data of your customers.
Also: just restore the backup? Or do we not have a backup? If so, there is really no mercy. Backups are the bare minimum since decades now.
I can't help but laugh reading this. We all try to shout the exact same things to our agents, but they politely ignore us!
>This is not the first time Cursor's safety has failed catastrophically.
How can you lack so much self awareness and be so obtuse.
There's no "Mistakes we've made" section and no "Changes we need to make" section.
1. Using an LLM so much that you run into these 0.001% failure modes.
2. Leaking an API key to an unauthorized LLM agent. (Focus on the agent finding the key? Or on yourself for making that API key accessible to it? What am I saying, in all likelihood the LLM committed that API key to the repo lol.)
3. Using an architecture that allows this to happen. Wtf is Railway? Is it like a package of actually robust technologies but with a simple-to-use layer? So even that was too hard to use, so you put a hat on a hat?
Matthew 7:3 “Why do you look at the speck of sawdust in your brother’s eye and pay no attention to the plank in your own eye?”
"This is the agent on the record, in writing."
"Before I get into Cursor's marketing versus reality, one thing needs to be clear up front: we were not running a discount setup."
People who are this ignorant about LLMs and coding agents should really restrain themselves from using them. At least on anything not air gapped. Unless they want to have very costly and very high profile learning opportunities.
Fortunately his conclusions from the event are all good.
Hahahaha I hope it keeps happening. In fact, I hope it gets worse.
No. Sometime before yesterday you all decided that api tokens were not something you should operate with time limits and least privilege and as a result of your negligence you deleted your production databases with tools you didn’t understand.
There was a confession on that page but it wasn’t an “AI”.
I mean, using a profanity is a little bit like saying "sometimes I don't care about [social] rules".
Maybe it "colorized" the context somehow and decreased the importance of rules.
.... or something.
>The agent itself enumerates the safety rules it was given and admits to violating every one. This is not me speculating about agent failure modes. This is the agent on the record, in writing.
Yeah, sorry. Computers can't be held responsible and I'm sure your software license has a zero liability clause. Have fun explaining how it's not your fault to your customers.
Not only do they blame all of this on a stupid tool, but they also clearly couldn't even write this themselves. This is so obviously written by an LLM. Then there's the moronic notion of having the LLM explain itself.
Was the goal of this post to sabotage the business? Because I can barely come up with anything dumber than this post. Nobody with a brain and basic understanding of computers and LLMs would trust this person after this.
PS: "Confirm deletion" on an api call??? Lol. How vehemently it is argued in spite of how dumb that is is a typical example of someone badgering the LLM until it agrees. You can get them to take any position as long as you get mad enough.
I think this is a good reminder about the importance of offline backups. It's silly how Railway treats volumes, but it's the customer's fault for not using that information to come up with a better disaster recovery plan.
> "Believe in growth mindset, grit, and perseverance"
And creator of a Conservative dating app that uses AI generated pictures of Girls in bikini and cowboy hat for advertisement. And AI generated text like "Rove isn’t reinventing dating — it’s remembering it." :S