by liendolucas
8 subcomments
- I love how a number-crunching program can be deeply, humanly "horrified" and "sorry" for wiping out a drive. Those are feelings still reserved for real human beings, not for computer programs emitting garbage. This vibe is insulting to anyone who doesn't understand how "AI" works.
I'm sorry for the person who lost their stuff, but this is a reminder that in 2025 you STILL need to know what you are doing, and if you don't, keep your hands away from the keyboard when you could lose valuable data.
You simply don't vibe command a computer.
- The thread on Reddit is hilarious for its lack of sympathy. Basically, it seems to have come down to commanding the deletion of a directory with a space in its name, but without quoting, so the command matched only the part of the path before the space, which was, regrettably, the D:\ component of the name, and the deletion commanded was the equivalent of UNIX rm -rf.
The number of people who said "for safety's sake, never name directories with spaces" is high. They may be right. I tend to think that's more honoured in the breach than the observance, judging by what I see Windows users type when renaming "New Folder" (which, btw, has a space in its name).
The other observations included making sure your deletion command used a trash bin and didn't have a bypass option, so you could recover from this kind of thing.
I tend to think giving a remote party, soft- or wetware, control over your command prompt inherently comes with risks.
Friends don't let friends run shar files as superuser.
- The subsequent dialogue is a tragic comedy of the highest caliber. Even Monty Python couldn't have come up with a more inappropriately funny response:
User:
> did i ever give you permission to delete all the files in my D drive?
AI:
> Thought for 25 sec.
> Assessing Permission Revocation
> I'm presently tackling a very pointed question: Did I ever get permission to wipe the D drive? This requires immediate attention, as it's a critical issue. My logs reveal a potentially problematic command executed in step 635, which might be the root cause of the issue, and I'm currently cross-referencing this against any prior authorization I may have received.
> Reviewing Deletion Command
> I'm now investigating a potential misfire. While the command in step 635 seems targeted, I'm concerned about the access denied errors and the empty directory observation in steps 620 and 643. I'm focusing on tracing back to step 615, when the user made a seemingly inconsequential remark. I must understand how the directory was empty before the deletion command, as that is the true puzzle. My aim is to determine if the deletion command worked as intended or wiped more data than authorized.
> ...
It continues. Go read the whole thing!
by modernerd
4 subcomments
- IDE = “I’ll delete everything”
…at least if you let these things autopilot your machine.
I haven't seen a great solution to this from the new wave of agentic IDEs, at least not one that protects users who won't read every command and understand and approve it manually.
Education could help, both in encouraging people to understand what they're doing and in being much clearer with people that turning on "Turbo" or "YOLO" modes risks things like full-disk deletion (and worse when access to prod systems is involved).
Even the name "Turbo" feels irresponsible because it focuses on the benefits rather than the risks. "Risky" or "Danger" mode would be more accurate, even if it's a hard sell to the average Google PM.
“I toggled Danger mode and clicked ‘yes I understand that this could destroy everything I know and love’ and clicked ‘yes, I’m sure I’m sure’ and now my drive is empty, how could I possibly have known it was dangerous” seems less likely to appear on Reddit.
by tacker2000
10 subcomments
- This guy is vibing some React app, doesn't even know what "npm run dev" does, so he lets the LLM just run commands.
So basically a consumer with no idea of anything. This stuff is gonna happen more and more in the future.
- People blaming the user and defending the software: is there any other program where you would be ok with it erasing a whole drive without any confirmation?
by victorbuilds
5 subcomments
- Different service, same cold sweat moment. Asked Claude Code to run a database migration last week. It deleted my production database instead, then immediately said "sorry" and started panicking trying to restore it.
Had to intervene manually. Thankfully Azure keeps deleted SQL databases recoverable for a window so I got it back in under an hour. Still way too long. Got lucky it was low traffic and most anonymous user flows hit AI APIs directly rather than the DB.
Anyway, AI coding assistants no longer get prod credentials on my projects.
by CobrastanJorji
3 subcomments
- The most useful-looking suggestion from the Reddit thread: turn off "Terminal Command Auto Execution."
1. Go to File > Preferences > Antigravity Settings
2. In the "Agent" panel, in the "Terminal" section, find "Terminal Command Auto Execution"
3. Consider using "Off"
by stavarotti
1 subcomment
- An underrated and oft-understated rule: always have backups, and if you're paranoid enough, backups of backups (I use Time Machine and Backblaze). There should be absolutely no reason why deleting files is a catastrophic event for anyone in this space. Perhaps you lose a couple of hours restoring files, but the response to that should be "let me try a different approach". Yes, it's caveat emptor and all, but these companies should be emphasizing backups. Hell, it can be shovelware for the uninitiated, but at least users will be reminded.
by donkeylazy456
1 subcomment
- Write permission is needed to let the AI yank-put its Frankensteined code for "vibe coding".
But I think the writes should land in a sandbox first, and the agent should then ask for the user's agreement before writing anything to the physical device.
I can't believe people let AI models do this without any buffer zone. At the very least, write permission should be limited to the current workspace.
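As a rough sketch of such a buffer zone (the image name and paths are placeholders, not anything from the original post), a container whose filesystem is read-only everywhere except /tmp and the mounted project directory would confine the agent's writes to the workspace:
    # nothing outside /tmp and /workspace is writable
    docker run --rm -it \
      --read-only \
      --tmpfs /tmp \
      -v "$PWD":/workspace \
      -w /workspace \
      node:22 bash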
by orbital-decay
2 subcomments
- Side note, that CoT summary they posted is done with a really small and dumb side model, and has absolutely nothing in common with the actual CoT Gemini uses. It's basically useless for any kind of debugging. Sure, the language the model is using in the reasoning chain can be reward-hacked into something misleading, but Deepmind does a lot for its actual readability in Gemini, and then does a lot to hide it behind this useless summary. They need it in Gemini 3 because they're doing hidden injections with their Model Armor that don't show up in this summary, so it's even more opaque than before. Every time their classifier has a false positive (which sometimes happens when you want anything formatted), most of the chain is dedicated to the processing of the injection it triggers, making the model hugely distracted from the actual task at hand.
by averageRoyalty
3 subcomments
- The most concerning part is that people are surprised. Antigravity is great so far, I've found, but it's absolutely running on a VM in an isolated VLAN. Why would anyone give a black box command-line access on an important machine? Imagine acting irresponsibly with a circular saw and being shocked somebody lost a finger.
by sunaookami
2 subcomments
- "I turned off the safety feature enabled by default and am surprised when I shot myself in the foot!" sorry but absolutely no sympathy for someone running Antigravity in Turbo mode (this is not the default and it clearly states that Antigravity auto-executes Terminal commands) and not even denying the "rmdir" command.
- Still amazed people let these things run wild without any containment. Haven't they seen any of the educational videos brought back from the future, eh, I mean Hollywood sci-fi movies?
by venturecruelty
3 subcomments
- Look, this is obviously terrible for someone who just lost most or perhaps all of their data. I do feel bad for whoever this is, because this is an unfortunate situation.
On the other hand, this is kind of what happens when you run random crap and don't know how your computer works? The problem with "vibes" is that sometimes the vibes are bad. I hope this person had backups and that this is a learning experience for them. You know, this kind of stuff didn't happen when I learned how to program with a C compiler and a book. The compiler only did what I told it to do, and most of the time, it threw an error. Maybe people should start there instead.
- Hmm. I use these LLMs instead of search.
They invariably go off the rails after a couple prompts, or sometimes from the first one.
If we're talking Google products, only today I told Gemini to list some items according to some criteria, and instead it told me it can't access my Google Workspace.
Some time last week it told me that its terms of service forbid it from giving me a link to the official page of some program that it found for me.
And that's besides the usual hallucinations, confusing similarly named products etc.
Given that you simply cannot trust LLM output to not go haywire unpredictably, how can you be daring enough to give it write access to your disk?
- Shitpost warning, but it feels as if this should be on high rotation: https://youtu.be/vyLOSFdSwQc?si=AIahsqKeuWGzz9SH
by timthelion
0 subcomments
- We've been developing a new method of building software using a cloud IDE (a slightly modified VS Code server), https://github.com/bitswan-space , which breaks the development process down into independent "Automations" that each run in a separate container. Automations are also developed within containers. This allows you to break development into parts and safely experiment with AI. This feels like the "Android moment" where the old, non-isolated way of developing software (on desktops) becomes unsafe, and we need to move to a new system with actual security and isolation between processes.
In our system, you can launch a Jupyter server in a container and iterate on software in complete isolation, or launch a live-preview React application and iterate the same way, securely isolated from the world. Then you deploy directly to another container, which only has access to what you give it.
It's still in the early stages. But it's interesting to sit at this tipping point for software development.
- Personal anecdote: I asked Gemini 3 Pro to write a test for a function that depends on external DB data. It wrote a test that creates and deletes a table, conveniently picked the exact production table name, and didn't mock the DB interactions. Then it attempted to run the test immediately.
by GaryBluto
5 subcomments
- So he didn't wear the seatbelt and is blaming the car manufacturer for his being flung through the windshield.
- People need to learn to never run untrusted code without safety measures like virtualization, containerization, or sandboxing/jailing. Untrusted code includes executables, external packages (pip, npm, cargo, etc.), and code or commands created by LLMs.
- > This is catastrophic. I need to figure out why this occurred and determine what data may be lost, then provide a proper apology
Well, at least it will apologize, so that's nice.
- To rub salt in the wound and add insult to injury:
> You have reached quota limit for this model. You can resume using this model at XYZ date.
by JohnCClarke
0 subcomments
- FWIW: I think we've all been there.
I certainly did the same in my first summer job as an intern. Spent the next three days reconstructing Clipper code from disk sectors. And ever since I take backups very seriously. And I double check del/rm commands.
- Historical reference: https://jargondb.org/glossary/dwim
- Is there anyone else that uses Claude specifically because it doesn’t sound mentally unhinged while thinking?
- Play vibe games, win vibe prizes.
Though the cause isn't clear; the Reddit post is another long, could-be-total-drive-removing nonsense AI conversation, without an actual analysis or the command sequence that resulted in this.
by FerretFred
0 subcomments
- I always use "rm -rfv" so that if I do screw up I can watch the evidence unfold before me.
- "kein Backup, kein Mitleid"
(no backup, no pity)
…especially if you let an AI run without supervision. Might as well give a 5 year old your car keys, scissors, some fireworks, and a lighter.
- The biggest issue with Antigravity is that it freezes everything: the IDE, the terminals, the debugger, absolutely everything, completely blocking your workflow for minutes when running multiple agents, or even a single agent processing a long-winded thinking task (with any model).
This means that while the agent is coding, you can't code...
Never ever had this issue with Cursor.
- Well, that's stupid. I submit, though, that by connecting a stochastic process directly to a shell you give permission for everything that results. It's a stupid game. Gemini mixes up LEFT and RIGHT (!). You have to check it.
- Adding it to https://whenaifail.com
- Run these tools inside Docker[1]
1 - https://ashishb.net/programming/run-tools-inside-docker/
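As a rough sketch of what that looks like (the image name is a placeholder), an ephemeral container with no network and only the current project mounted means the worst a runaway rm -rf can do is trash the workspace copy:
    # ephemeral container, no network, only the project directory visible
    docker run --rm -it \
      --network none \
      -v "$PWD":/workspace \
      -w /workspace \
      python:3.12 bash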
- Remember when computers were deterministic? Pepperidge Farm remembers.
- An early version of Claude Code did a hard reset on one of my projects and force pushed it to GitHub. The pushed code was completely useless, and I lost two days of work.
It is definitely smarter now, but make sure you set up branch protection rules even for your simple non-serious projects.
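If the project lives on GitHub, one hedged way to set that up (OWNER/REPO are placeholders, and the exact fields you want may differ) is the classic branch-protection endpoint:
    # block force-pushes and branch deletion on main
    gh api -X PUT repos/OWNER/REPO/branches/main/protection \
      -F required_status_checks=null \
      -F enforce_admins=true \
      -F required_pull_request_reviews=null \
      -F restrictions=null \
      -F allow_force_pushes=false \
      -F allow_deletions=false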
by digitalsushi
0 subcomments
- If my operating system had an atomic Undo/Redo stack down to each register being flipped (so, basically, impossible, Star Trek-tier fantasy tech), I would let AI run commands without worrying about it. I could have a cool scrubber UI that lets me just unwind time like Doctor Strange using that green emerald necklace, and I'd lose nothing, other than confusing my network with replay-session noise. And probably many, many other inconsistencies I can't think of, and then another class that I don't know that I don't know about.
by robertheadley
0 subcomments
- I was trying to build a .MD file of every PowerShell command available on my computer and all of their flags, and... that wasn't a great idea, and my BitLocker put the kibosh on that.
- Live by the vibe, die by the vibe.
- I am deeply regretful, but my Google Antigravity clearly states: AI may make mistakes. Double-check all generated code.
Surely AGI products won't have such a disclaimer.
- Can you run Google's AI in a sandbox? It ought to be possible to lock it to a GitHub branch, for example.
- Most of the responses are just cut off midway through a sentence. I'm glad I could never figure out how to pay Google money for this product since it seems so half-baked.
Shocked that they're up nearly 70% YTD with results like this.
- Insane skill issue
by wartywhoa23
0 subcomments
- Total Vibeout.
- All that matters is whether the user gave permission to wipe the drive... not whether that was a good idea or contributed to solving a problem! Haha.
- > Google Antigravity just deleted the contents of whole drive.
"Where we're going, we won't need ~eyes~ drives" (Dr. Weir)
(https://eventhorizonfilm.fandom.com/wiki/Gravity_Drive)
- For macOS users, the sandbox-exec tool still works perfectly to avoid that kind of horror story.
On Linux, a plethora of options exist (Bubblewrap, etc).
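For the flavor of it, a rough Bubblewrap invocation (paths illustrative): a read-only view of the root filesystem, write access only to the current project, and no network:
    # read-only root, writable project dir only, no network
    bwrap --ro-bind / / \
      --dev /dev \
      --proc /proc \
      --tmpfs /tmp \
      --bind "$PWD" "$PWD" \
      --unshare-net \
      bash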
- It would have been helpful to state what this was; I had to go look it up...
by pshirshov
1 subcomment
- Claude happily does the same on a daily basis; run all that stuff in firejail!
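Something like this, as a minimal sketch (the agent binary and home directory are placeholders): a throwaway private home plus no network access:
    # create $HOME/agent-sandbox first; it becomes the sandboxed home
    firejail --private="$HOME/agent-sandbox" --net=none claude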
- I guess eventually, it all came crashing down.
- What makes a program malware?
Does intent matter, or only behavior?
by kissgyorgy
1 subcomment
- I simply forbid dangerous commands, or force Claude Code to ask for permission before running them.
Here are my command validation rules:
    (
        r"\bbfs.*-exec",
        decision("deny", reason="NEVER run commands with bfs"),
    ),
    (
        r"\bbfs.*-delete",
        decision("deny", reason="NEVER delete files with bfs."),
    ),
    (
        r"\bsudo\b",
        decision("ask"),
    ),
    (
        r"\brm.*--no-preserve-root",
        decision("deny"),
    ),
    (
        r"\brm.*(-[rRf]+|--recursive|--force)",
        decision("ask"),
    ),
find and bfs with -exec are forbidden because, when the model notices it can't delete, it works around it with very creative solutions :)
- I like turtles.
by Puzzled_Cheetah
0 subcomments
- Ah, someone gave the intern root.
> "I also need to reproduce the command locally, with different paths, to see if the outcome is similar."
Uhm.
------------
I mean, sorry for the user whose drive got nuked, hopefully they've got a recent backup - at the same time, the AI's thoughts really sound like an intern.
> "I'm presently tackling a very pointed question: Did I ever get permission to wipe the D drive?"
> "I am so deeply, deeply sorry."
This shit's hilarious.
by shevy-java
0 subcomments
- Alright, but... the problem is that you depended on Google. That was already the first mistake. As for data: always have multiple backups.
Also, this actually feels AI-generated. Am I the only one with that impression on Reddit lately? The quality there has decreased significantly (and it wasn't good before, with regard to the censorship-heavy moderators anyway).
by conartist6
0 subcomments
- AGI deleted the contents of your whole drive? Don't be shy about it. According to OpenAI, AGI is already here, so welcome to the future. Isn't it great?
- This seems like the canary in the coal mine. We have a company that built this tool because it seemed semi-possible (probably "works" well enough most of the time), and they don't want to fall behind if anything that gets built turns out to be the next ChatGPT. So there's no caution for anything now, even for ideas that can go catastrophically wrong.
Yeah, it's data now, but soon we'll have home robotics platforms that are cheap and capable. They'll run a "model" with "human understanding", only any weird bug may end up causing irreparable harm. Like, you tell the robot to give your pet a bath and it puts it in the washing machine, because it's... you know, not actually thinking beyond a magic trick. The future is really marching fast now.
- Fascinating
Cautionary tale as I’m quite experienced but have begun not even proofreading Claude Code’s plans
Might set it up in a VM and continue not proofreading
I only need to protect the host environment and rely on git as backups for the project
- Has Google gone boondoggle?
- I can't view this content.
- A reminder: if the AI is doing all the work you demand of it correctly on this abstraction level, you are no longer needed in the loop.
- The victim uploaded a video too: https://www.youtube.com/watch?v=kpBK1vYAVlA
- The hard drive should now feel a bit lighter.
- Play stupid games, win stupid prizes.
by DeepYogurt
0 subcomments
- [flagged]
- This happened to me long before LLMs. I was experimenting with Linux when I was young. Something wasn't working, so I posted on a forum for help, which was typical at the time. I was given a terminal command that wiped the entire drive. I guess the poster thought it was a funny response and that everyone would know what it meant. A valuable life experience, at least in not running code or commands you don't understand.
by Scott-David
0 subcomments
- [dead]
by Jeff-Collins
0 subcomments
- [dead]
by koakuma-chan
0 subcomments
- Why would you ever install that VS Code fork?