So your grant applications were written by AI, your lectures were written by AI, your publications were written by AI, and your students' exams were marked by AI? Buddy, what were YOU doing?
It might be my professional deformation, but I never store anything in ChatGPT and Claude for longer than a day or two.
“Dr Flattery always wrong robot” is such a wonderful way to describe ChatGPT and friends when used like this <3
Keep a copy (cloud) and a backup (offline) of all the data you own.
Even for my daughters' much simpler school homework, projects, and the usual drawings/sketches, I've set up backups so they don't cry when their work gets lost. The Macs I handed down to them back up to iCloud, and I added a cheap HDD for Time Machine. They think I'm a magician with computers when I teach them to use Time Machine and they see the flying timeline of their work. The other thing is Google Workspace for Schools: I've found that having a local copy always available via a tool (such as InSync) does wonders.
The only sob story now is games. They sometimes lose points or the in-game coin thingies, and developers ship bugs that reset gameplay earnings. I have no idea how to help them there beyond emotional support and explaining how the world works, one step at a time.
How about if ChatGPT/Claude wrote a local Markdown copy of each conversation? Wouldn't that be nice?
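In the meantime you can approximate it yourself: ChatGPT's data export hands you a conversations.json, and a small script can flatten it into Markdown files you actually control. A minimal sketch, with the caveat that the field names ("title", "mapping", "content", "parts") are based on what the export looked like when I last ran one and may have changed:

```python
#!/usr/bin/env python3
"""Rough sketch: turn a ChatGPT data-export conversations.json into local
Markdown files, one per conversation. Field names are assumptions based on
a past export; adjust for your own dump."""
import json
import pathlib
import re

EXPORT = pathlib.Path("conversations.json")  # from the ChatGPT data export
OUT = pathlib.Path("chat-backup")
OUT.mkdir(exist_ok=True)

for i, convo in enumerate(json.loads(EXPORT.read_text(encoding="utf-8"))):
    title = convo.get("title") or "untitled"
    slug = re.sub(r"[^\w-]+", "-", title).strip("-")[:80] or "untitled"
    lines = [f"# {title}", ""]
    # The export stores messages as a graph of nodes; dumping them in
    # insertion order is good enough for a plain-text backup.
    for node in convo.get("mapping", {}).values():
        msg = node.get("message") or {}
        parts = (msg.get("content") or {}).get("parts") or []
        text = "\n".join(p for p in parts if isinstance(p, str)).strip()
        if not text:
            continue
        role = (msg.get("author") or {}).get("role", "unknown")
        lines += [f"**{role}:**", "", text, ""]
    (OUT / f"{i:04d}-{slug}.md").write_text("\n".join(lines), encoding="utf-8")
```

Run it next to your unzipped export and you end up with one Markdown file per chat, ready for whatever backup scheme you already trust.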
1) go ahead and delete everything, 2) back up and then go ahead, 3) abort and keep things as they are.
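Roughly the flow I mean, as a toy sketch; export_all() and delete_all() here are hypothetical stand-ins for whatever the real service would call:

```python
"""Toy sketch of the confirmation flow a bulk "delete my data" toggle
should have. export_all() and delete_all() are hypothetical stand-ins."""

def confirm_destructive_delete(export_all, delete_all):
    print("This will permanently delete all stored conversations.")
    print("  1) go ahead and delete everything")
    print("  2) back up first, then delete")
    print("  3) abort and keep things as they are")
    choice = input("Choose 1/2/3 [3]: ").strip() or "3"
    if choice == "2":
        path = export_all()  # e.g. write a JSON/Markdown dump locally
        print(f"Backup written to {path}")
    if choice in ("1", "2"):
        delete_all()
        print("Everything deleted.")
    else:
        print("Aborted; nothing changed.")

if __name__ == "__main__":
    # Stub implementations just to exercise the flow.
    confirm_destructive_delete(
        export_all=lambda: "backup.json",
        delete_all=lambda: print("(pretending to delete)"),
    )
```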
ChatGPT definitely wants to be the copilot of all your work. The guy didn't just have chats; he had drafts that his virtual assistant helped formulate and proofread. Given how big and widely used ChatGPT has become, it shouldn't surprise anyone tech-savvy that it is being used for serious work well beyond vibecoders.
This guy [1] (in Swedish) was digitizing a municipal archive. 25 years later, the IT department (allegedly) accidentally deleted his entire work. With no backup.
Translated:
> For at least 25 years, work was underway to create a digital, searchable list of what was in the central archive in Åstorp municipality. Then everything was deleted by the IT department.
> “It felt empty and meaningless,” says Rasko Jovanovic.
> He saw his nearly 18 years of work in the archive destroyed. HD was the first to report on it.
> “I was close, so close to taking sick leave. I couldn't cope,” he says. The digital catalog showed what was in the archive, which dates back to the 19th century, and where it could be found.
> "If you ask me something today, I can't find it easily, I have to go down and go through everything.
> “Extremely unfortunate”
> Last fall, the IT department found a system that had no owner or administrator. They shut down the system. After seven months, no one had reported the system missing, so they deleted everything. It was only in September that Åstorp discovered that the archive system was gone.
> “It's obviously very unfortunate,” says Thomas Nilsson, IT manager. Did you make a mistake when you deleted the system?
> “No. In hindsight, it's clear that we should have had different procedures in place, but the technician who did this followed our internal procedures.”
In typical Swedish fashion, there cannot have been a mistake made, because procedures were followed! Or to put it in words that accurately reflect having 25 years of work removed: "Own it, you heartless bastard."
Translated with DeepL.com (free version).
[1] https://www.svt.se/nyheter/lokalt/helsingborg/rasko-digitali...
https://archive.ph/2026.01.27-112714/https://www.nature.com/...
https://web.archive.org/web/20260127100255/https://www.natur...
But I have to say, quite an incredible choice! ChatGPT was released in Nov 2022. This scientist was an early adopter and immediately started putting his stuff in there on the assumption that it would live there forever. Wow, quite the appetite for risk.
But I can't call him too many names; I have a similar story of my own. One thing I once did was ETL a bunch of advertising data into a centralized data lake. We did this via a Facebook App that customers would log in to and grant ads-insights access. One of the things you need to do is certify that you are definitely not going to do bad things with the data. All we were doing was calculating ROAS and the like: aggregate data. We were clean.
But you do have to certify that you are clean if you even go near user data, which means answering a questionnaire (periodically). I did answer the questionnaire, but anyone who has used anything near Meta's business and advertising programs (at the control plane; the ad-delivery plane must be stupendous) knows they are anything but reliable. The flaky thing popped up an alert the next day saying I had to certify again, and it wouldn't go away. Okay, fine: I do need that one field, but how about I just turn off the permission and try to work without it? I don't want anyone thinking I'm doing shady stuff when I'm not.
Only problem? If you have an outstanding questionnaire and you want to remove a permission, you have to switch from Live to Development. That's fine too, normally; it's a 5-second toggle that works every time. Except with an outstanding questionnaire you cannot switch back from Development to Live. We were suddenly stuck: no data, nothing, and every client was getting a page about the app not being approved. And there's nothing to be done but beg Meta Support, who will ignore you. I just resubmitted the app, we waited 24 hours, and through the love of God it all came back.
But I was oh-so-cavalier clicking that bloody button! The kind of mistake you make once before you treat any Data Privacy Questionnaire like it's the Demon Core.
I frown when people trust AI today, let alone at those who have been doing so for two years already.
> [...] but large parts of my work were lost forever [...]
I wouldn't really say parts of his work were lost. At most the output of an AI agent, nothing more.
If e-mails, course descriptions, lectures, grant applications, exams, and other materials from a period of two years can somehow disappear in an instant, they did not really exist to begin with.
For one, the actually important stuff is the deliverables of these chats, meaning those documents should exist somewhere. If we're being honest, everything should be recreatable in short order, given the outputs, and provided the actual intellectual work was being done by Mr. Bucher.
Does it suck to lose data? Even if just some AI tokens we developed an attachment to? Sure.
Would I have outed myself and my work this shamelessly, admitting that clicking a "don't retain my data" option undermines my work like this? Not really.
How can you lose "important work" of multiple years? It can't have been that important. And how can somebody _expected to become management_ be so incompetent?
"...two years of carefully structured academic work disappeared. No warning appeared. There was no undo option. Just a blank page. Fortunately, I had saved partial copies of some conversations and materials, but large parts of my work were lost forever." -- stupid: that drive could have died, the building could have burned down, the machine could have been stolen, the data could have been accidentally deleted... and all there was: "a partial" backup.
I mean, this isn't even a scenario where he didn't know about the data ("carefully structured") and discovered it wasn't covered by the backup schema (that would be a _real_ problem). Another problem would be if your churn is so high that backing up becomes a real issue (bandwidth, latency, money, ...). None of that applies here.
And yet they reserve a spot in "nature" for such whining and incompetence?
If that was the intellectual calibre of the person, I wonder how truly worthwhile the lost work was.
The user without backups lost their own work.
Simple as that, no argument.
No backups, you're the loser.
You might WANT someone else to be responsible but that doesn't change anything.
What the average human needs is laws and enforcement, and trust in both.
Is anyone familiar with current academic culture in Germany who can comment on how (or whether) it warns its members about such risks?
How dare you not let us steal your data.
The worst thing is all the people treating this behaviour as normal and totally acceptable; this is where AI sloppiness is taking us, guys. I hope it's just the AI bros talking in the comments; otherwise we are screwed.