- There are two issues I see here (besides the obvious “Why do we even let this happen in the first place?”):
1. What happened to all the data Copilot trained on that was confidential? How is that data separated and deleted from the model’s training? How can we be sure it’s gone?
2. This issue was found; unfortunately, without a much better security posture from Microsoft, we have no way of knowing what issues are currently lurking that are as bad as, if not worse than, what happened here.
There’s a serious fundamental flaw in the thinking and misguided incentives that led to “sprinkle AI everywhere”. Instead of taking a step back and rethinking that approach, we’re going to get pieced-together fixes and still be left with the foundational problem that everyone’s data is just one prompt injection away from being taken, whether it’s labeled as “secure” or not.
by observationist
3 subcomments
- Seems like every day there's another compelling reason to switch to Linux. Microsoft is doing truly incredible work this year!
- Microsoft somehow sees a future where LLMs have access to everything on your screen. In that dystopia, adding "confidential" tags or prompt instructions to ignore some types of content is never going to be enough. If you don't want LLMs to exfiltrate content, then they cannot have access to it, period.
by childofhedgehog
4 subcomments
- > However, this ongoing incident has been tagged as an advisory, a flag commonly used to describe service issues typically involving limited scope or impact.
How is having Copilot breach trust and privacy an “advisory”? Am I missing something?
by codeulike
1 subcomments
- Reads to me like it is not accessing other users' mailboxes; it's just accessing the current user's mailbox (as it's meant to), but it's supposed to ignore the current user's emails that carry a 'confidential' flag, and that bit had a bug
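To make that concrete, here is a minimal, purely illustrative sketch (Python, with hypothetical names like `Message` and `sensitivity_label`; this is not Microsoft's actual implementation) of the kind of pre-filter being described, where anything carrying a confidentiality label is dropped before an assistant ever sees it:

```python
# Illustrative only: hypothetical message model and label names,
# not Microsoft's implementation.
from dataclasses import dataclass

@dataclass
class Message:
    subject: str
    body: str
    sensitivity_label: str | None = None  # e.g. "Confidential", "General", or None

BLOCKED_LABELS = {"confidential", "highly confidential"}

def assistant_visible(messages: list[Message]) -> list[Message]:
    """Return only messages whose label does not mark them as confidential."""
    return [
        m for m in messages
        if (m.sensitivity_label or "").lower() not in BLOCKED_LABELS
    ]

if __name__ == "__main__":
    inbox = [
        Message("Lunch?", "Noon works."),
        Message("Q3 numbers", "Draft attached.", "Confidential"),
    ]
    for m in assistant_visible(inbox):
        print(m.subject)  # prints only "Lunch?"
```

The reported bug, as described, would be the equivalent of this filter not being applied (or not matching the labels it should), so labeled messages flowed through like any other.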
- The article doesn't say whether the confidentiality labels were encryption-backed. I've been using encrypted labels (with Purview DLP) to prevent emails leaking out to _external_ integrations, which can't access the keys. With MS internal tooling, it's feasible that it has access to the key, in which case this would be even worse. Does anyone know if that's what happened?
by allthetime
0 subcomment
- This is one of many reasons we are taking all our current and future private repos off of GitHub.
- All these government contractors are forced to pay astronomical cloud bills to get "GCC-High" because it passes the right security-theater checklist, and then it totally ignores the DLP settings anyway!
by 8cvor6j844qw_d6
0 subcomment
- Is this a real bug, or is it a "let's train on more emails" move enabled by carelessness?
I assume that whatever is processed by the AI service is generally retained for product improvements (training).
by mikrotikker
0 subcomment
- This company is an absolute joke now; if you're not desperately trying to jump ship at this point, then you will go down with it.
by dolphinscorpion
1 subcomments
- A bug here and a bug there...
- More and more, I notice a bug in my mouth that keeps trying to encourage my boss to cancel Microsoft 365. I haven't found the root cause yet.
by josefritzishere
0 subcomment
- AI is such garbage. There is considerable overlap between the security practices of AI and those of the slowest interns in the office.
by nickdothutton
0 subcomment
- "...including messages that carry confidentiality labels."
Trusted operating system Mandatory Access Control, where art thou?
- None of this should surprise anyone by now. You are being lied to, continually.
You guys need to read the actual manifestos these AI leaders have written. And if not them, then read the propagandist stories they have others write, like The Overstory by Richard Powers, an arrogant pile of trash that culminates in the moral:
humans are horrible and obsolete and all should die and leave the earth for our new AI child
Which is, of course, horseshit. They just want most people to die off, not all. And certainly not themselves.
They don't care about your confidential information, or anything else about you.
by kevincloudsec
0 subcomment
- calling it a bug is generous. the whole point of these tools is to read everything you have access to. the 'bug' is that it worked exactly as designed but on the wrong emails
by Blackstrat
0 subcomment
- Just one more reason to abandon Microsoft. If ever Linux had an opportunity to break out on the desktop, the proliferation of "AI" and privacy intrusion from the likes of Microsoft would seem to have opened that window. Yes, it would mean giving up some applications, at least temporarily, but the benefit in control and privacy makes that a fair trade-off. Unlike the majority here, I don't want ANY "AI" features on my desktop, phone, car, or any appliance that I own. This is true of the "cloud" as well. Trusting corporate entities to have your best interests in mind is naive at best.
by wartywhoa23
0 subcomment
- An exemplary BaaF (Bug as a Feature) corporation.
- Initial date of issue: 3rd Feb 2026
by DecoPerson
0 subcomment
- I wonder, is Microsoft doing “outsider trading”, where they covertly pipe analytical data to the executives’ independently-owned stock trading houses as “tips”? They’ve had access to so many corporate internal emails for so long, with MS365, but Copilot is the perfect way to mask such analysis. Also Copilot would be good at analysing emails and providing useful “tips”.
Just my whacky conspiracy theory of the day!
- microsoft may very well be the MOST sinking ship to ever sink.
- Microsoft deploying buggy software is hardly news.
- Why was this bug not found in testing?
by indiekitai
4 subcomments
- I'm shocked. Shocked!
- Oh, poor desperate Microsoft. No amount of bug fixing is going to fix Microsoft. Now that they've embarked on the LLM journey, they're not going to know what's going to hit them next.