To reduce this risk, either completely remove truly sensitive documents from cloud services or implement client-side encryption before uploading them anywhere. The key insight is that if the service can read your files to train models, you don't actually have privacy regardless of what the policy says today.
I'm building PrivaVault specifically because I got burned by a similar policy change last year. The approach is zero-knowledge encryption, where we literally can't read user documents even if we wanted to. Launching in 7 days if anyone wants to check it out, but honestly the broader principle applies: encrypt before it leaves your device, or don't be surprised when it ends up training someone's AI.
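For anyone wondering what "encrypt before it leaves your device" looks like in practice, here's a minimal sketch using Python's cryptography package (Fernet). The filenames are made up, and the key handling is the part you actually have to get right: whoever holds the key can read the file, so it has to live somewhere the cloud provider never sees.

```python
# Hypothetical example: encrypt a file locally before it ever touches a cloud service.
# Uses the third-party "cryptography" package (pip install cryptography).
# Key handling is a placeholder; in practice, store or derive the key offline
# (password manager, hardware token, passphrase-derived key, etc.).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # urlsafe base64-encoded 32-byte key; keep this offline
cipher = Fernet(key)

with open("tax_return.pdf", "rb") as fh:      # plaintext stays on your device
    ciphertext = cipher.encrypt(fh.read())    # AES-128-CBC + HMAC under the hood

with open("tax_return.pdf.enc", "wb") as fh:  # only this opaque blob gets uploaded
    fh.write(ciphertext)

# Later, on your own machine:
# plaintext = Fernet(key).decrypt(ciphertext)
```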
For my taste, the sentences are over the top and full of weasel words. I wouldn't even call it legalese, because it just sounds so insincere. For example:
- "We may share your personal information with our affiliates, service providers, and third-party collaborators" or
- “Share personal data with Starlink’s trusted collaborators to train AI models."
Initially I was very critical of GDPR, but when I see this kind of vague formulation I'm really happy that, as a European, I can expect companies to provide an itemized list of the people and companies they will share the data with, and what kind of security measures those subprocessors are employing. There's still a lot of wiggle room for lawyers to work around GDPR's limitations, but at least you'd know whether their "trusted collaborators" and "affiliates" are Google or Facebook, whether they're domiciled in a foreign country, or whether they're just some small data science consultancy.