I outright stopped using Facebook.
We are doomed if AI is allowed to punish us.
The only real solution is locally running models, but that goes against the business model. So instead they will seek regulation to create privacy by fiat. Fiat privacy still has all the same problems as telling your therapist that you killed someone, or keeping your wallet keys printed out on paper in a safe. It's dependent on regulations and definitions of the greater good that you can't control.
EDIT: I want to add that "training on chat logs" isn't even the issue. In fact it understates the danger. It's better to imagine things like this: when a future ad-bot or influence-bot talks to you, it will receive your past chatlogs with other bots as context, useful to know what'll work on you or not.
EDIT 2: And your chatlogs with other people I guess, if they happened on a platform that stored them and later got desperate enough to sell them. This is just getting worse and worse as I think about it.
Even in real life, the police in the UK now deploy live face recognition and make tonnes of arrests based on it (sometimes wrongly). Shops are now looking to deploy live face recognition to detect shoplifters (although it's legally unclear what they will actually do about it).
The UK can compel any person travelling through the UK to hand over their passwords and devices, with no right of appeal. Refusing to hand over a password can get you arrested under the Terrorism Act, under which you can be held indefinitely. When arrested on any terrorism offence you also have no right to legal representation.
The days of privacy slipped away unnoticed.
But an LLM is not a human, and I think OpenAI and all the others should make it clear that you are NOT talking to a human. Repeatedly.
I think if society were trained to treat AI as NOT human, things would be better.
It seems like having LLM providers not train on user data is a big part of it. But is using traditional ML models to do keyword analysis considered “AI” or “surveillance”?
The author, and this community in general, are much better prepared than most to make concrete recommendations about what AI surveillance policy should be. We should be crystal clear about trying to enact good regulation without killing innovation in the process.
As I write this, sitting in Peet's Coffee in downtown Los Altos, I count three different cameras recording me, and I'm using their public wifi, which I assume is also being used to track me. That's the world we have now.
The opposite of "if you build it they will come".
(The difference being that the AIs in the book were incredibly needy, wanting so much to please the customer that it became annoying, a stark contrast with the current reality of the AI working to appease its parent organisation.)
Maybe it would be good to have some integration between this data and law enforcement, to head off tragedy? Maybe start not with crime but with suicide: I think a search result telling you to call this number if you are feeling bad saves far fewer lives than a feed into social workers potentially could.
Just a thought, and this isn't having a computer sentence someone to prison, but providing data so that people can, in the end, make informed decisions to try to prevent tragedy. Privacy is important to a degree, but treating it as absolute seems to waste the potential to save lives.
In essence, there is a general consensus on the conduct expected of trusted advisors. They should act in the interest of their client. Privacy protections exist to enable individuals to provide their advisors the context required to give good advice, without fear of disclosure to others.
I think AI needs recognition as a similarly protected class.
An AI's actions should be considered to be on behalf of a Client (or some other specifically defined term for whoever it is advising). Any information shared with the AI by the Client should be considered privileged. If the Client shares the information with others, the privilege is lost.
It should be illegal to configure an AI to deliberately act against the interests of its Client. It should be illegal to configure an AI to claim that its Client is someone other than who it is (it may refuse to disclose, it may not misrepresent). Any information shared with an AI misrepresenting who its Client is must have protections against disclosure or evidential use. There should be no penalty for refusing to provide information to an AI that does not disclose who its Client is.
I have a bunch of other principles floating around in my head around AI but those are the ones regarding privacy and being able to communicate candidly with an AI.
Some of the others are along the lines of:
It should be disclosed (in the nutritional-information style of disclosure) when an AI makes a determination regarding a person. There should be a set of circumstances in which, if an AI makes a determination regarding a person, that person is provided with the means to contest it.
A lot of the ideas would be good practice if they went beyond AI, but are more required in the case of AI because of the potential for mass deployment without oversight.
The incentives are all wrong.
I'm fundamentally a capitalist because I don't know another system that will work better. But, there really is just too much concentrated wealth in these orgs.
Our legal and cultural constructs are not designed in a way that such disparity can be put in check. The populace responds by wanting ever more powerful leaders to "make things right" and you get someone like Trump at best and it goes downhill from there.
Make the laws, it will help, a little, maybe.
But I think something more profound needs to happen for these things to be truly fixed. I, admittedly, have no idea what that is.
Most of the controversial stuff he has done is being whitewashed from the internet and is now hard to find.
Or they have and they simply don't care, or they feel they can't change anything anyway, or the pay-check is enough to soothe any unease. The net result is the same.
Snowden's revelations happened 12 years ago, and there were plenty of what appeared to be well-intentioned articles and discussions in the years that followed. And yet, arguably, things are even worse today.
DuckDuckGo aren't perfect, but I think they do a lot for all our benefit. Theirs has been my search engine of choice for many years and will continue to be.
Shout outs to their amazing team!
This represents a fundamental misunderstanding of how training works or can work. Memory has more to do with retrieval. Finetuning on those memories would not be useful, given the data would be far too minuscule to shift the probability distribution in the right way.
While everyone is for privacy (and that's what makes these arguments hard to refute), this is clearly about using privacy as a way to argue against conversational interfaces as such. Not just that, it's the same playbook of using privacy as a marketing tactic. The argument starts from the highly persuasive nature of chatbots, moves to how even privacy-preserving chatbots somehow won't do unless they're from DDG, to how your info gets stolen by hackers elsewhere but is safe on DDG. And then it asks for regulation.
> Use our service
Nah.
The ChatGPT translation on the right is a total nothingburger, it loses all feeling.
Merely being surveilled and marketed at is a fairly pedestrian application from the rolodex of AI related epistemic horrors.
I mean, a PARKING LOT in my town is using AI cameras to track and bill people! The people of my town are putting pressure on the owner to get rid of it, but apparently the company is paying him too much money for having it in his lot.
Like the old video says, "Don't Talk to the Police" [1], but now we have to expand it to "Don't Do Anything", because everything you do is being fed into a database that can be searched.
The next politician to come in will retroactively pardon everyone involved, and will create legislation or hand down an executive order that creates a "due process" in order to do the illegal act in the future, making it now a legal act. The people who voted the politician in celebrate their victory over the old evil, lawbreaking politician, who is on a yacht somewhere with one of the billionaires who he really works for. Rinse and repeat.
Eric Holder assured us that "due process" simply refers to any process that they do, and can take place entirely within one's mind.
And we think we can ban somebody from doing something that they can do with a computer connected to a bunch of thick internet pipes, without telling anyone.
That's libs for you. Still believe in the magic of these garbage institutions, even when they're headed by a game show host and wrestling valet who's famous because he was good at getting his name in the NY Daily News and the NY Post 40 years ago. He is no less legitimate than all of you clowns. The only reason Weinberg has a voice is because he's rich, too.
Ultimately it's one of those arms races. The culture that surveils its population most intensely wins.
Banning it just in the USA leaves you wide open to defeat by China, Russia, etc.
Like it or not it’s a mutually assured destruction arms race.
AI is the new nuclear bomb.