“Oh but they only run on local hardware…”
Okay, but that doesn't mean every aspect of our lives needs to be recorded and analyzed by an AI.
Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?
Have all your guests consented to this?
What happens when someone breaks in and steals the box?
What if the government wants to take a look at the data in there and serves a warrant?
What if a large company comes knocking and makes an acquisition offer? Will all the privacy guarantees still stand in the face of the $$$?
Also agree with paxys that the social implications here are deep and troubling. Having ambient AI in a home, even if it's caged to the home, has tricky privacy problems.
I really like the explorations of this space in Black Mirror's The Entire History of You[1] and Ted Chiang's short story The Truth of Fact, the Truth of Feeling[2].
My bet is that the home and other private spaces almost completely yield to computer surveillance, despite the obvious problems. We've already seen this happen with social media and home surveillance cameras.
Just as in Chiang's story, where spaces were 'invaded' by writing, AI will fill the world, and those opting out will occupy the same marginal position as dumb-phone users and people without home cameras or televisions.
Interesting times ahead.
1. https://en.wikipedia.org/wiki/The_Entire_History_of_You 2. https://en.wikipedia.org/wiki/The_Truth_of_Fact,_the_Truth_o...
Not if you use open source. Not if you pay for services that contractually will not mine your data. Not if you support start-ups that commit to privacy and to banning ads.
I said on another thread recently that we need to kill Android, that we need a new Mobile Linux that gives us total control over what our devices and our software do. Not controlled by a corporation. Not with some bizarre "store" that floods us with millions of malware-ridden apps yet bans perfectly valid ones. We have to take control of our own destiny, not keep handing it over to someone else for convenience's sake. And it doesn't end at mobile. We need to find, and support, the companies that are actually ethical. And we need to stop using services that are conveniently free.
Vote with your dollars.
"Contextually aware" means "complete surveillance".
Too many people speak of ads; not enough speak about the normalization of the global surveillance machine, with Big Brother waiting around the corner.
Instead, MY FELLOW HUMANS are, or will be, programmed to accept and want their own little "Big Brother's little brother" in their pocket, because it's useful and/or makes them feel safe and happy.
A man-in-the-middle-of-the-middle-man.
Friends at your house who value their privacy probably won’t feel great knowing you’ve potentially got a transcript of things they said just because they were in the room. Sure, it's still better than also sending everything up to OpenAI, but that doesn’t make it harmless or less creepy.
Unless you’ve got super-reliable speaker diarization and can truly ensure only opted-in voices are processed, it’s hard to see how any always-listening setup ever sits well with people who value their privacy.
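To make the consent problem concrete, here's a minimal sketch of the gate such a device would need, assuming some diarization backend has already labeled each audio segment with a speaker ID (all names and the data structure here are hypothetical):

```python
# Voices enrolled with explicit consent (hypothetical enrollment set).
OPTED_IN = {"alice", "bob"}

# Output a diarization backend might produce: segments tagged by speaker.
segments = [
    {"speaker": "alice",   "text": "Add milk to the shopping list."},
    {"speaker": "guest_1", "text": "Did I tell you about my diagnosis?"},
    {"speaker": "bob",     "text": "Remind me to call the plumber."},
]

def consented_only(segments, opted_in):
    """Keep only segments from opted-in speakers; everything else must
    never be transcribed, stored, or forwarded."""
    return [s for s in segments if s["speaker"] in opted_in]

for s in consented_only(segments, OPTED_IN):
    print(s["speaker"], "->", s["text"])
```

The filter itself is trivial; the hard part is upstream. Any diarization error that mislabels a guest's voice as an enrolled one leaks exactly the speech that was supposed to be protected, which is why "just filter by speaker" doesn't make always-listening sit well.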
We ran queries across ChatGPT, Claude, and Perplexity asking for product recommendations in ~30 B2B categories. The overlap between what each model recommends is surprisingly low -- around 40% agreement on the top 5 picks for any given category. And the correlation with Google search rankings? About 0.08.
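As a rough illustration of the agreement metric (the rankings below are made up for the example, not actual model output):

```python
# Hypothetical top-5 CRM recommendations per model for one category.
recs = {
    "chatgpt":    ["HubSpot", "Salesforce", "Pipedrive", "Zoho", "Close"],
    "claude":     ["Salesforce", "HubSpot", "Freshsales", "Copper", "Attio"],
    "perplexity": ["Pipedrive", "Attio", "Monday", "Salesforce", "Copper"],
}

def top_k_agreement(a, b, k=5):
    """Fraction of top-k picks the two lists share (set overlap / k)."""
    return len(set(a[:k]) & set(b[:k])) / k

pairs = [("chatgpt", "claude"), ("chatgpt", "perplexity"),
         ("claude", "perplexity")]
for m1, m2 in pairs:
    print(f"{m1} vs {m2}: {top_k_agreement(recs[m1], recs[m2]):.0%}")
```

With this toy data the pairwise agreement hovers around 40-60%, matching the kind of low overlap described above; the same set-overlap calculation extends to any number of categories.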
So we already have a world where which CRM or analytics tool gets recommended depends on which model someone happens to ask, and nobody -- not the models, not the brands, not the users -- has any transparency into why. That's arguably more dangerous than explicit ads, because at least with ads you know you're being sold to.
I'm not against AI in general, and some assistant-like functionality that works on demand to search my digital footprint and handle necessary-but-annoying administrative tasks seems useful. But at some point it becomes a solution looking for a problem: to squeeze out the last ounce of context-aware automation and efficiency, you would have to outsource parts of your core mental model and situational awareness of your life. Imagine being over-scheduled like an executive whose assistant manages their calendar, except it's not a human but a computer, and instead of maximizing the leverage of your attention as a captain of industry, it just maintains velocity on a personal rat race of your own making, with no especially wide impact, even on your own psyche.
But this was only the beginning: after gathering a few TB worth of micro-expressions, it starts completing sentences so successfully that the conversation gradually dies out.
After a few days of silence... Narrator mode activated....
Big Brother is watching you. Who knew it would be AI ...
The author is quite right. It will be an advertisement scam. I wonder whether people will accept that, though. Anyone remember uBlock Origin? Google killed it on Chrome. People are not going to forget that. (It still works fine on Firefox, but Google bribed Firefox into submission; all that Google ad money made Firefox weak.)
Recently I had to use Google Search again. I was baffled at how useless it has become, not just the raw results but the whole UI. The first few entries are links to useless YouTube videos (also owned by Google). I don't have time to watch a video; I want the text info so I can extract it quickly. The AI "summaries" are also useless; Google is just trying to waste my time compared to the "good old days". After those initial YouTube videos, I get about 6 results, three of which are articles companies wrote just so people will visit their boring websites. Then I get "other people searched for candy" and other useless links. I never understood why I would care what OTHER people search for when I want to search for something. Is this now group-search? Group-think 1984? And then after that, I get some more YouTube videos.
Google is clearly building a watered-down private variant of the web. Same problem with AMP pages. Google is annoying us and has become a huge problem. (I am writing this on Thorium right now, which is also Chrome-based; Firefox does not let me play videos with audio since I don't have or use PulseAudio, whereas the Chrome-based browser does not care and my audio works fine. That shows you the level of incompetence at Mozilla. They don't WANT to compete against Google anymore, and haven't wanted to for decades. Ladybird unfortunately is not going to change anything either; after I criticized one of their decisions, they banned me. Well, that's a great way to build up an alternative, dealing with criticism via censorship before even leaving alpha or beta. Now imagine the amount of censorship you'd get once millions of people WERE to use it... Something is fundamentally wrong with the whole modern web, and corporations have a lot to do with it; to a lesser extent people too, though of course not all of them.)
- put them inside the soundproof box and they cannot hear anything outside
- the box even shows how much time each day the device has been unable to snoop on you
Google, Meta, and Amazon, sure, of course.
It's interesting that the "every company" part is only OpenAI... They're now part of the "bad guys spying on you to display ads." At least it's a viable business model; maybe they can recoup capex and yearly losses in a couple of decades instead of a couple of centuries.
Apple? [1]
Genuine Q: Is this business model still feasible? It's hard to imagine anyone other than Apple sustaining a business off of hardware; they have the power to spit out full hardware refreshes every year. How do you keep a team of devs alive on the seemingly one-and-done cash influx of first-time buyers?
If you're paying someone else to run the inference for these models, or even to build these models, then you're ultimately relying on their specific preferences for which tools, brands, products, companies, and integrations they prefer, not necessarily what you need or want. If and when they deprecate the model your agentic workflow is built on, you now have to rebuild and re-validate it on whatever the new model is. Even if you go out of your way to run things entirely locally with expensive inference kit and a full security harness to keep things in check, you could spend a lot less just having it vomit up some slopcode that one of your human specialists can validate and massage into perpetual functionality before walling it off on a VM or container somewhere for the next twenty years.
The more you're outsourcing workflows wholesale to these bots, the more you're making yourself vulnerable to the business objectives of whoever hosts and builds those bots. If you're just using it as a slop machine to get you the software you want and that IT can support indefinitely, then you're going to be much better off in the long run.
If there's a camera in an AI device (like Meta Ray-Ban glasses), then there's a light when it's on, and they go out of their way to engineer it to be tamper-resistant.
But audio seems to be on the other side of the line. Passively listening to ambient audio is treated as something that doesn't need active consent, flashing lights, or other privacy-preserving measures. And it's true, it is fundamentally different: I have to make a proactive choice to speak, but I can't avoid being visible. So you can construct a logical argument for it.
I'm curious how this will really go down as these become pervasively available. Microphones are pretty easy to embed almost invisibly into wearables; a lot of them already have them. They don't use much power, so it won't be hard to just have them always on. If we settle on this as the line, what's it going to mean that everything you say, everywhere, will be presumed recorded? Is that OK?
For once, we (as the technologists) have a free translator into layman-speak via the frontier LLMs, which can be an opportunity to educate the masses about the exact world on the horizon.
Honestly, I'd say privacy is just as much about economics as it is technical architecture. If you've taken outside funding from institutional venture capitalists, it's only a matter of time before you're asked to make even more money™, and you may issue a quiet, boring change to your terms and conditions that you hope no one will read... Suddenly, you're removing mentions of your company's old "Don't Be Evil" slogan.
Even if these folks were giving this device away 100% free, I still wouldn't keep it inside my house.
Well, the consumers will decide. Some people will find it very useful, but others won't necessarily like this... Considering how many times I've heard people yelling "OK GOOGLE" just to get "the gate" to open, I'm not sure a continuous flow of heavily contextualized human conversation will necessarily be easier to decipher?
I know, guys, AI is magic and will solve everything, but I wouldn't be surprised if it ordered me eggs and butter when I mentioned out loud that I was out of them but was actually happy about it, because I was just about to go on vacation. My surprise when I'm back: melted butter and rotten eggs at my door...