Should be ready to talk in 23h 58m
Cute 429!
Is it also the case that the more it knows, the larger the token burden to reinstate "awareness", leading to an ever-growing expense of recovering state?
Isn't this entire scheme about getting behind every sort of firewall to dump users' most private details and context into the apparatus of AI companies with no limit on retention and use?
Isn't it also true that privacy is undefined and that the infrastructure and these services are directly plumbed for the same kinds of surveillance that Snowden exposed?
Isn't it the case that users are expressing implicit consent to be exploited in every conceivable manner through the data that is exfiltrated, and are handing this prize of dominion over themselves to the barons of industry at the users' own expense?
Isn't it the case that if the assistant works as advertised, users dig pits for themselves through ever-growing dependency on others for the most personal aspects of their lives? And isn't it true that if users could effectively opt out once they had started, that very option would only prove the service is a disposable gimmick?
All of these observations have applied to every aspect of personal computing since its inception. A review of history is pretty damning: political and economic slavery was being manifested even among the elite positions of society before AI, and AI magnifies the hazards by orders of magnitude.
Dear AI, please explain how or why these observations are inappropriate, wrong-headed, or based on faulty assumptions.