From the article: the shooter's behavior triggered internal alarms, and some employees asked leadership to alert authorities, but:
| OpenAI leaders ultimately decided not to contact authorities.
https://www.wsj.com/us-news/law/openai-employees-raised-alar...
Imagining the discussions at Google about staying away from productizing this technology, I think I would have been on the conservative side too. Non-deterministic execution, harmonic resonance with mental illness, catastrophic destructive failure loops, and “interesting” IP challenges don’t seem great for business in the long run.
I have no special insight, but someone is gonna address the blatant copyright laundering and the misuse of image generation, ugly-style, in court. And since we’re already seeing visible MS and AWS failures from LLM use, some business is gonna experience direct harm from these systems and respond through lawyers who can address the inadequacy of “tool can make mistakes” stickers.
Parties without rules are super fun, but with certain kinds of fun, the cops are gonna show up at some point. I’d feel better working at a law firm using LLMs to go after “AI” companies than I would working at an “AI” shop outside the top dogs right now.