by kennywinker
10 subcomments
- Anybody who stays at openai is signing on to build machines that will be used to kill innocent people and control people who think that’s a bad idea.
- “I don’t think we should spy on Americans and I don’t think we should kill people without human oversight but I still have respect for the guy willing to do that”. Please, make it make sense.
- https://xcancel.com/kalinowski007/status/2030320074121478618 to see replies.
- I have a hard time with this separation of “principle” from “people”. Isn’t it people who have principles?
- Good for Caitlin. Sam Altman is awful. He literally admitted on Twitter that they rushed their military contract to get it done. Are you kidding me? You rushed your military contract?
Any employee who stays, especially given the financial cushion they have, is complicit. Shame on all of them.
But here’s the sad truth: most of the knowledge workers at OpenAI won’t be of any value sometime soon because of the very tool they’re building.
by replwoacause
0 subcomment
- Good. Proud of her. We need more like her who have principles.
by voganmother42
0 subcomment
- Respect for standing up
- To save a click
> I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I’m proud of what we built together.
- Is "Why I left OpenAI" this decade's version of "Why I left Google"?
- Always surprised when these "smart people" didn't see these things coming from years away... It's honestly hard for me to believe.
Going to work for these big SV corps is, and always has been, directly in service of US empire; that's literally what built the Valley in the first place.
by camillomiller
0 subcomment
- If you don’t wanna upset your stomach, don’t make the mistake of reading the replies. What a cesspool of humanity X is.
- Whatever happened to this all-powerful non-profit that would ensure OAI is doing right? Something tells me they just cashed in and are running a corrupt shell at this point.
by structuredPizza
0 subcomment
- Autocomplete > Automurk
by slopinthebag
2 subcomments
- I can't help but feel like this is an odd moral position to take. OP is apparently fine with building technology to spy on civilians in other countries, and I don't see a moral relevance to citizenship on this matter. If spying on civilians is fundamentally wrong, it doesn't become OK when the people live in a different region of the world. If spying on civilians is fundamentally OK, then why would there be a moral exception for civilians who live inside the geographical region in which the company is legally registered? Perhaps someone can enlighten me here.
The autonomous killing thing is more reasonable, but still, if you're OK building death technology, I'm not exactly sure what difference having a human in the loop makes. It's still death.
by LeoPanthera
2 subcomments
- Their justification rings hollow when they continue to use X.
- In Germany it even made it to the general news: https://www.spiegel.de/wirtschaft/unternehmen/openai-manager...
So it wouldn't normally even be worth an HN submission. Well, I think it can still qualify under the exception for exceptional news.
by threethirtytwo
5 subcomments
- That Twitter post was clearly written by AI, along with instructions for the AI to avoid "tells" and other tropes common to AI writing.
Absolutely nothing wrong with something written with AI. Just pointing it out.