Many employers want employees to act like cult members. But when the going gets tough, those employees are often the first laid off, and the least prepared for it.
Employers, you can't have it both ways. As an employee, don't get fooled.
got it
I'd be very happy to be convinced that supporting the coup was the right move for true-believer missionaries.
(Edit: It's an honest and obvious question, and I think that the joke responses risk burying or discouraging honest answers.)
Unsurprising, unhelpful for anyone other than sama, unhealthy for many.
This is the same Sam Altman who abandoned OpenAI’s founding mission in favour of profit?
No it can’t be
they went from open to closed. they went from advocating ubi to for-profit. they went from pacifist to selling defense tech. they went from a council overseeing the project to a single man in control.
and that's fine, go make all the money you can, but don't do this sick act where you try to convince people to thank you for acting in your own self-interest.
If missionaries could be mercenaries, they would.
When you hear this reiterated by employees, who actually believe it, then it's sad. Obviously not in this situation, but I've actually heard this from people. Some of them were even pros. "There is no fool like an educated fool."
https://knowyourmeme.com/memes/friendship-ended-with-mudasir
- Ilya Sutskever, Co-founder, Co-lead of Superalignment Team, departed early 2024
- May 15, 2025, The Atlantic
Anyway, I concur it's a hard choice as one other comment mentions.
Let’s assume for a moment that OpenAI is the only company that can build AGI (specious claim), then the question I would have for Sam Altman: what is OpenAI’s plan once that milestone is reached, given his other argument:
> "And maybe more importantly than that, we actually care about building AGI in a good way," he added. "Other companies care more about this as an instrumental goal to some other mission. But this is our top thing, and always will be."
If building AGI is OpenAI's only goal (unlike other companies), will OpenAI cease to exist once the mission is accomplished, or will a new mission be devised?
Job market forces working as they should.
If Sam Altman is upset, he should look in the mirror for making his people work so many hours. They didn't leave because of the pay.
i’m noticing more and more lately that our new monarchs really do have broken thought patterns. they see their own abuse towards others as perfectly ok but hilariously demand people treat them fairly.
small children learn things that these guys struggle to understand.
1) They are far from profitability.
2) Meta is aggressively making their top talent more expensive, and outright draining it.
3) Deepseek/Baidu/etc. are dramatically undercutting them.
4) Anthropic and (to a lesser extent?) Google appear to be beating them (or, charitably, matching them) on AI's best use case so far: coding.
5) Altman is becoming less likeable with every unnecessary episode of drama, and OpenAI has most of the stink from the initial (valid) grievance that "AI companies are stealing from artists". The endless hype and FUD cycles, going back to 2022, have worn industry people out, as has the flip-flop on "please regulate us".
6) Its original, core strategic alliance with Microsoft is extremely strained.
7) Related to #6, its corporate structure is extremely unorthodox and likely needs to change in order to attract more investment, which it must (to train new frontier models). Microsoft would need to sign off on the new structure.
8) Musk is sniping at its heels, especially through legal actions.
Barring a major breakthrough with GPT-5, which I don't see happening, how do they prevail through all of this and become a sustainable frontier AI lab and company? Maybe the answer is they drop the frontier model aspect of their business? If we are really far from AGI and are instead in a plateau of diminishing returns that may not be a huge deal, because having a 5% better model likely doesn't matter that much to their primary bright spot:
Brand loyalty from the average person to ChatGPT, and OpenAI's success in eating into Google's search market. Their numbers there have been truly massive from the beginning and are, I think, the most defensible. Google AI Overviews continue to be completely awful in comparison.
Therefore, wish for the army with the best immune system.
In other words, we should probably be asking what viral/bacterial content is transferred in these employee trades, and who mates with whom. This information is probably as important to the outcome as the notions of "AGI" swirling around.
All these articles and videos of people "slamming" each other; it doesn't move the needle, and it's not really news.
I work at OAI, but I'm speaking for myself here. Sam talks to the company, sometimes via slack, more often in company-wide meetings, all the time. Way more than any other CEO I have worked for. This leaked message is one part of a long, continuing conversation within the company.
The vast majority of what he and others say doesn't get leaked. So you're eavesdropping on a tiny portion of a conversation. It's impossible not to take it out of context.
What's worse, you think you learned something from reading this article, even though you probably didn't, making you more confident in your conclusions when you should be less confident.
I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in. It's eye-opening to see how confidently wrong most poasters are. It certainly has humbled my own reactions to news. (In this particular instance I don't think there's so much right and wrong but more that I think if you had actually been in the room for more of the conversation you'd probably feel different.)
Btw Sam has tweeted about an open source model. Stay tuned... https://x.com/sama/status/1932573231199707168
If the person next to you gets paid 20x more than you, you might be a bit unhappy when they are not 20x more helpful.
As AlbertaTech says, “we make sparkling water.” I mean, what’s the mission? A can of sparkling water on every table? Spreading the joy of carbonated water to the world? No. You sell sparkling water because you want to make a profit. That kind of speech is just a way to hide the fact that you're trying to cut three full-time positions and make your employees work off-hours to increase margins. Or, like in this case, pay them less than the competition with the same objective.
Sam Altman might actually have a mission, turning us all into robot slaves, but that’s a whole different conversation.
If you've ever browsed teamblind.com (which I strongly recommend against as I hate that site), you'll see what the people who work at Meta are like.
But then again, maybe they have such a menagerie of individuals with their heads in the clouds that they've created something of an echo chamber about the 'pure vision' that only they can manifest.
In the context of the decisions of largely East Asia-born technical staff, I can't help but reflect on the role of actual Western missionaries and mercenaries in East Asia over the last 100+ years, and also the DeepSeek-targeted sinophobia.
https://www.britannica.com/event/Boxer-Rebellion
https://en.m.wikipedia.org/wiki/Protestant_missions_in_China
https://en.m.wikipedia.org/wiki/Operation_Beleaguer
https://monthlyreview.org/2025/02/01/imperialism-and-white-s...
And hypocrites will never stop whining
Meta doesn’t really have a product unless you count the awful “Meta AI” that is baked into their apps. Unless these acquisitions manifest in frontier models getting open sourced, it feels like a gigantic brain drain.
Is it the researchers or the system engineers that scale the prototypes? Or other skills/expertise?
Imagine if in 2001 Google had said "I'm sorry, I can't let you search that" if you were looking up information on medical symptoms, or doing searches related to drugs, or searching for porn, or searching for Disney themed artwork.
It's hard for me to see anyone with such a strong totalitarian control over how their technology can be used as a good guy.
Wonder if that applies here.
It is always surprising to me when billionaire CEOs are complaining that their own employees are min-maxing their earning potential.
[1] https://www.ere.net/articles/tech-firms-settle-case-admit-se...
A decade ago Apple, Google, Intel, Intuit, and Adobe all had anti poaching agreements, and Facebook wouldn’t play ball, paid people more, won market share, and caused the salary boom in Silicon Valley.
Now Facebook is paying people too much and we should all feel bad about it?
Another wrote: "Yes, we're quirky and weird, but that's what makes this place a magical cradle of innovation. OpenAI is weird in the most magical way. We contain multitudes."
i thought i was reading /r/linkedinlunatics
Sad to see Nat Friedman go there. He struck me as "one of the good ones" who was keen to use tech for positive change. I don't think that is achievable at Meta.
TL;DR
Some other company paid more and got engineers to join them because the engineers care more about themselves and their families than some annoying guy's vision.
Unfortunately, productive research doesn't necessarily improve with increased cash-burn rates, as many international postdocs simply refuse to travel to the US these days for "reasons". =3
"The CEO and the Three Envelopes" ( https://news.ycombinator.com/item?id=38725206 )
1. “So much money your grandchildren don’t need to work”
2. 100M
3. Not 100M
So what is it? I’m just curious, I find 100M hard to believe but Zuck is capable of spending a lot.
Being a missionary for big ideas doesn't mean dick to a creditor.
And before you make your rebuttal, if you wouldn’t accept $30,000 equivalent for your same tech job in Poland or whatever developed nation pays that low, then you have no rebuttal at all.
"What Meta is doing will, in my opinion, lead to very deep cultural problems. We will have more to share about this soon but it's very important to me we do it fairly and not just for people who Meta happened to target."
Translation from corporate-speak: "We're not as rich as Meta."
"Most importantly of all, I think we have the most special team and culture in the world. We have work to do to improve our culture for sure; we have been through insane hypergrowth. But we have the core right in a way that I don't think anyone else quite does, and I'm confident we can fix the problems."
Translation from corporate-speak: "We're not as rich as Meta."
"And maybe more importantly than that, we actually care about building AGI in a good way." "Other companies care more about this as an instrumental goal to some other mission. But this is our top thing, and always will be." "Missionaries will beat mercenaries."
Translation from corporate-speak: "I am high as a kite." (All companies building AGI claim to be doing it in a good way.)
Calling these statements "slamming" (a specific word I see with curious frequency) is so riling to me because they are so impotent but are described with such violent and decisive language.
Often it's a politician, usually liberal, and their statement is such an ineffectual waste of time, and outwardly it appears wasting time is most of what they do. I consider myself slightly left of center, so seeing "my group" dither and waste time rather than organize and do real work frustrates me greatly. Especially so since we are provided with such contrast from right of center where there is so much decisive action happening at every moment.
I know it's to feed ranking algorithms, which causes me even more irritation. Watching the brain rot get worse in real time...
This is a deliberate obfuscation pattern. If the model is ever consistently useful at a high-risk task (e.g., legal advice, medical interpretation, financial strategy), it triggers legal, regulatory, and reputational red flags.
a. Utility → Responsibility
If a system is predictably effective, users will reasonably rely on it.
And reliance implies accountability. Courts, regulators, and the public treat consistent output as an implied service, not just a stochastic parrot.
This is where AI providers get scared: being too good makes you an unlicensed practitioner or liable agent.
b. Avoid “Known Use Cases”
Some companies will actively scrub capabilities once they’re discovered to work “too well.”
For instance:
A model that reliably interprets radiology scans might have that capability turned off.
A model that can write compelling legal motions will start refusing prompts that look too paralegal-ish, or will insert nonsense case-law citations.
I think we see this a lot from ChatGPT. It's constantly getting worse in real-world use while excelling at benchmarks. They're likely, and probably forced, to cheat on benchmarks by using "leaked" data.
It's always challenging to judge based entirely on public perceptions, but at some point the public evidence adds up: the board firing, maybe getting fired from YC (disputed), people leaving to start Anthropic because of him, people stating they don't want him in charge of AGI, all the other execs leaving, his lying to Congress, his lying to the board. His general affect just seems off - not in an aspie way, but in some dishonest way. Yeah, it's subjective, but it's a point, and it's different from Zuckerberg, Musk, etc., who come across as earnest. Even PG said that if Sam were dropped on an island of cannibals, you'd come back and find him king.
I'm rooting for basically any of the other (American) players in the game to win.
At least Zuck is paying something close to the value these people might generate, instead of having them sign hostile agreements to claw back their equity and then feigning ignorance. If NBA all-stars get $100M+ contracts, it's not crazy for a John Carmack type to command the same or more - the hard part is identifying the talent, not justifying the value created by the leverage of the correct talent (which is huge).