And sure enough, my reading of it left the impression the OAI conditions were basically "DoW won't do anything which violates the rules DoW sets for itself."
It doesn't match.
Would buy their stock and sell OpenAI, maybe, if it were public. Maybe I'd have bought it instead of MSFT and AMZN.
In retrospect this quote comes across as way more foreboding given what we've learned about the scale of his ambitions and his willingness to lie and bend reality to gain power.
Dario on the other hand seems to have an integrity that's particularly rare in this era. I hope he remains strong in the face of the regime.
He doesn't seem to care if the DoW uses his AI for international spying.
That's one more reason why Europe needs sovereign tech.
I've now moved to Claude and it's much better, actually. If, like me, you hate their fonts (Anthropic Sans), select System fonts in the Claude preferences, and then you can use this snippet in Safari's Settings -> Advanced -> Stylesheet to make everything use your default system font:
[data-theme=claude] * { font-family: system-ui, sans-serif !important; }
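If you want chat text in your system font but code blocks kept monospaced, a slightly longer variant works. This is a sketch under the same assumption as the snippet above, namely that Claude's web app sets `data-theme="claude"` on a root element, which may change between releases:

```css
/* Assumption: Claude's markup still carries data-theme="claude"; adjust if it changes. */
[data-theme=claude] :not(pre):not(code) {
  font-family: system-ui, sans-serif !important;
}

/* Keep code blocks in a monospaced font. */
[data-theme=claude] pre,
[data-theme=claude] code {
  font-family: ui-monospace, monospace !important;
}
```

Because Safari applies the user stylesheet to every site, the `[data-theme=claude]` prefix is what keeps the override from leaking onto other pages.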
Posted here: https://news.ycombinator.com/item?id=47195085
They are not the exception, and are just as bloodlessly, shamelessly publicity hungry as any other tech co, if not more so. No surprise based on their conduct up until this fake event.
I encourage you to do the same.
Claude Desktop is better anyway -- and, as we have seen, Anthropic is a more ethical company.
It's difficult to get someone to understand something when their paycheck depends on their not understanding.
Does the administration really believe these AIs are like digital humans?
He has my respect for that.
Those who know better please correct me. My current understanding of Palantir (and other surveillance tech companies like Peregrine) is:
1. They facilitate the sale of data to law enforcement, enabling the government to circumvent Fourth Amendment protections.
2. They fuse cross-agency government data through Foundry into unified profiles, which the government can use to surveil and pressure citizens without probable cause or a warrant.
ICE also uses a Palantir tool called ELITE to build deportation target lists.
EDIT: Downvoting my comment without any proper rebuttal or clarification is pretty silly.
Neither knows how to solve the alignment problem, while market pressures push them to race toward capabilities (long-horizon tasks, continual learning) that will have disastrous consequences.
I've long thought that OpenAI was a corrupt bunch.
Except for embeddings (which I plan to migrate soon), I have closed my OpenAI accounts. I don't like them.
https://www.reddit.com/r/Anthropic/comments/1rl1ula/dario_tr...
HypocrAIsy...
Of course, a company should have freedom to choose not to do business with the government. I just think that automatically assuming the worst intention of the government is not as productive as setting up good enough legal framework to limit government's power.
In a way, I admire Dario’s stance and having the backbone to stand up to a government that is so happy to punish, legally or illegally, those that disagree with them. I certainly wouldn’t have the bravery (or stupidity) in his position — which frankly makes me happy that he’s running Anthropic and not someone like me…
Maybe it’s not much, and they probably won’t care, but taking no action here is the same as being complicit.
The dead internet is alive and well.
~93 employees signed the notdivided.org petition. Some OAI employees could be reading this comment right now.
Let's be real, OpenAI backstabbed Anthropic. Even Dario has essentially just said it now.
(Shameless plug?) I created an Ask HN about it: "Ask HN: What will OpenAI employees do now who have signed the notdivided.org petition" [0], and not a single person from OAI responded when I just wanted to discuss. That's fine, I don't mind, but please don't mind me either when I re-raise this topic.
From a comment by tedsanders (an OpenAI employee) in the Hacker News thread about OAI [please don't harass anybody]:
> I'm an OpenAI employee and I'll go out on a limb with a public comment. I agree AI shouldn't be used for mass surveillance or autonomous weapons. I also think Anthropic has been treated terribly and has acted admirably. My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons, and that OpenAI is asking for the same terms for other AI companies (so that we can continue competing on the basis of differing services and not differing scruples). Given this understanding, I don't see why I should quit. If it turns out that the deal is being misdescribed or that it won't be enforced, I can see why I should quit, but so far I haven't seen any evidence that's the case.
Ted, if you are reading this, I truly felt like you were right. I was still skeptical, because part of me felt it didn't make sense, and well, it didn't. But I had trusted you, and I thought you had far greater insight than the rest of us. Now I am not sure...
Sir, I have no ill will towards you, but I just want to know: you have gone silent after that comment and one other about GPT 5.3 Instant, as far as I can see. You did say you would go out on a limb with a public comment, so please don't mind me if I ask questions in public about it.
The question is: what now? Do you see now why you should quit?
That being said, I still respect you, Ted, for at least trying to say it in a public community; you had no reason to, but you took the risk. I genuinely hope you realize this question comes from a place of concern. OpenAI employees like you were also deceived by OpenAI and Sam Altman, in a way even more so than the rest of us. You had no monetary reason, I suppose, to go ahead and say it, but you did, based on your understanding at the time. I respect that, because it shows me that maybe, just maybe, OAI employees aren't driven only by money, as people like to point out.
If this is what an OAI employee is saying, weren't they deceived too? Weren't they humiliated in public by being proven wrong, losing credibility and trust within the community?
The comments just turn to "well, money talks." I agree, but does money speak so loudly that you cannot hear your peers and your own community?
I still hold the fringe belief that OAI employees have some say in all of this. 98 employees (the number who signed notdivided.org) leaving would have a thousandfold greater impact than 98 people not using OAI. You have power, and with it comes responsibility.
I just want a discussion with OpenAI employees in general, especially those who signed NotDivided.org or who are part of the Hacker News community like Ted. What do YOU make of this whole situation?
A lot of this situation, if historians ever write about it, would read closer to "I was just following orders" than not. Sadly, this is not hyperbole now, because what we are talking about is the creation of autonomous killing machines that can kill anyone without a human in the loop.
People from the future are also going to ask us, the general public, why we didn't hold the people working on this accountable, just as we ask about the past.
Once again, I still mean to bring no hate towards anyone. Make peace not war. I just want to think that the world would be a better place for my future children and generation and I would like to hope that this comment can be meaningful towards it.
Have as nice a day as one can in a situation like this. A lot of what I say or do here is the same thing I asked of the people of the past when reading history in class: why didn't you do X or Y? Why didn't the public say anything? Why was it silent? But we are going to be history too, and someone is going to ask us why we were silent. I want my answer to be "I tried" rather than "I don't know." I wanted to learn something from history.
Sincerely, we (the public) want a discussion with OpenAI employees about this. Please don't be silent, as silence will be interpreted by future generations as agreement. Please speak. Tell us what you are all doing.
A lot of the time it feels like I am shouting into the void on these matters, though, as these messages just don't reach the right people. That feeling sucks, because at some point I am going to get tired of shouting into the void too.
If anyone has contacts with OAI employees, please ask them these questions and share the responses if possible. I just want some answers, that's all.
[0]: Ask HN: What will OpenAI employees do now who have signed notdividedorg petition: https://news.ycombinator.com/item?id=47231498
I want to be very clear on the messaging that is coming from OpenAI, and the mendacious nature of it. This is an example of who they really are, and I want to make sure everything [sic] sees it for what it is. Although there is a lot we don't know about the contract they signed with DoW [shorthand for the Department of Defense] (and that maybe they don't even know as well; it could be highly unclear), we do know the following:
Sam [Altman]'s description and the DoW's description give the strong impression (although we would have to see the actual contract to be certain) that their contract works like this: the model is made available without any legal restrictions ("all lawful use"), but there is a "safety layer", which I think amounts to model refusals, that prevents the model from completing certain tasks or engaging in certain applications.
"Safety layer" could also mean something that partners such as Palantir [Anthropic's business partner for serving U.S. agency customers] tried to offer us during these negotiations: on their end they offered us some kind of classifier, machine learning system, or software layer that claims to allow some applications and not others. There is also some suggestion of OpenAI employees ("FDEs" [shorthand for forward deployed engineers]) looking over the usage of the model to prevent bad applications.
Our general sense is that these kinds of approaches, while they don't have zero efficacy, are, in the context of military applications, maybe 20% real and 80% safety theater. The basic issue is that whether a model is conducting applications like mass surveillance or fully autonomous weapons depends substantially on wider context: a model doesn't "know" if there's a human in the loop in the broad situation it is in (for autonomous weapons), and doesn't know the provenance of the data it is analyzing (so it doesn't know if this is US domestic data vs. foreign, doesn't know if it's enterprise data given by customers with consent or data bought in sketchier ways, etc.).
We also know (those in safeguards know painfully well) that refusals aren't reliable and jailbreaks are common, often as easy as just misinforming the model about the data it is analyzing. An important distinction that makes this much harder than the usual safeguards problem is that while it's relatively easy to, for example, determine from inputs and outputs whether a model is being used to conduct cyberattacks, it's very hard to determine the nature and context of those cyberattacks, which is the kind of distinction needed here. Depending on the details, this task can be difficult or impossible.
The kind of "safety layer" stuff that Palantir offered us (and presumably offered OpenAI) is even worse: our sense was that it was almost entirely safety theater, and that Palantir assumed that our problem was "you have some unhappy employees, you need to offer them something that placates them or makes what is happening invisible to them, and that's the service we provide”.
Finally, the idea of having Anthropic/OpenAI employees monitor the deployments is something that came up in discussion within Anthropic a few months ago, when we were expanding our classified AUP [acceptable use policy] of our own accord. We were very clear that this is possible only in a small fraction of cases, that we will do it as much as we can, but that it's not a safeguard people should rely on and isn't easy to do in the classified world. We do, by the way, try to do this as much as possible; there's no difference between our approach and OpenAI's approach here.
So overall what I'm saying here is that the approaches OAI [shorthand for OpenAI] is taking mostly do not work: the main reason OAI accepted them and we did not is that they cared about placating employees, and we actually cared about preventing abuses. They don't have zero efficacy, and we're doing many of them as well, but they are nowhere near sufficient for purpose. It is simultaneously the case that the DoW did not treat OpenAI and us the same here.
We actually attempted to include some of the same safeguards as OAI in our contract, in addition to the AUP, which we considered the more important thing, and the DoW rejected them. We have evidence of this in the email chain of the contract negotiations (I'm writing this with a lot to do, but I might get someone to follow up with the actual language). Thus, it is false that "OpenAI's terms were offered to us and we rejected them", at the same time that it is also false that OpenAI's terms meaningfully protect against domestic mass surveillance and fully autonomous weapons.
Finally, there is some suggestion in Sam/OpenAI's language that the red lines we are talking about, fully autonomous weapons and domestic mass surveillance, are already illegal, and so an AUP about them is unnecessary. This mirrors, and seems coordinated with, the DoW's messaging. It is, however, completely false. As we explained in our statement yesterday, the DoW does have domestic surveillance authorities that are not of great concern in a pre-AI world but take on a different meaning in a post-AI world.
For example, it is legal for DoW to buy a bunch of private data on US citizens from vendors who have obtained that data in some legal way (often involving hidden consents to sell to third parties) and then analyze it at scale with AI to build profiles of citizens, their loyalties, movement patterns in physical space (the data they can get includes GPS data, etc), and much more.
Notably, near the end of the negotiation the DoW offered to accept our current terms if we deleted a specific phrase about "analysis of bulk acquired data", which was the single line in the contract that exactly matched this scenario we were most worried about. We found that very suspicious. On autonomous weapons, the DoW claims that "human in the loop is the law", but they are incorrect. It is currently Pentagon policy (set during the Biden admin[istration]) that a human has to be in the loop of firing a weapon. But that policy can be changed unilaterally by Pete Hegseth, which is exactly what we are worried about. So it is not, for all intents and purposes, a real constraint.
A lot of OpenAI and DoW messaging just straight up lies about these issues or tries to confuse them.
I think these facts suggest a pattern of behavior that I've seen often from Sam Altman, and that I want to make sure people are equipped to recognize:
He started out this morning by saying he shares Anthropic's red lines, in order to appear to support us, get some of the credit, and not be attacked when they take over the contract. He also presented himself as someone who wants to "set the same contract for everyone in the industry"; that is, he's presenting himself as a peacemaker and dealmaker.
Behind the scenes, he's working with the DoW to sign a contract with them, to replace us the instant we are designated a supply chain risk. But he has to do this in a way that doesn't make it seem like he gave up on the red lines and sold out when we wouldn't. He is able to superficially appear to do this because (1) he can sign up for all the safety theater that Anthropic rejected, and that the DoW and partners are willing to collude in presenting as compelling to his employees, and (2) the DoW is also willing to accept some terms from him that they were not willing to accept from us. Both of these things make it possible for OAI to get a deal when we could not.
The real reasons the DoW and the Trump admin do not like us are that we haven't donated to Trump (while OpenAI/Greg [Brockman, OpenAI's president] have donated a lot), we haven't given dictator-style praise to Trump (while Sam has), we have supported AI regulation, which is against their agenda, we've told the truth about a number of AI policy issues (like job displacement), and we've actually held our red lines with integrity rather than colluding with them to produce "safety theater" for the benefit of employees (which, I absolutely swear to you, is what literally everyone at the DoW, Palantir, our political consultants, etc., assumed was the problem we were trying to solve).
Sam is now (with the help of DoW) trying to spin this as we were unreasonable, we didn't engage in a good way, we were less flexible, etc. I want people to recognize this as the gaslighting it is.
Vague justifications like "person X was hard to work with" are often used to hide real reasons that look really bad, like the reasons I gave above about political donations, political loyalty, and safety theater. It's important that everyone understand this and push back on this narrative at least in private, when talking to OpenAI employees.
Thus, Sam is trying to undermine our position while appearing to support it. I want people to be really clear on this: he is trying to make it more possible for the admin to punish us by undercutting our public support. Finally, I suspect he is even egging them on, though I have no direct evidence for this last thing.
I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI's deal with the DoW as sketchy or suspicious, and see us as the heroes (we're #2 in the App Store now!). [Anthropic's Claude chatbot later rose to no. 1 on one of Apple's App Store download rankings.] It is working on some Twitter morons, which doesn't matter, but my main worry is how to make sure it doesn't work on OpenAI employees.
Due to selection effects, they're sort of a gullible bunch, but it seems important to push back on these narratives which Sam is peddling to his employees.
Not the first time, not the last time, add it to the list of shit he's done that should put him in a little cell for the rest of his life.
Anthropic might not sign up with DoD but they definitely still live in a glass house.
Also, it's extremely evident that we live in a post-truth world. Accusations of lying don't have any teeth anymore, especially under America's post-law government.
Source: https://www.ft.com/content/97bda2ef-fc06-40b3-a867-f61a711b1...