Also, I remember reading this guy has close ties to Anthropic. I also find it suspicious how he came to prominence out of nowhere, as if Big Tech and the establishment are propping up podcasts as controlled narrative/opposition. I don't buy any of it.
> The whole background of this AI conversation is that we’re in a race with China, and we have to win. But what is the reason we want America to win the AI race? It’s because we want to make sure free open societies can defend themselves. We don’t want the winner of the AI race to be a government which operates on the principle that there is no such thing as a truly private company or a private citizen.
In the US currently, there are private citizens, and there are 'not-the-1%' citizens, for whom a Kavanaugh stop is legal, whose voter information may be (or may have already been) seized by the DoJ or FBI, who may be tracked by out-of-state or federal agents via ALPRs with no warrant, for any reason, and whose biometrics may be added to a database of potential domestic terrorists for attending a legal protest.
Or maybe your tax money will just be used to blow up unidentified boaters or bomb girls' schools and homes, and you'll get no say in the matter, because the elected body that exists to issue (or withhold) a declaration of war on your behalf has abdicated that power to a cabinet of unelected white nationalists.
But go off about how we're such a better country that believes in freedom and goodness.
The part of the Pentagon that did this is, to put it politely, not the part that's good at planning.
Who is learning this for the first time only now? Even just restricting ourselves to the current administration, look at how many times Trump has directed punitive actions against private entities! Look at his actions against law firms like Perkins Coie or Covington & Burling. This is not something that just arose out of nowhere with Anthropic.
I mean... isn't that pretty much the way the current administration behaves in general? If the answer to this question is "yes", and the US executive does not in fact share the author's values about a free and open society, then the rest of the article is kinda moot (except the point that we should be talking about these things now, and encouraging Congress to act).
As for whether code written with Claude Code should be so considered - if it’s just code that is subject to human review, I would argue that this use shouldn’t be a supply chain risk. But with Claude Code PR Review and similar products, the chance that an AI product (not limiting to Anthropic here) could own a load-bearing part of the lifecycle of a critical piece of code becomes much larger, and deserves scrutiny.
I speculate we'll discover there are very few unambiguously ethical uses of AI, much less for military applications. Them's the breaks.
The problem with democracy is that it can easily become a revolving door wherein capital holders can choose which candidates are allowed to approach the door.
I think democracy works well when the monetary system is constrained, for example by gold or another scarce asset, because that creates a better separation between money and state: big companies would then have less incentive to corrupt the revolving door for financial advantage.
In a monetary system where the government can create an unlimited amount of money, the incentive to corrupt the government and political process keeps increasing.
I think democracy with a soft fiat money system is probably the most dangerous system, because any moral objection can be filtered out of the running, as we saw happen with Anthropic and the Department of War. Clearly it's the weapons manufacturers running that department behind the scenes; they have a huge financial interest in doing so. The Department of War is the bread and butter of weapons manufacturers and defense contractors.
I haven't seen this much hype and hopium since the dot-com boom. The whole OpenAI -> Anthropic saga just reeks of the same evolution as Viant/Scient.
Look, we have an amazing tool, but it has some fundamental shortcomings that the industry seems to want to bury its head in the sand about. The moment the hype dies and we get to engineering and practical implementations, a lot is going to change. Does it have the potential to displace a lot of our current industry? Why yes, it does. Agents can force the web open (have you ever tried to get all your Amazon purchase history?), can kill dark patterns ("go cancel this service for me"), and can crush wedge services (how many things are shimmed into Salesforce that should really be standalone apps?). And the valuable engagement is going to be by PEOPLE; good UI and good user experiences are going to be what sells (this will hit internet advertising hard for middlemen like Google and Facebook).
The lawfare part of it is that to coerce an individual or a company, governments are willing to abuse their power. The Biden administration did it when pressuring social media companies to censor content. The Trump administration is doing it to a much greater extent with things like ordering every government agency to stop using Anthropic and by labeling them a supply chain risk.
The ideological part of it is when Defense Sec Hegseth, Trump, and AI Czar / PayPal Mafia member David Sacks repeatedly attack Anthropic as “woke”. It is clear they’re undermining this company from their government positions based on Anthropic’s speech (a First Amendment violation), and that this is part of why they attacked Anthropic in such a public way.
And the corruption part of it is OpenAI’s leaders being big supporters of the MAGA movement and the Trump administration. Greg Brockman, president of OpenAI, is the biggest donor ever to the MAGA PAC. Why did Hegseth grant a contract to OpenAI after banning Anthropic, even though OpenAI has the same red lines in its agreement (or so Sam Altman claimed)? It’s because of the corruption: give Trump and his family/friends money, and you’ll get something back.
The fight against these types of government abuse has ALWAYS been happening. But the abuse is much more in the open today, and much larger in scale than ever before. Scandals like Watergate would not even make the news today. That is what the public should be waking up to and focusing on. We need to rethink our political system significantly and add a lot more protections against the kind of things the Trump 2.0 administration has done.
> Our future civilization will run on AI labor. And as much as the government’s actions here piss me off, in a way I’m glad this episode happened - because it gives us the opportunity to think through some extremely important questions about who this future workforce will be accountable and aligned to, and who gets to determine that.
I stopped reading there because this is a pointless exercise.[1]
This isn’t a roundtable. You are not even at the table. There is no “thankfully, there’s time to discuss this...”; you are just out.
The Machine doesn’t need your labor? You are out. No norms. No discussions.
You either try to forcefully take control of the situation or you see yourself get discarded.
(I am here just assuming all the AI Maximalist (doom maximalist in this context, Trump and all) premises for the sake of the argument.)
[1] I did read the last paragraphs, and the tenor is the same. “We must make laws and norms through our political system”... just like with nuclear bombs, of all things.
But on the substance they're equally vapid. Dwarkesh's interview with Richard Sutton was especially cringe.