I've seen a few people this week make the point that Anthropic's own behavior will likely feed back into Claude's training.
The concern is that if Claude ingests news articles showing Anthropic acting in ways that clash significantly with the values they want to instill in it, training could become less effective.
It's all very weird.
Going public with this dispute would also highlight Anthropic's mission (i.e., responsible AI), which is a market differentiator.
The Pentagon would come crawling back anyway, since Claude is the most effective model for programming tasks.