Follow-up to this post
Anthropic made its choice in negotiations last week: it will not yield its AI product, Claude, to every lawful decision of the U.S. Dept. of War, even though that is what the terms of its contract require. Anthropic wants the authority to limit the military's use of Claude to situations it approves of.
No nation would be likely to submit to that, and the U.S. did not. It immediately stopped using Anthropic's technology, which now can't be used by any government agency or Pentagon contractor.
OpenAI was waiting in the wings and will now supply the U.S. military with artificial intelligence. But the Pentagon added one interesting exception to this new contract (one Anthropic had wanted too): "the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals."
I asked Grok whether America has a law against domestic surveillance, and the answer was "No." That may be why the U.S. was willing to specify this condition in the new AI contract with OpenAI.
from CNBC