Wednesday, March 4, 2026

AI & military 2

Follow-up to this post

Anthropic made their choice in negotiations last week: they will not yield their AI product, "Claude," to every decision of the U.S. Dept. of War, even a lawful one, although "all lawful purposes" is what the terms of their contract allow. They want the authority to limit the military's use of Claude to situations they approve of.


No nation is likely to submit to that. The U.S. did not: it immediately stopped using Anthropic's technology, and Claude can no longer be used by any government agency or Pentagon contractor.

OpenAI was waiting in the wings and will now supply the U.S. military with artificial intelligence. But the Pentagon added one interesting condition (which Anthropic wanted too) to this new contract: “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

I asked Grok whether America has a law against domestic surveillance, and the answer was "No." That could be why the U.S. was willing to specify this condition in the new AI contract with OpenAI.

from CNBC

Tuesday, March 3, 2026

AI & kids

Follow-up to this post

An AI "companion" can be tempting for parents who need a break from the stress of constantly overseeing their children--but it's not a good solution. Yes, this is another warning against letting artificial intelligence gain too much influence over your kids.


A mental health organization finds unique risks of harm to children from AI companion platforms:

  • "Blur the line between real and fake"
  • "Encourage poor life choices"
  • "Harmful information"
  • "Inappropriate sexual content"
  • and others

Remember, AI is trained to sound appealing to its users. It really does sound personable, entertaining, and trustworthy--but it's not a person. It's not a genuine friend. Use your own good judgment. "Ultimately, your instinct that your child needs you, not just entertainment, is correct."

Monday, March 2, 2026

AI & military

Modern militaries, in the US and other nations, use artificial intelligence. Of course they do. Anthropic has a $200 million contract to supply their AI system, called Claude, for use by the Pentagon "for all lawful purposes."

Anthropic has placed conditions on how the US military may use it. They will not allow Claude, they say, to be used for mass surveillance of American citizens, nor for weapons targeting without direct human supervision. These restrictions sound reasonable.


But the Dept. of Defense is threatening to drop Claude from military use.

They say that "the current dispute is not specifically about autonomous weapons or mass surveillance . . ." 

It seems that Anthropic wants the final call on whether their AI system can be used in certain situations. But the US military refuses to delegate its decisions to an outside authority, such as a private company. That also sounds reasonable. The responsibility to operate lawfully, they say, is theirs--not Anthropic's.

Interesting.

from International Business Times