A.I. News OpenAI strikes a deal with the Defense Department to deploy its AI models

Brownie2019
Thread author
Mar 9, 2019
Germany
The agreement comes after the US government soured on Anthropic for refusing to remove its AI's safety guardrails.
OpenAI has reached an agreement with the Defense Department to deploy its models in the agency’s network, company chief Sam Altman has revealed on X. In his post, he said two of OpenAI’s most important safety principles are “prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems.” Altman claimed the company put those principles in its agreement with the agency, which he called by the government’s preferred name of Department of War (DoW), and that it had agreed to honor them.

The agency closed the deal with OpenAI shortly after President Donald Trump ordered all government agencies to stop using Claude and any other Anthropic services. If you’ll recall, US Defense Secretary Pete Hegseth previously threatened to label Anthropic a “supply chain risk” if it continued refusing to remove the guardrails on its AI, which prevent the technology from being used for mass surveillance of Americans and in fully autonomous weapons.

It’s unclear why the government agreed to team up with OpenAI if its models have the same guardrails, but Altman said OpenAI is asking the government to offer the same terms to all the AI companies it works with. Jeremy Lewin, the Senior Official Under Secretary for Foreign Assistance, Humanitarian Affairs, and Religious Freedom, said on X that the DoW “references certain existing legal authorities and includes certain mutually agreed upon safety mechanisms” in its contracts. Both OpenAI and xAI, which had also previously signed a deal to deploy Grok in the DoW’s classified systems, agreed to those terms. He said it was the same “compromise that Anthropic was offered, and rejected.”

Anthropic, which started working with the US government in 2024, refused to bow down to Hegseth. In its latest statement, published just hours before Altman announced OpenAI’s agreement, it repeated its stance. “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” Anthropic wrote. “We will challenge any supply chain risk designation in court.”

Altman added in his post on X that OpenAI will build technical safeguards to ensure the company’s models behave as they should, claiming that’s also what the DoW wanted. The company is sending engineers to work with the agency to “ensure [its models’] safety,” and it will only deploy on cloud networks. As The New York Times notes, OpenAI is not yet on Amazon’s cloud, which the government uses. But that could change soon, as the company has also just announced a partnership with Amazon to run its models on Amazon Web Services (AWS) for enterprise customers.
 
This is the way to do things with A.I.: supply the technology, but put guidelines in place, like no chemical weapons or WMDs. That is reasonable and morally and ethically correct.
You're absolutely right, @Zero Knowledge. Setting clear boundaries on such critical issues as weapons of mass destruction is a fundamental and necessary ethical step.

Following that same line of thought, I believe it's also vital not to lose sight of risks that are sometimes less visible but just as deep, such as mass surveillance or autonomous control. Ethics is the key, and in these areas, technology often moves faster than regulation.

Ultimately, trust in AI is built by respecting both the most extreme boundaries and those that affect our daily lives, and that's why I think it's important to keep it in mind. ⚖️ 👁️ 🛡️
 
This is the way to do things with A.I.: supply the technology, but put guidelines in place, like no chemical weapons or WMDs. That is reasonable and morally and ethically correct.
1. Chemical weapons do not need AI
2. WMDs do not need AI

such as mass surveillance or autonomous control.
AI for mass surveillance and autonomous control of military assets has existed long before Anthropic, OpenAI, xAI, and whatever.

Autonomous AI weapons systems existed before Anthropic, OpenAI, xAI and others came along. The U.S. DoW has its own internal AI - and not just a single one.

The DoW does not perform mass surveillance except when authorized to do so. Mass surveillance is the purview of the U.S. Congress as it directs various Executive and non-Executive Agencies to conduct domestic (and international) surveillance. Such surveillance does not require AI.

The key is that U.S. law ties both domestic and foreign surveillance authority to the scope of the threat, not to a fixed percentage of the population or a permanent “mass surveillance switch.”

The Patriot Act was "ended" in 2015, but it was never abolished. It was "deactivated," and it can be resurrected through Congressional reauthorization if required.

The primary agencies performing U.S. domestic surveillance are the FBI, NSA, CIA, DHS, and NCTC.

At the highest level of national survival, the Intelligence Support Activity - a Tier 1 DoW component - can be authorized to conduct whatever domestic and foreign surveillance and intelligence gathering is required, ultimately with absolute immunity through Article VI of the U.S. Constitution, otherwise known as the Supremacy Clause.
 
@bazang Sometimes your lucidity frightens me, because it forces us to ask whether we prefer the comfort of ignorance 🧠👁️
 
In Dutch we say "one man's death is another man's bread" (the contract with Anthropic was killed, and OpenAI stepped in).

The AI discussion has similarities with nuclear weapons: one party applies moral standards by refusing to develop them while others do not. What position are the people with moral standards in when they are confronted with others willing to use this technology?
 
This is the major problem we face. Would you rather use an A.I. like DeepSeek, built on stolen I.P. with zero guardrails, made by a major adversary of the liberal West?

Or would you rather use a home-built and home-produced A.I. with stringent guidelines, guardrails, and moderation that you can regulate with local laws?

Nothing against China and its people, but the choice is clear, I believe.

In Dutch we say "one man's death is another man's bread" (the contract with Anthropic was killed, and OpenAI stepped in).

The AI discussion has similarities with nuclear weapons: one party applies moral standards by refusing to develop them while others do not. What position are the people with moral standards in when they are confronted with others willing to use this technology?
This is also my main point and problem with Anthropic's POV. You can ban or deny the use of your company's technology to sovereign governments and militaries, but IT DOES NOT STOP other countries from using home-grown (and probably 'stolen') A.I. LLM models for military operations or research.
 
Meanwhile, any attempts to disparage the DoW's choice or use of AI don't matter.

DoW has had it for years. It does not advertise the fact. By the time y'all read mainstream media garbage, it's already been a thing for years.

What does consumer downloads of AI have to do with the U.S. DoW?

LOL. Nothing. Absolutely nothing.

Certain people here need to try much harder.
 
Sam Altman backpedals as ChatGPT uninstalls surge 295%, and critics torch Pentagon fiasco — calls deal "opportunistic and sloppy"
On Monday, OpenAI CEO Sam Altman acknowledged that the company had rushed into its recent deal with the U.S. Department of Defense. He further revealed that OpenAI plans to amend parts of the contract, adding new language to reinforce its principles on sensitive issues such as surveillance.

The executive also clarified that the amendments will categorically indicate that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

"There are many things the technology just isn’t ready for, and many areas we don’t yet understand the tradeoffs required for safety,"
Altman indicated.