Pentagon shuns Anthropic, picks OpenAI models in its classified network


New Delhi, Feb 28: The United States Department of Defense has decided to deploy OpenAI’s artificial intelligence models on its classified network, even as it distances itself from Anthropic over disagreements on AI safety and military use, OpenAI chief Sam Altman said on Saturday.

Altman confirmed the development, saying the company has reached an agreement with the Pentagon to move forward with the deployment.

In a post on X, Altman said OpenAI’s discussions with the Department of Defense showed “deep respect for safety” and a shared goal of achieving the best possible outcome.

Referring to the department as the “Department of War” (DoW), he added that OpenAI remains committed to serving humanity, while acknowledging that the world is “complicated, messy, and sometimes dangerous.”

“Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome,” Altman stated.

Altman said OpenAI continues to prioritise AI safety and the wide distribution of benefits. He stressed that two of the company’s core safety principles are a ban on domestic mass surveillance and ensuring that humans remain responsible for the use of force, including in autonomous weapon systems.

“AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” he added.

According to him, these principles have not been compromised in the deal with the Pentagon. He said the Department of Defense agrees with these principles and reflects them in its laws and policies, and that they are included in the final agreement.

“The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” Altman mentioned.

“We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place,” he stated.

As part of the arrangement, OpenAI will build technical safeguards to ensure its models behave as intended.

The company will also station field engineers to support the models and ensure their safe use. Altman added that the models will be deployed only on secure cloud networks.

The Pentagon’s decision comes amid a public clash with Anthropic, the maker of the Claude AI model.

According to reports, the Defense Department had pushed for full military use of AI tools for all lawful purposes, including in sensitive areas such as weapons development, intelligence gathering and battlefield operations.

Anthropic had reportedly insisted on limits, particularly around fully autonomous weapons and mass surveillance of Americans.

–IANS