OpenAI robotics team member resigns over Pentagon AI agreement

OpenAI robotics engineer Caitlin Kalinowski has stepped down, citing concerns about how the company pursued its AI partnership with the U.S. Department of Defense.

A member of OpenAI’s robotics team has resigned after raising concerns about the company’s newly announced agreement to provide artificial intelligence technology to the U.S. Department of Defense, a move that has intensified debate over the role of commercial AI in national security.

Caitlin Kalinowski, who worked as a member of technical staff focused on robotics and hardware, said she stepped down “on principle” after the company revealed plans to make its AI systems available within secure Pentagon computing environments, NPR reported.

Kalinowski announced the decision in public posts on social media, saying the choice had not been easy. “I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn’t an easy call,” she wrote.

Concerns over AI policy guardrails

Kalinowski said her concerns centred on how the agreement was announced and the absence of clearer policy guardrails governing potential uses of the technology.

“AI has an important role in national security,” she wrote. “But surveillance of Americans without judicial oversight and lethal autonomy without human authorisation are lines that deserved more deliberation than they got,” NPR reported.

She emphasised that her decision was not directed at specific individuals within the company, noting she had “deep respect” for OpenAI chief executive Sam Altman and colleagues across the organisation.

The departure highlights ongoing tensions within the technology industry as companies navigate the ethical implications of supplying advanced AI systems to military and intelligence agencies.

OpenAI defends defence collaboration

OpenAI said its partnership with the Pentagon is designed to enable responsible uses of artificial intelligence within national security frameworks.

An OpenAI spokesperson told NPR the agreement “creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons.”

The company added that it recognises strong opinions exist around the military use of AI and said it would continue discussions with employees, governments and civil society groups.

Growing competition for government AI contracts

Kalinowski’s resignation comes at a time when major technology firms are increasingly competing to supply AI capabilities to the U.S. government.

Federal agencies have recently turned to companies including OpenAI and Google to integrate advanced AI tools into defence and intelligence operations, NPR reported.

The shift has sparked broader industry debate over acceptable uses of artificial intelligence in military contexts.

Anthropic, another major AI developer, has previously raised concerns about allowing AI systems to be used for applications such as domestic mass surveillance or autonomous weapons. Those concerns led to disagreements with defence officials, including U.S. Defense Secretary Pete Hegseth, who has argued that the Pentagon must retain flexibility to deploy commercial AI technologies in lawful operations.

Robotics work at OpenAI

Within OpenAI, Kalinowski’s work focused on helping build the company’s robotics capabilities as it expanded its research into systems that combine artificial intelligence with physical machines and infrastructure.

According to her LinkedIn profile, she played a role in recruiting talent and supporting the company’s efforts to develop “physical AI”, the application of AI technologies to robotics and other real-world systems.

Kalinowski indicated that she intends to remain active in the field.

“I’m taking a little time, but I remain very focused on building responsible physical AI,” she wrote.

Debate over AI and national security continues

Her departure underscores the wider debate unfolding across the technology sector over how artificial intelligence should be used in defence and security contexts.

As governments increasingly seek to integrate advanced AI systems into national security operations, technology companies are facing growing scrutiny from employees, policymakers and civil society groups over ethical boundaries and governance frameworks.

The tensions are likely to intensify as AI capabilities expand and governments accelerate efforts to deploy them across defence, intelligence and surveillance systems.