AI & Emerging Tech
Around 600 Google employees ask CEO Sundar Pichai to stop AI work with US military

Internal letter highlights renewed employee concern over defence use of AI as Google expands government engagements.
Around 600 employees at Google have urged chief executive Sundar Pichai to reject the use of the company’s artificial intelligence systems in classified work with the US military. The appeal, made through an internal letter, underscores growing tension between commercial AI expansion and employee concerns over ethical use.
The letter follows reports that Google is in discussions with the Pentagon to deploy its Gemini AI systems in classified settings, according to Business Insider, which cited earlier reporting by The Information. Employees from divisions including DeepMind and Cloud signed the letter, raising concerns about how such technology could be used.
Employees flag risks of military AI use
In the letter, employees warned that AI systems can centralise power and are prone to errors, arguing that this creates a responsibility to prevent misuse.
“As people working on AI, we know that these systems can centralise power and that they do make mistakes,” employees wrote, according to Business Insider. “We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses.”
The group called on leadership to reject any classified military workloads, stating that such use could happen without sufficient oversight or employee awareness.
Employees also outlined broader concerns around potential applications:
- Use of AI in autonomous weapons systems
- Deployment in mass surveillance programmes
- Limited visibility into how systems are used in classified environments
They argued that preventing such outcomes requires clear boundaries at the company level rather than relying on downstream controls.
A familiar fault line inside Google
The latest pushback reflects a longer history of internal debate at Google over military contracts. In 2018, the company chose not to renew its involvement in Project Maven, a US Department of Defense initiative to apply AI to military operations, following employee protests.
Subsequently, Google introduced a set of AI principles, including commitments not to develop technology for weapons or surveillance. However, Business Insider reported that the company updated those principles last year, removing specific references to weapons and surveillance.
At the same time, Google has continued to expand its engagement with government agencies. The company secured contracts with the Pentagon last year for AI and cloud services and announced in March that it would provide AI agents for non-classified military use.
Employees indicated that further expansion into classified work would represent a significant shift, particularly given the reduced visibility and accountability associated with such projects.
Leadership response remains unclear
Google had not publicly responded to the employee letter at the time of reporting, Business Insider said. A representative for the employees, cited by the publication, confirmed that the company had not formally addressed the concerns.
The letter also warned of reputational and operational risks if the company proceeds with such work. Employees argued that decisions taken at this stage could have long-term consequences for trust in the organisation and its role in shaping the use of AI globally.
“Making the wrong call right now would cause irreparable damage to Google’s reputation, business, and role in the world,” the letter stated.
Balancing commercial growth and ethical boundaries
The situation reflects a broader challenge facing large technology companies as AI capabilities expand. Governments are increasingly seeking partnerships with private firms to deploy advanced systems, while employees and external stakeholders continue to scrutinise how these technologies are used.
For Google, the issue is not limited to policy statements but extends to operational decisions around contracts, governance, and transparency.
The internal debate also highlights a shift in workforce expectations. Employees are not only building AI systems but are increasingly seeking a role in shaping how those systems are deployed.
As AI becomes more deeply integrated into national security and public sector applications, tensions between commercial opportunity and ethical accountability are likely to intensify. For companies such as Google, managing this balance will require not only technological capability but also clear governance and sustained internal alignment.