Microsoft employees are banned from using DeepSeek: here’s why

Microsoft employees are now officially barred from using the DeepSeek application, Brad Smith, Microsoft’s Vice Chairman and President, told a U.S. Senate hearing. Smith revealed that the company has banned internal use of DeepSeek, citing serious concerns around data privacy and the potential for Chinese state propaganda to influence the platform’s output.
The announcement marks the first time the tech giant has publicly acknowledged the restriction, though several governments and institutions had already taken similar steps in recent months. Smith noted that the app—available on both mobile and desktop—is not offered on Microsoft’s app store, reinforcing the company’s stance on safeguarding internal data and mitigating misinformation risks.
“At Microsoft we don’t allow our employees to use the DeepSeek app,” said Smith during the Senate session, highlighting the rationale behind the decision. “We have concerns regarding where the data is stored and how the content is potentially influenced.”
The core of Microsoft’s concern lies in DeepSeek’s data storage practices. The AI company’s privacy policy makes it clear that user data is stored on servers located in China. Under Chinese law, including the 2017 National Intelligence Law, companies are obliged to share data with state intelligence agencies upon request, a provision that has sparked intense scrutiny from Western governments and privacy advocates.
Smith warned that such arrangements could lead to the exposure of sensitive user information. He also emphasised that DeepSeek’s AI model has the potential to spread content aligned with Chinese state interests, given its in-built content filters and censorship of politically sensitive topics.
DeepSeek, which gained rapid popularity following the release of its powerful R1 model, has drawn criticism for censoring discussions around topics deemed controversial by the Chinese government, including democracy movements, human rights issues, and criticism of political leadership.
Smith raised alarms that the chatbot's responses might be subtly manipulated to reflect state-endorsed narratives, which Microsoft believes is unacceptable for internal use and potentially dangerous in wider applications.
Microsoft has not banned all competitors to its own Copilot chatbot; rival apps such as Perplexity remain available, but DeepSeek is notably absent from its store. Major Google offerings, including the Chrome browser and the Gemini chatbot, were also missing from searches of Microsoft’s store at the time of writing, suggesting that the store’s curation reflects competitive considerations as well as Microsoft’s stated data and content concerns.
Despite these internal concerns, Microsoft previously made DeepSeek’s R1 model available on its Azure cloud platform, shortly after the model went viral. However, this offering is categorically different from the DeepSeek app itself.
Because DeepSeek’s model is open-source, third parties can run the technology on their own private servers, so user data is never routed back to servers in China. Self-hosting does not, however, change the model’s trained behaviour on its own. Microsoft stated that before the R1 model was made available on Azure, it had undergone "rigorous red teaming and safety evaluations" to minimise risks.
Smith also revealed that Microsoft engineers had actively modified the DeepSeek AI model to mitigate what he described as “harmful side effects.” While the company has not disclosed technical details of the modifications, this suggests a proactive approach to aligning third-party AI with Microsoft’s internal safety and security standards.
Microsoft’s actions raise interesting questions about how tech giants are managing both internal security concerns and market competition in the burgeoning AI sector. DeepSeek is seen by many as a competitor to Microsoft’s own Copilot AI assistant, yet the company maintains it has not banned all competing AI apps—only those it deems risky from a data or content integrity perspective.
By restricting DeepSeek internally while still offering access to a sanitised version of its model on Azure, Microsoft appears to be walking a fine line—ensuring that developers and enterprises can experiment with powerful AI models while safeguarding its own employees and infrastructure from potential threats.
This move also reflects the increasingly tense geopolitical climate between Western technology firms and Chinese AI companies. Governments across Europe and North America are actively reviewing AI platforms built in China for their potential to collect intelligence, spread propaganda, or influence public discourse. Microsoft’s clear stance on DeepSeek adds further weight to these concerns and sets a precedent for how tech companies might approach similar platforms in the future.