Wellbeing
AI surveillance at work raising new mental health risks, warns ILO study

A new working paper by the International Labour Organization (ILO) has flagged growing concerns over the impact of artificial intelligence (AI) on workers’ mental well-being, warning that intrusive surveillance, shrinking autonomy, and opaque data practices are creating a new generation of psychosocial risks in workplaces worldwide.
The analysis highlights a paradox at the heart of AI adoption. While these technologies are boosting efficiency and productivity, they are also reshaping how work is managed in ways that may undermine employee health, dignity, and trust.
Surveillance and control intensifying under AI
According to the ILO paper, the rapid expansion of AI-powered monitoring tools, from performance tracking systems to real-time activity logs, has significantly intensified workplace surveillance. Digital platforms and everyday tools can now capture vast amounts of employee data, ranging from communication patterns to time spent in meetings.
While monitoring itself is not new, the scale and intrusiveness enabled by AI are raising red flags. Excessive surveillance, particularly when constant or opaque, can erode trust, reduce creativity, and negatively affect workers’ sense of autonomy.
In some cases, employees may be unable to challenge or even understand decisions made about them by algorithmic systems, further deepening concerns around fairness and dignity at work.
Erosion of job autonomy and decision-making
The report also points to a gradual erosion of job control as AI systems increasingly influence, or even determine, key workplace decisions such as hiring, task allocation, performance evaluation, and promotions.
Low levels of autonomy have long been linked to stress, burnout, and mental health issues. The ILO warns that AI-driven decision-making could amplify these risks by limiting workers’ ability to influence their own tasks and outcomes.
Research cited in the paper suggests that high levels of automation may increase mental workload while diminishing workers’ sense of control and situational awareness.
Regional lens: Patchy regulation, rising adoption
Across Asia-Pacific and the Middle East, AI adoption in HR and workplace management is accelerating, but regulatory responses remain uneven.
In markets like Singapore and Australia, there is growing recognition of psychosocial risks, with guidance emerging on workplace surveillance and mental health.
India, meanwhile, is witnessing rapid enterprise adoption of AI in hiring, productivity tracking, and gig work platforms, but lacks comprehensive legislation addressing AI-specific workplace risks.
In parts of the Gulf, including the United Arab Emirates and Saudi Arabia, digital transformation agendas are accelerating AI integration, yet policy frameworks are still evolving around worker protections.
The ILO warns that such disparities could lead to unequal protections for workers, especially in high-growth digital economies where AI is being deployed at scale without parallel safeguards.
Data overreach and transparency gaps
Another major concern is the unprecedented scale of data collection enabled by AI systems. From wearable devices to algorithmic analytics, employers are now able to gather detailed and continuous data on workers’ behavior and performance.
The ILO cautions that such “datafication” can lead to overexposure of personal information, especially when employees are not fully aware of how their data is being collected or used.
A lack of transparency in AI systems further complicates the issue, making it difficult for workers to understand or contest decisions that affect their employment.
Existing frameworks falling short
Despite the growing risks, the report finds that current occupational safety and health (OSH) frameworks are not fully equipped to address these challenges. In many countries, regulations still focus primarily on physical workplace hazards, with limited attention to mental and social dimensions of work.
Although international labour standards, such as conventions on workplace safety and harassment, provide a foundation for addressing psychosocial risks, they were largely designed for traditional, human-managed work environments.
The ILO notes that AI introduces dynamic and unpredictable risks that may not be adequately captured by existing “technology-neutral” regulations.
What employers and HR leaders should do
As regulation lags, the onus is increasingly shifting to organisations. The ILO paper points to several immediate priorities for employers and HR leaders navigating AI transformation:
1. Build transparency into AI systems - Clearly communicate when and how AI is being used in decisions affecting employees, especially in hiring, performance evaluation, and promotions.
2. Limit intrusive surveillance - Ensure monitoring practices are proportionate, justified, and not excessive. Avoid “always-on” tracking that can heighten stress and reduce trust.
3. Protect employee autonomy - Design AI systems that augment, not replace, human decision-making. Maintain meaningful human oversight and allow employees to influence their work.
4. Strengthen data governance - Be explicit about what data is collected, how it is used, and who has access. Avoid unnecessary or excessive data collection.
5. Integrate mental health into OSH frameworks - Expand workplace safety strategies to include psychosocial risks linked to AI, such as burnout, anxiety, and digital fatigue.
6. Enable new workplace rights - Consider mechanisms such as the right to explanation of AI decisions, human review processes, and even the “right to disconnect” to protect employee well-being.
A turning point for AI-led workplaces
The ILO underscores that AI’s impact on work is not just technological; it is deeply human. As organisations accelerate digital transformation, the challenge will be to ensure that efficiency gains do not come at the expense of employee well-being.
Without coordinated policy action and responsible employer practices, the report warns, AI could redefine work in ways that intensify stress, weaken autonomy, and widen gaps in worker protection, particularly in fast-growing regions where adoption is outpacing regulation.
Call for integrated policy response
With no comprehensive legislation yet addressing AI’s impact on work globally, the ILO is calling for a more integrated regulatory approach. This would involve aligning labour laws, occupational health standards, data protection rules, and anti-discrimination frameworks to better safeguard workers in the digital age.
The paper also underscores the need for emerging workplace rights, including the right to explanation of AI decisions, the right to human review, and stronger protections around data privacy and workplace surveillance.
As AI continues to transform the nature of work, the ILO stresses that protecting workers’ mental health and autonomy must remain central to policy discussions. Without proactive intervention, the organisation warns, the benefits of AI could come at a significant human cost.