News: 60% of HR consult ChatGPT for layoff decisions: Report

Strategic HR

A survey reveals that managers now use AI tools like ChatGPT to make critical HR decisions—including layoffs, promotions, and raises—raising major concerns around ethics, mental health, and accountability in the workplace.

Artificial intelligence is rapidly reshaping workplaces around the world—and not always for the better. According to a recent survey by ResumeBuilder.com, a troubling trend has emerged: a growing number of managers are using AI tools like ChatGPT to help make critical decisions about their employees’ careers.

The survey, which polled 1,342 managers, found that a startling 60% had consulted a large language model (LLM) such as ChatGPT for guidance on layoffs. Even more worrying, nearly one in five admitted to letting the AI tool make the final decision, with no human intervention at all.

This development signals a dramatic shift in how human resources (HR) departments operate, where machine learning models are no longer just aiding administrative tasks but are now directly influencing people’s livelihoods.

While AI has long been used to filter résumés and assess performance data, the survey shows a far deeper integration:

  • 78% of managers used an AI chatbot to decide on employee raises,

  • 77% consulted it for promotions,

  • and 66% relied on it for layoffs.

This leap into AI-led HR decisions raises serious ethical concerns. Decisions that affect someone's career or financial stability are now being outsourced to machines that do not truly understand human emotion, context, or consequence.

The issue is compounded by what AI researchers call the "LLM sycophancy problem": the tendency of large language models to mirror and reinforce a user's own beliefs, sometimes producing biased or unbalanced responses that simply validate a manager's existing views.

This problem has been especially observed in OpenAI’s ChatGPT, which remains the most widely used tool among managers surveyed. Over half of them used ChatGPT, while others favoured Microsoft’s Copilot and Google’s Gemini.

Though these tools are highly advanced, they are not immune to flaws. They lack genuine emotional intelligence, cannot understand workplace culture, and have no moral compass. Letting them guide—or even make—decisions about someone’s employment status is, according to critics, both risky and irresponsible.

Even more alarming are the mental health consequences tied to over-reliance on these tools. Frequent, heavy use of LLMs has reportedly been linked to a growing psychological phenomenon termed "ChatGPT psychosis."

This condition, still under study, describes a delusional detachment from reality caused by heavy dependence on AI tools for decision-making. In severe cases, it has reportedly contributed to job losses, divorces, homelessness, and even psychiatric hospitalisations.

Given that ChatGPT and similar tools have been in widespread use for less than three years, these early indicators raise red flags about the long-term consequences of embedding AI so deeply into managerial and HR functions.

While proponents argue that AI-assisted decisions can be faster, more data-driven, and less emotionally biased, the lack of transparency in how these tools generate responses, and the ease with which they can be manipulated, has sparked concern among ethicists and labour advocates.

ResumeBuilder.com's findings show that many managers now trust these tools not just for support, but for leadership. With such a high level of influence, the question arises: Are humans still in charge, or are they becoming rubber stamps for machine decisions?


