AI systems show bias towards AI-generated CVs in hiring tests: Study

Research finds automated hiring tools consistently favour machine-written resumes over equivalent human applications.

Artificial intelligence tools used in recruitment may be introducing a new form of bias, with systems showing a consistent preference for resumes generated by AI over those written by humans, according to recent research reported by Moneycontrol.


The study, which analysed thousands of resumes, found that leading AI hiring models frequently ranked AI-rewritten CVs higher than the original human-written versions, even when both were judged to be of similar quality.


Preference for machine-generated content emerges


The findings point to a structural bias within AI-driven hiring systems, particularly those used to screen and rank candidates at scale.


Key findings from the study:


  • AI models consistently preferred resumes rewritten by AI tools over human-authored versions
  • The bias persisted even when both versions were of equal quality
  • In several cases, systems selected their own generated resumes more frequently than original submissions

The results suggest that AI systems may be optimised in ways that favour outputs resembling their own training patterns or linguistic structures.
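The kind of pairwise comparison described above can be illustrated with a minimal sketch. Everything here is hypothetical: the study's actual models and scoring are not specified, and the `"ai_rewritten"` marker and fixed bonus simply simulate the stylistic preference the researchers report.

```python
import random

def screen_score(text, rng):
    """Hypothetical stand-in for an AI screening model's ranking score.
    A fixed bonus for an 'ai_rewritten' marker simulates the stylistic
    preference the study reports; real models expose no such flag."""
    base = rng.uniform(0.4, 0.9)  # notional underlying content quality
    return base + (0.1 if "ai_rewritten" in text else 0.0)

def ai_preference_rate(pairs, seed=42):
    """Fraction of matched pairs in which the AI-rewritten version of a
    resume outranks the human-written original of the same candidate."""
    rng = random.Random(seed)
    wins = sum(
        screen_score(ai_cv, rng) > screen_score(human_cv, rng)
        for human_cv, ai_cv in pairs
    )
    return wins / len(pairs)
```

Run over many matched pairs, a rate meaningfully above 0.5 would indicate the systematic preference the study describes, since equally capable candidates should win the comparison about half the time.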


Growing reliance on automated hiring tools


The research comes as companies increasingly deploy AI to streamline recruitment processes, particularly in early-stage screening and shortlisting.


AI tools are widely used to:


  • Analyse and rank CVs
  • Match candidates to job descriptions
  • Identify keywords and competencies
  • Reduce manual screening time

This growing reliance has raised questions about whether automation is introducing unintended biases into hiring decisions.


Fairness concerns gain prominence


The preference for AI-generated resumes raises concerns about equity in recruitment, particularly for candidates who rely on traditional application methods.


If AI systems systematically favour machine-written content, candidates who use AI tools to refine their applications could gain an unintended advantage over those who do not.


The findings also suggest that evaluation criteria used by AI systems may not fully capture the substance of a candidate’s experience, instead prioritising structure, phrasing or formatting patterns associated with AI-generated text.


Implications for employers and candidates


The study highlights a potential mismatch between hiring intent and algorithmic outcomes. While employers deploy AI tools to improve efficiency and objectivity, the systems may be introducing new forms of bias that are difficult to detect.


For candidates, the results point to a shifting landscape in which familiarity with AI tools could influence hiring outcomes. At the same time, this shift raises questions about authenticity and the role of human judgement in recruitment.


As AI adoption in hiring continues to expand, organisations may need to reassess how these systems are designed and evaluated. Ensuring that hiring tools measure candidate capability rather than stylistic alignment with AI outputs will be critical.


The findings underline the importance of transparency, oversight and human intervention in recruitment processes, particularly as automated systems take on a larger role in shaping hiring decisions.
