
Around 50% say 'fixing' AI outputs takes as long as doing the task manually

By Anjum Khan

Artificial intelligence may be deeply embedded in daily workflows, but most organisations are far from trusting it to operate on its own. 

A new study from Connext Global suggests that AI’s reliability still hinges heavily on human oversight, and that dependency may only grow.

The Connext Global 2026 AI Oversight Survey, conducted among 1,000 U.S. workers who use AI at work, paints a nuanced picture: while adoption is accelerating, autonomy remains limited and the human “safety net” is becoming standard operating practice.

Reliability tied to human involvement

The report draws a sharp distinction between AI adoption and AI autonomy. Only 17% of respondents said workplace AI is reliable enough to run independently, while 70% defined reliability as a hybrid model, either AI with light review (35%) or AI with dedicated oversight (35%).

This framing effectively turns reliability into a workflow question rather than a technology feature. If dependable AI requires humans in the loop, organisations must design clear guardrails around review, ownership and escalation.

Notably, expectations for oversight are rising. Nearly two-thirds of respondents (64%) expect the need for human review to increase, including 26% who foresee a significant rise.

‘Set and forget’ remains rare

In day-to-day operations, most employees report that AI requires regular supervision.

The findings suggest AI currently functions more as a managed assistant than a fully autonomous system.

The hidden ‘after-work’ layer

For many teams, the work begins after AI produces output: only 4% of respondents said they rarely perform follow-up work, meaning the vast majority routinely review, edit or verify what AI delivers before it is used.

This effectively converts AI workflows into a two-step process, with quality assurance built in. Productivity gains, the report notes, depend heavily on how efficiently organisations manage this follow-up layer.

Output quality remains a constraint

The need for review is driven largely by inconsistent output quality. Only 37% of users said AI output is correct without fixes most of the time, while 63% reported it is right only sometimes or less. When corrections are required, the time advantage often disappears.

In total, 57% report that time savings can vanish once corrections are needed, directly reshaping AI’s everyday ROI.

Context gaps drive most failures

When asked where AI breaks down, respondents pointed primarily to missing context.

These shortcomings are not merely technical — they carry operational risk, especially when AI-generated content reaches customers.

Customer impact already visible

The study indicates that AI missteps are already affecting external outcomes.

The data highlights a growing tension: AI adoption is expanding rapidly, but trust continues to be shaped by real-world performance.

Governance emerges as the differentiator

The report asserts that these insights do not undermine AI’s value but rather redefine what successful deployment looks like.

“AI can be a powerful accelerator, but this research shows most teams are still doing the hard part: making output accurate, complete and ready for real-world use,” said Tim Mobley, President and CEO of Connext Global.

The report concludes that organisations that succeed with AI will focus less on tool deployment and more on repeatable oversight, validation workflows and escalation paths.

As AI moves from pilot programmes into everyday operations, one thing is clear: in today’s workplace, reliable AI is not autonomous; it is supervised.