AI & Emerging Tech

CHROs face healthcare’s biggest AI question: Who will own the outcomes?


Sowmya Santhosh, CHRO, CitiusTech, says responsible AI in healthcare depends on workforce readiness, governance and clear accountability.

Artificial intelligence may be advancing at speed, but in healthcare, the central question is not technical capability. It is accountability.


That is the view of Sowmya Santhosh, CHRO at CitiusTech, who argues that AI adoption in healthcare is fundamentally a workforce and governance issue. In an interview with People Matters, she says the sector’s uniquely high stakes demand a different approach from industries where AI is primarily used to drive efficiency.


“Healthcare is uniquely high-stakes. Every AI output can influence a clinical decision, a patient outcome, or a regulatory audit,” she says. “This raises the bar significantly compared to industries where AI optimizes marketing or automates back-office processes.”


Beyond algorithms


Santhosh is clear that healthcare AI cannot be treated as a pure technology play. “Success therefore depends on advanced algorithms and on people who understand the clinical, regulatory, and ethical context in which those algorithms operate,” she says. “Healthcare AI must be clinically safe, explainable, interoperable, and compliant by design, and these requirements cannot be met through technology alone.”


In her assessment, workforce readiness is not an adjunct to AI transformation but its foundation. “Workforce readiness is critical because AI reshapes roles and workflows. Teams must adapt to new ways of working, collaborate with AI systems, and evaluate outputs responsibly.”


She frames the issue starkly: “In healthcare, AI is inseparable from the people who guide, validate, govern, and take accountability for it. Workforce readiness, clinical context, and human oversight form the foundation of safe and ethical AI adoption.”


The plug-and-play fallacy


Santhosh warns that organisations that treat AI as a plug-and-play tool create systemic risks. “Treating AI as plug-and-play creates significant risk in healthcare. AI must be treated as a capability that is built and governed over time.”


She points to four distinct talent risks.


The first is “false confidence from surface-level adoption”. Without domain-aware talent, she says, AI systems “may appear accurate while violating regulatory, interoperability, or safety requirements.”


The second is weak governance. “Without internal capability in PHI handling, auditability, bias monitoring, and policy-as-code, organizations risk deploying AI systems that drift or operate without transparency.”


A third risk is failed adoption. “When employees are not trained or engaged, AI becomes disruptive rather than enabling. Clinicians may distrust outputs, bypass tools, or create workarounds that introduce safety and compliance risk.”


Finally, she highlights long-term capability erosion. “Without investment in hybrid skills, [organizations] rely heavily on vendors, lose institutional knowledge, and struggle to scale from pilots to enterprise adoption.”


Her conclusion is direct: “When organizations underinvest in people and change readiness, they risk patient safety, regulatory exposure, workflow disruption, and loss of trust. Healthcare AI requires deliberate talent and governance investment.”


Redefining AI talent


For Santhosh, the talent model required in healthcare diverges sharply from other sectors. “Deploying AI in healthcare requires a talent model that blends deep domain knowledge, robust technical engineering, and a culture of governance-led responsibility.”


Unlike consumer technology or marketing analytics, healthcare solutions must be “clinically safe, explainable, interoperable, and auditable from day one”. That requirement shifts what she calls “the center of gravity of AI talent” towards professionals able to navigate clinical workflows, regulatory frameworks and advanced AI systems simultaneously.


She outlines three foundational capabilities.


The first is healthcare domain nativity, encompassing clinical and regulatory fluency. “Teams must understand real clinical workflows and guardrails (HIPAA/GDPR/FDA/MDR), with privacy, safety, and explainability embedded as process, not a post-hoc checklist.”


The second is engineering for trust, where governance and MLOps are built in by design. “We operationalize AI with automated validation, deployment, monitoring, drift/bias checks, and RBAC, turning compliance into a trust enabler rather than a bottleneck.”


The third is interoperability and data craftsmanship. “Talent must be fluent in HL7/FHIR, DICOM, and payer/provider data schemas to avoid models that fail downstream.”


Together, she says, these capabilities ensure AI is “not just powerful but safe, explainable, and aligned with real-world care outcomes.”


The rise of hybrid talent


Santhosh repeatedly returns to the idea of “hybrid talent”. In healthcare AI, she defines it as professionals operating “comfortably at the confluence of three disciplines: deep healthcare domain expertise, strong engineering and AI fluency, and a governance-first, ethics-oriented mindset.”


Building that at scale is difficult because “these skills do not traditionally coexist”. Machine learning engineers may lack clinical context; clinicians may not understand MLOps pipelines; compliance specialists may struggle to encode policies as guardrails.


Bridging these divides, she argues, requires intentional systems rather than ad hoc hiring. “Hybrid talent is rare but with the right system, highly scalable.”


Continuous upskilling, not episodic training


Santhosh rejects the idea that AI readiness can be achieved through one-off programmes. “Upskilling for healthcare AI cannot be a one-time training event. It must be continuous, contextual, and tied tightly to the governance and safety fabric of the organization.”


She advocates a shift from episodic learning to embedded, experiential learning. Structured programmes, simulation-based training, hackathons and generative AI-powered teaching assistants, she suggests, turn abstract concepts into applied capability.


The goal, she says, is not speed at any cost but responsible scaling. “By combining structured learning, real-world practice, governance orientation, and clinician-first design, organizations can build a workforce that scales GenAI responsibly.”


A CHRO agenda


For chief human resources officers, the implications are strategic.


“I see four pragmatic shifts,” Santhosh says.


“Talent composition must focus on developing professionals who combine digital skills, domain understanding, and accountability for outcomes.”


“Capability-building must shift from episodic training to continuous learning ecosystems.”

“Governance must move from checklist compliance to responsibility embedded into daily work.”


And “change management must emphasize role-specific training and workflow alignment, so AI supports existing care delivery models.”


The thread running through her analysis is clear: in healthcare, AI is not simply a technology deployment challenge. It is an organisational accountability challenge.


As generative AI moves from pilots to clinical environments, the defining question for leadership may not be how fast systems can be built, but who will own the outcomes when they are used.
