AI With Integrity: Embedding Ethical Principles Into Tech Strategy

AI is here and reshaping work and life—but without fairness, transparency, and human oversight, innovation risks losing trust and sustainability.

Artificial Intelligence (AI) is no longer the future; it is the present, actively reshaping the way we shop, hire, learn and even converse. In this context, the question for businesses is no longer whether to adopt AI, but how to ensure its responsible use. At the People Matters TechHR India 2025 conference, a thought-provoking fireside chat brought this issue to the forefront.


Moderated by Maninder Kapoor Puri (Head of Human Resources, Biocon) and featuring perspectives from industry leaders Meenakshi Cornelius (Head - HR, India Cluster, JLL India), Manu Wadhwa (CHRO & Head Enterprise - IT, Sony Pictures Networks), Sunil Goyal (CTO Hotel, MakeMyTrip) and Vinay Pradhan (Country Manager & Senior Director, India & South Asia, Udemy), the discussion underscored a powerful reality: the promise of AI cannot be unlocked sustainably without embedding principles like fairness, accountability, transparency, inclusivity and human oversight at the very core of technological strategies.


While businesses are experimenting at great speed, governance frameworks and ethical guardrails are struggling to keep pace, making conversations like these both urgent and significant for the future of work. This article is based on key excerpts from the discussion. 


Operationalising ethical AI principles in practice


The starting point of the conversation was how companies are beginning to translate abstract ideas such as fairness and accountability into tangible practices when it comes to AI. Enterprises are not simply deploying AI to automate tasks, but rethinking how bias-free and transparent systems can be incorporated into everyday functions, from hiring to customer service.


One visible pathway is through inclusive data practices. Leadership teams are recognising that the datasets feeding machine learning tools must represent the full spectrum of geographies, demographics and cultural differences. Failing to do so risks replicating outdated prejudices in recruitment, product recommendations or employee experience tools. Meenakshi cited the example of the automated soap dispensers that made news a few years ago: their sensors, trained on limited data, failed to work for people with darker skin tones; a reminder of what is at stake when models built on exclusionary data are scaled.


Cross-functional committees promoting ethical AI collaboration


The panellists stressed that no single department can ‘own’ ethical AI implementation. Organisations must define their ‘ethical North Star’ as a guiding principle that cuts across teams and ensures alignment. 


To address this, one approach gaining traction is the establishment of ethics committees. These cross-disciplinary bodies include HR, legal, business representatives and, in some cases, external experts. Their role is to ensure that AI-related decisions are evaluated from multiple angles: business outcomes, employee trust, legal compliance, cultural nuances and consumer expectations. By framing oversight as a collective mandate rather than a technical silo, organisations are closing the gap between principle and execution.


Company leadership must go beyond vision statements and embed ethics into decision-making. As Manu highlighted, fairness cannot take root unless HR, legal, business and technology all have a seat at the table. The true strength of governance teams lies in their inclusivity, which allows diverse perspectives to feed into workplace practices.


Transparency is equally important. Some businesses are experimenting with AI systems that explain the rationale behind their answers, allowing users to see the reasoning that shaped an output. This shift towards ‘explainable AI’ may be slower to develop, but offers a safeguard against opaque machine-made judgements. A collaborative effort also ensures ethical guardrails are woven into design, operations and adoption strategies, rather than bolted on later. Such integration makes it harder for blind spots, biases or irresponsible use of AI to slip through unnoticed.


Addressing bias at the source: The role of data and diversity


If there is one truth about AI systems, it is that they are only as unbiased as the data we feed them. Since humans themselves carry implicit bias, training datasets often reflect prejudices, whether related to gender, ethnicity, geography or language. Left unexamined, these imbalances risk being amplified by algorithms.
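To make the point about unexamined datasets concrete, the sketch below audits a toy dataset for group representation and flags groups that fall below a minimum share. All field names, values and thresholds here are hypothetical illustrations; a real bias audit goes well beyond headcounts, but a representation check like this is often a first step.

```python
from collections import Counter

def representation_report(records, attribute, min_share=0.15):
    """Compute each group's share for a given attribute and flag
    groups whose share falls below the minimum threshold."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "share": round(share, 2),
            "under_represented": share < min_share,
        }
    return report

# Hypothetical training sample: 6 "north", 3 "south", 1 "east" records
records = (
    [{"region": "north"}] * 6
    + [{"region": "south"}] * 3
    + [{"region": "east"}] * 1
)

report = representation_report(records, "region")
# "east" holds only 10% of the sample, below the 15% threshold,
# so it is flagged as under-represented.
```

A check like this is cheap to run before every retraining cycle, which is why such audits tend to be automated rather than left to ad hoc review.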


Sunil noted that traditional system monitoring approaches are no longer enough as AI grows more sophisticated. Organisations must now invest in tools that dynamically evaluate responses, not only for factual correctness but also for sentiment, tone and consistency with company precedent. Such measures embed accountability within the system itself.


Yet technology alone is insufficient. Diverse teams play a decisive role in ensuring multiple perspectives shape both design and governance. For example, engineers from a specific region or cultural background may overlook issues that appear obvious to others. Encouraging diversity of thought provides a practical safeguard against unintentional bias. To further mitigate risks, organisations are implementing impact assessments before deploying new AI features. Key questions include: how will this affect different users, cultures or regions? Could it unintentionally disadvantage some groups?


Human oversight and leadership in ethical AI


Human oversight is crucial to keeping AI ethical. AI systems generate outputs at an extraordinary scale, yet still require human intervention to maintain checks and balances. Unlike traditional software, AI evolves based on new data and user interactions. Oversight, therefore, cannot be occasional; it must be built into the entire lifecycle.


Scaling oversight means reimagining existing evaluation processes. Not every decision can be checked manually, but transparent monitoring mechanisms and user feedback loops help track how AI behaves in diverse contexts.


Leadership plays perhaps the most decisive role in this regard. Without senior leaders dedicating time to understand AI, from technical underpinnings to ethical dilemmas, frameworks risk remaining superficial. Vinay emphasised that this is not a one-time challenge but a continuous transformation: leaders must view ethical AI as a business principle, invest in educating employees, and have the courage to decline applications that present ethical risks, even where financial incentives are strong.


Ethical leadership often means making difficult calls. In a world of innovation pressure, the temptation to pursue AI-driven revenue at any cost is high. Ethical AI requires the courage to prioritise long-term trust and sustainability over short-term returns.


The future of ethical AI: Innovation with integrity


The conversation underscored a central message: AI is no longer hypothetical, nor are its ethical challenges. With 85% of CEOs believing AI will reshape business within five years, yet only 25% having formal governance frameworks in place, the imbalance between innovation and responsibility is striking. But there is a clear path forward. Building fairness, transparency and accountability into AI is not just a moral imperative; it is also a business advantage. Research also suggests that businesses embedding ethical principles are more likely to sustain trust and outperform peers in innovation.


The discussion revealed an emerging playbook: inclusive data, multi-stakeholder governance, explainable systems, ongoing monitoring and leadership that treats ethical AI not as compliance, but as a core business principle. As AI becomes ubiquitous, embedding integrity into design and deployment is the only way to ensure these technologies remain a force for good, driving progress while safeguarding human values.
