Responsible AI for a human-centric approach to workplace transformation

Global leaders predict that 2025 will be a breakout year for Artificial Intelligence, marking the transition from experimenting in non-core areas to build experience and confidence, to scaling AI for optimisation (46 percent plan to do this) and leveraging it for innovation (44 percent). Only 6 percent expect to still be in the experimentation stage in 2025, versus 30 percent in 2024.
What are the implications of widespread AI adoption for the workforce? And how are organisations preparing their employees for it?
A March 2024 report on ethics and trust in technology highlighted this focus on ethical AI: 88 percent of the 100 leaders surveyed said their organisations were communicating the importance of ethical AI to their employees. Mushrooming regulation on responsible AI is clearly a driving factor, but it is not the only one. The goal of ethical AI is not just compliance but also enabling human-centric workplace transformation.
Ethical, human-centric AI for inspiring trust and confidence
Generative AI revolutionised the field, but it also opened up serious threats, such as deepfakes, rampant misinformation, and data leakage. Bombarded with reports of AI-enabled cyberattacks, employees may naturally mistrust the technology, or fear that they might inadvertently use it to spread bias or cause harm. Responsible AI helps to allay these concerns by ensuring ethical and appropriate use. By training employees in responsible AI practices – for example, secure data handling, using personal information only with the owner’s consent, ascertaining data origin and ownership, clearly declaring when content is AI-generated, and avoiding bias and subjectivity when writing prompts – employers can build employees’ trust and confidence in AI.
Enterprises should emphasise the quality of training data, ensuring it is clean, accurate, fair, consistent, and complete, so as to produce accurate, unbiased, and reliable algorithmic outcomes. This puts employees at ease and encourages them to use AI models confidently in day-to-day work. Employers should also be transparent about how they use AI in employee-related matters – for example, whether they use it to screen résumés during hiring or to collect performance data for employee evaluations. There should always be a human in the loop overseeing AI, making sure it works in an ethical and responsible manner and respects human values such as fairness and diversity.
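To make this concrete, here is a minimal sketch of the kind of automated pre-training data-quality audit an enterprise might run. The column names (`gender`, `outcome`), the file name, and the thresholds are hypothetical assumptions for illustration, not a definitive standard.

```python
# Illustrative sketch of pre-training data-quality checks.
# Column names ("gender", "outcome") and thresholds are hypothetical
# assumptions for demonstration, not a prescribed standard.
import pandas as pd

def audit_training_data(df: pd.DataFrame,
                        group_col: str = "gender",
                        label_col: str = "outcome",
                        max_missing: float = 0.05) -> list[str]:
    """Return a list of human-readable data-quality warnings."""
    warnings = []

    # Completeness: flag columns with too many missing values.
    missing = df.isna().mean()
    for col, frac in missing.items():
        if frac > max_missing:
            warnings.append(f"{col}: {frac:.1%} missing (limit {max_missing:.0%})")

    # Consistency: flag exact duplicate records.
    duplicates = df.duplicated().sum()
    if duplicates:
        warnings.append(f"{duplicates} duplicate rows found")

    # Fairness: flag large gaps in positive-label rates across groups.
    rates = df.groupby(group_col)[label_col].mean()
    if rates.max() - rates.min() > 0.10:  # illustrative 10-point threshold
        warnings.append(f"label-rate gap across {group_col}: {rates.to_dict()}")

    return warnings

hiring = pd.read_csv("hiring_history.csv")  # hypothetical dataset
for warning in audit_training_data(hiring):
    print("REVIEW:", warning)
```

Surfacing such warnings for a person to review, rather than letting training proceed silently, is one simple way of keeping a human in the loop before a model ever reaches employees.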
Inclusiveness is also key to human-centric AI workplace transformation: for example, employers should address the challenges of less digitally savvy workers by providing basic AI training and nominating co-workers to provide initial “handholding” support. Another tenet is empathy: rather than decreeing the use of AI, which sparks fears of displacement or job loss, a human-centric approach helps employees understand how AI can empower them – for example, by taking over non-value-adding tasks so that they can focus on higher-value pursuits such as problem-solving and strategic thinking.
Building ethical AI solutions
The onus for ethical AI is also on technology companies, whose responsibility it is to design the right models and tools. Since AI systems are tightly coupled with human beings – they are trained on human-generated data, frequently interact with humans, and are designed to mimic human intelligence – their creators must have a nuanced understanding of how humans behave and how societies operate, to avoid inaccuracy, stereotyping, and discrimination. They can achieve this by collaborating with social scientists, tapping their knowledge of human behaviours, cultures, and societies, and asking these experts to evaluate AI models for inadvertent bias and other ethical violations. The AI ethics boards of large technology companies such as Microsoft and Google include social scientists who oversee responsible development. At leading U.S. universities, data scientists and social scientists are working together to develop tools for discovering and reducing algorithmic bias. Another notable example is a U.K. initiative in which philosophers, social scientists, designers, data scientists, policymakers, and industry representatives gather on an interdisciplinary platform to align AI development with social values and mores.
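As an indication of what such bias-discovery tooling measures, the sketch below computes the demographic parity gap – the largest difference in positive-decision rates between groups – one of the simplest algorithmic-bias metrics. The decisions, group labels, and 0/1 encoding are invented for the example.

```python
# Illustrative sketch: the demographic parity gap, a basic bias metric.
# The decisions and group labels below are invented for demonstration.
from collections import defaultdict

def demographic_parity_gap(decisions: list[int], groups: list[str]) -> float:
    """Largest gap in positive-decision rate between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical screening decisions (1 = shortlisted) per applicant group.
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.60 - 0.20 = 0.40 here
```

A large gap does not prove discrimination on its own; interpreting the number in context is precisely where social scientists’ judgement complements what the metric can detect.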
Putting the human into AI
AI is arguably the most exciting and transformative phenomenon of our time. Organisations accelerating AI adoption should pause to consider its impact on their workforce. The principles of Responsible AI can guide them in using the technology in an ethical, transparent, fair, and compliant manner; responsible AI also helps them take a human-centric approach to AI-driven workplace transformation, one that prioritises employee interests and well-being above all else. Putting in place mechanisms to enforce strong Responsible AI (RAI) principles is one concrete step enterprises can take to mandate tighter ethical and responsible development practices across the AI value chain.