Making AI responsible by weeding out human bias

Artificial Intelligence (AI) may amplify historical biases against certain sections of society and actively neglect them. Organisations therefore need to deliberately and consciously reflect on the impact AI systems have on humans, society and the planet.

A focus purely on building the best artificial intelligence (AI) may lead to unintended consequences that negatively affect human lives, and many large tech organisations are realising this.

Certain sections of society can be actively neglected because AI can amplify historical biases: credit-worthy loan applicants can be denied purely on the basis of gender, access to healthcare can be reduced for some groups, or AI-based recruitment bots can reject qualified women who can code.

“These are real problems of AI today. Organisations need to deliberately and consciously reflect on the impact AI systems have towards humans, society and the planet. Focusing purely on rational objectives may not necessarily result in outcomes aligned with legal, societal or moral values,” says Akbar Mohammed, lead data scientist at US-based artificial intelligence firm Fractal.

In an interaction with People Matters, Mohammed outlines the major biases found in organisations' AI systems and the ways 'responsible AI' can ensure they are not replicated.

What are some of the major biases we see in AI?

Largely, there are two kinds that often crop up in AI. The first is systemic bias, where institutional operations have historically neglected certain groups or individuals based on their gender, race or even region.

The second is human bias, such as how people use or interpret data to fill in missing information; for example, a person's neighbourhood of residence influencing how credit-worthy a loan officer considers that person to be.
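To make the proxy problem concrete, here is a minimal, purely hypothetical sketch in Python (the data and column names are invented for illustration): even when a protected attribute is excluded from a model's features, a seemingly neutral feature like neighbourhood can encode it and carry historical decisions forward.

```python
# Illustrative sketch: a "neutral" feature (neighbourhood) can act as a proxy
# for a protected attribute and import historical bias into a model.
# All data below is synthetic and purely hypothetical.
import pandas as pd

applicants = pd.DataFrame({
    "neighbourhood": ["A", "A", "A", "B", "B", "B", "A", "B"],
    "group":         ["x", "x", "y", "y", "y", "y", "x", "y"],  # protected attribute
    "approved":      [1,   1,   1,   0,   0,   1,   1,   0],    # historical decisions
})

# 1. Does the "neutral" feature encode the protected attribute?
print(pd.crosstab(applicants["neighbourhood"], applicants["group"], normalize="index"))

# 2. Do the historical outcomes the model would learn from differ by group?
print(applicants.groupby("group")["approved"].mean())
```

In this toy data, neighbourhood correlates strongly with the protected group, so a model trained only on neighbourhood would still reproduce the disparity in historical approvals.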

When human and systemic biases combine with computational biases, they can pose significant risks to both society and individuals, especially when explicit guidance for addressing the risks of leveraging AI systems is lacking.

How would you define Responsible AI?

Responsible AI is the practice of creating AI ethically, so that it acts, behaves and helps make decisions in a manner responsible towards humans, society and even the planet.

What are the challenges for organisations to ensure responsible AI practices?

The first is simply becoming aware of the risks of AI.

The second is putting in place the right policies, guidelines and governance for responsible AI practices.

Finally, encouraging behaviours such as the contestability of any AI or AI-augmented human decision. An organisation where it is safe to openly discuss ethical challenges and issues is less likely to create harmful AI, and such openness encourages people to tackle the problems not just through technology but also through a human-centred lens.

What are some of the ways to root out AI bias?

Being aware of the risks and aligning your organisation around a shared set of principles will initiate the journey.

However, organisations need to go beyond principle-based frameworks and adopt the right behaviours and toolkits to empower people to put responsible practices into effect.

We see that design and behavioural science-driven frameworks, combined with technology-driven toolkits, can deliver responsible AI for any enterprise or government.
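As one concrete illustration of what a technology-driven toolkit check might look like, here is a minimal sketch of a demographic parity test in Python; the function, data and threshold are illustrative assumptions, not a description of Fractal's toolkit or any specific product.

```python
# A minimal sketch of one toolkit-style fairness check: the demographic parity
# difference, i.e. the gap in positive-outcome rates between groups.
# Names, data and the 0.2 threshold are illustrative assumptions only.
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Return the largest gap in positive-prediction rates across groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Usage: flag a model whose approval rates diverge too far between groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["x", "x", "x", "y", "y", "y", "x", "y"]
gap = demographic_parity_difference(preds, groups)
if gap > 0.2:  # the acceptable gap is a policy choice, not a technical constant
    print(f"Review model: demographic parity gap of {gap:.2f} exceeds threshold")
```

A check like this is only one ingredient; deciding what gap is acceptable, and what to do when it is exceeded, is exactly where the design and behavioural frameworks come in.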
