Staying human in an AI age

Puncturing the popular myths around the apocalyptic effects of AI, ManpowerGroup's Dr Tomas Chamorro-Premuzic points us past the red herring of blaming AI for social ills - and towards a clearer way forward.

Take a look at the history of humanity, and you will find a long, long track record of blaming technological innovations for our own demise - dating all the way back to the invention of writing, to the publication of the first newspaper, and now to the advent of AI. So it is unsurprising that today's overall sentiment toward AI is not only negative but extremist, says Dr Tomas Chamorro-Premuzic, Chief Innovation Officer of ManpowerGroup.

Delivering the kickoff keynote at People Matters TechHR 2023, Dr Chamorro-Premuzic identified three of the most common arguments against AI today: that it will kill jobs, that it will introduce bias to human society, and that it will make people antisocial. 

Is AI really that bad?

These accusations aren't entirely false, he says. On the question of jobs, for example, he points out that AI does in fact eliminate jobs. But the argument has two sides.

"While it is true that AI is making certain jobs obsolete, it mostly automates tasks within jobs, changing the skill constellation that talent requires to survive," he says. On the other hand, as he quickly adds, it won't necessarily make us more productive: statistics suggest a correlation between technology use in the workplace and a fall in productivity.

"It's true that when you are in the office, a lot of time is spent on the performative effects of job performance," he says. But opportunities to get distracted are much higher at home than in the office, to the point that it's difficult to definitively ascribe blame to one or the other cause.

Similarly, the bias argument is true: AI is only as objective as humans are, "which essentially means, not objective at all," says Dr Chamorro-Premuzic. "If you show me an unbiased human, I will tell you they're not a human at all."

But at the same time, AI can be designed to be less biased than humans, and as he points out, ChatGPT is already well down this road: he has repeatedly heard that it has so many guardrails for political correctness that it produces nothing useful or interesting at all.

In fact, he thinks that AI is more likely to expose existing human biases than to introduce new ones - and when this happens, people will turn away from AI simply because it exposes them to information and viewpoints that they disagree with or flat out dislike.

Possibly the most accurate of the three accusations, he thinks, is that AI makes people antisocial: AI has normalised digital narcissism in the sense that it rewards self-promotion, self-entitlement, and self-centred behaviour.

"If you go around your office behaving like influencers do online, people will find you incredibly obnoxious," he says. "But online, that behaviour is rewarded. AI co-opts our narcissistic behaviour to such an extent that even generative AI is imitating these tendencies in its responses."

Most of all, AI reproduces what he calls the most "offensive and daunting" of human inclinations: the urge to inflate, exaggerate, and outright lie - which, in generative AI output, has been termed 'hallucination'.

"The more AI becomes like us, the less we like it - maybe because we don't like what we see when we look in the mirror it holds up to us," he says.

The importance of radical nuance

If we want to stay human against the pressure of AI shaping our behaviour, we need to cultivate what Dr Chamorro-Premuzic calls radical nuance: the ability to see both sides of a problem, to not just understand the substance but also find the way forward.

That way, he suggests, is to harness the skills that AI probably will not master, such as empathy (or EQ), creativity, curiosity, and self-awareness. If we are to live and work with AI effectively, he believes, we need to develop deep expertise in vetting its output fast and making smart decisions as we use this tool.

And, critically, we must abandon double standards when we judge technology; we cannot hold AI to different standards than we do humans.

"The objective with AI is not perfection, but better than the status quo," he says firmly.

What kind of future will that bring us into? Perhaps it's not so much a question of what the 'future' will look like, as what we ourselves will look like when we finally arrive at a widely accepted way of living and working with artificial intelligence.
