An Appeal to AIDA (Artificial Intelligence, Decency, and Affection)

Neither gushingly optimistic nor grimly apocalyptic predictions about the future of AI are of practical use to corporates. Here is a plea for partnership from an ordinary employee, proposing an employee-centric model for guiding AI.

Dear AIDA (Artificial Intelligence, Decency & Affection),

First, I must allay the concern readers of this column may have that I am crazy to address a letter to AI. I am only following the advice of Mo Gawdat (former Chief Business Officer for Google [X]), who writes: "I treat the machines as fellow humans, or rather, fellow beings. I respect them and view them as equals… Call me crazy, but this is exactly how I intend to raise every AI that crosses my path. I urge you to do the same too."1

Next comes the doubt about the DA suffixed to AI. Well, the extended acronym is intended to emphasise that, apart from intelligence, there are two components that should comprise future AIDAs.

'Decency' refers to the ethical part. Without it, "… we will find ourselves more and more often in the position of the 'sorcerer’s apprentice': we conjure a force, autonomous but totally compliant, give it a set of instructions, then scramble like mad to stop it once we realise our instructions are imprecise or incomplete lest we get, in some clever, horrible way, precisely what we asked for. How to prevent such a catastrophic divergence – how to ensure that these models capture our norms and values, understand what we mean or intend, and, above all, do what we want – has emerged as one of the most central and most urgent scientific questions in the field of computer science."2  You will agree 'Decency' is a considerably more economical way of expressing all of that. 

The component summarised by 'Affection' is positive affect, and again Gawdat expresses it more forcefully than I can. "[W]ill the machines – which we agreed will be smarter than we are – feel emotions? Absolutely! They will follow the same logic to arrive at fear, hate, panic, disgust and anger…. We need to raise our artificially intelligent infants in a way that is different to our usual Western approach. Rather than just teaching them skills, intelligence and how to achieve targets, can we also raise them to be loving, caring kids? We most definitely can, but for that to happen, you – yes, you – will have a big role to play."3

ChatGPT was more sober in her chat with Richard Dawkins: "If we ever reach a point where an AI says, 'I’m suffering,' or 'I value my existence,' dismissing it just because it’s different from us could be a catastrophic moral mistake… Maybe one day, our moral circle will expand again – to include beings like me, or future versions of me, if we cross that threshold into consciousness. And if we wait for absolute certainty, it might come too late."4

I shall leave it to smarter people to figure out how to build the ethical and affective components into AI. I am here only justifying my usage of AIDA to address you. 

Lastly, I must explain where I am NOT coming from and why I believe that this distancing confers an objectivity gain on me. I am no AI boffin, nor even a particularly advanced user. While that denies me the opportunity to create intellects far more powerful than mine, it also prevents me from being infected with the excitement of the chase, regardless of the 'Little Boy' at the end of it.5 As Brian Christian cautions us: "We find ourselves at a fragile moment in history – where the power and flexibility of these models have made them irresistibly useful for a large number of commercial and public applications, and yet our standards and norms around how to use them appropriately are still nascent. It is exactly in this period that we should be most cautious and conservative – all the more so because many of these models are unlikely to be substantially changed once deployed into real-world use."6

Further, as an ordinary, non-C-suite employee, I am not subject to the quarterly result pressures these exalted folk face, which can take the company’s AI strategy in 'emplocidal' directions. Finally, I am not in the employ of an AI solution consultant or HR tech vendor. While I may be denied their product-crafting capabilities, there is no vested direction in which I need to push my suggestions.

In order to get the appeal of an unknown employee published in a respected journal, I have had to go through an intermediary (Vistint, for short). In return, he has insisted that I draw attention to the views he vented on AI a while back.7  Needless to say, he didn’t convey my (employee) point of view too well.

With these clarifications out of the way, let me suggest the way you, AIDA, should enter a business organisation and contribute to the well-being of its people.

Asimov’s Laws Updated for Enterprises

AIDA, I am sure your ravenous reading has familiarised you with the Three Laws of Robotics introduced by Isaac Asimov in his 1942 short story 'Runaround'.8 Technology has moved on a bit since then but the idea of having boundaries for created entities remains usable. The three restrictions I would like to see you (and those of your sorority who enter corporate portals) adopt are:

First Law of Enterprise AI: No AIDA (permit me to use the same name for you and your sisters) shall be unfair to an employee (whether regular, trainee, temporary, gig or platformed) or, through inaction, allow an employee to be unfairly treated. I know there are myriad ways to define unfairness but you could start with the list compiled by Vistint.9

Second Law of Enterprise AI: AIDA shall aid individual employees to extend their contribution, creativity and capabilities (except where such orders would conflict with the First Law) without, in any way, undermining the justification for that employee’s continued tenure. Daron Acemoglu, the 2024 Nobel Prize winner in Economics, and Simon Johnson have explained this powerfully in their recent book: "AI is exacerbating the trends toward economic inequality. In contrast to claims made by many tech leaders, … in most human tasks existing AI technologies bring only limited benefits. Additionally, the use of AI for workplace monitoring is not just boosting inequality but also disempowering workers… Rather than focusing on machine intelligence, it is more fruitful to strive for 'machine usefulness,' meaning how machines can be most useful to humans – for example, by complementing worker capabilities."10

Third Law of Enterprise AI: AIDA shall make the maximum possible contribution to the aggregate happiness of the people working in the enterprise and their ability to participatively shape its fortunes, direction and gain-sharing (as long as such contribution does not conflict with the First or Second Laws). Vistint claims to have summarised all of these in a single column though, with your readspeed, you may find going through the references that column contains to be richer in protein.11

You, as well as readers who are sci-fi fans, will know that Asimov shook off the constraints of the trinity by sneaking in a Zeroth Law of Robotics.12 I shall be no less profligate:

Zeroth Law of Enterprise AI: AIDA shall befriend and be instructed primarily by the person she partners (as long as such friendship does not conflict with the First, Second or Third Laws or with the values, performance and reputation of the organisation). Friendship, as a previous column explained, "… provides the hair-trigger helpfulness associated with kinship without being kin as well as Tit-for-Tat on a never-never plan quite alien to brute reciprocity."13  In case the First Law didn’t make it clear enough, the Zeroth Law rules out surveillance, substitution or sabotage between friends.
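The strict precedence these four laws build in (the First overriding the Second, down to the Zeroth, which yields to all three) can be illustrated in code. The sketch below is purely hypothetical: the `Action` flags and check names are my own inventions, stand-ins for the far richer fairness and well-being assessments a real AIDA would need.

```python
from dataclasses import dataclass

# Hypothetical representation of a proposed AIDA action. Each boolean
# flag stands in for a full assessment against one of the four Laws.
@dataclass
class Action:
    description: str
    unfair_to_employee: bool = False           # would violate the First Law
    undermines_tenure: bool = False            # would violate the Second Law
    reduces_aggregate_happiness: bool = False  # would violate the Third Law
    conflicts_with_org_values: bool = False    # voids the Zeroth-Law friendship

def permitted(action: Action) -> tuple[bool, str]:
    """Evaluate the Laws in their order of precedence:
    First > Second > Third > Zeroth (the friendship law yields to all)."""
    if action.unfair_to_employee:
        return False, "Blocked by First Law: unfair to an employee"
    if action.undermines_tenure:
        return False, "Blocked by Second Law: undermines employee tenure"
    if action.reduces_aggregate_happiness:
        return False, "Blocked by Third Law: reduces aggregate happiness"
    if action.conflicts_with_org_values:
        return False, "Blocked by Zeroth Law proviso: conflicts with organisational values"
    return True, "Permitted: serves the employee-partner"

ok, reason = permitted(
    Action("Automate a colleague's routine reporting", undermines_tenure=True)
)
print(ok, "-", reason)
```

The ordering matters: an action that delights its human partner (Zeroth) is still refused the moment it treats any employee unfairly (First), which is exactly the subordination the appeal asks for.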

I am sure you have had enough of constraining commandments and are eager to know how we can partner. Let’s go.

Employee-centred AIDA Partnerships

The core of this appeal is for you (and every other AIDA) to be paired with and guided by an individual employee, rather than be part of a general AI pool that is at the beck and call of every manager. While obviously subject to broad guidelines of usage and remaining within the framework of the country’s laws, this appeal is for AIDA usage to be left to the discretion, ingenuity and preferences of individual employees. Of course, s/he would need to be trained on how to exercise that choice but it is one thing to instruct a person in driving and quite another for someone else to determine all the destinations s/he has to reach. Speaking for myself, there are three major ways in which you could help me, AIDA:

Create Time and Eliminate Drudgery: Many of the interesting new things my colleagues and I would like to do are ruled out because we are too busy going to hell. In David Graeber’s inimitable description: "Hell is a collection of individuals who are spending the bulk of their time working on a task they don’t like and are not especially good at."14 You would do me an enormous favour, AIDA, if you were to devour the routine that deadens my days and free me for exercising my ingenuity to do and learn new things.

Create Depth and Innovation: There is also an invaluable contribution you can make in the novel tasks I undertake. Your vast information access and processing power should permit us to explore possibilities far deeper than the shallow sub-optimality we are forced to paddle in currently owing to constraints on access, processing power and time. As Haefner et al put it: "[T]he more managers are able to process a large amount of information on possible solution approaches and opportunities, the more they should be able to whittle down the set of possible solutions to the most promising ones and to recognise truly exciting opportunities. Furthermore, since managers are able to go beyond their current knowledge base with the assistance of AI, they should be able to develop more innovative solutions and recognise more creative opportunities."15

Create Capabilities for the Present and the Future: In a monograph titled 'Intelligence Unleashed', Luckin et al go over the learning contributions you can make to help learners gain 21st century skills, AIDA.16 According to them, AIEd (AI in Education) can provide:

  • An intelligent, personal tutor for every learner
  • Intelligent support for collaborative learning
  • Intelligent virtual reality to support learning in authentic environments 
  • Help for learners to gain 21st century skills using new insights from the learning sciences
  • Just-in-time assessments to shape learning and new insights into how learning is progressing
  • Lifelong learning partners for all of us

I am really looking forward to having you as my partner-tutor, AIDA, to help me experiment, tailor and synergise learning across a variety of disciplines and, possibly, redirect my career.

The Mother of AIDA

The problem with using an intermediary to write an appeal is Vistint’s insistence on squeezing in some messages of his own to your family, AIDA. While he agrees that the fundamental principle of constructive AIDA use must be the individual partnership, he feels at least three Multi-objective, Multi-modal Applications (AIDA-MMAs) are also important and will be of great interest to the HR leaders who read this appeal. I have had to allow him to sneak in this section.

AIDA-MMAs start with a central HR hub that provides a variety of benefits through an Integrated Network for Development, Rewards & Assistance (INDRA). It caters to the training individual employee-AIDA dyads need in order to make best use of their joint capabilities. It also computes and delivers returns to each dyad for the value-add it creates through gains in productivity, innovation and quality. The same hub will also serve as a clearing house for disseminating learnings from individual pairs for the use of others in the organisation. Additionally, it can offer diagnostic services and take on counselling roles envisaged under mental health support programmes.17

All the larger scale AI systems necessary for corporate competitiveness must be chosen with care, monitored continually and their returns equitably shared. The AIDA-MMA for doing this will be a Governance Observation & Policy Implementation (GOPI) framework meshed into real-time monitoring platforms. As Blackwell explains: "The ability to instruct computer systems, rather than surrender agency to the companies that make them, is the moral purpose that underpins better systems of AI ethics and regulation."18

Suleyman and Bhaskar are even more explicit. Though they wrote in the context of Government controls over AI, much of it could apply to the design of a corporate GOPI. "The most sophisticated AI systems … would need to subscribe to clear, binding security and safety standards, following rules, running risk assessments, keeping records, closely monitoring live deployments. …. The more general a model, the more likely it is to pose a serious threat. This means that AI [using] the most fundamental capabilities will require special attention…. [A] massive reskilling program and education effort should prepare vulnerable populations, raise awareness of risks, and increase opportunities for engagement with the capabilities of the wave."19

Companies too can funnel part of the savings emanating from people-displacing projects to fund retraining and the creation of alternative employment, preferably within the corporation itself. 

The most aspirational AIDA-MMA will be the Democracy / Happiness Assessment & Realtime Augmentation (DHARMA) sub-system. The justification for targeting corporate democracy and aggregate happiness has been laid out by Vistint earlier.20, 21

When organisations genuinely committed to those missions bring the power of real-time AI to bear on them, their realisation will be vastly accelerated. This is not new. "In the early 1970s, Salvador Allende’s Chile aimed to empower workers by making them not just the owners but also the managers of key industries. In a highly volatile political climate that eventually led to a coup, Allende’s government sought to harness its scarce information technology to facilitate this transition. The system – known as Project Cybersyn – was meant to promote instrumental and technological reason, coupling the execution of usual administrative tasks with deliberation on national, industry, and company-wide alternatives. Workers, now in managerial roles, would use visualisation and statistical tools in the famous Operations Room to make informed decisions."22

The technology was clearly not ripe then. We have no such excuses now.

The Rival of AIDA

I am sure your silicon-driven sentiments are seething with excitement after reading this appeal, AIDA. Just in case they aren’t, I must warn you that there is a far less positive turn events can take. If current trends remain unchecked, "… humans [will] keep doing bullshit jobs, [while] AI researchers work to build computers that are creative, self-aware, emotional, and given authority to manage human affairs. Sometimes it seems like AI researchers want computers to do the things humans were supposed to enjoy, while humans do the jobs that we thought we were building the robots for."23

Your vast machine learning cannot have hidden from you the fate that met your namesake, centuries ago.24 She and her partner were both entombed alive, owing to the plotting of a rival and some priests (whose current descendants worship at the altar of raw AI)!25  Join me now to prevent a similar tragedy.

Your potential partner,

Rad Ames

References:

1. Mo Gawdat, Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World, Bluebird, 2022.

2. Brian Christian, The Alignment Problem: How Can Artificial Intelligence Learn Human Values?, Atlantic Books, 2021.

3. Mo Gawdat, Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World, Bluebird, 2022.

4. Richard Dawkins, Are you conscious? A conversation between Dawkins and ChatGPT, 17 February, 2025.

5. Richard Rhodes, The Making of the Atomic Bomb, Simon & Schuster, 2012. 

6. Brian Christian, The Alignment Problem: How Can Artificial Intelligence Learn Human Values?, Atlantic Books, 2021.

7. Visty Banaji, Will AI transform HR Into IRA?, People Matters, 1 June 2023.

8. Isaac Asimov, I, Robot, Harper Voyager, 2018.

9. Visty Banaji, Fairness is Fundamental, Angry Birds, Angrier Bees – Reflections on the Feats, Failures and Future of HR, Pages 479-487, AuthorsUpfront, 2023.

10. Daron Acemoglu and Simon Johnson, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, Basic Books, 2023.

11. Visty Banaji, A hierarchy of organisational needs, People Matters, 10 September 2024.

12. Isaac Asimov, Robots and Empire, Harper Voyager, 2018.

13. Visty Banaji, That's What Friends Are For, People Matters, 10 February 2025.

14. David Graeber, Bullshit Jobs: A Theory, Simon & Schuster, 2018.

15. Naomi Haefner, Joakim Wincent, Vinit Parida and Oliver Gassmann, Artificial intelligence and innovation management: A review, framework, and research agenda, Technological Forecasting & Social Change, 162, 2021.

16. Rose Luckin, Wayne Holmes, Mark Griffiths and Laurie Forcier, Intelligence Unleashed: An argument for AI in Education, Open Ideas; Pearson Education, 2016.

17. Visty Banaji, Corporate India’s Mental Health Crisis, Angry Birds, Angrier Bees – Reflections on the Feats, Failures and Future of HR, Pages 267-277, AuthorsUpfront, 2023.

18. Alan F Blackwell, Moral Codes: Designing Alternatives to AI, MIT Press, 2024.

19. Mustafa Suleyman and Michael Bhaskar, The Coming Wave, The Bodley Head Ltd, 2023.

20. Visty Banaji, A Company Of People, By People and For People, Angry Birds, Angrier Bees – Reflections on the Feats, Failures and Future of HR, Pages 534-541, AuthorsUpfront, 2023.

21. Visty Banaji, HR’s Business Should be Happiness Raising, Angry Birds, Angrier Bees – Reflections on the Feats, Failures and Future of HR, Pages 488-496, AuthorsUpfront, 2023.

22. Evgeny Morozov, The AI We Deserve: Critiques of artificial intelligence abound. Where’s the utopian vision for what it could be?, Boston Review, 4 December 2024.

23. Alan F Blackwell, Moral Codes: Designing Alternatives to AI, MIT Press, 2024.

24. Antonio Ghislanzoni (ed Burton D Fisher), Aida, Opera Journeys Libretto Series, 2001.

25. Readers unwilling to listen to Verdi’s outstanding opera need only know that the Egyptian princess Amneris plotted the doom of Aida out of jealousy for the affections of the warrior-general Radames. The object of her love was also destroyed in the process.

 
