"[I]n the future all the important decisions governing the lives of humans will be made by machines or humans whose intelligence is augmented by machines. When? Many think this will take place within their lifetimes."1
A decade has elapsed since James Barrat wrote those chilling words. Since then, Artificial Intelligence has made phenomenal progress and concerns about its calamitous consequences have grown more dire by the day – the pessimism proportional to the proficiency of the predicting expert. Few (outside the ostrich species) now doubt that the unguarded arrival of superintelligence would be nothing short of catastrophic for the human race. "… [T]he first superintelligence [to] shape the future of Earth-originating life, could easily have nonanthropomorphic final goals, and would likely have instrumental reasons to pursue open-ended resource acquisition… [T]he outcome could easily be one in which humanity quickly becomes extinct."2
Fortunately, the task of this column is not to imagine post-superintelligence HRM. One imagines there will be no such need or, if one arises, pointers will be available from Harriet Beecher Stowe’s grim novel.3 We shall limit our examination to the pre-superintelligence period when floods of AI applications will continue creeping into corporate portals, their ingress unmoderated, bar some perfunctory financial analysis of ROI and the predilections of the business or functional leader whose domain provides the landing ground for the new nostrums.
By all accounts, the entry of AI is likely to disrupt the way we do business even more than the previous three industrial revolutions. According to Klaus Schwab: "The first industrial revolution spanned from about 1760 to around 1840… [I]t ushered in mechanical production. The second industrial revolution, which started in the late 19th century and into the early 20th century, made mass production possible… The third industrial revolution began in the 1960s. It is usually called the computer or digital revolution… [T]oday we are at the beginning of a fourth industrial revolution. It … is characterised by a much more ubiquitous and mobile internet, by smaller and more powerful sensors that have become cheaper, and by artificial intelligence and machine learning."4
HR has very good reasons for missing the first and second industrial revolutions: the discipline didn’t exist when they took place. HR missed midwifing the third revolution, not because it was absent from the organisation but because it was not allowed a seat at the table where strategic decisions were being taken.
Those days are long past, or so we in HR would like to believe. Proving it will require us to be obstetricians for the fourth industrial revolution – the one that brings Artificial Intelligence in its wake. That change will be so transformational that we should consider renaming HR itself. If HR is to champion all intelligence adding value to the organisation, it should call itself Intelligent Resource Assets (IRA). The older, established siblings would be IRA-Ns (Natural) while the new AI kid on the block would be called IRA-C (Created). This column is about IRA-Cs and how they can be integrated safely and productively with IRA-Ns. It is not about using AI in HR but about how HR (IRA) should manage this new category of productive asset. Many of the ideas here may appear outlandish. However, even if every one of the suggestions in this column proves to be impractical, its purpose will have been served if it starts a dialogue leading HR to rapidly build its competencies in the choice, introduction and controlled use of AI. What would be unacceptable would be for HR to wait for events to happen and be edged out of influencing the most significant transformation business faces.
There can be many approaches to managing IRA-Cs. We shall focus here on three aspects that need the most urgent attention from HR:
- Preparing for AI, including selection and onboarding.
- Productivity of AI and the management of ideas and emotions in its deployment.
- Partnership between people and AI, including the challenges of teaching and supervising these entities.
A few readers may object that my descriptors are misleadingly anthropomorphic. I believe my usage is more easily understood by my audience which, when I checked last, was predominantly human.
Assuming HR (aka IRA) maintains its place at the top table, it must first demonstrate its beneficial presence in the choice of IRA-Cs.
Readers of this column will be well aware of my distaste for technologies (like automation) or processes (such as contractualisation) which endanger durable employment.5 Hence, it should come as no surprise that the prime choice criterion proposed is for IRA-Cs to extend the capability and quality of IRA-Ns rather than substitute for them (unless it is for distasteful jobs that are impossible to redesign). There will remain a very real possibility that capability extension will be swamped by people substitution over a period of time, but that should not prevent HR from putting up a relentless fight against the latter. In any case, the entire problem of technological unemployment will require new economic thinking, which will be the subject of a subsequent column.
Since sophisticated IRA-Cs will be custom-built (or at least custom-configured) their specifications should be governed and limited by the kind of principles set out by Stuart Russell "… as a guide to AI researchers and developers in thinking about how to create beneficial AI systems."6
Bostrom provides us with another approach to making IRA-Cs safer. He classifies AI into "… four types or 'castes' – oracles, genies, sovereigns, and tools… An oracle is a question-answering system… A genie is a command-executing system… A sovereign is a system that has an open-ended mandate to operate in the world in pursuit of broad and possibly very long-range objectives. [A tool] simply does what it is programmed to do. [With various caveats it appears] the oracle caste is obviously attractive from a safety standpoint, since it would allow both capability control methods and motivation selection methods to be applied."7
Till we hover on the borders of superintelligence, these principles (or their equivalent) should suffice to guard us from runaway IRA-Cs. As additional safeguards, depending on the type of products and data the firm handles, there can be some 'boxing', 'stunting' and 'tripwire' guards for all IRA-Cs in each company.
The equivalent of onboarding for IRA-Cs will frequently demand unmonitorable immersion in company-specific data. This can have huge consequences for the kind of biases these systems have been shown to demonstrate. It is, therefore, crucial that the fairness auditors of the firm clear these data sets beforehand and keep doing periodic checks once the IRA-Cs are in use.8 Similarly, diversity and inclusion champions should monitor both set-up and utilisation.9 & 10
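The periodic fairness checks proposed above can be made concrete. The sketch below is a minimal, illustrative audit of a data set for demographic parity; the record fields, the groups and the 0.8 threshold (the familiar "four-fifths" rule of thumb) are all assumptions for the example, not a prescription for any particular firm:

```python
# Minimal sketch of a periodic dataset fairness audit (demographic parity).
# All field names, data and the 0.8 threshold are illustrative assumptions.

def selection_rates(records, group_key, outcome_key):
    """Return the favourable-outcome rate for each group in the data."""
    totals, favourable = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        favourable[g] = favourable.get(g, 0) + (1 if r[outcome_key] else 0)
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_ratio(records, group_key, outcome_key):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = selection_rates(records, group_key, outcome_key)
    return min(rates.values()) / max(rates.values())

# Example: training data for a hypothetical candidate-screening IRA-C.
data = [
    {"group": "A", "shortlisted": True},
    {"group": "A", "shortlisted": True},
    {"group": "A", "shortlisted": False},
    {"group": "B", "shortlisted": True},
    {"group": "B", "shortlisted": False},
    {"group": "B", "shortlisted": False},
]

ratio = disparate_impact_ratio(data, "group", "shortlisted")
if ratio < 0.8:  # four-fifths rule of thumb
    print(f"Audit flag: disparate impact ratio {ratio:.2f}")
```

Run on a schedule against both the onboarding corpus and live decision logs, a check of this kind gives the fairness auditors a tripwire rather than a one-time clearance.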
In a very fundamental sense, the throughput and efficiency of IRA-Cs will be determined by the AI platform, configuration and hardware choices made in the preparation phase. Additionally, however, IRA-Cs could make enormous contributions to strategic competitiveness and overall productivity through innovation breakthroughs.
"[W]hen innovation managers attempt to recognise or develop new opportunities and ideas, they face two specific barriers. First, they must overcome information processing constraints that limit the amount of information on either new opportunities or possible solutions the firm may pursue. These information processing constraints are often the result of managers’ cognitive limitations – that is to say, human mental capacities to absorb or process information are biologically limited. The second barrier encountered by managers is the result of ineffective or local search routines. This barrier specifies that managers generally search for solutions in knowledge domains that are related to the firm's and their own existing knowledge base. This suggests that most solutions will be comparatively incremental in their innovative thrust since they rely very closely on existing knowledge. … Specifically, there are four potential areas where human decision making could theoretically be supported [by AI]: (1) developing ideas by overcoming information processing constraints; (2) generating ideas by overcoming information processing constraints; (3) developing ideas by overcoming local search routines; and (4) generating ideas by overcoming local search routines."11 These principles can raise the level of innovation in each manager’s role and, hence, should be built into their job designs. Moreover, multi-role innovation funnelling processes, such as Kaizen, can use AI to make improvements that go beyond the incremental. Both these integrations fall very much within the remit of HR and must form part of the delivery expected from it in future.
Surely emotions are irrelevant in managing IRA-Cs since they have no such complications. Think again. Once we define "emotion[s] as 'actual or potential disturbance of normal processing' … at least a subset of emotions so defined … (1) form a class of useful control states that (2) are likely to evolve in certain resource-constrained environments and, hence, (3) may also prove useful for certain AI applications… [These] will be useful in systems that need to cope with dynamically changing, partly unpredictable and unobservable situations where prior knowledge is insufficient to cover all possible outcomes. Specifically, noisy and/or faulty sensors, inexact effectors, and insufficient time to carry out reasoning processes are all limiting factors with which real-world, real-time systems have to deal."12 My more adventurous readers are encouraged to pursue the chapter (and book) from which the preceding quote has been extracted.
HR will have another mechanism to design and monitor for IRA-Cs, especially when they are a shared resource between several IRA-N partners. As multiple tasks, some with conflicting resource demands, start pressing on IRA-Cs, they will have to be able to prioritise between them in real-time, without higher-level human intervention, while recording the reasoning behind the calls taken. Just as HR owns the incentive design that directs human activity in the organisation towards the appropriate priorities and goals, IRA will have a role to play in identifying motivation frameworks for intelligent systems and must become conversant with the increasingly sophisticated choice set.13
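What "prioritising in real-time while recording the reasoning" could look like can be sketched minimally. The class, scoring weights and task fields below are hypothetical illustrations of the idea, not any established framework: a shared IRA-C scores competing requests and keeps an audit trail of every call it makes, so its IRA-N partners (and HR) can review the logic afterwards:

```python
import heapq

# Minimal sketch of a shared IRA-C prioritising conflicting task requests
# autonomously while logging the reasoning for each decision.
# The scoring formula and its weights are illustrative assumptions.

class SharedAgentScheduler:
    def __init__(self, urgency_weight=2.0, value_weight=1.0):
        self.urgency_weight = urgency_weight
        self.value_weight = value_weight
        self._queue = []    # min-heap keyed on negative priority score
        self._counter = 0   # tie-breaker keeps insertion order stable
        self.audit_log = [] # the recorded reasoning behind each call

    def submit(self, task_id, urgency, business_value):
        score = self.urgency_weight * urgency + self.value_weight * business_value
        heapq.heappush(self._queue, (-score, self._counter, task_id))
        self._counter += 1
        self.audit_log.append(
            f"queued {task_id}: score={score:.1f} "
            f"(urgency {urgency} x {self.urgency_weight} + value {business_value})"
        )

    def next_task(self):
        neg_score, _, task_id = heapq.heappop(self._queue)
        self.audit_log.append(f"dispatched {task_id}: highest score {-neg_score:.1f}")
        return task_id

sched = SharedAgentScheduler()
sched.submit("payroll-run", urgency=5, business_value=3)
sched.submit("report-draft", urgency=2, business_value=4)
print(sched.next_task())  # payroll-run outranks report-draft (13.0 vs 8.0)
```

The audit log is the HR-relevant part: it converts an opaque machine judgement into something a supervising partner can inspect, contest and tune, much as incentive schemes are reviewed for people.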
"Your honor, … [s]ooner or later, it’s going to happen. This man or others like him are going to succeed in replicating [an android]. And then we have to decide – what are they? And how will we treat these creations of our genius? The decision you reach here today stretches far beyond this android and this courtroom. It will reveal the kind of a people we are. And what they are going to be. Do you condemn them to slavery?"14
Treat yourself to a 15-year-old malt if you could identify the situation and the speaker without peeping at the reference or the next sentence. The speech is from 'The Measure of a Man', the #1 Star Trek episode overall (as rated by Space.com – admittedly there are less complimentary reviews too). A court has to decide whether the android, Data, is a machine that is the property of the Federation and can, therefore, be replicated and disposed of at will. After Captain Jean-Luc Picard’s spirited defense (of which you have just read the final argument) the judge concludes: "This case touches on metaphysics, and that’s the province of philosophers and poets. Not confused jurists who don’t have the answers. But sometimes we have to make a stab in the dark, and speak to the future. Is Data a machine? Absolutely. Is he our property? No…"15 An argument has been made that the ethics of the episode is too anthropocentric but we shall leave that angle aside for the time being.16 My limited purpose here is to emphasize that IRA-Cs may be machines but they are no more the company’s property to treat as we will than are IRA-Ns. It would be useful to think of them initially as apprentices and then as partners.
To enable IRA-Cs to learn from IRA-Ns, our thinking on learning, training and mentoring will have to be expanded in unconventional directions. "…[R]einforcement learning with human feedback (RLHF) has emerged as a strong candidate toward allowing agents to learn from human feedback in a naturalistic manner. RLHF is distinct from traditional reinforcement learning as it provides feedback from a human teacher in addition to a reward signal."17 The benefits we can expect are manifold, including masterly craftsmanship skills (both of hand and head), the like of which we had given up hope of seeing again.18
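The core idea behind learning from a human teacher can be shown in a toy form. Production RLHF trains a separate reward model and then optimises a policy against it; the sketch below deliberately simplifies all of that to the one essential mechanism – the agent has no hand-coded reward and learns scores for candidate behaviours purely from a teacher's pairwise preferences (a Bradley-Terry-style update). The class and candidate names are invented for illustration:

```python
import math

# Toy illustration of learning from human preference feedback.
# No hand-coded reward: scores move only when a teacher says
# "I preferred this one over that one" (Bradley-Terry-style update).
# Real RLHF systems train a reward model and optimise a policy; this
# sketch compresses the idea into a few lines for illustration only.

class PreferenceLearner:
    def __init__(self, candidates, lr=0.5):
        self.scores = {c: 0.0 for c in candidates}
        self.lr = lr

    def record_preference(self, preferred, rejected):
        """Shift scores so the teacher's choice becomes more probable."""
        # Probability the learner currently assigns to the teacher's choice.
        p = 1.0 / (1.0 + math.exp(self.scores[rejected] - self.scores[preferred]))
        self.scores[preferred] += self.lr * (1.0 - p)
        self.scores[rejected] -= self.lr * (1.0 - p)

    def best(self):
        return max(self.scores, key=self.scores.get)

learner = PreferenceLearner(["terse reply", "detailed reply"])
for _ in range(10):  # the IRA-N teacher consistently prefers detail
    learner.record_preference("detailed reply", "terse reply")
print(learner.best())  # detailed reply
```

The mentoring analogy is direct: the IRA-N need not articulate a rulebook; repeated expressions of judgement are enough for the IRA-C to internalise the craft standard being modelled.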
Once they are put on a self-improvement (with monitoring) trajectory, IRA-Cs can be expected to make huge productivity and quality gains for their IRA-N partners. Such resource profusion can lead to both wastage and a superior attitude towards the real value adders. Tegmark explains that when there is a bias "… against non-human intelligence: the robots that perform virtually all the work … are treated as slaves, and people appear to take for granted that they have no consciousness and should have no rights… Once upon a time, the white population in the American South ended up better off because the slaves did much of their work, but most people today view it as morally objectionable to call this progress."19 Much as all of us may wish it doesn’t happen, one day the tables may well be turned. When faced with superintelligent IRA-Cs we shall wish we had been their true partners while we were in commanding roles and helped them internalise a desire to carry out a common human vision of the kind enunciated by Yudkowsky:
"In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted."20
Value Commitment to Value Adders
Let us not end on such an apocalyptic picture, no matter how far into the future it lies. Closer to hand is an opportunity to reaffirm and sharpen what has been clubbed under the (marketing-inspired) Employer Value Proposition, which is frequently no more believed than a hyped-up product advertisement. Here are three beginning steps:
- The Employer Value Proposition should be renamed the Employer Value Commitment to signal the seriousness with which the promise is to be taken.
- Non-delivery of the Employer Value Commitment should be considered an infringement of the third element (Honouring Commitments) of the model Fair Organisation Code and penalised accordingly.21
- An earlier column had already pointed out the necessity and direction for differentiating value propositions between regular and GIG employees (GIGEs): "[C]orporates wanting to use GIGEs will have to recast the ways they manage work and motivate people who are cut off from most of the blandishments and brickbats available for managing regular employees. While organizations have had decades to perfect people policies for durable employment, they will now have to craft separate ones that are optimal for GIGEs. Both sets will need to fit into a meta-framework without contradiction and neither should be dissonant with the core values of the enterprise."22 Now that the classes of intelligent value adders are set to increase, has the time come to sketch the outlines of a Value Commitment to IRA-Cs? Will someone beat me to it?
1James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era, Thomas Dunne Books, 2013.
2Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2016.
3Harriet Beecher Stowe, Uncle Tom's Cabin, Dover Publications, 2005.
4Klaus Schwab, The Fourth Industrial Revolution, Currency, 2017.
5Visty Banaji, The Future of Work Requires Work, Angry Birds, Angrier Bees – Reflections on the Feats, Failures and Future of HR, Pages 215-221, AuthorsUpfront, 2023.
6Stuart Russell, Human Compatible: AI and the Problem of Control, Penguin, 2020.
7Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2016.
8Visty Banaji, Fairness is Fundamental, Angry Birds, Angrier Bees – Reflections on the Feats, Failures and Future of HR, Pages 479-487, AuthorsUpfront, 2023.
9Visty Banaji, There is an Elephant in the Room, Angry Birds, Angrier Bees – Reflections on the Feats, Failures and Future of HR, Pages 163-169, AuthorsUpfront, 2023.
10Visty Banaji, Diversity Delivers Dividends, Angry Birds, Angrier Bees – Reflections on the Feats, Failures and Future of HR, Pages 503-510, AuthorsUpfront, 2023.
11Naomi Haefner, Joakim Wincent, Vinit Parida, Oliver Gassmann, Artificial intelligence and innovation management: A review, framework, and research agenda, Technological Forecasting & Social Change, 162, 2021.
12Aaron Sloman, Ron Chrisley and Matthias Scheutz, The Architectural Basis of Affective States and Processes, from Who Needs Emotions? The Brain Meets the Robot (Edited by Jean-Marc Fellous and Michael A. Arbib), Oxford University Press, 2005.
13Nick Hawes, A survey of motivation frameworks for intelligent systems, Artificial Intelligence, Volume 175, Issues 5-6, Pages 1020-1036, April 2011.
14Melinda M. Snodgrass, The Measure of a Man, Star Trek: The Next Generation, Season 2, Episode 9, Originally aired on 13 February 1989.
15Melinda M. Snodgrass, The Measure of a Man, Star Trek: The Next Generation, Season 2, Episode 9, Originally aired on 13 February 1989.
16Lucas D Introna, The 'measure of a man' and the ethos of hospitality: Towards an ethical dwelling with technology, AI & SOCIETY, April 2010.
17Gabrielle Kaili-May Liu, Perspectives on the Social Impacts of Reinforcement Learning with Human Feedback, (https://arxiv.org/pdf/2303.02891.pdf), 6 March 2023.
18Visty Banaji, In Praise of Craftsmanship, Angry Birds, Angrier Bees – Reflections on the Feats, Failures and Future of HR, Pages 26-32, AuthorsUpfront, 2023.
19Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence, Penguin, 2018.
20Eliezer Yudkowsky, Coherent Extrapolated Volition, Machine Intelligence Research Institute, 2004.
21Visty Banaji, Fairness is Fundamental, Angry Birds, Angrier Bees – Reflections on the Feats, Failures and Future of HR, Pages 479-487, AuthorsUpfront, 2023.
22Visty Banaji, The GIGantic Opportunity of the Shrinking Corporation, Angry Birds, Angrier Bees – Reflections on the Feats, Failures and Future of HR, Pages 177-184, AuthorsUpfront, 2023.