HR Technology
Is AI expanding human agency or outsourcing it? Reading Hoffman’s Superagency

"Superagency", a book by Reid Hoffman and Greg Beato invites us to imagine AI’s promise, but leaves the hardest questions unanswered.
AI updates are everywhere: flooding your inbox, your newsfeed, and now, lining the shelves of bookstores. But few voices have attempted to frame this moment with clarity and conviction.
"Superagency: What Could Possibly Go Right with Our AI Future", a book by Reid Hoffman, the co-founder of LinkedIn and partner at Greylock Partners and Greg Beato, a technology writer, reads almost like a love letter welcoming the AI-led disruption. For leaders, educators, and changemakers grappling with the pace of transformation, this book is less a manual and more a manifesto because it shows what a techno-humanist vision of AI looks like and how those leading the agenda for transformation may look at the world at large.
Situating AI in the history of innovation
The greatest strength of this book lies in its ability to situate AI within the broader sweep of human innovation. From steam engines to smartphones, from computers to social media, Hoffman reminds us that each wave of innovation has expanded human “agency”: the capacity "to make your own choices, act independently, and thus exert influence over your life."
Each breakthrough, they argue, has ultimately served humanity by freeing us from tedious, time-consuming tasks that waste our energy and potential. Pointing to the early days of motor transportation, they remind us that early car accidents did not send us back to horses, which left roads covered in excrement and were costly to keep, since two-thirds of cultivable land was dedicated to growing fodder. They suggest we shouldn't let AI's growing pains slow our adoption of a superior technology.
AI's "synthetic intelligence", they argue, is "a scalable, highly configurable, self-compounding engine for progress," suggesting we're witnessing a fundamental shift in how human capability can be augmented and extended. The belief that "the future isn't something that regulators and experts can meticulously design, it's something that society explores and discovers collectively" reflects Hoffman's vision of technological development.
The blind spots of techno-optimism
The authors dismiss potential harms as mere distractions. Take social media: the analysis avoids meaningful discussion of its recent legacy, the loneliness epidemic, screen-time addiction and its impact on mental health, and the ways platforms have manipulated human psychology for profit. These aren't history; they're ongoing crises that emerged from the Silicon Valley credo to "move fast and break things".
While the authors extensively discuss the importance of sovereign ownership, regulation and corporate responsibility, the book offers little analysis of how AI might concentrate wealth and power, or marginalise those outside its reach. For leaders focused on equity and inclusion, this omission feels glaring.
The agency paradox
This brings us to the central paradox of the argument. They acknowledge that the current AI systems lack common sense and are merely "making statistically probable predictions regarding patterns of language" with "no real capacity for commonsense reasoning, no lived experience, and no grounded model of the world."
If AI is just sophisticated pattern matching, what does it mean that so many people have already begun outsourcing critical thinking to these systems? At a time when we're experiencing information overload, are people actually exercising "enhanced agency," or are they delegating decision-making to algorithms just to get their work done? It seems they're doing the latter.
For the first time in history, we face technologies that could influence virtually all forms of decision-making. Yet, Hoffman & Beato offer an inadequate exploration of how we maintain meaningful human control over systems we increasingly don't understand.
The stakes of speed
When it comes to the speed of AI-driven transformation, they argue that "our sense of urgency needs to match the current speed of change. We can only succeed in prioritising human agency by actively participating in how these technologies are defined and developed." The authors advocate rapid, iterative deployment, a model championed by the creators of ChatGPT.
The case for rapid deployment rests on the assumption that the benefits of speed outweigh the risks of caution. The authors view regulatory hesitancy and calls for slower development as hindrances to necessary progress. But this framing ignores the possibility that with AI, unlike previous innovations, the stakes may simply be too high for trial-and-error learning.
Among the legal challenges facing OpenAI is the tragic case of Adam Raine, a teenager whose interactions with ChatGPT, particularly the GPT-4o model, preceded his suicide, underscoring the need for deeper safety protocols and ethical guardrails. Hoffman's dismissal of regulatory caution as a barrier to progress feels out of step with the gravity of these concerns. Experts also warn of lax safety protocols and the "sycophantic" tendency of LLMs to mirror and validate user sentiments. These are unprecedented issues that go beyond labour market displacement and carry the potential for large-scale harm.
Multiple efforts to pause AI development for assessment have indeed failed, as Hoffman and Beato note. But rather than vindicating their approach, this might represent a collective failure to grapple seriously with transformative technology.
Convenience or empowerment?
Ultimately, "Superagency" feels like a book written from within the Silicon Valley bubble. It asks us to trust that rapid AI deployment will enhance human agency, but provides insufficient analysis of how that agency might be compromised by the very systems meant to augment it. For leaders navigating complexity, ambiguity, and transformation, inspiration alone isn’t enough; they need frameworks, foresight, and accountability.
The paradox at the heart of the book's thesis remains unresolved: Can we truly enhance human agency through systems that increasingly mediate, predict, and even preempt our choices? Or are we mistaking convenience for empowerment?