800 leaders sign statement to halt superintelligent AI research

More than 800 public figures urge halt to development of advanced AI until safety and public consent are assured.

Hundreds of public figures — including Nobel laureates, former military leaders, artists and members of British royalty — have signed a statement calling for a global ban on research that could lead to computer superintelligence, a yet-to-be-achieved stage of artificial intelligence that they warn could pose an existential risk to humanity.


The joint statement, organised by the Future of Life Institute, urges “a prohibition on the development of superintelligence” until there is a “broad scientific consensus that it will be done safely and controllably” and “strong public buy-in.” NBC News first reported that over 800 signatories had endorsed the call by Tuesday night.


The diverse group of signatories includes Nobel Prize-winning AI pioneer Geoffrey Hinton, former US Joint Chiefs of Staff chairman Mike Mullen, the Duke and Duchess of Sussex, Apple co-founder Steve Wozniak, Virgin Group’s Richard Branson, and Nobel-winning physicist John Mather. Artists such as rapper will.i.am and political figures including Steve Bannon and Susan Rice have also joined the appeal.


Anthony Aguirre, executive director of the Future of Life Institute and a physicist at the University of California, Santa Cruz, told NBC News that the acceleration of AI research has outpaced public understanding and consent. “We’ve, at some level, had this path chosen for us by AI companies and the economic system driving them, but no one’s really asked, ‘Is this what we want?’” Aguirre said. He added that the discussion must involve not just corporate leaders but also policymakers in the United States, China, and other major economies.


The Future of Life Institute, a nonprofit known for its work on large-scale risks such as nuclear weapons and biotechnology, was co-founded in 2015 with early backing from Elon Musk. The organisation said its current funding comes from donors including Ethereum co-founder Vitalik Buterin and that it does not accept contributions from Big Tech or AI developers.


The statement reflects rising anxiety among scientists and policymakers over the rapid advances in AI systems being developed by firms such as OpenAI, Google and Meta. These companies are investing billions of dollars into powerful models and data centres, while openly pursuing artificial general intelligence — technology capable of performing intellectual tasks at human level or beyond.


OpenAI chief executive Sam Altman said last month that he would be “surprised if superintelligence doesn’t arrive by 2030.” Meta’s Mark Zuckerberg claimed earlier this year that such capabilities are “now in sight,” while Musk has described the rise of digital superintelligence as “happening in real time.” None of these executives signed the new statement.


Public opinion remains sharply divided. An NBC News poll earlier this year found that 44 per cent of Americans believe AI will improve their lives, while 42 per cent think it will make their lives worse.


Aguirre said the group’s goal was to broaden debate and seek international cooperation, perhaps modelled on treaties governing nuclear weapons or biological research. “This is an issue for all of humanity,” he said. “We want this to be social permission for people to talk about it.”


As governments and global regulators grapple with AI’s economic promise and ethical perils, the call for a moratorium highlights the growing tension between innovation and existential caution — and the question of whether humanity can, or should, control the pace of its own technological future.