
AI's 'Existential Threat': Steve Wozniak, Prince Harry, and 1000+ Signatories Demand Halt to Superintelligence Research

A Global Call to Pause: Esteemed Figures Urge Halt on Superintelligence Development

Hundreds of prominent figures, including pioneers of early artificial intelligence, have jointly signed an open letter demanding a moratorium on the development of Artificial General Intelligence (AGI). The statement, titled the "Statement on Superintelligence," advocates for an immediate halt to AGI research and deployment until a broad scientific consensus can be achieved, ensuring that advancements in AI proceed safely, under rigorous control, and with substantial public backing.

The initiative gains significant weight from recent findings by the Future of Life Institute (FLI). A survey conducted by FLI reveals a stark public sentiment: a mere 5% of US citizens favor rapid, unregulated AI innovation. Conversely, a clear majority, exceeding 73%, expressed strong support for stringent AI regulation. Furthermore, 64% of respondents believe that the development of AGI should be prohibited until its safety and controllability are unequivocally proven.

A Diverse Coalition United by Concern

The signatories represent an extraordinary spectrum of influence and expertise. Among them are Apple co-founder Steve Wozniak, Virgin founder Sir Richard Branson, media personalities like Steve Bannon and Glenn Beck, and entertainers such as Joseph Gordon-Levitt, Prince Harry, and Meghan Markle. The letter also bears the signatures of formidable figures from the military and religious spheres, including retired US Navy Admiral Mike Mullen, former Chairman of the Joint Chiefs of Staff, and Father Paolo Benanti, an AI advisor to Pope Francis. The scientific community is powerfully represented by Turing Award laureate Yoshua Bengio and Nobel laureate Geoffrey Hinton, alongside a vast consortium of AI experts and researchers.

“Newer AI systems will be able to surpass most human work in all cognitive tasks within a few years. These advancements could pave the way for a prosperous future, but they also carry profound risks. To navigate the path to superintelligence safely, we must scientifically determine how to design AI systems that are fundamentally incapable of harming humans, whether through misalignment or malicious use. We also need to ensure the public has significantly more say in the decisions that will shape our collective future,” emphasized Professor Yoshua Bengio of the University of Montreal.

Crucially, the letter underscores that catastrophic consequences stemming from AI do not hinge solely on the advent of AGI. Even current generative AI models, far from reaching superintelligence, are already profoundly altering educational paradigms, flooding the internet with misinformation and low-quality content, and contributing to a rise in mental health challenges among users. The urgency is palpable, as these technologies, even in their nascent stages, demonstrate a capacity for widespread disruption.

A Stark Divide and a Public Plea

The signatories' call for caution stands in stark contrast to the ambitions of several leading AI companies. Notably absent from the letter's endorsements are key figures such as OpenAI CEO Sam Altman, DeepMind co-founder and Microsoft's Head of AI Mustafa Suleyman, Anthropic CEO Dario Amodei, White House AI and crypto advisor David Sacks, and xAI founder Elon Musk. This divergence highlights a critical debate within the AI community and beyond.

“Many people want powerful AI tools for science, medicine, productivity, and other purposes. But the path that AI corporations are taking, striving to create human-surpassing AI designed to replace people, is entirely misaligned with public needs, scientists’ opinions on safety, and religious leaders’ opinions on morality. None of the people developing these AI systems have asked humanity if this is acceptable. We have asked – and the answer is that it is not,” stated FLI co-founder Anthony Aguirre.

The open letter serves as a potent reminder that the trajectory of artificial intelligence is not merely a technical pursuit but a societal one, demanding thoughtful deliberation and broad public engagement. The collective voice of these influential individuals is a powerful plea for responsible innovation, urging a pause to ensure that the future of AI aligns with the well-being and values of humanity.

This post was written using materials from Futurism.
