Prince Harry And Meghan Markle Join Global Effort To Halt Development Of Superintelligent AI



Prince Harry and Meghan Markle, the Duke and Duchess of Sussex, have joined others in signing a statement calling for “a prohibition on the development of superintelligence” for the time being.

Future of Life Institute (FLI), an AI safety group based in the United States that called for a pause on advanced AI development in 2023, organized the statement, according to The Guardian. It is aimed at tech companies, governments, and lawmakers, and includes signatures from AI pioneers and Nobel laureates.

While artificial general intelligence (AGI) refers to systems that can match human intelligence in most cognitive tasks, artificial superintelligence (ASI) describes hypothetical systems that would exceed human intelligence.

“The future of AI should serve humanity, not replace it. The true test of progress will be not how fast we move, but how wisely we steer,” Harry wrote in a comment alongside his signature.

The statement urges that the ban on ASI development remain in place until there is “broad scientific consensus that it will be done safely and controllably,” and there is “strong public buy-in.”

Other signatories include Yoshua Bengio, professor of computer science, Turing Laureate, and one of the world’s most cited scientists; Apple co-founder Steve Wozniak; and Susan Rice, former U.S. national security adviser under former President Barack Obama.

As of this writing, the statement has garnered over 28,000 signatures.

FLI warns that ASI emerging “in the coming decade” carries risks including mass job loss, erosion of civil liberties, national security threats, and even human extinction, The Guardian reports.

The organization released the findings of a national U.S. poll of 2,000 adults, which shows that about three-quarters of Americans favor strong regulation of advanced AI. Six in 10 are against creating superhuman AI until it is proven safe and controllable, while just 5% support continuing fast, unregulated development.

Existential concerns center on the fear that a superintelligent system could evade human control, act against human interests, and even harm humans.

Experts also warn that leading AI companies, including OpenAI and Google, could pose existential risks if their systems become capable of self-improvement toward superintelligence, and could in the meantime threaten the stability of the modern labor market.

Mark Zuckerberg Says Superintelligence Development Is Near

In a statement released on July 30, 2025, Meta CEO Mark Zuckerberg — whose company is one of the leading U.S. developers of AI — said the creation of superintelligence was “now in sight,” adding that he is “extremely optimistic” it will help humanity speed up its pace of progress.

“In some ways this will be a new era for humanity, but in others it’s just a continuation of historical trends,” Zuckerberg wrote.

“As recently as 200 years ago, 90% of people were farmers growing food to survive. Advances in technology have steadily freed much of humanity to focus less on subsistence and more on the pursuits we choose,” he added. “At each step, people have used our newfound productivity to achieve more than was previously possible, pushing the frontiers of science and health, as well as spending more time on creativity, culture, relationships, and enjoying life.”
