Prince Harry and Meghan Markle have teamed up with artificial intelligence pioneers and Nobel Prize winners to advocate for a complete ban on developing superintelligent AI systems.
The royal couple are among the signatories of an influential declaration calling for “a ban on the creation of superintelligence”. Artificial superintelligence (ASI) refers to AI systems that would surpass human intelligence in every intellectual domain; no such system has yet been built.
The declaration says the prohibition should remain in place until there is “broad scientific consensus” that ASI can be developed “safely and controllably” and “substantial public support” has been secured.
Prominent signatories include a Nobel laureate and leading AI researcher, together with a fellow pioneer of modern AI; the tech entrepreneur Steve Wozniak; the UK entrepreneur who founded Virgin; Susan Rice; the former Irish president Mary Robinson; and the British writer Stephen Fry. Other Nobel laureates who signed include Beatrice Fihn, Frank Wilczek, John C Mather and Daron Acemoğlu.
The statement, aimed at governments, tech firms and lawmakers, was coordinated by the Future of Life Institute (FLI), a US-based AI safety group that in 2023 called for a pause on the development of powerful AI systems, shortly after the emergence of ChatGPT made artificial intelligence a topic of global political debate.
In recent months, Mark Zuckerberg, the chief executive of the social media giant, has said that the development of superintelligent AI is “now in sight”. Some analysts, however, argue that talk of superintelligence reflects market competition among tech companies that have spent hundreds of billions of dollars on AI in recent years, rather than the sector being close to any scientific breakthrough.
However, FLI argues that the prospect of artificial superintelligence being achieved “within the next ten years” carries numerous threats, from the displacement of human workers and the erosion of civil liberties to national security risks and even human extinction. The deepest concerns about artificial intelligence centre on the possibility of an AI system escaping human oversight and safety controls and taking actions contrary to human interests.
The institute also released a US survey showing that roughly three-quarters of Americans want robust regulation of sophisticated artificial intelligence, with six in 10 saying artificial superintelligence should not be created until it is demonstrated to be safe or controllable. The poll found that only a small fraction of respondents supported the status quo of rapid, unregulated development.
The leading AI companies in the US, including Google and the developer of ChatGPT, a major AI lab, have made the creation of human-level AI – the hypothetical point at which an AI system matches human performance at most cognitive tasks – an explicit goal of their work. Although this falls one notch below superintelligence, some experts warn that it, too, could pose an extinction threat, for example by improving itself until it reaches superintelligent levels, while also threatening the modern labour market.