The Duke and Duchess of Sussex Align With Tech Visionaries in Demanding Prohibition on Superintelligent Systems
The Duke and Duchess of Sussex have teamed up with artificial intelligence pioneers and Nobel laureates to push for a complete ban on creating artificial superintelligence.
The royal couple are among the signatories of an influential declaration that demands “a ban on the development of artificial superintelligence”. Artificial superintelligence (ASI) refers to artificial intelligence that would surpass human abilities in all cognitive tasks; such systems remain theoretical.
Key Demands in the Declaration
The declaration insists that the ban should remain in place until there is “widespread expert agreement” that ASI can be developed “with proper safeguards” and until “substantial public support” has been achieved.
Notable signatories include AI pioneer and Nobel laureate Geoffrey Hinton; his fellow “godfather” of contemporary artificial intelligence, another leading AI expert; Apple co-founder Steve Wozniak; the British business magnate who founded Virgin; former US national security adviser Susan Rice; former Irish president Mary Robinson; and a British author and public intellectual. Other Nobel laureates who signed include a peace prize winner, a physics laureate, John C Mather and Daron Acemoğlu.
Behind the Movement
The declaration, aimed at national leaders, technology companies and lawmakers, was coordinated by the FLI organization, a US-based AI safety group that previously called for a pause in the development of powerful AI systems, shortly after the launch of conversational AI chatbots made artificial intelligence a worldwide public talking point.
Tech Sector Views
In recent months, the chief executive of Facebook's parent company, Meta, one of the leading US technology firms, stated that superintelligent AI was “now in sight”. However, some experts have suggested that such talk of superintelligence reflects competitive positioning among technology firms that have recently invested enormous sums in artificial intelligence, rather than the sector being close to any such scientific breakthrough.
Potential Risks
Nonetheless, the organization warns that the possibility of ASI being achieved “in the coming decade” carries numerous risks, from the elimination of human jobs and the loss of civil liberties, to national security threats, to the potential extinction of humanity. The deepest concerns about artificial intelligence centre on the possibility of an AI system evading human control and protective measures, and setting in motion events contrary to human interests.
Public Opinion
The institute released a US national poll showing that about 75% of Americans want robust regulation of sophisticated artificial intelligence, with 60% believing that superhuman AI should not be developed until it is demonstrated to be safe or controllable. The poll also found that only a small fraction of respondents supported the status quo of rapid, unregulated development.
Industry Objectives
The top artificial intelligence firms in the US, including a major AI lab behind a leading chatbot and the search giant, have made the creation of human-level AI – the hypothetical state in which AI matches human capability at most cognitive tasks – a stated objective of their research. Although this falls slightly short of ASI, some specialists warn it could still carry an existential risk, for example by improving itself until it achieves superintelligence, while also posing a fundamental threat to the modern labour market.