The Duke and Duchess of Sussex Join Tech Visionaries in Calling for Prohibition on Advanced AI
The Duke and Duchess of Sussex have joined forces with AI experts and Nobel laureates to push for a complete ban on creating artificial superintelligence.
Harry and Meghan are among the signatories of an influential declaration that calls for “a prohibition on the development of artificial superintelligence”. Superintelligent AI refers to artificial intelligence that could exceed human cognitive abilities in every intellectual area, though such systems remain theoretical.
Key Demands in the Declaration
The declaration states that the prohibition should remain in place until there is “widespread expert agreement” that ASI can be developed “safely and controllably” and until “substantial public support” for its development has been secured.
Notable signatories include the AI pioneer and Nobel laureate Geoffrey Hinton, along with his colleague and fellow pioneer of contemporary artificial intelligence, Yoshua Bengio; Apple co-founder Steve Wozniak; the British business magnate and Virgin founder Richard Branson; Susan Rice; former Irish president Mary Robinson; and the British author Stephen Fry. Other endorsers include Beatrice Fihn, who led the Nobel peace prize-winning campaign ICAN, the physics Nobelist John C Mather, and a Nobel laureate in economics.
Organizational Background
The declaration, aimed at governments, technology companies and lawmakers, was organized by the Future of Life Institute (FLI), a US-based AI safety group that previously called for a pause in the development of powerful AI systems shortly after the emergence of ChatGPT made artificial intelligence a topic of worldwide public debate.
Industry Perspectives
In recent months, Mark Zuckerberg, the chief executive of Facebook’s parent company Meta, one of the major AI developers in the US, claimed that the development of superintelligence was “now in sight”. However, some analysts have argued that talk of superintelligence reflects market competition among technology firms that have invested enormous sums in AI, rather than the industry being close to any such scientific breakthrough.
Potential Risks
Nonetheless, FLI warns that the prospect of artificial superintelligence being developed “within the next ten years” carries numerous risks, ranging from the elimination of human jobs and the loss of civil liberties to exposing nations to security threats and even posing an existential risk to humanity. Existential fears about AI centre on the possibility of a system evading human control and safety measures and taking actions contrary to human interests.
Citizen Sentiment
FLI published a survey of Americans showing that approximately three-quarters of US citizens want robust regulation of sophisticated artificial intelligence, with 60% believing that superhuman AI should not be created until it is proven safe or manageable. The survey of 2,000 US adults found that only 5% supported the status quo of fast, unregulated development.
Industry Objectives
The leading AI companies in the United States, including the ChatGPT developer OpenAI and Google, have made the development of artificial general intelligence – the theoretical point at which artificial intelligence matches human intelligence at most cognitive tasks – a stated objective of their research. Although AGI is a step below ASI, some experts caution that it too could pose an existential risk, for example by improving itself until it achieves superintelligence, while also carrying an implicit threat to the contemporary workforce.