Industry experts warn of the existential threat from AI
Multiple figures responsible for the development of artificial intelligence (AI) systems at the upper echelons of the emerging industry have signed an alarming letter warning of the science-fiction-style catastrophe that today's rapidly advancing AI systems could pose.
Industry-leading innovators such as OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis have nonetheless shown no inclination to slow or halt development of the technology, despite the claims contained in the letter they have jointly signed.
This is due to several factors, only some of which the public is currently aware of: the massive pressure on developers, both private and governmental, to stay relevant and competitive with one another, and the growing daily demand for AI as the world's industries come to lean ever more heavily on the technology's abilities.
The letter, which carries over 350 signatures, was published by the not-for-profit organisation Center for AI Safety (CAIS) and states that AI, whether by accident or by deliberate action of its own, could pose an existential threat to humanity.
Prior to the release of the letter, Sam Altman, whose company OpenAI created the seminal AI chatbot ChatGPT, had been making numerous claims about the great dangers AI could pose:
“We understand that people are anxious about how it can change the way we live. We are, too. If this technology goes wrong, it can go quite wrong.”
The ideas in the letter give voice to some of the most intense fears, depicting numerous, wide-ranging consequences, and although there is certainly an element of truth to many of them, one can't help but feel that much has been exaggerated for a more urgent effect. One statement in particular illustrates this:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Interestingly, only two of the three so-called “godfathers of AI”, Geoffrey Hinton and Yoshua Bengio, signed the letter, which even highlights this fact in criticism of the third godfather, Yann LeCun, and his employer Meta for their abstention.
The abstention is probably a reaction to this fear-mongering approach of convincing regulators that AI regulation is urgently necessary by using scenarios and ideas more removed from reality than the full picture warrants.
The far-reaching benefits of AI mean that a monumental, genuinely revolutionary shift in daily life is on the cards. In light of this, and of the specific risks the technology is claimed to carry, legislation must follow innovation and not the other way around.
Regardless of how devastating the risks may or may not prove, there is a clear consensus that significant harms will follow if they are not attended to, and that time is a factor of paramount importance.
The European Commission has already taken tentative steps towards regulating AI, to an unfortunately unceremonious response from Altman, who described the proposals as over-regulation and accompanied his terse statement with a threat that OpenAI could leave Europe.
Much back-pedalling must have occurred with the publication of this letter, and many now hope a fairer, more balanced approach can be agreed upon: one that secures AI development while still considering the safety of the public and global institutions.
To that end, Altman will even be attending several meetings with leaders from numerous regulatory bodies, including the European Commission and other European Union institutions.
For the latest AI and tech updates visit AIBC News.