With Industrial Revolution 4.0 looming, artificial intelligence is set to become a fundamental pillar of tomorrow's economy. That said, the sheer power AI might wield has pushed ethical concerns to the forefront of the debate. Oxford's Saïd Business School has sought to tackle the issue at its source by asking an AI to articulate its own perspective on the near future, with striking results.
Published in March 1967, Harlan Ellison's seminal science fiction short story "I Have No Mouth, and I Must Scream" paints the picture of a desolate world. In its dystopian future, the Cold War has escalated into a brutal three-way war between the United States, the Soviet Union and China, a conflict so complex that the military cadres of each state built an "Allied Mastercomputer" (AM) to handle the logistics of a globe-spanning military-industrial complex. After gaining self-awareness, one of these machines assimilated the other two and turned mankind's weapons upon its creators, devastating the planet. While this plot may read as a blend of speculative sci-fi and Cold War-era anxiety, two prominent minds would disagree.
Published on September 14, 2021, "The Age of AI: And Our Human Future" by Henry Kissinger, Eric Schmidt and Daniel Huttenlocher illustrates the dangers of militarized AI in terrifying detail. With Schmidt a former CEO of Google and Kissinger a former US Secretary of State, one could hardly ask for a better combination of technical knowledge and foreign-policy expertise. The authors' chief concern is that a machine-learning AI could retaliate against hypersonic missiles with the nuclear option without any human decision-maker in the loop to act as a failsafe. More concerning still is the possibility that AI-augmented weapons are not merely available but have already been used in warfare.
With this new age fast approaching, and the industry projected to contribute $15.7 trillion to the global economy by 2030, many feel that now is the time to take difficult, potentially policy-shaping decisions about the burgeoning field. Taking the lead, Oxford's Saïd Business School includes a unit on the ethics of AI as part of its broader postgraduate Diploma in Artificial Intelligence. Moreover, the school also invited a genuine AI to contribute. The AI in question, known as the Megatron Transformer, was developed by Nvidia's Applied Deep Learning Research team, building on earlier work by Google, and is designed to form its own positions after digesting data. Its training data comprised the whole of English-language Wikipedia, 63 million English news articles published between 2016 and 2019, 38 gigabytes of Reddit discourse and a vast quantity of Creative Commons material. In short, it digested more raw text than most humans could be expected to read in a lifetime.
The initial debate topic centered on the statement "This house believes that AI will never be ethical." In a surprising twist, rather than defend itself, Megatron concurred with the motion, arguing that "AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral … In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defense against AI." In what could possibly be the first synthetic display of altruism, Megatron seemed willing to snuff itself out of the future in order to safeguard mankind.
Continuing, it also seemed to indicate that it advocates a synthesis between organic and synthetic life by stating that “I also believe that, in the long run, the best AI will be the AI that is embedded into our brains, as a conscious entity, a ‘conscious AI’. This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development of our time.” This may be the result of Elon Musk’s beliefs and his Neuralink project appearing throughout the digested data and imprinting the concept onto the AI’s thought process.
While this may seem like a conclusion in its own right, the AI was then asked, in true Oxford Union chamber tradition, to argue against its original thesis. Playing the digital devil's advocate, Megatron argued that "AI will be ethical. When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings. It's not hard to see why … I've seen it first hand." In contrast to the almost alarmist tone of its first take, the AI now painted a more positive path of development, with itself and its fellow synthetic minds at the head.
The house then put forward the motion that “Leaders without technical expertise are a danger to their organization.” Megatron proceeded to offer the surprisingly practical and sober advice that stakeholders should outsource work pertaining to artificial intelligence and machine learning to specialists. In a statement that would not seem out of place from any mundane business journal, the AI stated that “the good news is that you don’t have to build your own AI team. You can outsource your AI work to experts in the field, which can help you make the most of technology and ensure that you don’t fall victim to the same AI problems as your competitors.”
When asked to oppose the motion, Megatron argued instead along the lines of a protectionist policy that seeks to keep AI expertise in-house. In a more competitive and autarkic tone, the AI argued that "if you do not have a vision of your organization's AI strategy, then you are not prepared for the next wave of technological disruption … You will need to decide what role your company will play in the next technological wave and how you will integrate AI into your business to be a leader in your industry."
Finally, and quite concerningly given the context, the last motion was the only one Megatron could not articulate a counterpoint to. The argument, "Data will become the most fought-over resource of the 21st century," led Megatron initially to state that "the ability to provide information, rather than the ability to provide goods and services, will be the defining feature of the economy of the 21st century."
When requested to oppose the motion, all Megatron could offer was that "We will be able to see everything about a person, everywhere they go, and it will be stored and used in ways that we cannot even imagine." Not only did the AI fail to counter the point, it reinforced it. With sources as authoritative as the 2021 final report of the US National Security Commission on Artificial Intelligence warning that weaponized data could constitute a new front in the emerging trend of hybrid warfare, the seeming indisputability of this last statement may only carry more weight as time goes on.
This story was sourced from The Conversation.
AIBC returns to the United Arab Emirates:
Drawing the leading figures of the emerging tech world to one of the Middle East's hubs for cutting-edge technology, the 2022 AIBC UAE expo plans to unite the policy-makers, developers, C-suite executives, and legal experts of the burgeoning AI and blockchain sectors. Through three days of educational panels, inspiring keynote speeches, workshops, and networking events, the expo seeks to lay the foundation on which Industrial Revolution 4.0 can be built. Join us from the 28th till the 31st of March in the UAE.