Generative AI has exploded onto the scene and captured the imaginations of people from all walks of life the world over. Its remarkably powerful abilities, now available at mass scale, have generated a flurry of both excitement and scepticism among a rapidly growing audience.
The impressive list of possibilities and applications for modern generative AI is something to behold, comparable to what science fiction has long promised.
However, as with many a remarkable innovation, striking a balance between creating the right environment for a developmental ecosystem and mitigating the multitude of potential risks is a task that few would envy.
The ADPPA: a sluggish start?
Starting with an examination of the risks involved, the most pressing issue, and one not limited to AI, is the broader debate around data protection and privacy. In 2022, the United States House Energy and Commerce Committee passed the American Data Privacy and Protection Act (ADPPA).
The act was intended to strengthen regulation protecting data privacy rights, and in doing so laid seminal legislative groundwork for regulating the then-emerging generative AI systems. It won near-unanimous support from both sides of the congressional aisle, along with encouragement from AI producers who believe such rules will help their innovations build trust.
The trustworthiness of generative AI
The trustworthiness of most tech and modern media companies has taken several severe blows in the not-so-distant past that continue to affect the sector to this day, to such a degree that the issue united a legislature defined by its perpetual opposition.
These data privacy concerns predate the emergence of generative AI at the level we interact with today, and they have since been not merely sustained but heightened.
Generative AI platforms, most notably ChatGPT and Midjourney, are indispensably reliant on the continuous collection of data from their users and on the massive swathes of information present across the internet.
Generative AI for humans?
In a debate at the recent AIBC Eurasia Summit, Jayden Sage, CEO of the Global Crypto Council, argued that the infrastructure rapidly growing up around generative AI platforms is performing a diversion: the platforms offer phenomenal capabilities never before seen in a human creation, and in exchange collect data at a microscopic level, down to every user keystroke, in and out of context.
Sage fears that, given the vast range of applications for such a sum of data, and generative AI's capacity to be weaponised against humans, whether by other humans or in pursuit of an AI's own potential goals, these systems could come to overwhelm us.
There is also the issue of whose hands this data will end up in. The question is made even more complex by machine learning: these systems change and improve their algorithms and capabilities with every input and every answer returned.
A global squabble
This has national and international implications, with news breaking recently of US President Joe Biden's plans to divert a diverse range of technological investment, including investment affecting AI development, away from the country's transcontinental rival, China. If these plans meet expectations, they will maintain and secure the United States' global edge.
The politically strategic importance of AI cannot be overstated either, with several nations looking to adopt AI in a variety of forms. Most notably, Japan has begun exploring the possibility of AI running governmental administrative functions, and Israel has made significant steps towards integrating AI into its already formidable battlefield operations.
Propagation of propaganda
Another argument concerns generative AI's programming being manipulated and moulded into a tool for a highly potent form of propaganda. Prominent figures, Elon Musk in particular, have raised concerns that certain developers are building politically correct failsafes that diminish the value of chatbots' responses by omitting key details from their answers.
So much so that Musk announced he will launch his own “TruthGPT” to counteract this discouraging development. This, of course, raises even more questions about autonomy, which perhaps can only be definitively answered by stringent and effective regulatory action.
Ethical implications have also arisen with platforms such as Hereafter AI, which can create AI-operated avatars of the deceased, generated from stories, mannerisms and information recorded prior to their passing. This raises a dizzying number of ethical considerations, regarding both the deceased and those who cared for them in life.
The effects of this were felt in real time when the family of F1 legend Michael Schumacher was forced to consider legal action after a German magazine published an interview featuring AI-generated responses, despite the racer still being in a coma at the time.
Further ethical considerations apply to the living, given the integration of AI into an array of social applications, including the large social media platforms and dating apps such as Tinder and Hinge.
When regulating generative AI, governments and legislatures must consider people at an individual level: how vulnerable they may be to the influx of AI, not just professionally but also personally and emotionally. Many worry that, left without regulatory protections, individuals could withdraw from society on many levels and come to rely more heavily on AI companionship than on other humans.
What does the legislation look like now?
These are just a few of the factors that legislators around the world will have to consider when implementing AI regulation. The rapid, multifaceted growth of generative AI has pushed legislators towards a regulatory angle focused on nurturing development within a practical and comprehensive framework, one that addresses the aforementioned issues without destroying AI's progression.
Craig Albright, BSA vice president for U.S. government relations, stated recently that:
There is a growing consensus that what we are suggesting is an important element of guardrails for AI
This is far easier said than done, as an inescapable feature of the generative AI landscape is fierce and rapidly escalating competition, so rife that it plagues even those at the very top of the AI mountain.
The severity of the contest
Sundar Pichai, CEO of Google's parent conglomerate, Alphabet, has spoken at length about the pressure AI innovators and companies have felt of late to keep up with the trends, and about the undoubtedly significant consequences of falling behind.
Improving pace and accuracy, expanding the methods of harnessing AI, and integrating it into as many niches within existing technology as possible are all areas in which nations and multinational companies cannot afford to drop the ball, or they risk being left in the dust and, more importantly, at the mercy of direct rivals and competitors, both politically and financially.
Numerous approaches have been suggested, with many more likely to follow, as to how precisely this balancing act should be performed.
Approaches to regulate generative AI
Italy, to begin with, has placed a ban on some prominent AI systems such as OpenAI's ChatGPT, reflecting the government's inability to cope with either the risks involved or the rapid, uncontrolled development we are witnessing today.
Elsewhere, an open letter has been penned by figures with key expertise in the industry, including Elon Musk, urging a halt to all AI development in order to relieve the competitive pressure on those responsible for generative AI while adequate protections and legislation are developed, both by AI-building organisations and by regulatory bodies.
The call was promptly ignored by the vast majority of those involved, most probably because of the distrust bred by such fierce competition between the organisations with a hand in the pot.
The United Kingdom has shown an eagerness similar to that which began in powerhouse nations such as the US and China, recently announcing an initial fund of a sizable £100 million intended to foster public trust in both commercially and state-developed generative AI.
UK Prime Minister Rishi Sunak had this to say on the matter:
By investing in emerging technologies through our new expert taskforce, we can continue to lead the way in developing safe and trustworthy AI as part of shaping a more innovative UK economy
More radical still, some governments, such as India's, have dismissed regulatory upgrades entirely, in the hope of fostering and harnessing the uncontrolled, and thus highly scalable, capabilities of an economy intrinsically laced with generative AI.
A virtual challenge with real consequences
These are a mass of regulatory challenges that legislators need to solve, and quickly. There is no slowing generative AI, and balancing its pros and cons will be a taxing and time-consuming task. The world is changing rapidly, and we as humans must protect and support each other, balancing the progression of society with shielding as many people as possible from the risks posed by the speed and nature of these developments.
Unfortunately the response, although not slow by many measures, has certainly not been fast enough at this stage. One gets the impression that some lessons, however unnecessary, will have to be learnt before effective and potent regulation is enacted. These lessons could be quite painful: although progress has been rapid, the legislation protecting society still has a long way to go to catch up.
For the latest AI and tech updates visit AIBC News.