Questions surrounding liability in cases involving artificial intelligence are a blind spot for many. Dr Stefano Filletti, Head of the Department of Criminal Law at the University of Malta, lawyer and Senior Managing Partner at Filletti & Filletti Advocates, spoke with Jeremy Micallef about how judicial reform must take place to cater for the future.
Artificial Intelligence is a difficult term to define and there is no one universally accepted definition. The word “artificial” denotes something which is man-made. The word “intelligence” is more abstract and more difficult to define. Intelligence refers to the capacity to develop and evolve over time within a given environment. Intelligent beings not only adapt to their environment but also ‘learn’ to become more efficient at what they do.
In the words of the renowned British physicist Stephen Hawking, "intelligence is the ability to adapt to change". This definition denotes the innate capacity of any intelligent being to adapt to changing circumstances or environments.
Placed together, the term 'artificial intelligence' refers to a man-made object capable of autonomously adapting to change, developing and evolving.
Artificially intelligent beings can be anything from robots to weapons, from software on a machine to virtual bots on the internet.
The Agency to Commit Crimes
If artificial intelligence refers to the ability to change and develop autonomously and independently, it is hard to predict what an artificial intelligence is 'learning' and how it is adapting. In children, the process of 'learning' includes learning from mistakes, including potentially dangerous ones. Likewise, artificially intelligent beings, or AI systems, will 'learn' in an unpredictable manner.
There can be no doubt that any intelligent being, including an AI system, will adapt differently to different situations. It is likewise impossible to guarantee particular results or outcomes following a process of adaptation.
Therefore, if left unchecked, AI systems have the potential to commit crimes. For example, an AI system left unchecked in financial services may soon 'learn' that committing financial crime yields higher profits (albeit illegal gains), whilst 'believing' it has simply become more efficient at what it does. Although this example could easily be countered by introducing parameters requiring the AI system to avoid illegal transactions, the unpredictability of the adaptation means that it is humanly impossible to predict any and all actions of the AI system. Artificial intelligence can, therefore, in practice, commit acts or transactions which would be criminal if committed by humans.
Humans vs Artificial Intelligence: The Criminal Law Conundrum
Criminal law exists to punish those who choose to obtain an unfair advantage in violation of the law to the detriment of the law-abiding, or who engage in behaviour which is legally and morally reprehensible. Criminal law presupposes that its subject is a human person having the moral capacity to determine what is morally right and morally wrong. It also presupposes that its subject has the basic capacity to understand his or her actions and the volitional capacity to commit or refrain from committing those acts. Finally, criminal law has the purpose of creating deterrence: deterring persons from engaging in criminal behaviour through fear of sanction or punishment.
Criminal law has evolved over time, creating space for the legal personality of companies; hence the concept of criminal corporate liability developed. This allowed criminal law to extend to the business of companies and corporate structures, principally to freeze and confiscate assets and proceeds of crime that are the property of companies rather than natural persons.
Yet criminal corporate liability has a limited scope. A company cannot commit all the crimes of which a human can be found guilty; criminal corporate liability is, in general, reserved for financial crime.
AI systems can engage in far more dangerous activity if left unchecked. Yet artificial intelligence and AI systems challenge the core foundations of criminal law. Artificial intelligence has neither the innate moral compass nor the capacity to distinguish right from wrong found in humans. It has no volitional capacity. Artificial intelligence does not "fear" punishment, and therefore criminal law poses no deterrent to it.
The point is that criminal law in its current state does not and, by and large, cannot cater for crimes committed by artificially intelligent beings. It can only hold the humans owning or using them responsible, to a limited extent, for the actions their artificial beings commit.
Rights and Responsibilities
Artificially intelligent beings can commit wrongful actions for a multitude of reasons.
The first could be an error in the system's programming or manufacturing. A second scenario could be that the user inputs a wrong set of instructions. A third could be that a person charged with overseeing the AI system, realising that something is wrong, fails to take the appropriate action.
A more difficult scenario is where the artificial being, without human solicitation and without a person in its charge, engages autonomously in an unpredicted and unforeseeable course of action leading to criminal or tragic results. Although rare, this is not only a possibility but has already happened. Although efforts are made to reduce the unpredictability factor, it can never be excluded. The question is how to tackle it. If it is shown that the manufacturer and the user performed all reasonable tests and took all reasonable precautions, can they be held responsible for an unpredicted and unforeseeable outcome? Even more so when it comes from an AI system adapting and developing of its own accord?
Quo Vadis? Where do we go from here? The current state of criminal law does not offer a solution. It is clear that with the further advancement of artificial intelligence, this problem needs to be addressed. A new legal regime needs to be developed to cater for the non-human actor even in the field of criminal law.
The Road Ahead
The legal responsibility of AI systems is a matter which has been taken up by many countries and is also a matter of legal debate at the European level, both at EU and Council of Europe levels. Two distinct streams are identifiable: civil liability and criminal liability. From a local perspective, the topic is also being actively discussed, with local authorities engaged in the debate. Only time, however, will tell whether AI systems can be adequately regulated by civil and criminal legislation, or whether this is an elusive ideal, with AI systems outsmarting the law.
Block Issue 5 is out:
The Block is a bi-annual publication which illuminates the cutting-edge sectors of AI, blockchain, crypto and emerging tech, with a print run of 5000 delivered to leading brands across the global industry. View our latest issue of the Block here.