A man in the mainland Chinese city of Fuzhou has been scammed out of millions of yuan through the illicit use of generative AI technology.
The victim received a video call last month from a person who appeared, by all discernible means, to be a close friend. However, the caller was actually a con artist using AI-powered face- and voice-swapping technology to mimic the friend's appearance and speech.
The fraudster was then able to convince the victim to transfer him 4.3 million yuan, a sum equivalent to $609,000, by claiming another friend needed the money sent from a company bank account to pay the guarantee on a public tender.
After asking for the victim’s personal bank account number, the fraudster claimed an equal sum had already been wired into that very account, even going so far as to send the victim a forged payment record.
The victim did not verify this before sending two payments from his company bank account, totalling the full amount. He later testified to how believable the AI-generated disguise seemed.
The victim only realised that he had been scammed when he messaged the friend who had been impersonated and learnt that their identity had been stolen.
Luckily, he contacted the bank at the last moment, which stopped the transaction and allowed him to recover almost all of the funds that would otherwise have been lost to the scammer.
This case highlights yet another use of AI that will require heavy scrutiny, here and in every instance of illicit misuse. The versatility and incredibly rapid pace of AI development, in every application one can imagine and even in ones many have not yet imagined, mean there is no clear or direct route to defending against illegal misuse of the technology.
In recent years, increasing scrutiny has been placed on both the ethical implications of AI and the regulatory legislation that could be introduced to prevent its ill effects.
Entities pursuing either or both of these approaches have so far proved almost ineffective in preventing the misuse of AI. Given the highly competitive nature of the AI industry, as acknowledged by seminal figures in the sector including the CEO of Alphabet’s Google and the polarising, ever-present Elon Musk, there seems to be no organic way to slow the massive leaps AI is making in its development.
In the wake of this circumstance, a number of secondary issues have begun to gain prominence. One is the use of AI tools whose development has, for various reasons, been discontinued or moved elsewhere: no longer cutting edge, but in some ways still highly effective, these tools remain unregulated and are subjected to far less scrutiny.
Face- and voice-augmentation tools of this kind have, in one form or another, long been available and easily accessible online and across multiple social media platforms. They are generally deemed harmless, and in many cases rightly so; however, a more serious and comprehensive stance must be taken to safeguard all technology users from the very plausible risk of such fraud.
AIBC Asia is heading to the Philippines this July. An unmissable event is set to take centre stage in Manila, with a wealth of industry-leading knowledge, innovative insights and a plethora of networking opportunities.