AI surveillance at risk of aiding stalkers
The founder of DeepAI.org, Kevin Baragona, has issued a statement on the surveillance capabilities of advanced artificial intelligence (AI), paired with a stark warning about the ethical implications of generative AI.
Baragona stipulates that a user with access to the key AI platforms in question could determine, and even predict, where someone else will be and when, requiring nothing more than a photo of their subject.
Baragona went on to describe a troubling scenario:
If you run into someone in public, and you’re able to get a photo of them, you might be able to find their name using online services. And if you pay enough, you might be able to find where they’ve been, where they might currently be, and even predict where they’ll go.
PimEyes is one example of a service with capabilities similar to those Baragona describes: an online face search engine that performs reverse image searches across the internet. PimEyes has recently come under scrutiny from a UK privacy campaign group.
The campaign group, known as “Big Brother Watch”, has lodged a legal complaint against the reverse image service, with its legal and policy officer, Madeleine Stone, making this statement concerning PimEyes:
It is a great threat to the privacy of millions of U.K. residents. Images of anyone, including children, can be scoured and tracked across the internet.
PimEyes responded to the accusation with the following statement:
PimEyes has never been and is not a tool to establish the identity or details of any individual. The purpose of the PimEyes service is to collect information about URLs that publish certain types of images in public domains.
This raises numerous ethical questions, as the accusations and defences concerning PimEyes may both be valid at the same time: they describe alternate uses that depend entirely on the users rather than the software.
There is a great deal of fear circulating around AI technology harnessed in this specific way, with many concerned parties favouring a conservative, self-preserving approach. In this case, the potential for platforms such as PimEyes to support the illicit activities of stalkers is of intense concern.
Some experts have taken their discomfort with AI even further. Rumman Chowdhury, former director of machine learning ethics, transparency, and accountability at Twitter, recently said she does not believe AI surveillance could exist, in any way, shape, or form, in an ethically positive light.
She put it bluntly: “we cannot put lipstick on a pig”, while also highlighting the competitive pressures and ethical compromises currently dominating the capitalistic environment surrounding the development of generative AI.
Chowdhury concluded her vehement statement by explaining that, in her view, only an external board from a uniquely independent and highly knowledgeable entity could be trusted to govern AI and its rapid progression.
Baragona echoed this sentiment in a more positive light, stating that AI, for better or worse, will redefine humanity, and that it remains as likely to benefit humans as to harm them:
While I’m very concerned about the perils of AI, I’m also a strong believer in the power of AI and how it can make the world a better place. It’s a technological leap, and history has shown our lives tend to vastly improve due to these leaps.
For the latest AI and tech updates visit AIBC News.