Karla Ortiz, a San Francisco-based artist, is suing two AI firms, claiming that the way they mimic an artist's style constitutes a form of identity theft. She further claims that the products generated through this theft compete directly with the artists themselves.
Arguments against the use of generative AI have gained traction in recent years, with several lawsuits brought to the forefront of public consciousness. Copyright infringement in the training of AI systems, particularly image generators, is not a new complaint, but it is one that has recently seen significant expansion.
The firms subject to this suit are Midjourney and Stability AI, maker of Stable Diffusion, both of which openly acknowledge training their platforms on text and images gathered from across the internet and numerous other sources, generating output at the will of their customers.
Ortiz claims that not only her art but also her personal data was used to train the technology, which in her eyes was grounds to file a lawsuit alleging copyright infringement and right-of-publicity violations. Expressing her disdain for how her information was used, she stated:
It feels like someone has taken everything that you’ve worked for and allowed someone else to do whatever they want with it for profit.
She claims the issues are too explicit to ignore, explaining that before filing her suit she could prompt Midjourney and Stable Diffusion to create imagery "in the style of Karla Ortiz", and the results were, for her tastes, all too successful.
Midjourney and Stability AI's response
The grounds of similarity may be the least compelling aspect of this case: Stable Diffusion's creator, Stability AI, filed a motion against Ortiz's claims, arguing that the artist had failed:
To identify a single allegedly infringing output image, let alone one that is substantially similar to any of their copyrighted works.
A separate motion along the same lines was filed by Midjourney later the same day.
The copyright back-and-forth between artists and industry existed, perhaps in more rudimentary form, for many years before the inception of generative AI. With the dispute close to a stalemate as it stands, a redefinition of the legislation, or new regulation altogether, may be what is required, especially in the rapidly escalating environment generative AI has created.
Ortiz's claims did, however, bring to light a broader concern related to data privacy:
For these models to generate the imagery that you see today, or anything for that matter, they have to be first trained on massive amounts of data, data that includes image and text. That data, it includes everything.
Ortiz referred to documentation seemingly unrelated to art generation, suggesting a highly invasive method of data collection: documents such as medical records, people's housing information, their likenesses and, in Ortiz's case, all of her fine artwork.
AI training and data collection
Other artists have lately scrutinised a variety of tech companies' approaches to data collection, accusing some of harvesting data for potential exploitation in multiple forms, not least machine learning.
It is also worth noting that artists of all kinds, including illustrators, writers and musicians, cannot legally copyright their own style; only the specifics of individual artworks and pieces are protected.
Subsequent to her suit, Midjourney and Stability AI voluntarily ceased using Ortiz's data. However, a further issue arose in the midst of this:
It generates imagery that is meant to look like yours and potentially even compete in your own market, utilising your own name and your own work.
In this environment, artists are forced to compete in a highly unsporting contest with not only their own work, but a digital copy of their own style and experiences, driven by a machine that “does not sleep, does not rest and does not get paid.”
Cases such as these, that challenge the very legislation constructed to protect people and their livelihoods, are of the utmost importance, as AI regulation, much as the sector itself, moves out of its infancy. The verdicts and findings delivered in these situations will shape the industry and define many aspects of human life to come.
The effects are only just beginning to be felt, with some significant consequences. One instance is the recent shuttering of BuzzFeed's news department, which let go of as many as 180 staff members amid plans to rebuild the media firm around AI for greater efficiency and output.
In another example, tech giant IBM intends to pause hiring for roles that AI could replace, with CEO Arvind Krishna announcing that as much as 30% of non-customer-facing roles, comprising nearly 8,000 jobs, could be eliminated within the next five years.
The predictions grow bleaker still, with nearly 400 million jobs expected to be impacted by the scaling-up of AI. Many counter these predictions with optimism, pointing to the new possibilities and opportunities this shift could offer, though only time will tell what those might be. What is certain is that generative AI is coming to take its place, and things will change.