AI Copies Human Creativity on a Massive Scale. So Shouldn't Artists & Creators Get Paid?
And If So, How Is That Even Possible? Both ChatGPT & I Give Some Ideas.
The Davos World Economic Forum recently ended, but the real chatter about artificial intelligence (AI) lingers on. ChatGPT and AI were a headline story at Davos, as they should be, given the global societal disruption that AI portends. But AI – and the transformation it increasingly drives in all aspects of our lives – should also be “the talk” in Hollywood, across all sectors of media and entertainment, and in the creative community in general.
ChatGPT rocketed into our worlds just a few weeks back and rocked many of us with its game-changing mainstream ease and sophistication (I wrote about it myself in a column just over one month ago). Now a mass realization – and unease – is beginning to hit home across the creative community and the business ecosystem that supports and profits from it. No one knows where AI will take content creation in the years ahead, but thanks to ChatGPT we are finally beginning to understand where it has already been – and to understand that the genie is out of the bottle. So we must embrace this AI reality and learn to leverage its power, while preserving humanity at the center of the overall creative process (at least I think so). Our very real collective artificial intelligence epiphany is all the more poignant when we consider how AI’s power will grow exponentially in the years ahead.
Entertainment community, consider this. In the textual world, AI can already write increasingly compelling stories and even full screenplays. Just add water (basic plot points, characters, and a setting) and out grows an entirely “new” creative work (and endless iterations, if that’s what you want). Similarly, suggest some visuals (photographs, artwork, a certain style – which may even be artist-specific) and AI paints an entirely “new” canvas of images and color (and endless iterations of them, if that’s your desire). Think moving pictures (film, television) are exempt? Think again. Full-length, 100% AI-created videos are coming soon to a screen near you. An AI-generated short, “The Crow,” recently won the Jury Award at the Cannes Short Film Festival. And don’t forget about music and other forms of media. AI can be your composer that doesn’t sleep or need to get paid. Just ask it to create a new tune based on ingredients of successful songs of the past and, voila, a “new” track is born! Check out Boomy – just one of many AI-driven, music-focused companies – which markets itself as democratizing the music creation process. You can generate endless new songs and upload them directly to Spotify to compete with human streams (and earn real human dollars in the process).
The only problem is that none of this “creative output” is entirely new. In fact, it is inherently derivative, based on scores – if not thousands, or even millions – of existing creative works that find their way across the Internet. You give the AI (like ChatGPT) the input, and out pop – literally in seconds – as many seemingly novel works as you like, based on AI’s instantaneous “scraping” of all of that data, much of which is copyrighted, of course.
And there’s the ultimate rub. Does this kind of generative AI – AI focused on creating new content based on the content of the past that populates the Internet (including my article here, which is a complete human creation, I swear!) – infringe our copyrights, our exclusive right to commercialize our works, on a massive scale?
The simple answer is that we don’t know yet – at least not in the courts, which will ultimately decide matters with transformational impacts on everyone in Hollywood and across all of creativity and the Arts. But answers are coming soon. This year, in fact.
Just two weeks ago, two bellwether lawsuits focused on this central issue were filed against AI companies. First, in the U.S., a group of artists filed a class action lawsuit in federal court in California against Stability AI, Midjourney, and DeviantArt – three major players in the generative AI art space. Those companies’ AI enables users, among other things, to “mimic” the work and styles of individual artists. The core of the complaint is that the companies infringed the rights of “millions of artists” by scraping 5 billion images “without the consent of the original artists.” In the second lawsuit, Getty Images – that treasure trove used by just about everybody in media – filed litigation in the U.K. courts against Stability AI (that’s right, two major lawsuits against the company in one week!), contending that it unlawfully processed millions of copyrighted images to “train” its software.
The central question in both lawsuits – and for generative AI across all creative works in general – is whether these undeniable “micro infringements,” executed at massive scale, add up to actual copyright infringement that warrants significant legal penalties and consequences. In other words, should we treat AI’s “creative” outputs as copyright-transgressing “derivative” works that warrant very real compensation to the creative community from which they originated? Or does AI’s mass scraping of those works instead constitute a defensible “fair use”? So far, the U.S. courts have been silent on this issue, and the U.S. Copyright Office has given no guidance on the matter.
But that changes this year, in 2023, as the courts – and not only we in the creative community – grapple with questions that have profound implications for the media and entertainment industry, not to mention the Arts and human creativity in general. AI itself struggles with these issues. In a recent article published in Scientific American, ChatGPT concedes that AI should be regulated. And when asked by the human author whether AI, as a matter of ethics, should compensate authors for the micro-transgressions on which its “new” machine-made works are built, it spat out an essentially unqualified “Yes!” These were its robotic thoughts on the subject: “One possible solution … is to establish a system for compensating writers whose work is used in training models. Another solution could be to require companies or individuals using language models to obtain explicit consent from writers before using their work in the training process.”
In other words, even ChatGPT says, “pay the man!” (or woman, as the case may be). Sounds logical, of course. But how? What kind of payment or royalty system can we humans create that is both fair and logistically possible? Just imagine the complexity of identifying and compensating a creator whose work was only one of five million images “scraped.” My human mind came up with one idea: something akin to a performing rights organization (PRO) in the music world, where artists receive micro-payments based on estimates of how often their music is played in venues (bars, restaurants) across the world.
I asked ChatGPT to help me out here, and these were its specific ideas: “One potential compensation system for artists whose creative works were used to train AI could be a royalty-based system, where the artist receives a percentage of any revenue generated from the use of their work. Another option could be a licensing system, where the artist grants permission for their work to be used in exchange for a one-time or ongoing payment. Additionally, it is also possible to set up a system where the artist can choose to be compensated for the use of their work on an individual basis.”
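To make the logistics a bit more concrete, here is a minimal sketch of how a PRO-style pro-rata payout might be computed: a revenue pool is divided among artists in proportion to how many of their works appeared in the scraped training data. Everything here is an illustrative assumption – the function name, the weighting-by-count scheme, and the numbers are hypothetical, and no AI company or PRO actually works this way today.

```python
# Hypothetical sketch of a PRO-style pro-rata royalty split.
# The weighting scheme (payout proportional to each artist's share of
# scraped works) is an illustrative assumption, not a real system.

def split_royalty_pool(pool_dollars, works_per_artist):
    """Divide a revenue pool among artists in proportion to how many
    of each artist's works were scraped for training."""
    total_works = sum(works_per_artist.values())
    if total_works == 0:
        return {}
    return {
        artist: pool_dollars * count / total_works
        for artist, count in works_per_artist.items()
    }

# Example: a $1,000,000 pool split among three artists whose works were
# scraped 5,000, 3,000, and 2,000 times, respectively.
payouts = split_royalty_pool(
    1_000_000,
    {"artist_a": 5_000, "artist_b": 3_000, "artist_c": 2_000},
)
```

Even this toy version hints at the real-world hard part: the arithmetic is trivial, but attributing counts – knowing whose works were scraped, and how many times – at the scale of billions of images is the genuinely unsolved problem.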
Before your head explodes, here’s a related mind-bender. Can AI-only generated works of art, entertainment, and content in general – works lacking any active human participation in their creation – ever be copyrightable in the first place? So far the courts haven’t conclusively addressed that issue either. But the Copyright Office has. Its current policy is not to grant copyrights to works produced solely by AI. While seemingly satisfying from a humanist standpoint, is that the right answer intellectually, when the Copyright Office has already signaled that it will treat “human-assisted” AI-created works differently? Isn’t all AI-created content “human-assisted” in some way (when we give our inputs and instructions, for example)? Not surprising, then, that a major pending lawsuit is challenging the Copyright Office’s position.
Which raises the question: are there any good answers? Well, we certainly don’t have them yet. And the courts and governments aren’t remotely set up to address such massive “tech-tonic” shifts (like that one?) and properly calibrate these issues. But the pace of AI’s ease and sophistication is only accelerating – as are the disruption and transformation that flow from it – so, in the inimitable words of Yoda, grapple we must!
So let the real human conversations begin and accelerate throughout this new year to match AI’s increasing sophistication, because make no mistake, AI is the central media-tech story of 2023. If this sounds like a humanist’s call to arms to everyone in the creative community, you’re absolutely right. It is. Consider it an AI reality check. It’s time for all of us to stay on top of it as best we can – or, at a minimum, not be left behind. Our creativity, our jobs, and our day-to-day lives depend on it. (In that regard, check out the new AI Creative Forum think tank that just launched to thoughtfully consider these issues with leading creators, artists, executives, philosophers, and technologists.)