How President Biden's AI Executive Order Impacts Media & Entertainment
Biden’s Order Adds Ammunition for Creators in Their Copyright Fights and for Workers in Their Negotiations
Last week, President Joe Biden released his long-awaited Executive Order, which establishes “eight guiding principles and priorities” to govern the development and use of AI. The document follows the European Union’s lead and is based on feedback from leading experts and governments around the world. It attempts to define basic guardrails for AI’s largely unfettered rise, as global spending on AI is expected to top $150 billion this year alone.
Biden’s Order is deceptively sweeping and provides ample fodder to support many of the positions taken by the creative community, including those related to content provenance, copyright infringement, content “safety” and industry job security.
The Order’s first principle is that “artificial intelligence must be safe and secure.” A primary goal is to protect consumers from AI-enabled content deception and reduce “the risks posed by synthetic content.” It calls for the development of new standards and best practices to “help develop effective labeling and content provenance mechanisms, so that Americans are able to determine when content is generated using AI and when it is not.”
One specific example is the labeling of AI-generated “synthetic” content with new “watermarking” technology. The most obvious goal here is to address the daunting new reality of “deep fakes” — like the now-infamous “fake Drake” song “Heart on My Sleeve,” which was created with generative AI — that not only can change the course of elections and potentially even topple governments, but also threaten creative works — and the names, images, likenesses, voices and styles of the creators behind them.
This burgeoning new industry of watermarking and other “AI forensics” can also be used to identify copyrighted works that have been used for AI “training” purposes, and the Executive Order expressly gives notice that the President may step in again to specifically address the issue of “the treatment of copyrighted works in AI training.”
I recently wrote about ongoing major copyright litigation that is beginning to answer fundamental questions about how much “scraping” of copyrighted works will be enough for courts to find infringement (even as every bit of it technically creates a copy). Biden’s Executive Order brings forensic AI tech to the fore, and the “My Art My Choice” initiative supported by Intel Labs is one that focuses on watermarking in what its authors call “adversarial protection against unruly AI.” (Intel Labs also backs the related “My Face My Choice” initiative.)
But Biden’s Executive Order goes beyond the realm of copyright and ways of supporting claims of infringement. It goes directly to the heart of other fundamental issues that we face in the media and entertainment industry. Notably, the potential harm of content generated by AI is akin to the well-documented harm that has been caused to young people by social media. The Executive Order calls on Congress to establish new privacy legislation “to protect all Americans, especially kids,” as well as testing and safeguards against “unsafe” content. Examples include “independent evaluation of [company] claims concerning both the effectiveness and risk mitigation of their AI offerings.”
On that issue, the Executive Order requires that developers of “the most powerful AI systems” share their safety results with the U.S. government. And while it uses the concept of “safety” primarily in the context of national defense, the Executive Order’s accompanying “Fact Sheet” requires all companies that develop “any foundation model that poses a serious risk to … national public health and safety” to notify the government when training their models. Those companies also “must share the results of all red-team safety tests” (i.e., tests designed to elicit problematic outputs) before those foundation models are unleashed on an unsuspecting public.
Consumer advocates should demand that AI-generated content follow this lead, rather than fall prey to social media’s sordid history of harming the mental and physical health of kids and young people. So, Big Tech, take note: The Feds are far more on top of things this time with AI, especially in light of Big Tech’s AI arms race as reflected in the headlines of virtually all recent quarterly earnings calls.
Finally, President Biden’s Executive Order is also directly relevant to the current SAG-AFTRA strike and the now-settled WGA strike. In its section titled “Supporting Workers,” it calls out both AI’s power and promise to improve worker productivity and the dangers of AI “job displacement.”
One of the President’s guiding principles is to give workers “a seat at the table, including through collective bargaining, to ensure that they benefit from these [AI] opportunities.” That seat is especially top of mind in Hollywood right now as SAG-AFTRA just responded Monday to the studios’ “best, last and final” offer and highlighted AI as being “an essential item[ ] on which we still do not have an agreement.”
Biden directs his Secretary of Labor to “develop and publish principles and best practices for employers that could be used to mitigate AI’s potential harm to employees’ well-being and maximize its potential benefits.” No mandatory edict is given about the current strikes, but the overall zeitgeist supports creators of all stripes.
Much of the Executive Order’s impact on the media and entertainment industry – and the creative economy in general – is implicit rather than explicit. But the clues and cues are there for creators to use it to further support the need for basic guardrails in our industry that protect both the “inputs” of copyrighted works into AI’s black box and the “outputs” those black boxes create. That alone should be applauded.
And that is precisely what the Human Artistry Campaign did after Biden issued his Executive Order. That coalition – which represents the interests of over 170 members of the creative and tech communities – calls it an “effort to craft a responsible, ethical AI policy that will promote innovation and allow both AI and human creativity to strengthen each other and thrive.”
That kind of focus, advocacy and overall commitment to humanity is exactly what this AI “moment” needs. That, and the U.S. Copyright Office’s continued refusal to grant copyright protection to works generated solely by AI.