2023 GDC Recap: As Belts Tighten, Developers Cautiously Turn to AI
With the specter of COVID looming less large over the proceedings, this year’s Game Developers Conference (GDC) had the potential to be a return to the conference’s “glory days” as a place where game developers and lawyers from all over the world could meet, exchange stories, and generally party the night away. And while all of that certainly happened, the overall vibe this year was also a bit… muted. On top of wild weather and awful scandals, nearly everyone we talked to shared some anxiety about the future of the games industry and the US economy in general. That anxiety took many forms, focusing on everything from bank failures and budget cuts to labor issues and layoffs, and – of course – a different looming specter in the form of generative AI.
Candidly, we don’t have the solutions to those other issues. Still, one thing was crystal clear from our conversations: knowing how to use generative AI effectively is going to be critical for the next wave of game developers.
Given the general atmosphere of belt-tightening, game developers will no doubt rely more and more on AI as an economical way to generate everything from dialogue to music to characters to (eventually) entire games. Still, in the rush to cut costs, it’s important to keep the following three things in mind:
Tracing or modifying AI output does not make it protectable. As we’ve blogged about before, content generated by AI is not protectable under US copyright law. This means that while there is (currently) no legal restriction stopping a developer from using AI to generate a piece of content, tweaking it, and placing the modified content in their game, there is also no restriction preventing someone else from taking that same piece of AI content, making their own, different tweaks to it, and using it in their game (or movie, merchandise, etc.). The only copyright protection the initial developer could claim would be limited to their specific modifications (to the extent they were creative in nature, as opposed to merely functional), or to the specific compilation or arrangement of the tweaked content in the larger context of their game. In short, the protection for modified AI output is going to be very thin.
Not every asset or piece of content in a game needs to be copyrighted. The developers who will benefit most from the use of AI are the ones who deploy AI for the aspects of their game that do not need copyright protection. For example, using AI to generate basic environmental textures (e.g., grass, water, or fire), background NPCs, or generic dialogue lines (e.g., “I need you to find me 40 rat hides”) is a good use of the technology, because it is unlikely those elements would be sufficiently creative or valuable to need copyright protection. On the other hand, using generative AI in the development pipeline for big-ticket items like major characters, locations, card art (in a card game), or big story beats is risky, since those elements usually represent the “secret sauce” that contributes the most value to a media franchise. Early in the game development process, it is a good idea for developers to critically assess the strength of the IP they are attempting to create, as well as the key factors that will contribute to that strength. The more important a given piece of content is, the more steps the company should take to ensure it is human-made.
Beware the accidental public domain donation. Even if a game company’s developers use generative AI only as inspiration, the company still needs to be cautious about the content it feeds to a third-party AI service. For example, the AI service Stable Diffusion’s terms state that “Images created through Stable Diffusion Online are fully open source, explicitly falling under the CC0 1.0 Universal Public Domain Dedication.” This means that if an in-house developer feeds an image of the company’s existing IP (say, an image of the game’s mascot) into Stable Diffusion, a future infringer could argue that the company explicitly consented to the creation of the AI’s open-source image and, by extension, any works derived from it. That’s a major risk, especially if the AI-generated, open-source image created from the developer’s prompt closely resembles the original copyrighted work.
To illustrate, imagine your company owns the copyright in the image of the Mona Lisa. Now imagine that, in connection with a work project, someone on your team decided to run your copyrighted image through Stable Diffusion to see what Mona would look like if she had hands like the Incredible Hulk.
The original Mona Lisa would still be yours to protect. However, the resulting Hulk-handed image might actually be free and clear for anyone to copy – including a potential infringer – if a court ruled that your team member waived the company’s copyright with respect to the creation of Hulk Lisa. Given the similarities between the two images, that is a major risk.
While it’s not feasible for employers to monitor every developer’s AI use, having a clear policy instructing employees not to submit company IP to these open-source AI generators will give the company a better argument that the employee lacked the authority to create the AI image. (We can help you draft such a policy!) Similarly, if a company has an IP enforcement team, it may be useful to send periodic DMCA takedown notices to these platforms to scrub out any images created by unaffiliated third parties that appear to violate the company’s IP.