I. Introduction

In 2023, half of senior information technology business executives surveyed worldwide considered artificial intelligence (AI) a technology of strategic importance that they would prioritize in 2024.1 Between 2017 and 2020, funding of AI startups increased from $18 billion to $26 billion.2 By 2023, seventy-three percent of U.S. companies had begun implementing AI technologies.3 The growing interest in AI is not limited to corporations. U.S. citizens have shown particular interest in generative AI. In fact, a 2023 survey of U.S. adults showed that respondents were most interested in generative AI content from members of the creative community such as artists and musicians, entertainment providers, and skilled amateurs and digital artists.4

As generative AI tools have become increasingly accessible to the general public, legal scholarship has examined the role that copyright law will play in AI innovation. This scholarship has largely focused on whether AI-generated content is copyrightable, who should hold that copyright, and how to treat the use of copyrighted works in training AI.5 However, with slow federal action to address AI, these discussions have largely centered on a hypothetical national strategy based on comparisons to foreign copyright laws and the general legal principles and philosophical frameworks underlying intellectual property law.6 Now, a hypothetical national strategy is no longer necessary. Congress, the White House, and various federal agencies have begun developing their own strategies to address AI.

In February 2020, the U.S. Copyright Office (USCO) began to publicly discuss the relationship between copyright law and AI.7 In a symposium co-sponsored by the USCO and the World Intellectual Property Organization (WIPO), panelists discussed the copyright implications of AI on both developers and end-users.8 Developers have largely been concerned with the treatment of copyrighted works in training AI systems. On the other hand, end-users have largely been concerned with the copyrightability of AI outputs. Moreover, members of the creative community have voiced concerns over both potential copyright infringement and the copyrightability of works utilizing AI technologies.9 Despite the increasing questions over generative AI in the creative community, the USCO did not issue a public statement on AI or host another AI-related event until October 2021.10

Finally, in March of 2023, the USCO officially launched an AI initiative “to examine the copyright law and policy issues raised by [AI], including the scope of copyright in works generated using AI tools and the use of copyrighted materials in AI training.”11 The same day, the Office issued new registration guidance, clarifying that (1) applicants have a duty to disclose AI-generated content in works submitted for registration, and (2) a work whose “traditional elements of authorship were produced by a machine” is not eligible for registration because it “lacks human authorship.”12

The March 2023 registration guidance answered some questions for end-users by clarifying that AI-generated works are ineligible for copyright registration. However, it left many questions unanswered regarding the level of creative control needed for a work to be eligible for registration and the use of copyrighted materials in training AI systems. Throughout the spring of 2023, the Office hosted public listening sessions on the goals and concerns regarding generative AI in creative fields.13 These listening sessions culminated in the Office issuing a notice of inquiry and request for comment (NOI) in August 2023.14 In response, a number of AI industry leaders and members of the creative community submitted comments sharing their views on generative AI and the role that copyright law should play in its regulation.15 The Office has announced that it will be issuing a Report in several Parts to analyze these issues.16

While the Copyright Office continues to examine these questions and concerns, other sectors of the U.S. federal government and various industries have begun adopting generative AI strategies and best practices. Scholarship is plentiful on what the “ideal” copyright regulations for generative AI should be.17 Yet, as interested parties have begun developing various strategies for the adoption of generative AI, it is important to examine how copyright law may impair or facilitate a coherent national approach to AI. This Article first examines the approaches, opinions, and concerns of the federal government and interested parties regarding generative AI. Then, this Article examines the ways in which existing and future copyright regulations may impair or benefit these approaches to generative AI.

II. Analysis

A. The Federal Government’s Approach to Generative AI 

Since at least 2020, Congress, the White House, and various federal agencies have been evaluating the potential benefits and detriments of AI innovation. According to Congress, the overarching federal strategy for AI innovation is to achieve international dominance in the global AI market while safeguarding the rights and privacy of Americans.18 The White House has echoed this sentiment through actions focused on harnessing the benefits and mitigating the risks of AI.19 Intellectual property rights and regulations could significantly hamper these federal strategies. Yet, intellectual property considerations have been noticeably absent from these strategies, with most issued statements and guidance simply noting that expert advice is needed on how to manage intellectual property rights. By outlining the federal strategies on AI, the following discussion provides an overview of the policy concerns to be considered in shaping a coherent national AI strategy.

1. Congress

To address AI, Congress has taken various approaches, including a task force, committee hearings, and proposed legislation. In 2020, Congress directed the National Science Foundation and the White House Office of Science and Technology Policy to create the National Artificial Intelligence Research Resource (NAIRR) Task Force.20 The Task Force’s Final Report noted four goals: to spur innovation, to increase diversity of talent, to improve capacity, and to advance trustworthy AI.21

The Task Force proposed an “open research environment”22 supported by a national system of computational and data resources, testbeds, and software and testing tools.23 Nevertheless, the Task Force recognized that “fostering an open research environment has tradeoffs with… protecting intellectual property rights.”24 To address IP concerns, the Task Force suggested using licensing agreements to define “publication responsibilities, disposition of intellectual property arising out of the use of the data, ownership of derived datasets, and expectations for disposal of the data.”25 However, the Task Force also noted that expert advice is needed on intellectual property management and agreements.26

In addition to the Task Force, Congress held multiple hearings on AI risks and innovation in 2023 and early 2024.27 Witnesses at these hearings have included representatives of federal agencies, scholars, and industry leaders.28 Generally, the hearings have examined the current uses of AI and the ways in which Congress may address the potential harms of AI, such as privacy and security concerns.29

Finally, a number of bills regarding AI technologies have been introduced in the 118th Congress.30 These bills vary greatly, but they generally aim to impose restrictions and regulations on AI systems to safeguard privacy, to implement research initiatives, and to address the fraud, deception, and transparency issues involved with AI.31 Many of these bills, especially since January 2024, have specifically focused on curbing “AI fraud,” in which AI is used to impersonate the likeness and voice of individuals.32 Other notable bills have aimed to address issues such as bias in algorithmic systems, agency transparency when using automated systems for public interaction and critical decision-making, and the coordination of international AI strategies.33

These actions demonstrate that Congress is aware of and, at least somewhat, focused on AI technologies. However, as of February 2024, Congress had passed no legislation, leaving the U.S. trailing behind the more than thirty countries that have passed some form of AI legislation.34

2. The White House

In October of 2022, the White House Office of Science and Technology Policy (OSTP) issued the Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, which outlines general principles for deploying AI systems in a way that recognizes civil rights and democratic values.35 In developing the “Blueprint,” the OSTP conducted a year-long process of panel discussions, public listening sessions, meetings, formal requests for information, and public input from people throughout the United States, public servants across federal agencies, and members of the international community to examine the promises and potential harms of AI.36

The final Blueprint outlined five principles: (1) safe and effective systems, (2) algorithmic discrimination protections, (3) data privacy, (4) notice and explanation, and (5) human alternatives, consideration, and fallback.37 However, the Blueprint only vaguely addresses IP concerns as a potential limitation on the independent evaluation of and meaningful access to examine automated systems.38

Moreover, in October of 2023, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.39 To “advance and govern the development and use of AI,” the Executive Order laid out a framework of guiding principles.40 These principles generally focus on (1) improving AI security and safety; (2) promoting innovation, competition, and collaboration in order to harness the benefits of AI and establish the U.S.’s dominance in the AI market; (3) safeguarding jobs and consumers; and (4) protecting privacy and civil liberties.41

Again, these actions demonstrate that the Biden Administration is aware of and somewhat focused on AI technologies. However, these guiding principles have the potential to be significantly compromised if copyright regulations hamper access to the data needed to promote the innovation of safe AI systems.

3. The Department of Defense

The Department of Defense (DoD) has developed its own plan to implement AI systems as part of the National Defense Strategy. In Tenet 3 of its Responsible Artificial Intelligence Strategy and Implementation Pathway, the DoD explicitly states that it may seek AI vendors, including commercially available technologies.42 Similar to the Blueprint, Line of Effort (LOE) 3.2.5 of the DoD’s strategy and implementation pathway notes that the strategy will seek to “preserve privacy and civil liberties” to avoid “unintended bias in the design and development of AI capabilities that involve the use of personal information.”43 Moreover, in LOE 5.3.1, the DoD states that it will seek “interoperability with partners and allies with data, compute, and storage systems, software, and schema.”44 As the DoD implements third-party AI systems, it will be imperative that these systems be safe and effective. As such, burdens on the ability to use copyrighted materials in training AI systems may hamper these efforts.

B. Industry Approaches to AI

At the same time that the federal government has been developing its own strategies for AI innovation, industries have continued to develop and implement AI technologies. In particular, the Copyright Office’s NOI provided valuable insight into the inner workings and opinions of AI developers and creative communities.45

In short, the conflict between AI developers and the creative community is twofold. First, AI developers argue that the use of copyrighted works in AI training falls under copyright law’s fair use doctrine, while under the creators’ view, licensing agreements would be required before copyrighted works could be used in AI training. Second, AI developers argue that safeguards such as “opt-out” approaches and mechanisms to restrict potentially infringing outputs are sufficient to protect the rights of copyright owners.46 Creators, on the other hand, argue for an “opt-in” approach and improved access to litigation and remedies for infringement.47

1. AI Developers

In response to the Copyright Office’s NOI, a number of AI innovators submitted comments outlining their visions, opinions, and strategies for AI.48 The overarching view of AI developers has been that training AI models qualifies as a fair use under copyright law and that AI outputs have the potential to bring significant benefits to the creative community.49 To support this view, AI developers provided insight into their training processes and the mechanisms used to safeguard the interests of copyright owners.

First, AI developers clarified that their AI models do not store copies of the copyrighted works used in the training process. Instead, these models consist of parameters reflecting the “statistical relationship between different words in different scenarios,”50 influenced by billions of diverse pieces of training data.51 Moreover, OpenAI, creator of ChatGPT, explained that it has employed measures to reduce verbatim repetition of training information, such as de-duplicating training data, working with copyright owners to identify and exclude from training internet sites that reproduce their copyrighted works, and training ChatGPT to recognize and decline to respond to prompts seemingly aimed at reproducing significant portions of copyrighted works.52

Additionally, both OpenAI and Meta outlined their stance that industry practices and existing copyright laws are sufficient to address potentially infringing uses of AI tools. Industry leaders have implemented promising measures to protect copyright owners, including various opt-out features and training models to decline requests that could generate a potentially infringing output.53 However, OpenAI also recognized that users seeking to generate infringing outputs may occasionally be able to evade its guardrails. Where infringement occurs, OpenAI argues that the existing doctrines of secondary liability are sufficient to address the potential liability of an AI model creator or service provider.54

2. Creative Communities

The creative community has begun utilizing generative AI in a myriad of ways. A recent estimate by Alina Valyaeva suggested that over 15 billion images have been generated using text-to-image AI models.55 Creators have been able to expand their audiences by utilizing AI translation tools to translate podcasts and videos into various languages.56 Advertisers have used OpenAI’s GPT-3 to create scripts for their ads.57 Filmmakers have used AI tools to more efficiently create visual effects.58 Software developers are utilizing AI tools to streamline coding tasks.59 Nevertheless, as AI tools have proliferated, the creative community has been split on whether AI should be seen as a promising tool to improve efficiency or as an existential threat to creators.60 Public awareness of the tension between corporations and creators boomed in the wake of the 2023 writers’ strike and Universal Music Group’s (UMG) decision not to renew its licensing agreement with social-media company TikTok in early 2024.61

Organizations like UMG have voiced a number of concerns over AI models. Notably, UMG argues that AI models violate IP rights by training on unlicensed, copyrighted material; reproducing, manipulating, and processing those copyrighted materials for financial gain without license or permission from copyright owners; generating outputs strikingly similar to copyrighted works; and “clon[ing] artists’ images, likenesses, and voices to create audio and visual works…disturbingly similar to the artists themselves.”62

As such, creators have argued for a number of copyright and other regulations. First, the use of copyrighted materials in AI training should be considered an infringing use absent a licensing agreement.63 Second, outputs that impersonate artists and their works should be considered infringing uses and a violation of unfair competition laws.64 Third, labeling requirements should be imposed on AI-generated works to reduce misidentification and deception.65 Fourth, creators should be notified of AI developers’ intent to use their works in training, regardless of whether they hold registered or common-law copyrights.66

C. Creating a Coherent AI System

The creation of a coherent AI system must balance continued AI innovation with the intellectual property rights of creators, and the regulation of AI through copyright law has the potential to drastically shape the AI industry.

Both the federal government and AI developers appear to largely support an open-source system of AI innovation as a mechanism for improving the accessibility, safety, and efficacy of AI systems.67 In fact, the innovation boom in generative AI has largely been driven by openly available resources for building and training machine learning models. For example, open datasets like Common Crawl have been utilized in the training and research of AI models like OpenAI’s GPT-3.68

Open-source libraries are intended to make code available to the public to promote its development and refinement. As such, an open-source system is more likely to level the playing field between small developers and dominant firms with access to key assets like computing power, cloud storage, and data. Additionally, open-source systems, as envisioned by the NAIRR Task Force, may reduce barriers to participation in AI research and increase the diversity of AI researchers by increasing accessibility to a wider range of users and providing a platform for use in education and community-building activities.69 Furthermore, the safety and efficacy of AI systems depends in part on access to diverse training datasets.70 With the DoD seeking to increase implementation of AI through acquisition strategies,71 AI systems trained on limited datasets could pose significant national security challenges. 

At the same time, broad access to AI can also increase the risk of copyright infringement and consumer deception if not regulated appropriately. The March 2023 guidance seemed to solidify that AI outputs are not eligible for copyright registration.72 As such, much of the debate has revolved around whether using copyrighted works in AI training datasets constitutes fair use. 

The open-source movement is seemingly in direct contradiction to the general principles of copyright law. Affording authors of original works exclusive rights to reproduce, publicly display, and prepare derivatives of the copyrighted work has been traditionally viewed as a mechanism for encouraging creativity.73 At the same time, however, copyright law has long recognized that granting copyright must be balanced against the public interest in the availability of literature, music, and other arts. As a result, the Copyright Act has limited the scope and reach of copyright owners’ exclusive rights.74

According to AI developers, AI training is a transformative use because it creates “a product with wholly different purposes, capabilities and uses.”75 The training process creates AI models that store “statistical relationships between words, shapes, colors, textures, and concepts,”76 rather than copies of the copyrighted works.77 These statistical relationships, the argument goes, are unprotectable because copyright law protects the expression of facts and ideas rather than the facts and ideas themselves.78 Under this theory, the purpose of utilizing copyrighted materials is to refine the algorithm rather than “to communicate through artistic expression, earn a living, and/or practice their craft.”79 In this sense, training AI models has a different purpose than employing those models to generate expressive works.

Conversely, the creative community has argued that utilizing unlicensed copyrighted materials in AI training should be considered copyright infringement. The outputs of generative AI models, the theory goes, serve the same purpose as the copyrighted works used in training and thus cannot be considered transformative.80 Additionally, holding AI training to be a fair use would impact the potential market by denying creators licensing income and creating competition between AI outputs and creators.81 Therefore, AI developers should be required to license copyrighted works prior to their use in training, thus ensuring that creators receive compensation for use of their protected works.82

Yet, many of the creative community’s comments submitted in response to the Copyright Office’s NOI seem generally to question AI developers’ assertions that models do not copy or store copyrighted works.83 However, if the AI developers’ assertions are taken as true, neither the comments submitted by the creative community nor existing legal scholarship appears to provide a concrete counterargument to treating the use of copyrighted works in training data as fair use. It would seem that, beyond the potential for licensing income, creators would be more concerned with infringing outputs than with AI training itself.

III. Conclusion

The use of copyrighted materials in AI training should constitute fair use. The focus of copyright regulation should instead be on infringing outputs. If outputs do not infringe copyrights, it is difficult to see why utilizing copyrighted works in AI training should be considered an infringing use. In fact, in suits against AI developers for copyright infringement, plaintiffs have argued that their injury was the risk that an output might infringe their copyrights.84 These arguments have largely failed, as courts have found that an increased risk of future harm alone is insufficient to confer standing for damages.85

If outputs infringe on copyrights, AI developers should be held vicariously liable where adequate safeguards were not employed. As previously discussed, AI developers have implemented a number of safeguards to minimize the risk of infringing outputs.86 These safeguards include licensing agreements, opt-out programs, and output restrictions.87 If effective, regulations requiring AI developers to implement such safeguards could be sufficient to minimize copyright concerns.  

This scheme facilitates coherence between the federal government and AI developers while protecting the rights of copyright owners. Concerns over copyright infringement claims have driven data-rich companies to create their own AI models trained on their own IP.88 In fact, several AI developers are facing copyright infringement claims for using unlicensed data to train AI models.89 If these claims succeed, the AI innovation ecosystem may be transformed from an open-source-driven process accessible with limited financial resources into a restricted system in which reliable AI systems can be developed only by those with the greatest financial resources.90 In other words, such a result may render the strategies of the federal government unachievable by impairing long-term AI innovation and the availability of safe and effective models.

Moreover, this scheme may benefit creators by encouraging AI developers to adopt additional safeguards. Rather than investing in licensing agreements that may disproportionately benefit larger creators, AI developers may invest in safeguarding the rights of all creators in their outputs. If infringing outputs are enforceable against developers, then developers utilizing copyrighted materials will likely be incentivized to develop adequate safeguards to minimize the risk of infringement.

Nevertheless, the protections necessary for creators extend far beyond the scope of this Article. Questions remain over a variety of issues, such as who owns an AI output. Furthermore, the rise of “deepfake” AI outputs calls into question the sufficiency of the existing right of publicity doctrine.91 As the federal government has begun confronting these issues, the federal government and AI developers have been united in their commitment to driving AI innovation in a safe, accessible manner. If the Copyright Office determines that the use of copyrighted materials in AI training constitutes a copyright violation, this united approach to AI development will be significantly compromised. The benefits to creators through licensing income do not justify such an outcome, especially where alternative regulatory schemes can adequately address infringement concerns. Therefore, the Copyright Office should hold that only AI outputs can infringe copyrights and that AI developers are vicariously liable if safeguards for minimizing infringement are not utilized.

  • 1The 2024 IT Outlook Report, Rackspace Technology (Dec. 4, 2023).
  • 2Bergur Thormundsson, AI Startup Company Funding Worldwide 2020-2023, by Quarter, Statista, (Dec. 14, 2023), https://perma.cc/847F-TXGP.
  • 32024 AI Business Predictions, PwC, (Dec. 5, 2023), https://perma.cc/4JFU-Z492.
  • 4Bergur Thormundsson, Appeal of Generative AI in Social Media in the U.S. 2023, by Content Creator Category, Statista (Jan. 2023), https://perma.cc/28V6-GN9U.
  • 5See, e.g., Atilla Kasap, Copyright and Creative Artificial Intelligence (AI) Systems: A Twenty-First Century Approach to Authorship of AI-Generated Works in the United States, 19 Wake Forest J. Bus. & Intell. Prop. L. 335 (Summer 2019); Victor M. Palace, What if Artificial Intelligence Wrote This? Artificial Intelligence and Copyright Law, 71 Fla. L. Rev. 217 (Jan. 2019); Jessica L. Gillotte, Copyright Infringement in AI-Generated Artworks, 53 U.C. Davis L. Rev. 2655 (June 2020).
  • 6See, e.g., Megan Svedman, Artificial Creativity: A Case Against Copyright for AI-Created Visual Artwork, 9 IP Theory 1 (2020); Tzipi Zipper, Mind Over Matter: Addressing Challenges of Computer-Generated Works Under Copyright Law, 22 Wake Forest J. Bus. & Intell. Prop. L. 129 (Winter 2022); Yudong Chen, The Legality of Artificial Intelligence’s Unauthorized Use of Copyrighted Materials Under China and U.S. Law, 63 IDEA 241 (2023).
  • 7Symposium, Copyright in the Age of Artificial Intelligence (U.S. Copyright Off. & World Intell. Prop. Org. Feb. 5, 2020).
  • 8Id.
  • 9Nora Scheland, #ICYMI: The Copyright Office Hears from Stakeholders on Important Issues with AI and Copyright, LIBR. OF CONG. BLOGS (July 18, 2023), https://perma.cc/P828-6PHN.
  • 10Whitney Levandusky, Artificial Intelligence: The Copyright Connection, LIBR. OF CONG. BLOGS (Oct. 13, 2021), https://perma.cc/M7X3-4JLA; Conference, Copyright Law and Machine Learning for AI: Where are We and Where are We Going? (U.S. Copyright Off. & USPTO Oct. 26, 2021).
  • 11Copyright Office Launches New Artificial Intelligence Initiative, COPYRIGHT.GOV: NEWSNET (Mar. 16, 2023), https://perma.cc/79KJ-RGAT.
  • 12Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 Fed. Reg. 16190, 16192-93 (Mar. 16, 2023) (to be codified at 37 C.F.R. pt. 202).
  • 13COPYRIGHT.GOV, supra note 11.
  • 14Artificial Intelligence and Copyright, 88 Fed. Reg. 59942 (USCO Aug. 30, 2023).
  • 15See infra Part II.B.
  • 16On July 31, 2024, the Office published Part 1 of the Report, which calls for new federal legislation to address the use of AI to create unauthorized digital replicas or “deepfakes.” U.S. Copyright Off., Copyright and Artificial Intelligence, Part 1: Digital Replicas (July 31, 2024).
  • 17See, e.g., Kasap, supra note 5; Palace, supra note 5; Gillotte, supra note 5.
  • 18National Artificial Intelligence Initiative Act of 2020, 15 U.S.C. § 9411(a).
  • 19Administrative Actions on AI, AI.GOV, https://perma.cc/ENX8-Z3NX.
  • 20Nat’l AI Rsch. Res. Task Force, Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem, ii (2023).
  • 21Id. at v.
  • 22Id. at 26.
  • 23Id. at iv–v.
  • 24Id. at 26.
  • 25Id. at 35.
  • 26Id. at 13.
  • 27Oma Seddiq & Elizabeth Kim, AI Influencers Pound Capitol Hill Hallways to Shape Legislation, BLOOMBERG LAW (Aug. 8, 2023), https://perma.cc/KUU2-G3JL; Congressional Committees Continue AI Hearings in 2024, AKIN (Jan. 10, 2024), https://perma.cc/Z7M5-7NG2.
  • 28See Hearing Wrap Up: Federal Government Use of Artificial Intelligence Poses Promise, Peril, Comm. on Oversight & Accountability (Sept. 15, 2023), https://perma.cc/J4Z2-5DSQ; Oversight of A.I.: Legislating on Artificial Intelligence: Hearing Before the Subcomm. on Priv., Tech., & Law of the S. Comm. on the Judiciary, 118th Cong. (2023), https://perma.cc/5JQH-DZHK.
  • 29See Hearing Wrap Up, supra note 28.
  • 30Artificial Intelligence Legislation Tracker, Brennan Ctr. For Just. (Aug. 7, 2023, last updated Jan. 5, 2024), https://perma.cc/WEN4-CHS8. It is important to note that AI legislation was passed prior to the 118th Congress. That legislation was broader than recent legislation, as it focused on initiatives to gather more information and on funding provisions. See, e.g., National Artificial Intelligence Initiative Act of 2020, 15 U.S.C. § 9411; Artificial Intelligence Training for the Acquisition Workforce Act, 117th Cong., 136 Stat. 2238 (2022).
  • 31Id.
  • 32See, e.g., H.R. 7120, 118th Cong. (2024); H.R. 7123, 118th Cong. (2024); H.R. 6943, 118th Cong. (2024).
  • 33S. 3478, 118th Cong. (2023); H.R. 6886, 118th Cong. (2023); H.R. 6425, 118th Cong. (2023).
  • 34Bill Whyman, AI Regulation is Coming—What is the Likely Outcome?, CTR. FOR STRATEGIC & INT’L STUD. (Oct. 10, 2023), https://perma.cc/5Z5L-WDEM.
  • 35Off. of Sci. & Tech. Pol’y, Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People 3 (2022).
  • 36Id. at 4.
  • 37Id. at 5–7.
  • 38Id. at 20, 51.
  • 39Exec. Order. No. 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 88 Fed. Reg. 75191 (Oct. 30, 2023).
  • 40Id.
  • 41Id. at 75191–93.
  • 42DoD Responsible AI Working Council, U.S. Dep’t of Def. Responsible A.I. Strategy and Implementation Pathway, 25 (June 2022).
  • 43Id. at 27.
  • 44Id. at 30.
  • 45Artificial Intelligence and Copyright, supra note 14.
  • 46See infra Part II.B.1.
  • 47Nat’l Writers Union, Comment Letter on Artificial Intelligence and Copyright 14-15 (Oct. 30, 2023), https://perma.cc/Y4UH-ZUGL.
  • 48OpenAI, Comment Letter on Artificial Intelligence and Copyright 11 (Oct. 30, 2023), https://perma.cc/H46C-WP8H; Meta, Comment Letter on Artificial Intelligence and Copyright 10 (Oct. 30, 2023), https://perma.cc/6M67-WLR6.
  • 49See, e.g., OpenAI, supra note 48, at 2 (stating that generative AI tools may benefit copyright industries by “increase[ing] worker productivity, lower[ing] the costs of production, and stimulat[ing] creativity by making it easier to brainstorm, prototype, iterate, and share ideas”).
  • 50Id. at 6.
  • 51Meta, supra note 48, at 6.
  • 52OpenAI, supra note 48, at 7.
  • 53Id.; Meta, supra note 48, at 7.
  • 54OpenAI, supra note 48, at 14.
  • 55Alina Valyaeva, AI has Already Created as Many Images as Photographers Have Taken in 150 Years. Statistics for 2023, EVERYPIXEL JOURNAL (Aug. 15, 2023), https://perma.cc/C4WB-4V28.
  • 56OpenAI, supra note 48, at 2–3.
  • 57Id.
  • 58Id.
  • 59Id.
  • 60National Writers Union, supra note 47, at 5–6; These Artists are Using AI as a Creative Partner. See How!, Worklife. (Feb. 22, 2023), https://perma.cc/RM9P-VZBD.
  • 61Los Angeles Times Staff, Writers Strike: What Happened, How it Ended, and its Impact on Hollywood, LA Times (last updated Oct. 19, 2023), https://perma.cc/LCM6-TSYH; Universal Music Group, An Open Letter to the Artist and Songwriter Community—Why We Must Call Time Out on TikTok (Jan. 30, 2024), https://perma.cc/7QYL-MJCV.
  • 62Universal Music Group, Comment Letter on Artificial Intelligence and Copyright 9 (Oct. 31, 2023), https://perma.cc/4ZBJ-3S9Y.
  • 63Id. at 19–21.
  • 64Id. at 22–23.
  • 65National Writers Union, supra note 47, at 9–10.
  • 66Id. at 12.
  • 67Congress has expressed support for an open-source system through the NAIRR final report and bills that have been introduced. For example, the NAIRR final report proposes the creation of an “integrated portal” comprised of a “federated mix of computational and data resources.” Nat’l AI Rsch. Res. Task Force, supra note 20, at v. In addition, bills such as H.R. 6881, 118th Congress (2023) advocate for increased transparency into the training data used in AI models. Moreover, President Biden’s Executive Order calls for the promotion of “a fair, open, and competitive ecosystem and marketplace for AI” by addressing the risks of dominant firms’ use of key assets and ensuring that “small developers and entrepreneurs can continue to drive innovation.” See EO 14110, supra note 39, at §2(b).
  • 68Mozilla Report: How Common Crawl’s Data Infrastructure Shaped the Battle Royale over Generative AI, Mozilla (Feb. 6, 2024), https://perma.cc/P8RW-LU8K.
  • 69See Nat’l AI Rsch. Res. Task Force, supra note 20, at 9.
  • 70Adam Zewe, Can Machine-Learning Models Overcome Biased Datasets?, MIT News (Feb. 21, 2022), https://perma.cc/9XZK-9FYW.
  • 71See DoD Responsible AI Working Council, supra note 42.
  • 72Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 Fed. Reg. 16190, 16192–93 (USCO Mar. 16, 2023) (to be codified at 37 C.F.R. 202).
  • 73Andy Warhol Found. for the Visual Arts, Inc. v. Goldsmith, 598 U.S. 508, 526 (2023).
  • 74Id. at 526 (citing Twentieth Century Music Corp. v. Aiken, 422 U.S. 151, 156 (1975)); Copyright Act, 17 U.S.C. §§102, 107–122, 302–305.
  • 75Van Lindberg, Building and Using Generative Models Under U.S. Copyright Law, 18 Rutgers Bus. L. Rev. 1, 38 (Spring 2023).
  • 76OpenAI, supra note 48, at 11.
  • 77Id. at 5–6.
  • 78Id. at 11–12. See also Authors Guild v. Google, Inc., 804 F.3d 202, 220 (2nd Cir. 2015) (“[T]he copyright does not protect facts or ideas set forth in a work”).
  • 79Gillotte, supra note 5, at 2684.
  • 80Graphic Artists Guild, Comment Letter on Artificial Intelligence and Copyright 9–10 (Oct. 30, 2023), https://perma.cc/2436-378S.
  • 81Id. at 10.
  • 82Am. Soc’y of Composers, Authors and Publishers, Comment Letter on Artificial Intelligence and Copyright 4 (Oct. 31, 2023), https://perma.cc/EP7T-YESD.
  • 83See, e.g., Graphic Artists Guild, supra note 80, at 9–10.
  • 84See, e.g., Doe 1 v. GitHub, Inc., No. 22-CV-06823-JST, 2023 WL 3449131, at *5 (N.D. Cal. May 11, 2023).
  • 85Id.
  • 86See supra Part II.B.1.
  • 87Id.
  • 88See, e.g., Brody Ford, Adobe to Sell a New AI Subscription With Copyright Services (2), BLOOMBERG NEWS (June 8, 2023), https://perma.cc/74SB-9TGW.
  • 89See, e.g., GitHub, 2023 WL 3449131; Andersen v. Stability AI Ltd., No. 23-CV-00201, 2023 WL 7132064 (N.D. Cal. Oct. 30, 2023); Getty Images (US), Inc. v. Stability AI, Inc., No. 23-CV-00135 (D. Del. filed Feb. 3, 2023); Silverman v. OpenAI, Inc., No. 23-CV-03416 (N.D. Cal. filed July 7, 2023).
  • 90See Ford, supra note 88.
  • 91Nate Lanxon, Why are Deepfakes Everywhere? Can They Be Stopped?, BLOOMBERG (Feb. 9, 2024), https://perma.cc/3TJJ-MWJV.