Lawsuits against artificial intelligence (AI) companies like OpenAI and Microsoft highlight an uncomfortable truth for these tech giants. Namely, under current copyright law, these companies are very likely fighting a losing battle. The reality is that training AI models requires utilizing vast amounts of copyrighted material, and our existing intellectual property framework is purpose-built to prevent and punish this practice. But we shouldn't accept this legal status quo. Congress must act quickly and decisively to create a clear copyright exemption for AI training. The stakes are simply too high to let 20th-century copyright law strangle 21st-century innovation.
The fair use doctrine, which AI companies lean heavily on in their defense, wasn't designed with machine learning in mind. This should come as little surprise, as the general contours of existing copyright law date back to the Copyright Act of 1976. That law was written when cutting-edge technology meant pocket calculators and eight-track tapes. While Congress has made targeted updates since then, like the Digital Millennium Copyright Act of 1998 for internet issues, we're still effectively trying to regulate 21st-century AI technology with a legal framework that's nearly half a century old.

When courts analyze fair use under the aging 1976 framework, they consider factors like whether the use is transformative and its effect on the market for the original work. But AI training involves copying millions of works verbatim into training datasets, and the resulting models can sometimes reproduce portions of the original content. Moreover, these models are commercial products that could potentially replace some of the very works they were trained on. This spells trouble for AI companies.
Luckily, the Constitution points the way forward. In Article I, Section 8, Congress is explicitly empowered "to promote the Progress of Science" through copyright law. That is to say, the power to create copyrights isn't just about protecting content creators; it's also about advancing human knowledge and innovation.
When the Founders gave Congress this power, they couldn't have imagined artificial intelligence, but they clearly understood that intellectual property laws would need to evolve to promote scientific progress. Congress therefore not only has the authority to adapt copyright law for the AI age; it has the duty to ensure our intellectual property framework promotes rather than hinders technological progress.
Consider what's at risk with inaction. AI is already revolutionizing health care, with models helping to spot cancer in medical images before a human can and accelerating drug discovery. It's making education more accessible through personalized tutoring that adapts to each student's needs. It's boosting productivity across industries, from small businesses to major corporations. These benefits aren't just theoretical; they're happening now. But all this could come to an abrupt end without a change to the law.
Another urgent reason for this sort of legal reform is that our national security depends on it. While American companies are struggling with copyright constraints, China is racing ahead with AI development, unencumbered by such concerns. The Chinese Communist Party has made it clear that it views AI supremacy as a key strategic goal, and it's not going to let intellectual property rights stand in the way.
This creates a dangerous scenario where America's commitment to protecting creators' rights could inadvertently hand technological leadership to an adversarial authoritarian regime. If we don't level the playing field for our own companies, we risk ceding China control of what may be the most transformative technology in human history.
The choice before us is clear: we can either reform our copyright laws to enable responsible AI development at home, or we can watch as the future of AI is shaped by authoritarian powers abroad. The cost of inaction isn't just measured in lost innovation or economic opportunity; it is measured in our diminishing ability to ensure AI develops in alignment with democratic values and a respect for human rights.
The ideal solution here isn't to abandon copyright protection entirely, but to craft a careful exemption for AI training. This could even include provisions for compensating content creators through a mandated licensing framework or revenue-sharing system, ensuring that AI companies can access the data they need while creators can still benefit from and be credited for their work's use in training these models.
Critics will argue that this represents a taking from creators for the benefit of tech companies, but this misses the broader picture. The benefits of AI development flow not just to tech companies but to society as a whole. We should recognize that allowing AI models to learn from human knowledge serves a crucial public good, one we're at risk of losing if Congress doesn't act.
Nicholas Creel is an associate professor of business law at Georgia College & State University.
The views expressed in this article are the writer's own.