
Advancements in artificial intelligence (AI) are justifiably dominating public discourse. The U.S. Chamber of Commerce projects that AI will add $13 trillion to global economic output by the end of this decade. ChatGPT, a large-language-model AI chatbot, has drawn over 170 million active users since its November 2022 launch, making it the fastest-growing consumer application in history. A new leap forward in AI's capabilities and applications seems to occur daily. Lenin reputedly mused, "There are decades where nothing happens; and there are weeks where decades happen." We are now living through the latter, and AI has permeated nearly every aspect of business life.
But the explosive growth of AI has also raised alarm bells, particularly with respect to generative technologies that create new text or images based on content scraped from the internet. Experts ranging from Geoffrey Hinton, the "Godfather of AI," to Sam Altman, the CEO of OpenAI, have expressed serious reservations about AI's capacity to amplify the harms already perpetrated on the internet (such as misinformation, deepfakes, monetary scams, and malware), invade privacy, violate intellectual property rights, and even pose existential dangers.
We are at a tipping point. After a decade of somnolence, numerous jurisdictions are now taking steps to bring the development of AI within governmental purview. The European Union and China have taken the lead: the EU's AI Act, first proposed in April 2021, is expected to become law by the end of this year, and China's Measures for the Management of Generative Artificial Intelligence Services are nearing finalization. Congress has thus far lagged, although progress at the federal level may be forthcoming. Most of the AI regulatory action in the United States to date has occurred at the state or city level: a handful of states have passed or are considering regulations aimed at reducing AI-generated bias, particularly in employment decisions, and about a dozen states and localities have imposed or are weighing limits on facial recognition AI.
The need for the regulation of AI is clear. Although companies often see regulation as both a distraction and a waste of money, the reality is that a punitive framework is sometimes needed to incentivize corporations to act in a socially desired manner. Experience has also taught that investing in compliance and acting within officially demarcated behavioral norms can save companies money in the long run.
But regulating AI is tricky. "AI" has no generally accepted definition. Because what counts as "AI" is so vast and cuts across so many industries, it will be challenging, if not impossible, for any single regulator to have the knowledge and capacity to police every problematic aspect of machine learning. AI's impact and attendant risks will also vary by sector and business category. And "Who should be regulated?" will be highly contested, as different actors (such as developers and deployers) play distinct roles within AI systems and often have little to no contact with one another.
Keeping up with the frenetic pace of AI's evolution creates another governance hurdle. Changes in AI technology are measured in days or weeks, whereas it can take policymakers years to enact laws or regulations. New AI rules, once passed, are likely to be quickly out of date, and nations in a competitive global economy will need to be careful not to freeze their AI technology in place with outdated measures. Collectively, these issues may lead to a balancing act in which legislators craft broad guardrails for AI but leave the intricacies of further regulation to agencies with existing expertise in the affected industries and deeper knowledge of the AI applications at issue.
The stringency of AI regulation will depend on the characteristics of the AI technology in question and the importance of the values it affects. Plenty of AI is noncontroversial — for example, AI that optimizes industrial processes, supply chains, and travel routes. Minimal regulation is thus needed in such categories. But other AI jeopardizes core human values, thereby requiring greater supervision. A jurisdiction's values will drive its approach to AI regulation, including the tools it employs. For instance, China's Measures require that AI "reflect the Socialist Core Values" and not subvert "state power." As a result, its regulations — designed to preserve these national values — are more comprehensive and detailed than the general, horizontal guidelines being debated elsewhere.
AI regulators have a variety of tools to promote such values. Some AI is too risky to justify release and will be banned. For instance, the EU will prohibit "dark-pattern" AI, which deploys subliminal techniques to distort human behavior; "social scoring" AI, which evaluates trustworthiness; and most real-time remote biometric identification in public spaces. Other risky AI may be "sandboxed": it must meet testing thresholds and obtain government certification before release, then pass periodic post-launch assessments. Many jurisdictions already require third-party audits to root out embedded bias within AI, although the accuracy and feasibility of such audits remain unclear. Human-in-the-loop (HITL) requirements are a popular means of improving the precision of AI models, reducing bias, and fostering transparency, although there are concerns that HITL increases costs, slows AI's speed, and imports human biases or mistakes.
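To make the HITL idea concrete, here is a minimal sketch of one common pattern: an automated model decides on its own only when its confidence clears a threshold, and everything else is escalated to a human reviewer. The classifier, threshold value, and review queue here are hypothetical placeholders, not any regulator's prescribed design.

```python
# Illustrative human-in-the-loop (HITL) gate: the model decides only when it
# is confident; borderline cases are escalated to a human reviewer.
# The toy model, 0.90 threshold, and queue are hypothetical placeholders.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class HITLGate:
    model: Callable[[str], Tuple[str, float]]  # returns (label, confidence)
    threshold: float = 0.90                    # minimum confidence to auto-decide
    review_queue: List[str] = field(default_factory=list)

    def decide(self, item: str) -> str:
        label, confidence = self.model(item)
        if confidence >= self.threshold:
            return label                       # automated decision
        self.review_queue.append(item)         # escalate to a human
        return "PENDING_HUMAN_REVIEW"

# Toy stand-in model: confidence grows with the length of the input.
def toy_model(application: str) -> Tuple[str, float]:
    confidence = min(len(application) / 100, 1.0)
    return ("approve" if len(application) > 50 else "reject", confidence)

gate = HITLGate(model=toy_model)
print(gate.decide("short application"))  # low confidence -> human review
print(gate.decide("x" * 120))            # high confidence -> automated "approve"
```

The trade-offs noted above fall directly out of the threshold: raising it routes more cases to humans (adding cost, delay, and potential human error), while lowering it automates more decisions.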
Regulators are also likely to impose disclosure requirements, including that affected parties be informed when they are interacting with or being judged by AI, be able to opt out of the AI process, and be afforded a meaningful explanation of an AI-driven decision. Data minimization and anonymization requirements will be used to protect privacy interests, and digital "watermarking" of AI-generated images or text will help fight copyright infringement and reduce misinformation. And future regulators will doubtless employ their own AI to assist in their oversight duties.
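The data minimization mechanic is easy to illustrate. In the sketch below, which assumes a hypothetical record schema, only task-relevant fields are retained, and the direct identifier is replaced with a salted one-way hash before the record enters an AI pipeline. This is pseudonymization, the weakest of the techniques regulators have in mind; genuine anonymization standards are far stricter.

```python
# Illustrative data minimization plus pseudonymization before an AI pipeline.
# The record schema, field allowlist, and salt are hypothetical placeholders;
# production-grade anonymization involves much more than hashing one field.
import hashlib

ALLOWED_FIELDS = {"age_band", "region", "purchase_category"}  # task-relevant only
SALT = b"rotate-me-regularly"  # secret salt, stored separately from the data

def pseudonymize_id(user_id: str) -> str:
    """One-way salted hash: records stay linkable without exposing identity."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

def minimize(record: dict) -> dict:
    """Drop every field not on the allowlist and replace the identifier."""
    reduced = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    reduced["pseudonym"] = pseudonymize_id(record["user_id"])
    return reduced

raw = {
    "user_id": "alice@example.com",
    "full_name": "Alice Example",  # dropped: not needed for the task
    "age_band": "30-39",
    "region": "EU-West",
    "purchase_category": "books",
}
print(minimize(raw))  # only the allowlisted fields plus a pseudonym survive
```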
While AI represents a new front in the eternal battle between regulation and economic development, the values at stake (autonomy, fairness, privacy, accountability, and transparency) are time-honored. Regulators will need to act with speed and agility to preserve them; private litigation cannot carry that burden alone, and the stakes are too high for inaction.