Looking to California's AI Policy—Balancing Innovation and Accountability | Opinion

California took an important step toward responsible AI governance recently when a panel of leading AI experts—convened by Governor Gavin Newsom—released a comprehensive roadmap for AI policy in the state. The report offered thoughtful and nuanced recommendations for California legislators considering dozens of AI bills this session, including calls for greater industry competition, more transparency from AI developers, third-party evaluations of advanced models, and stronger protections for whistleblowers. It's encouraging that legislators already appear to be referring to these recommendations.

As the CEO of Omidyar Network, a philanthropic investment firm headquartered in Silicon Valley, I have spent the last decade supporting responsible tech innovation that serves the public good. I believe that California's framework exemplifies how policymakers can ensure AI guardrails foster innovation while protecting the public interest—and should set a national precedent.

If legislators want to act on AI, they should look to the Golden State.

Google Gemini, OpenAI ChatGPT, and Microsoft Copilot app icons are seen on screen. Getty Images

California is running a process based on transparency and diverse participation. Public input is essential to good policymaking of any kind, but it's particularly important for AI. Because the technology is so new (and evolving so quickly), it is already affecting communities across the country in unexpected ways. On the positive side, AI is helping firefighters identify and better manage wildfires, and improving nutrition access in the state's food deserts. But it's also driving up the cost of affordable housing and putting hundreds of thousands of jobs at risk. As the technology evolves, the number of people and industries affected will only grow.

That's why it's so important that the panel is actively seeking input on its report. The public comment period is open until April 8, offering an important opportunity for a wider range of stakeholders to help shape the future of AI policy in California. AI will impact us all—and we should all make our voices heard.

Big Tech companies and their enormous lobbying teams are eager to share their views as part of this process. But while California policymakers are wrestling to strike a balance between AI innovation and responsibility, Big Tech companies are largely fighting to maintain the status quo. Days before the California report's release, the Trump administration closed the comment period on its own AI Action Plan. Industry giants like Google and Meta, along with newer large players like OpenAI and Anthropic, flooded the docket with submissions that read like a corporate wish list. Beyond stressing important national security concerns and competition with China, these companies requested exemptions from copyright law that, if granted, would harm the creative economy of Hollywood, professional media organizations, and independent artists across the state. They also urged the federal government to create loopholes that would free them from liability for catastrophic harms caused by their products. We know firsthand the harm this type of provision can cause—after all, Big Tech and social media companies in particular continue to exploit the liability shield that allows harmful online content aimed at children to proliferate.

As investors in companies and funds in this space, we are aware of the risks that an unclear liability framework or muddied rules around IP ownership pose to investors. The key is to build clarity and enact well-constructed safeguards so we can focus on keeping the AI economy competitive and innovative. Policies like third-party evaluations, whistleblower protections, and adverse-event reporting systems can hold AI companies accountable while encouraging breakthroughs that also serve America's national interests.

Regulations, done well, can level the playing field, empowering smaller challengers to take on giants, create new products and use cases, and push industries forward. Silicon Valley itself was built on this spirit of disruption, where startups thrived because they had the chance to compete. Not long ago, OpenAI was one of those upstarts, with just a few hundred employees directly challenging behemoth Google in the AI race. Even today's most dominant players can face competition that drives the whole industry forward when markets remain open and fair.

If California can figure out how to drive innovation safely, everyone else can, too. As the home of many of the world's largest AI companies, the state has witnessed both the transformative potential of innovation and the dangers of unchecked growth. From invasive surveillance to addictive services, the costs are impossible to ignore. Yet California is also home to a powerful ecosystem of advocates and experts dedicated to responsible tech development. Common Sense Media and the newly launched Tech Oversight Project-California are working on kids' safety, data privacy, and accountability in the tech sector; Economic Security California is advocating for CalCompute to broaden equitable access to compute; and TechEquity Collaborative is working alongside California state legislators to advocate for economic justice in communities impacted by technological advancement.

California's approach to AI governance can serve as a model for the nation. Disruptive, radical innovation is necessary to maintain American primacy in AI. But that can only come from an environment where startups can challenge incumbents and consolidated giants must constantly push the limits of science and knowledge to maintain an edge. This type of environment is not a state of nature; it must be crafted by and for the public interest. By standing firm and advancing a balanced approach to AI governance, the state is laying the groundwork for a healthier AI ecosystem.

Mike Kubzansky is CEO of Omidyar Network.

The views expressed in this article are the writer's own.
