Fei-Fei Li: AI Laws Must Look to the Future

Fei-Fei Li, often referred to as the “godmother of AI,” is leading a group calling for a rethinking of how artificial intelligence should be regulated.

By Vasyl Hulyi, AI Columnist
March 20, 2025 — Germany

In a new 41-page report published on March 18, 2025, the Joint California Policy Working Group on AI Frontier Models, established at the initiative of California Governor Gavin Newsom, urges policymakers to consider risks from AI that “have yet to manifest in the world.” The document is available on cafrontieraigov.org, as reported by TechCrunch. But what does this mean for the future of AI — and why does it matter?

The report, co-authored by Fei-Fei Li along with Jennifer Chayes (Dean of the College of Computing, Data Science, and Society at UC Berkeley) and Mariano-Florentino Cuéllar (President of the Carnegie Endowment for International Peace), stresses the need for transparency from frontier AI labs such as OpenAI, Google, and Anthropic. The authors argue that legislation should not only respond to current threats but anticipate future risks, even when the evidence remains “uncertain.”

“If those speculating about the most extreme risks turn out to be right — and we’re not sure they are — the cost of inaction on frontier AI today would be extremely high,” the report states. As examples, it cites AI’s potential for cyberattacks or the development of biological weapons, comparing these risks to nuclear arms: “No one needs to witness an explosion to understand the damage it could cause.”

Is Li right?

The initiative follows Newsom’s 2024 veto of the controversial SB 1047 bill, which he deemed insufficiently developed. However, he acknowledged the need for deeper analysis of AI risks, leading to the formation of the working group. The new report supports several ideas from SB 1047 and the recently proposed SB 53 by Senator Scott Wiener, including mandatory safety test reporting by AI developers. Wiener called the report “an important step in the ongoing dialogue on AI regulation” and praised its “thoughtful balance between the need for safeguards and the support of innovation.”

Fei-Fei Li, best known for her work on ImageNet and as the founder of World Labs, has long advocated for a balanced approach to AI. In February 2025, at the AI Action Summit in Paris, she warned that regulation must be grounded in science, not “science fiction,” to avoid sensationalism. Her stance in the new report remains pragmatic: she proposes a “trust but verify” framework, where AI developers and employees could safely report potential risks without legal repercussions.

“We need a more scientific method for assessing and measuring the capabilities and limitations of AI, which can lead to more accurate and effective policy decisions,” Li emphasized.

Food for thought

The report has received support from both proponents and critics of strict AI regulation. Dean Ball, a researcher at George Mason University who had previously criticized SB 1047, called the document “a promising step” for California. However, experts like Matthew Barnett caution that AI risks are often exaggerated and that real digital threats, such as cyberattacks, have existed for years and are not unique to AI. This raises a key question: are we moving too fast with regulations based on hypothetical scenarios?

As Fei-Fei Li and her team continue to shape the future of AI, the draft report is open for public comment until April 8, 2025, with the final version expected in June 2025. Will such laws truly protect society from the unpredictable threats of AI — or is this just an attempt to please all sides?

We’ll be watching closely as science and policy try to speak the same language.
