Photo caption: Investors and technology leaders at a generative AI conference in San Francisco in June 2023. Photo: Carlos Barria/Reuters

AI Regulation Is Coming. Fortune 500 Companies Are Bracing for Impact

The state of artificial intelligence regulation in the U.S., or the lack of it, is a pressing matter for Fortune 500 companies as they launch AI projects. 

Roughly 27% cited AI regulation as a risk in recent filings with the Securities and Exchange Commission, one of the clearest signs yet of how AI rules could affect businesses.

The concern reflects a regulatory landscape bracketed by the European Union’s AI Act on one end and a hodgepodge of U.S. state initiatives in development on the other, even as corporations forge ahead with AI anyway. The U.S. government hasn’t regulated AI, even as lawmakers at last week’s Democratic National Convention said they would continue to push for an election-year breakthrough.

“We need a combination of industry and consumer self-regulation, as well as formal regulation,” said George Kurian, chief executive of data storage and services company NetApp. “If regulation is focused on enabling the confident use of AI, it can be a boon.”

But, according to NetApp’s annual filing with the SEC, if regulation “materially delays or impedes the adoption of AI, demand for our products may not meet our forecasts.”

Other companies also have a mixed view of AI regulation. A recent analysis from Arize AI, a startup building a platform for monitoring AI models, shows 137 of the Fortune 500 cited AI regulation as a risk factor in annual reports, with issues ranging from higher compliance costs and penalties to a drag on revenue and AI running afoul of regulations.

For Motorola Solutions, complying with existing and new AI and data usage laws may be “onerous and expensive,” and inconsistencies in regulation from jurisdiction to jurisdiction could increase costs and liability, the communications and security equipment maker said in its annual report. 

Motorola also said it isn’t clear how new laws will apply to its products and services, and increases in costs or liability might make its AI and data products less attractive to customers.

The inability to predict how regulation will take shape and the absence of a single global regulatory framework for AI create uncertainty, credit card company Visa said in its annual report.

Still, many corporations aren’t pausing AI initiatives, especially if competitors are pushing forward with the technology, and though far from being fully ironed out, some AI rules in the U.S. are starting to gain steam.

A California bill called SB 1047 is seen by companies and AI developers as a bellwether for how AI will be regulated across the country. AI model makers say the bill will slow innovation and discourage sharing AI technology—for free or otherwise.

Hundreds of other AI bills are in state legislatures, with some 30 in California alone.

“Just looking at all the regulations making their way through the California Senate and Assembly is mind-boggling,” said Niranjan Ramsunder, chief technology officer and head of data services at digital transformation services firm UST. “Any time bureaucracy steps in it creates additional compliance work and decreases productivity.”

Moody’s echoed his view in its annual report, noting that its use of generative AI could open the credit rating company up to greater regulatory scrutiny and litigation and take senior management’s time away from addressing other business issues.

The White House’s Executive Order on AI and the EU AI Act point to a global trend toward more comprehensive and nuanced regulation, requiring “compliance developments or enhancements,” healthcare giant Johnson & Johnson said in its annual report.

While regulation may increase costs and liability, some companies say reining in AI model makers might be a good thing.

Travel technology company Booking Holdings, for instance, pointed out in its annual report that model developers could train AI on biased, stale or insufficient data—and “such risks are heightened if we or third-party developers or vendors lack sufficient responsible AI development or governance practices.” 

Some corporations are hoping to get ahead of regulation by setting their own AI guidelines. Bhavesh Dayalji, S&P Global’s Chief AI Officer, said that because the financial information and analytics company has rolled out internal AI policies and practices, “we don’t anticipate any balanced industry regulation significantly impacting our approach.”

New and enhanced regulations, however, could hurt S&P Global’s ability to offer products and services using AI and compete with other AI products, the company said in its annual report.  

Financial services and insurance companies accustomed to working with regulators may have an easier time navigating AI laws.

Nasdaq President Tal Cohen said the exchange operator recently launched an SEC-approved AI-enabled stock order type that offers a blueprint for future work with regulators, including how to manage the technology if it goes awry. Still, regulation is costly: Nasdaq said in its annual report that new laws may be burdensome or involve additional oversight.

Regulating and creating guardrails for generative AI is also different from doing so for other forms of AI, which behave more predictably, Cohen said. Generative AI requires designing processes “that allow us to keep up with the trajectory of intelligence.”

Cohen added: “We won’t let anything get out the door unless it’s ready for prime time.”

Source: wsj.com

