In the London neighborhood of Wapping, AI company Synthesia captures audio and video of real humans to create AI avatars. A green screen is used to capture footage for two-dimensional avatars that include only a subject’s head and shoulders. Photo: Isabelle Bousquette/The Wall Street Journal

Regulators May Not Like ‘Deepfakes,’ But Businesses Are Using Them Anyway

With AI regulation at an embryonic stage, companies are charting their own course in creating audio and video avatars, cognizant of the legal hazards. ‘It’s a minefield right now,’ says one executive

Companies are drawing up their own best practices for the use of AI-generated imagery and video, also known as deepfakes, for a range of business situations, from research to employee training.

Amid a dearth of U.S. regulatory guidance on artificial intelligence, it’s a delicate dance. Synthetic audio and video in the hands of bad actors have been linked to a swath of ills, from misinformation campaigns to revenge porn. More recently, the 2024 election has put the spotlight on the potential for AI-powered falsehoods that risk confusing voters.

Even legitimate use of generative AI raises challenging questions over intellectual property, consent and disclosure that have yet to work their way through the U.S. regulatory system.

“It is still very new,” said Eng Lim Goh, senior vice president of data and AI at Hewlett Packard Enterprise. The company recently created “Antonio Nearly,” an AI avatar of CEO Antonio Neri. Antonio Nearly can appear in video, audio, text chat or even hologram form.

The avatar was augmented with HPE white papers, news releases and marketing materials, as well as speeches Neri has given. 

HPE owns the avatar and is using it under CEO Neri’s guidance. But what happens when Neri leaves the company?  

The company said it could shift the avatar to take on a new persona. But it is a scenario that may come up more often, said Vilas Dhar, president of the Patrick J. McGovern Foundation, a philanthropic organization focused on AI for social good.

Telecommunications company Vodafone said it recently deployed a customer service audio chatbot in Germany based on the voice of one of its agents, who was selected for the role in an internal competition. 

In a situation like that, Dhar said, companies need to consider what happens if that employee leaves the company, especially if the departure is marked by conflict, and whether there are any ramifications for continuity of service.

Sonesta International Hotels said it is using avatars from tech provider Colossyan to create employee training videos at a fraction of what it used to cost to film and produce the videos manually. But the company ran into obstacles when it needed an avatar in a sexual harassment training video to use sexually explicit language. Actors who lent their likenesses to Colossyan as avatars had initially forbidden certain language in their contracts.

Synthesia’s circular capture room includes 300 lights, 79 cameras and five microphones used to create avatars. Photo: Isabelle Bousquette/The Wall Street Journal

“The legalities of what the AI can and cannot say is—it’s a minefield right now,” said Kristin Broadhead, Sonesta’s director of learning and development.

Efforts in the public sphere designed to protect individuals and entities from deepfake misuse—intentional or otherwise—could help clear that minefield. Fears of sophisticated misinformation campaigns around the 2024 presidential election and high-profile incidents involving sexualized imagery are helping propel some of that legislation. 

The European Union’s AI Act, which took effect this month, requires mandatory disclosure of highly realistic AI-generated outputs. But in the U.S., a complex patchwork of legislation includes some bills that offer more explicit rules on deepfakes than others. 

In New York, for example, Assemblyman Alex Bores, who represents the state’s 73rd district, was one of the sponsors of the Political Artificial Intelligence Disclaimer (PAID) Act, which requires political communications that use synthetic media to disclose that they were created with the assistance of artificial intelligence. 

“I’m very big on: You should know when you’re talking to an AI,” said Bores. The bill is now in committee. 

Some companies say they don’t necessarily disclose to employees watching a training video, for example, whether the images are AI-generated. Sonesta currently has no disclaimer on its videos.

Neither does Birchwood Foods, which uses AI avatars in training videos for employees who work at food plants. AI avatars have allowed the company to cut the cost of translating or dubbing training for the many workers who don’t speak English or struggle with literacy.

Fortell.ai, an AI startup that works with nongovernmental and humanitarian organizations, said it lets people know when they are about to interact with AI avatars. It uses avatars to interview recipients of humanitarian aid: people who have received aid get a link to a site where the avatar asks them questions, and their responses are recorded and shared back with the humanitarian organization.

Still, Rob Symes, chief executive of Fortell.ai, said many users, who often come from vulnerable communities where AI is less commonplace, don’t notice the disclosure and thus don’t realize they are communicating with an AI-generated avatar.

A simple one-liner may not be enough for a company to show it made a good-faith effort to disclose the use of AI, said Henry Ajder, founder of Latent Space, a consulting firm focused on responsible AI. Disclosures may need to appear in multiple languages and be accessible to people with disabilities.

“What people consistently want is transparency,” Ajder said. “They want disclosure about what it is they’re viewing and they don’t like being fooled. That’s the bottom line.”

Source: wsj.com
