New York Attorney General Letitia James is stepping up to tackle the growing threat of election-related misinformation, especially as generative AI technology makes it easier to spread deceptive content. In a letter obtained exclusively by ABC News, James has called on nearly a dozen major tech companies, including Meta, Google, and OpenAI, to take serious action to protect voters from misleading information.
James pointed out that while misinformation has always been a concern during elections, the rise of generative AI has dramatically lowered the barriers for bad actors to create and spread fake content. These AI tools, which have become incredibly popular and easy to use, are making it increasingly difficult for people to tell the difference between what’s real and what’s not.
One recent example James highlighted involved an altered campaign video of Vice President Kamala Harris. The original audio was swapped out and replaced with an AI-generated voice that mimicked Harris, making it seem like she said things she never did. The video, posted on the platform X, was labeled as a parody, but after it was reposted by X owner Elon Musk without clarification, it gained widespread attention.
This isn’t the only instance of AI being used to mislead voters. Earlier this year, a robocall impersonating President Joe Biden’s voice told recipients to “save your vote” for the general election instead of participating in the New Hampshire primary—a clear attempt to confuse voters.
James isn’t the only one sounding the alarm. Last month, secretaries of state from five different states wrote to Musk, urging him to ensure that X’s AI search assistant, Grok, directs voters to accurate, nonpartisan information about voting, much as OpenAI’s ChatGPT does.
A recent study by AI Forensics, a European nonprofit, found that Microsoft Copilot’s answers to election-related questions were wrong 30% of the time. This led Microsoft and Google to introduce new moderation layers to their AI chatbots, preventing them from answering questions related to elections.
Back in February 2024, many of the companies James addressed in her letter signed a voluntary agreement to prevent AI from disrupting democratic elections. While they didn’t commit to banning or removing deepfakes outright, they promised to detect and label deceptive AI content when it appears on their platforms.
In her letter, James requested an in-person meeting with representatives of these companies to discuss what steps they are taking to safeguard voters from misinformation, and she is also seeking written responses about their policies and practices. While the letter didn’t mention any mandatory obligations, there’s an underlying hint that non-compliance could lead to enforcement actions.