Elon Musk Stirs Controversy After Retweeting AI-Altered Kamala Harris Campaign Ad, Sparking Debate On AI In Politics

In a move that has ignited widespread concern and debate, tech mogul Elon Musk, owner of X (formerly Twitter), recently retweeted an AI-generated parody of Vice President Kamala Harris’ campaign ad.

The altered video, originating from a YouTube account named “Mr Reagan,” features an uncanny voiceover mimicking Harris, spouting divisive and satirical remarks. This incident has raised alarms about the burgeoning role of artificial intelligence in political discourse and the potential for misuse.

The video, which uses footage from a genuine YouTube campaign ad of Harris addressing crowds and interacting with supporters, is striking not just for its visuals but also for a voiceover that closely resembles Harris’ voice. In the fake voiceover, the supposed Harris says, “I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate. I was selected because I am the ultimate diversity hire. I’m both a woman and a person of color. So if you criticize anything, you’re both sexist and racist.” The original context and message of the campaign ad were completely subverted, turning the video into a pointed, if crude, commentary on identity politics.

Experts in AI and digital forensics quickly confirmed the video’s AI origins. Hany Farid, a digital forensics expert at the University of California, Berkeley, noted the sophistication of the AI-generated voice, highlighting how the video demonstrates the power and potential danger of generative AI and deepfake technology. “The AI-generated voice is very good,” Farid remarked. “Even though most people won’t believe it is VP Harris’ voice, the video is that much more powerful when the words are in her voice.”

The retweet by Musk, who has over 100 million followers on the platform, did not go unnoticed. Critics and public officials voiced their concerns about the implications of such technology in the political arena. 

Alexios Mantzarlis, Director of the Security, Trust, and Safety Initiative at Cornell Tech, pointed out that the use of deepfakes in political contexts is not unprecedented globally, noting similar incidents in countries such as Argentina and India. “I expect we’ll see plenty of this in the U.S. for the next 100 days until the November election,” Mantzarlis predicted.

The incident also brought to the forefront the ongoing debate about the regulation of AI-generated content. California Governor Gavin Newsom condemned the use of manipulated media in politics, stating, “Manipulating a voice in an ‘ad’ like this one should be illegal.” Newsom pledged to sign new legislation aimed at curbing such practices. Meanwhile, Minnesota Democratic Senator Amy Klobuchar criticized Musk’s decision to share the video, suggesting that it might violate X’s policies on misleading media. “If @elonmusk and X let this go and don’t label it as altered AI content, they will not only be violating X’s own rules, they’ll be unleashing an entire election season of fake AI voice and image-altered content with no limits, regardless of party,” Klobuchar wrote.

X’s policy states, “You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm (‘misleading media’). In addition, we may label posts containing misleading media to help people understand their authenticity and to provide additional context.”

The retweet has sparked a crucial conversation about the ethical boundaries and responsibilities of technology platforms and influencers in an era when AI can convincingly mimic public figures at scale.

As the 2024 election season heats up, the AI-generated ad of VP Kamala Harris serves as a stark reminder of AI’s potential to disrupt the political process and of the urgent need to remain vigilant against misinformation online.
