AI may try to keep you from voting. Don’t fall for it

GRAND RAPIDS, Mich. (WOOD) — Bad actors are using AI to try to sway voters and impact this year’s presidential election.

A public service announcement released Tuesday by Represent Us, a nonpartisan anti-corruption organization, warns voters that AI may be used to keep them from voting. It says fake phone calls, videos and messages may falsely tell voters that voting has been extended or that their polling location has changed.

“If something seems off, it probably is,” the campaign says.

Multiple celebrities deliver the message, but at the end, it’s revealed that only some of them are real. Others, like Amy Schumer, were deepfakes: actors who, through the use of AI, are made to look and sound like the celebrities they impersonate.

“I love you, Amy. I’m so sorry,” the deepfake actor says. “I’m not even American.”

Ahead of New Hampshire’s primary in January, a robocall that used AI to imitate President Joe Biden’s voice went out to 50,000 Democratic voters, NBC reported. The robocall encouraged them not to vote in the primary.

“It’s important that you save your vote for the November election,” the call said, according to NBC.

HOW AI LIES WORK

Many voters are aware of the potential use of AI in this year’s election. In a study released in September, the Pew Research Center found that 57% of U.S. adults were concerned about the influence AI would have on the election and 40% thought AI would be primarily used for bad purposes in the presidential campaign.

Republicans and Democrats were equally concerned, the study found.

The respondents were right to be worried. Greg Gogolin, Ph.D., director of Ferris State University’s Center for Cybersecurity and Data Science, told News 8 that AI has had a “fair amount of impact” on this election.

“Some of the maybe more commonly known ones would be, (former President Donald Trump) had posted an image of Taylor Swift endorsing him, which was AI-generated,” he said. “But there’s a lot of subtler things. So for example, there may be images that are generated to make the candidates look better: healthier, trimmer, younger.”

Other countries have also used the technology to try to influence the election, he said. Foreign actors have used bots to post inaccurate reports online. AI can generate such stories quickly, so even if one doesn’t pick up steam, the next one might.

The bots can then interact with the people who share the story, boosting it in a platform’s recommendation algorithm, or continue to target them with similar falsehoods.

“The information could be anything. It could be videos, it could be bots and stories and things like that, manipulating information,” he said. “You don’t have to have it be crazy to be effective and it doesn’t have to be super well-polished necessarily.”

Programs like Google’s Gemini will reject requests to generate election-related content, he said, but others, like Grok, which comes from the website X, formerly known as Twitter, do not.

“It will generate all kinds of crazy things,” he said. “You’ve seen people generate images of Kamala Harris out with a gun or Trump and Elon Musk kissing. There’s all kinds of things. And they look pretty good, in terms of image quality.”

Overall, AI’s new role in politics and the election isn’t something that can be easily addressed.

“I see the AI challenge being no less difficult to address as the Second Amendment,” Gogolin said. “There’s a lot of emotion through that process.”

Still, there are several efforts to combat AI misinformation, especially when it comes to the election. Several states, including Michigan, are considering legislation dealing with the matter.

Gogolin said that would be just a starting point, noting that if legislation focuses on one aspect of AI, it can quickly become outdated.

“A lot of times the approach is to try to be a little bit broader, a little bit vaguer so that it will potentially apply,” he said. “But then the interpretation when it gets through the court systems can be a little bit vague as well, and you might not get the result that you were looking for.”

MITIGATING MISINFORMATION

He added that legislation wouldn’t make a difference for AI content coming in from overseas.

Michigan Attorney General Dana Nessel recently launched a webpage that aims to combat election misinformation by helping residents recognize AI online.

“While technology has made it easier than ever to access election information, the rise of AI has also made it easier for misinformation on these topics to spread,” Nessel said in an Oct. 15 release. “To help protect voters from misinformation, this comprehensive guide is part of our ongoing efforts to ensure everyone can confidently navigate the digital landscape and stay informed with the facts they need to make their voices heard this election.”  

But Gogolin noted that some may be wary of government-sponsored websites. He said Michigan Republicans may not trust webpages like the one Nessel launched and Democrats may not trust tools coming from states with Republican-controlled governments.

WHAT TO WATCH FOR

Still, there are several steps individuals can take to stay vigilant. To verify suspicious videos, photos or stories, check with trusted news outlets, government agencies or expert organizations like universities, research groups or hospitals.

For AI-generated stories posted by bots, Gogolin advised watching for grammar that is “too perfect,” with no emotional words and no slang. Search the story online to see if trustworthy organizations have reported on it. If you think an image may be AI-generated, run it through a Google image search and see if it appears elsewhere.
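As a rough illustration of the “too perfect” check, here is a minimal Python sketch that flags text containing neither slang nor emotionally loaded words. The wordlists and the rule itself are illustrative assumptions, not a reliable detector; real AI-generated text routinely evades simple heuristics like this.

```python
import re

# Illustrative wordlists only; a real check would need far larger lexicons.
EMOTIONAL_WORDS = {"angry", "furious", "thrilled", "terrified", "outraged", "heartbroken"}
SLANG_WORDS = {"gonna", "wanna", "kinda", "tbh", "lol", "y'all"}

def looks_too_perfect(text: str) -> bool:
    """Flag text that contains no slang and no emotional words."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return not (words & EMOTIONAL_WORDS) and not (words & SLANG_WORDS)

sample = "Officials have confirmed that all polling locations will remain open as scheduled."
print(looks_too_perfect(sample))  # True: no slang or emotional words found
```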

“A lot of times you will see part of an image on another picture,” Gogolin said. “If you see part of that image in another picture, in a different context even, that’s a huge tipoff.”
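For the image check, here is a minimal sketch of one way to automate the first step: building a reverse image search link for a publicly hosted image. The Google Lens uploadbyurl endpoint is an assumption about Google’s current, undocumented URL format; pasting the image into images.google.com by hand accomplishes the same thing.

```python
from urllib.parse import quote

def reverse_image_search_link(image_url: str) -> str:
    # Assumed, undocumented Google Lens URL format; it may change without notice.
    return "https://lens.google.com/uploadbyurl?url=" + quote(image_url, safe="")

# Hypothetical image URL, for illustration only.
print(reverse_image_search_link("https://example.com/suspicious-photo.jpg"))
```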

He also encouraged people to consider where they are seeing the image or story. Was it posted by a source that you trust? Was it posted in the “wild west” — social media?

“It’s a game of influence and trust and it comes down to who you are going to believe,” Gogolin wrote in a message to News 8 after the interview.

If a friend or loved one sends something that appears to be AI-generated, it can be difficult to know how to respond. Someone who trusts your opinion, like an elderly relative, may be open to hearing about some of the techniques that can be used to tell if something is AI-generated. Others may not be.

“I have had many conversations with friends, family members, acquaintances, colleagues about the issue of the authenticity of something,” Gogolin said. “This is a particularly interesting election cycle in that there are some very, very strong opinions one way or another. And a lot of people have made up their mind that if it is pro their candidate that it’s accurate (and) if it is anti their candidate that it’s not accurate. And there’s really not a lot that you can do to change their mind.”