Throughout Tech Week, News 8 is publishing a five-part series on the role of AI in various industries. For full Tech Week coverage, click here. 

GRAND RAPIDS, Mich. (WOOD) — A big topic as Tech Week Grand Rapids gets underway is the many uses of AI — including some nefarious uses.

Greg Gogolin, Ph.D., professor and director of cybersecurity and data science at Ferris State University, said the cybersecurity industry is focused on both securing AI applications and using AI to protect against cybersecurity threats.

“Cybersecurity threats are so sophisticated and so rapid … it’s not like a person or a team can respond to it. You need the automation,” he said. “AI is great at detecting outliers and patterns and things like that and giving you actionable intelligence and reaction.”

On Wednesday, two Tech Week events hosted by Ferris State University will look at some of the biggest topics of concern when it comes to AI, like deepfakes. “Deepfakes: The threat is Deep but it isn’t Fake!” will run from 1 p.m. to 1:45 p.m., followed by “The Relentless Pursuit of Securing the AI Borderless Walls,” which will run from 2 p.m. to 2:45 p.m.

Gogolin said as AI rapidly progresses, it is able to create more convincing deepfakes.

“The challenge with deepfakes is just in the last about 18 months, the technology has advanced so quickly that it doesn’t take a highly skilled programmer or artificial intelligence analyst to create things anymore,” he said.

He said anyone can easily access technology to face swap — putting one person’s face onto another person’s body. Deepfakes can be used for nefarious reasons, like earlier this year when pornographic deepfake images of Taylor Swift circulated online.

“It’s often used for nefarious purposes,” Gogolin said. “Whether it is political, for some sort of a cybersecurity incident, … (or) revenge.”

AI can also be used to create convincing deepfake videos or to impersonate someone’s voice.

“Perhaps the more alarming thing is voice,” Gogolin said, adding that right now, you only need about 15 seconds of someone’s voice to create a deepfake.

That can be used to prey on the elderly, like when scammers impersonate a family member over the phone and fake an emergency, typically asking for money. Gogolin encouraged families to come up with a keyword or phrase to help identify themselves in a real emergency.

AI is also becoming more prevalent in political spaces. In her endorsement of Vice President Kamala Harris, pop star Taylor Swift addressed concerns over the use of AI.

“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site,” she shared on social media. “It really conjured up my fears around AI, and the dangers of spreading misinformation.”

WHAT CAN YOU DO?

As AI becomes more sophisticated, Gogolin said you should be careful about anything you see online.

“You do have to be vigilant, perhaps suspicious,” he said. “It’s very difficult to believe anything anymore, particularly people that are maybe a little less technical, that don’t have the understanding.”

There’s very little you can do to protect your information online, Gogolin said, especially if you’re on social media. He said he isn’t on social media at all for that reason. Even if you’re not on social media, bad actors can get your information through a work website, through posts a friend makes of you or through hacks, like the 2017 Equifax hack that compromised the data of 147.9 million people.

“Your information is already compromised. It’s already out there,” he said.

One suggestion he had was to intentionally post fake information, like giving websites a fake birthday. That’s especially important if you have an uncommon name.

“It might slow them down or make them move on to the next person,” he said. “A lot of times that’s all you’re trying to do.”

He also encouraged people to contact lawmakers.

“I really think it’s important that people reach out to their legislators and elevate this, because until the heat is put on the legislators, very little is going to happen,” he said.

‘THE NO. 1 PROBLEM’ IN AI AND CYBERSECURITY

Gogolin said “the No. 1 problem” when it comes to AI and cybersecurity is that not enough U.S. citizens are developing the necessary skill sets. In his experience, the majority of students in U.S. postgraduate programs on AI are not U.S. citizens, he said.

Ferris developed one of the first AI undergrad programs in the country, he said, and more undergraduate programs are developing fast. It’s a trend that he hopes will continue, though he cautioned that if schools develop an AI program, it needs to be given enough resources to bring in professors with the right background.

He said there are so many jobs out there connected to AI and cybersecurity.

“There’s a lot of different potential skill sets. Don’t think that it’s just programming, it’s just networking or something like that. It’s not. There’s something for everybody,” he said.

He also stressed that it’s not a job just for men. He said that particularly with AI, it’s important to have multiple perspectives in the room, but that throughout his years in the industry, the cybersecurity field has been dominated by white men.

“It doesn’t have to be that way, and it shouldn’t be that way,” he said. “AI and cybersecurity is not a male thing. Actually, all of the faculty members in our … cybersecurity program have daughters, and we have made it a point to try to make sure that the message is out there, that there are a lot of opportunities for female students, same for minorities.”