Concerns grow as AI-generated videos spread hate, racism online: ‘No safety rules’


At first it appears to be a quirky video clip generated by artificial intelligence to make people laugh.

In it, a hairy Bigfoot wearing a cowboy hat and a vest emblazoned with the American flag sits behind the wheel of a pickup truck.

“We are going today to the LGBT parade,” the apelike creature says with a laugh. “You are going to love it.”

Things then take a violent and disturbing turn as Bigfoot drives through a crowd of screaming people, some of them holding rainbow flags.

The clip posted in June on the AmericanBigfoot TikTok page has garnered more than 360,000 views and hundreds of comments, most of them applauding the video.

In recent months similar AI-generated content has flooded social media platforms, openly promoting violence and spreading hate against members of LGBTQ+, Jewish, Muslim and other minority groups.


While the origin of most of those videos is unclear, their spread on social media is sparking outrage and concern among experts and advocates who say Canadian regulations cannot keep up with the pace of hateful AI-generated content, nor adequately address the risks it poses to public safety.

Video: 'Saskatchewan’s premier to track down creators of AI ‘deepfakes’ of him and Carney'

Egale Canada, an LGBTQ+ advocacy organization, says the community is worried about the rise of transphobic and homophobic misinformation content on social media.

“These AI tools are being weaponized to dehumanize and discredit trans and gender diverse people and existing digital safety laws are failing to address the scale and speed of this new threat,” executive director Helen Kennedy said in a statement.

Rapidly evolving technology has given bad actors a powerful tool to spread misinformation and hate, with transgender individuals being targeted disproportionately, Kennedy said.


The LGBTQ+ community isn’t the only target, said Evan Balgord, executive director of the Canadian Anti-Hate Network. Islamophobic, antisemitic and anti-South Asian content made with generative AI tools is also widely circulating on social media, he said.

“When they create the environment where there’s a lot of celebration of violence towards those groups, it does make violence towards those groups happening in person or on the streets more likely,” Balgord warned in a phone interview.

Video: 'Study finds ChatGPT responded in dangerous ways more than half the time to users seeking advice'

Canada’s digital safety laws were already lagging behind and advancements in AI have made things even more complicated, he said.


Bills aimed at addressing harmful online content and establishing a regulatory AI framework died when Parliament was prorogued in January, said Andrea Slane, a legal studies professor at Ontario Tech University who has done extensive research on online safety.

Slane said the government needs to take another look at online harms legislation and reintroduce the bill “urgently.”

“I think Canada is in a situation where they really just need to move,” she said.

Justice Minister Sean Fraser told The Canadian Press in June that the federal government will take a “fresh” look at the Online Harms Act but it hasn’t decided whether to rewrite or simply reintroduce it. Among other things, the bill aimed to hold social media platforms accountable for reducing exposure to harmful content.

A spokesperson for the newly created Ministry of Artificial Intelligence and Digital Innovation said the government is taking the issue of AI-generated hateful content seriously, especially when it targets vulnerable minority groups.

Video: 'Alberta girl’s football coach accused of making child pornography with AI'

Sofia Ouslis said existing laws do provide “important protections,” but admitted they didn’t aim to address the threat of generative AI when they were designed.


“There’s a real need to understand how AI tools are being used and misused — and how we can strengthen the guardrails,” she said in a statement. “That work is ongoing.”

The work involves reviewing existing frameworks, monitoring court decisions “and listening closely to both legal and technological experts,” Ouslis said. She added that Prime Minister Mark Carney’s government has also committed to making the distribution of non-consensual sexual deepfakes a criminal offence.

“In this fast-moving space, we believe it’s better to get regulation right than to move too quickly and get it wrong,” she said, noting that Ottawa is looking to learn from the European Union and the United Kingdom.

Slane said the European Union has been ahead of others in regulating AI and ensuring digital safety, but even at the "forefront" there is a sense that more needs to be done.

Experts say regulating content distributed by social media giants is particularly difficult because those companies aren’t Canadian.

Video: '‘White collar bloodbath’: Will AI destroy the entry-level office job?'

Another complicating factor is the current political climate south of the border, where U.S. tech companies are seeing reduced regulations and restrictions, making them “more powerful and feeling less responsible,” said Slane.


Although generative AI has been around for a few years, there’s been a “breakthrough” in recent months making it easier to produce good quality videos using tools that are mostly available for free or at a very low price, said Peter Lewis, Canada Research Chair in trustworthy artificial intelligence.

“I’ve got to say it’s really accessible to almost anybody with a little bit of technical knowledge and access to the right tools right now,” he said.

Lewis, who is also an assistant professor at Ontario Tech University, said that large language models such as ChatGPT have implemented safeguards in an effort to filter out harmful or illegal content.

But more needs to be done in the video space to create such guardrails, he said.

“You and I could watch the video and probably be horrified,” he said, adding “it’s not clear necessarily that the AI system has the ability to sort of reflect on what it has created.”

Video: 'Viral video of Chinese paraglider likely doctored with AI footage'

Lewis said that while he isn’t a legal expert, he believes existing laws can be used to combat the online glorification of hate and violence in the AmericanBigfoot videos.


But he added that the rapid development of generative AI and the widespread availability of new tools "does call for new technological solutions" and collaboration between governments, consumers, advocates, social platforms and AI app developers to address the problem.

“If these things are being uploaded … we need really robust responsive flagging mechanisms to be able to get these things off the internet as quickly as possible,” he said.

Lewis said using AI tools to detect and flag such videos helps, but it won’t resolve the issue.
