Election misinformation is a problem in any language. But some gets more attention than others

Election misinformation will pose a formidable challenge this year as billions of people in dozens of countries head to the polls

Warnings about deepfakes and disinformation fueled by artificial intelligence. Concerns about campaigns and candidates using social media to spread lies about elections. Fears that tech companies will fail to address these issues as their platforms are used to undermine democracy ahead of pivotal elections.

Related video above: Putin declared winner of a presidential race that was never in doubt

Those are the worries facing elections in the U.S., where most voters speak English. But for languages like Spanish, or in dozens of nations where English isn't the dominant language, there are even fewer safeguards in place to protect voters and democracy against the corrosive effects of election misinformation. It's a problem getting renewed attention in an election year in which more people than ever will go to the polls.

Tech companies have faced intense political pressure in countries like the U.S. and places like the European Union to show they're serious about tackling the baseless claims, hate speech and authoritarian propaganda that pollute their sites. But critics say they've been less responsive to similar concerns from smaller countries or from voters who speak other languages, reflecting a longtime bias toward English, the U.S. and other Western democracies.

Recent changes at tech firms — content moderator layoffs and decisions to roll back some misinformation policies — have only compounded the situation, even as new technologies like artificial intelligence make it easier than ever to craft lifelike audio and video that can fool voters.

These gaps have opened up opportunities for candidates, political parties or foreign adversaries looking to create electoral chaos by targeting non-English speakers — whether they are Latinos in the U.S., or one of the millions of voters in India, for instance, who speak a non-English language.

"If there's a significant population that speaks another language, you can bet there's going to be disinformation targeting them," said Randy Abreu, an attorney at the U.S.-based National Hispanic Media Council, which created the Spanish Language Disinformation Coalition to track and identify disinformation targeting Latino voters in the U.S. "The power of artificial intelligence is now making this an even more frightening reality."

Many of the big tech companies regularly tout their efforts to safeguard elections, and not just in the U.S. and E.U. This month, Meta is launching a service on WhatsApp that will allow users to flag possible AI deepfakes for action by fact-checkers. The service will work in four languages — English, Hindi, Tamil and Telugu.

Meta says it has teams monitoring for misinformation in dozens of languages, and the company has announced other election-year policies for AI that will apply globally, including required labels for deepfakes as well as labels for political ads created using AI. But those rules have not taken effect, and the company hasn't said when they will begin enforcement.

The laws governing social media platforms vary by nation, and critics of tech companies say they have been faster to address concerns about misinformation in the U.S. and the E.U., which has recently enacted new laws designed to address the problem. Other nations all too often get a "cookie cutter" response from tech companies that falls short, according to an analysis published this month by the Mozilla Foundation.

The study looked at 200 different policy announcements from Meta, TikTok, X and Google (the owner of YouTube) and found that nearly two-thirds were focused on the U.S. or E.U. Actions in those jurisdictions were also more likely to involve meaningful investments of staff and resources, the foundation found, while new policies in other nations were more likely to rely on partnerships with fact-checking organizations and media literacy campaigns.

Odanga Madung, a Nairobi, Kenya-based researcher who conducted Mozilla's study, said it became clear that the platforms' focus on the U.S. and E.U. comes at the expense of the rest of the world.

"It's a glaring travesty that platforms blatantly favor the U.S. and Europe with excessive policy coddling and protections, while systematically neglecting" other regions, Madung said.

This lack of focus on other regions and languages will increase the risk that election misinformation could mislead voters and impact the results of elections. Around the globe, the claims are already circulating.

Within the U.S., voters whose primary language is something other than English are already facing a wave of misleading and baseless claims, Abreu said. Claims targeting Spanish speakers, for instance, include posts that overstate the extent of voter fraud or contain false information about casting a ballot or registering to vote.

Disinformation about elections has surged in Africa ahead of recent elections, according to a study this month from the Africa Center for Strategic Studies, which identified dozens of recent disinformation campaigns — a four-fold increase from 2022. The false claims included baseless allegations about candidates, false information about voting and narratives that seemed designed to undermine support for the United States and United Nations.

The center determined that some of the campaigns were mounted by groups allied with the Kremlin, while others were spearheaded by domestic political groups.

India, the world's largest democracy, boasts more than a dozen languages, each with more than 10 million native speakers. It also has more than 300 million Facebook users and nearly half a billion WhatsApp users, the most of any nation.

Fact-checking organizations have emerged as the front line of defense against viral misinformation about elections. India will hold elections later this spring, and already voters going online to find out about the candidates and issues are awash in false and misleading claims.

Among the latest: video of a politician's speech that was carefully edited to remove key lines; years-old photos of political rallies passed off as new; and a fake election calendar that provided the wrong dates for voting.

A lack of significant steps by tech companies has forced groups that advocate for voters and free elections to band together, said Ritu Kapur, co-founder and managing director of The Quint, an online publication that recently joined with several other outlets and Google to create a new fact-checking effort known as Shakti.

"Mis- and disinformation is proliferating at an alarming pace, aided by technology and fueled and funded by those who stand to gain by it," Kapur said. "The only way to combat the malaise is to join forces."