Growing use of AI for political purposes is a risk to Canada’s democracy: uOttawa report

Mark Lowey
February 14, 2024

The increasing use of artificial intelligence for political purposes in Canada can influence voter behaviour, undermine democratic participation and erode Canadians’ trust, according to a report from the University of Ottawa.

AI is already being used in the political realm to analyze social media posts and voters’ history, predict elections, target political advertising, create synthetic images, and more, says the report, “The Political Uses of AI in Canada.”

“The inability to distinguish fact from fiction has the potential to dilute our trust, not just in images or videos or in the news media, but in our institutions and in each other,” said report co-authors Michelle Bartleman and Dr. Elizabeth Dubois, PhD. “AI-generated content has destabilized our confidence in the age-old adage that ‘seeing is believing.’”

“The opportunities to put AI to use in political contexts are far reaching and will only continue to grow,” their report says.

Dubois is an associate professor of communication, the University Research Chair in Politics, Communications and Technology, and a faculty member at uOttawa’s Centre for Law, Technology and Society. Bartleman is a PhD candidate.

Their report is part of the university’s AI + Society Initiative, aimed at better understanding and framing the ethical, legal and societal implications of AI by leveraging a transdisciplinary approach.

Questions around the uses of AI in political domains are not just technical ones, the report notes. “They are fundamental questions about how societies are governed and how they should be governed. AI, by definition, involves a degree of decision making, which challenges the notions of democratic participation and self-rule.”

The report lists several ways AI-enabled tools can be used to accomplish political tasks more effectively and efficiently, including:

  • Augmented analytics provide the ability to cut through vast data sets and distill relevant elements into accessible information.
  • Machine learning can be leveraged for data analysis of voter trends and can also be used to detect online abuse or disinformation (a toy sketch of such a detector follows this list).
  • Natural language processing might be used to disseminate information via conversational agents as well as to generate campaign texts or interact with voters.
  • Synthetic content can be generated and tailored to a given context faster and more efficiently than with traditional methods, saving time and resources, and can be easily modified and updated, enabling quick adjustments to meet changing demands.
  • Generative AI can produce content that is personalized to meet specific needs or reach voters with more relevant messages.
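
The report describes these capabilities without prescribing any implementation. As a purely illustrative toy sketch of the machine-learning item above, the Python snippet below trains a tiny text classifier to flag disinformation-style posts; the example posts, labels and model choice are all invented, and a real detection system would need large labelled datasets and careful evaluation.

```python
# Toy sketch: flagging disinformation-style posts with a text classifier.
# All data below is invented for illustration; this is not the report's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

posts = [
    "Polling stations are open until 9:30 pm on election day",
    "Read the party platforms before deciding how to vote",
    "BREAKING: the election has been cancelled, stay home",
    "Your ballot will not be counted unless you vote twice",
]
labels = [0, 0, 1, 1]  # 0 = ordinary post, 1 = disinformation-style post

# Turn each post into a weighted bag-of-words vector, then fit a classifier.
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(posts)
classifier = LogisticRegression().fit(features, labels)

# Score a new post; a real system would route high scores to human review.
new_post = ["Officials say the election is cancelled, do not go vote"]
score = classifier.predict_proba(vectorizer.transform(new_post))[0][1]
print(f"Estimated probability of disinformation: {score:.2f}")
```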

However, the same powerful AI tools can also be used to spread disinformation, create confusion and undermine trust in democratic systems, or interfere with elections, the report’s co-authors warned.

For example, AI-powered augmented analytics make it easier to target people with particular messaging that might influence their voting behaviour. Machine learning can produce biased or inaccurate models. Synthetic content can easily be used to misrepresent political players and mislead voters. Generative AI can produce depictions of people, places and things that don’t exist.

AI is being used in Canada for political purposes

Dubois and Bartleman’s report cites several examples of how AI has already been used in Canada for political purposes, including:

  • In the run-up to Toronto’s mayoral election in June 2023, candidate Anthony Furey’s campaign released a 42-page platform. In the document’s images, people noticed a “three-armed woman,” a result of using generative AI to create the images that accompanied the text. Many of the images used in the platform document seemed “off,” such as a downtown street that can’t quite be identified and a homeless camp that seems to have more tents than usual. Furey’s campaign later acknowledged it had used synthetic images, but the document itself gave no indication of their use. In addition, a scan of Furey’s website using an AI-detection tool called GPTZero suggested sections of it had been written by AI.
  • In January 2023, the Alberta Party – which did not hold any seats in Alberta’s legislature – shared a video endorsement on Instagram. The video depicted a man promoting the party as a third option for Albertans who didn’t want to vote for either of the two main provincial parties (the United Conservative Party of Alberta or the Alberta New Democratic Party). It was quickly pointed out on social media that the man making the endorsement in the video was not a real person, and the post was subsequently deleted. The video “spokes-bot” was an AI-generated avatar created using the AI video generation software Synthesia, which allows users to feed scripts to one of some 140 pre-defined avatars.
  • All of Canada’s national parties use political engagement platforms (PEP). The Conservative Party of Canada uses a PEP called NationBuilder, a widely known customer relationship software used by the Trump campaign ahead of the 2016 election. The Liberal Party of Canada uses a platform by NGP VAN, originally developed for American Democratic parties. Previously, political parties relied on simpler databases of voter intentions, but the combination of connectivity and AI-enabled tools means campaign software now provides unprecedented opportunities to interact with voters and analyze their behaviours. Thirty-five available PEPs aggregate voter data such as names, addresses, demographics and voting history, which is then integrated with different communication tools like websites, newsletters, text messaging and social media.
  • “Polly” is an AI-enabled market research system built by Ottawa-based Advanced Symbolics Inc. Using a combination of public social media data, web scraping and sentiment analysis, the digital pollster accurately predicted a Liberal Party minority government in Canada’s 2019 federal election with 77 per cent confidence. Pulling from millions of publicly available social media posts worldwide, Polly enables sentiment analyses to understand how real-time events are being talked about and compares these to historical patterns and trends. For political purposes, Polly aggregates public opinions, voting patterns and demographic data to build a representative sample of voters and make predictions on election outcomes, among other things (a simplified sketch of this kind of weighted sentiment aggregation follows this list). In addition to the 2019 election results, Polly also correctly called the outcome of the 2015 federal election. Outside of Canada, Polly was one of the few pollsters that correctly predicted the U.K.’s vote to leave the European Union in 2016, and it anticipated Donald Trump’s rise in the 2016 U.S. presidential race. The software is not infallible, however, and has made some incorrect calls, including predicting a win for Hillary Clinton in the 2016 U.S. election.
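
Advanced Symbolics has not published Polly’s methodology, so the sketch below is only a hypothetical illustration of the general pattern the report describes: score public posts for sentiment toward each party, then weight those scores by demographic information to approximate a representative sample. Every party name, region and number here is invented.

```python
# Hypothetical sketch of demographically weighted sentiment aggregation.
# Not Polly's actual methodology, which is proprietary.
from collections import defaultdict

# Each tuple: (party mentioned, sentiment score in [-1, 1], author's region).
scored_posts = [
    ("Party A", 0.8, "Ontario"),
    ("Party A", -0.2, "Alberta"),
    ("Party B", 0.5, "Ontario"),
    ("Party B", 0.6, "Quebec"),
    ("Party A", 0.4, "Quebec"),
]

# Invented weights standing in for each region's share of the electorate.
region_weight = {"Ontario": 0.39, "Quebec": 0.23, "Alberta": 0.12}

weighted_sum = defaultdict(float)
total_weight = defaultdict(float)
for party, sentiment, region in scored_posts:
    w = region_weight.get(region, 0.05)  # small default for unlisted regions
    weighted_sum[party] += w * sentiment
    total_weight[party] += w

# A higher weighted average sentiment would suggest stronger support.
for party, s in weighted_sum.items():
    print(f"{party}: weighted average sentiment {s / total_weight[party]:+.2f}")
```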

Synthetic images and text showing up in political advertising

Synthetic images, videos and text have now started to show up in political advertising, according to Dubois and Bartleman's report.

AI can help create personalized messaging in automated calls, text messages or chatbots, which can insert customized greetings or additional knowledge about a voter or citizen. Synthetic text, for example, could be generated in order to change the style or tone of an email to make it more compelling for different types of voters, while voice cloning can be used to have a political candidate make “personalized” calls or messages.
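
As a minimal sketch of what such personalization could look like mechanically, the snippet below fills in a tone template chosen by voter segment. The voter record, segment names and wording are all hypothetical, and in the scenario the report describes, a generative model would produce the variants rather than hand-written templates.

```python
# Minimal sketch of segment-based message personalization.
# The voter record, segment names and templates are all hypothetical.
def personalize(voter: dict) -> str:
    templates = {
        "undecided": "Hi {name}, still weighing your options? Here's our plan for {issue}.",
        "supporter": "Hi {name}, thanks for standing with us. Will you share our plan for {issue}?",
    }
    # Fall back to the neutral template for unknown segments.
    template = templates.get(voter["segment"], templates["undecided"])
    return template.format(name=voter["first_name"], issue=voter["top_issue"])

print(personalize({"first_name": "Sam", "segment": "undecided", "top_issue": "housing"}))
```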

Google and Meta both announced last fall they will require AI use in political advertisements to be flagged, while Microsoft created a tool to embed watermarks to make AI content more identifiable.
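
Microsoft’s tool reportedly builds on content-provenance standards that attach cryptographically signed manifests to media. As a far simpler illustration of the underlying idea (a machine-readable label travelling with the file), the snippet below uses the Pillow library to embed a disclosure in a PNG’s metadata. The keys and values are invented, and a plain text chunk like this is trivially stripped, which is exactly why production watermarking schemes are more robust.

```python
# Simplified illustration of labelling AI-generated media via PNG metadata.
# Real provenance tools use signed manifests that are far harder to strip or forge.
from PIL import Image, PngImagePlugin

# A blank placeholder image standing in for AI-generated ad content.
image = Image.new("RGB", (640, 360), color="white")

# Attach hypothetical disclosure fields as PNG text chunks.
metadata = PngImagePlugin.PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-model")  # invented value

image.save("ad_image.png", pnginfo=metadata)

# Anyone inspecting the file can read the label back.
print(Image.open("ad_image.png").text)
```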

In the U.S., a recent robocall in New Hampshire used an AI deepfake of President Joe Biden’s voice – reportedly made by a Texas company – to urge voters not to vote in the state’s presidential primary. Following that incident, the U.S. Federal Communications Commission announced on February 8 that, effective immediately, robocalls made with AI-generated voices are illegal under the Telephone Consumer Protection Act.

Dubois and Bartleman’s report notes that in September 2018, Elections Canada put out a call for bids to purchase AI-enabled social listening tools to collect information about what was being said on social media related to the upcoming federal election and to identify misinformation and disinformation in circulation.

After the 2019 federal election, the agency reported that the number of occurrences of disinformation was limited, and that most inaccurate content seemed to be unintentional or meant as a joke.

Following the 2021 federal election, Elections Canada noted in its statutory report that there had been “an improvement in the agency’s ability to monitor certain election-related topics in the public environment and to address potential misinformation or disinformation that could affect electors’ ability to vote.”

But Dubois and Bartleman point out there’s no mention of specific tools in the agency’s report. Elections Canada also mentioned the creation of an Environmental Monitoring Centre in 2020, which it said would help “deepen its understanding of the information environment and observe inaccurate narratives as they developed,” but no specifics were provided.

Proposed amendments to the Artificial Intelligence and Data Act, part of the federal government’s Bill C-27, would require organizations with “general purpose” generative AI systems (such as ChatGPT and others now on the market), which can create text, audio, images and video that appear either to depict or to have been created by real humans, to ensure such “person-seeming” outputs can be readily detected, and to advise people when they are communicating with an AI system. However, Bill C-27 is still being reviewed by the Standing Committee on Industry and Technology.

How will AI be used in Canada’s next election?

“Deep fakes” are media manipulations based on advanced AI, where images, voices, videos or text are digitally altered or fully generated by AI. This technology can be used to falsely place anyone or anything into a situation in which they did not participate – a conversation, an activity, a location.

Dubois and Bartleman’s report says deep fakes have been on the radar of Canadian intelligence and government since at least 2018, when a parliamentary report responding to privacy breaches related to the Cambridge Analytica scandal briefly mentioned this use of AI.

(In the 2010s, personal data belonging to millions of Facebook users was collected without their consent by British consulting firm Cambridge Analytica, mainly to be used for political advertising. Cambridge Analytica used the data to provide analytical assistance to the 2016 presidential campaigns of Donald Trump and Ted Cruz.)

In 2019, the Library of Parliament published a report, “Deep Fakes: What Can Be Done About Synthetic Audio and Video?”

The Canadian Centre for Cyber Security first noted deep fakes as a “layer of uncertainty and confusion for the targets of disinformation campaigns” in its 2020 National Cyber Threat Assessment. In its 2021 report on cyber threats to Canadian democracy, the Cyber Centre noted that deep fake text was particularly challenging to detect, and had the potential to undermine electoral processes.

By its 2023-24 threat assessment report, the Cyber Centre was citing instances of political deep fakes, and advised that “synthetic content calls all information into question.”

“The deployment of AI in our social systems is a Collingridge dilemma playing out in real time: new technologies are easier to regulate and control, but you don’t really know the full impacts until they are fully deployed, at which point it is too late to implement the regulations or controls that are actually required,” the uOttawa researchers say in their report.

(The Collingridge dilemma is a methodological quandary in which efforts to influence or control the further development of technology face a double-bind problem: impacts cannot be easily predicted until the technology is extensively developed and widely used; and control or change is difficult once the technology has become entrenched.)

The uOttawa report was sparked by a panel discussion in April 2023, hosted by Dubois, with five expert panelists. For their report, Dubois and Bartleman asked these experts what they expected to see in how AI technology is used in Canada’s next election.

Dr. Wendy Wong, PhD, professor of political science and Principal’s Research Chair at the University of British Columbia, Okanagan, noted that Canada has some of the most prominent AI researchers in the world. She’s hoping Canadians talk more about how AI fits into the Pan-Canadian AI Strategy.

“I think it’s time that we as data subjects become data stakeholders, and one of the things that I’m hoping the government brings into play is thinking about digital literacy in a very serious way, which is helping all of us decipher what the machine is doing, and how we can change the terms of that co-existence,” Wong said.

Dr. Wendy Hui Kyong Chun, PhD, a professor of communication and the Canada 150 Research Chair in New Media at Simon Fraser University, said she expects to see AI being employed in the increasing use of divisive issues to create “angry clusters. What’s key is that these clusters, often focused around seemingly niche issues, are strung together to form larger clusters. So [expect] a proliferation of micro-divisions, and a linking of them all together in order to form majorities of anger.”

Dr. Fenwick McKelvey, PhD, associate professor in communication studies at Concordia University and co-director of the Applied AI Institute, said an important test will be whether political parties advertise using AI as part of their “war room showcasing, whether we’ve entered a moment where AI really has swung from something that’s cool to something that we’re worried about.”

“I think it’ll be very interesting to see how AI is framed as a policy issue: whether we’ll see an uptake in these very tangible, clear, accepted issues related to the problems with AI or if we’re going to be left constantly debating whether we live in the next version of The Terminator,” McKelvey said.

Said report co-author Dubois: “Sometimes we’re tempted to think of AI as independent entities with agency. While these tools have some decision-making ability, they are designed by humans, built by humans, and trained by humans. So, it follows that, as humans, we can also choose how we want to use these tools, what guardrails to put up, and how to make these systems transparent and equitable.”

R$

