Canadian AI Safety Institute aims to build community of researchers and ensure AI safety is a must for the technology’s adoption

Mark Lowey
February 12, 2025

The new Canadian Artificial Intelligence Safety Institute’s (CAISI) $27-million research program will focus on building a community of researchers specializing in AI safety and ensuring safety goes hand-in-hand with AI adoption and deployment, say the program’s co-directors.

The federally funded CAISI Research Program at CIFAR will focus initially on the known current and emerging risks of artificial intelligence rather than the potential longer-term existential risk that AI with much greater intelligence than humans could threaten humanity’s survival.

Also, the research program won’t be funding research on the environmental impacts of AI such as energy usage, to optimize the program’s resources and keep the focus on AI safety, co-directors Nicolas Papernot and Catherine Régis said during a CIFAR webinar.

Near-term risks of AI include deepfakes (AI-generated content that looks real), misinformation, threats to privacy and security, and bias and discrimination.

“A big objective of CAISI is to identify these risks and evaluate methods for mitigating them,” said Papernot, Canada CIFAR AI Chair at the Vector Institute and assistant professor of electrical and computer engineering and computer science, with an appointment in the Faculty of Law, at the University of Toronto.

CAISI’s research program, which includes both fundamental and applied research, aims to foster greater trust by Canadians in AI systems and to better educate Canadians in the safe and responsible use of AI, he said.

Another program goal is to ensure the public has an accurate perception of AI technology and its potential impact, and that researchers, policymakers and other stakeholders understand Canadians’ expectations of the technology so AI can meet them, Papernot said.

“We want to build a high-calibre research program, a research community around AI safety in Canada,” said Régis, Canada CIFAR AI Chair at Mila – Quebec AI Institute and Canada Research Chair in Health Law and Policy at the Université de Montréal.

CAISI’s research program will work with emerging researchers in the field, as well as with students, to build the next generation of AI safety researchers who will take the lead in the future, she said.

CAISI’s focus on research in AI safety is part of Canada’s commitment toward developing responsible AI, “which is the signature of the Canadian AI ecosystem nationally but also at an international level,” Régis said. “What we do here we hope is going to help the global research community.”

Another goal is to help policymakers, through CAISI’s research, to focus on what is important in AI safety to achieve the adoption of AI that benefits Canadians, she said. “The big picture goal is to really help develop AI that is beneficial for people and society.”

CAISI, which the federal government launched in November 2024, won’t have any regulatory authority because the institute’s role is to fund AI research and inform policy options.

In selecting the research topics CAISI wants to focus on in the first few months, the research program is guided by the newly released International AI Safety Report, led by Yoshua Bengio, scientific director of Mila and professor at the Université de Montréal.

On February 3, CAISI’s research program announced the nine members of its research council, in addition to Papernot and Régis. The members are drawn from Canada’s three national AI institutes, the National Research Council and the University of British Columbia, and include Elissa Strome, executive director of the Pan-Canadian Artificial Intelligence Strategy at CIFAR.

CAISI’s research will look at harms caused by general purpose AI but also look at evidence of additional risks that are emerging, Papernot said. General purpose AI, which has a wide range of uses, includes large language models like ChatGPT and image generation models like DALL-E.

One emerging issue, for example, is synthetic content: AI systems are becoming increasingly capable of generating very realistic fake content.

AI models and systems are going to be trained on data that is at least partially synthetic, and the consequences of this aren’t well understood, Papernot said.

For example, repeatedly training new AI models on synthetic data could leave models unstable under adversarial attacks, or the models may never adequately learn the complexity and diversity of the real world during training.
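
This failure mode is often called “model collapse.” As a minimal sketch (illustrative only, not CAISI code), the Python below repeatedly refits a simple Gaussian “model” on its own samples; with a small training set, estimation noise compounds and the learned spread typically collapses, losing the diversity of the original data.

```python
# Toy illustration of "model collapse" (an illustrative sketch, not CAISI
# code): a Gaussian model is repeatedly refit on its own synthetic samples.
import numpy as np

rng = np.random.default_rng(0)
real_data = rng.normal(0.0, 1.0, size=30)          # small "real" training set
mu, sigma = real_data.mean(), real_data.std()      # initial fitted model

for generation in range(1, 101):
    synthetic = rng.normal(mu, sigma, size=30)     # sample from current model
    mu, sigma = synthetic.mean(), synthetic.std()  # refit on synthetic data
    if generation % 20 == 0:
        # The learned spread typically shrinks across generations, losing
        # the variability (the "tails") of the original distribution.
        print(f"generation {generation:3d}: learned std = {sigma:.3f}")
```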

“This sort of output can be used to create scams or generate malicious software, and that is something we want to tackle [through CAISI’s research],” Papernot said.

Safety as a “prerequisite” for AI adoption and deployment

Another aim of CAISI’s research is helping to protect Canadians’ privacy from AI systems. This is especially important in Canada, whose decentralized, universal health care system holds a lot of sensitive personal information.

There is R&D evidence that existing privacy-enhancing technologies aren’t sufficient to safeguard such health data from general purpose AI systems, Papernot said. “So there’s a lot of work that needs to happen for us to understand how we can give individuals control of their data, which we’ll look at with the research program.”
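
As a deliberately minimal illustration of what one privacy-enhancing technology does (the query, epsilon value and names below are assumptions for illustration, not CAISI’s methods), differential privacy adds calibrated noise so a released statistic reveals little about any single patient:

```python
# Minimal differential-privacy sketch (illustrative assumptions throughout):
# release a count about patients with Laplace noise so no single record
# can be confidently inferred from the output.
import numpy as np

def private_count(has_condition: list[bool], epsilon: float = 0.5) -> float:
    """Noisy count of matching records.

    One person joining or leaving changes the true count by at most 1
    (sensitivity = 1), so Laplace noise of scale 1/epsilon yields
    epsilon-differential privacy for this query.
    """
    true_count = sum(has_condition)
    noise = np.random.default_rng().laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

print(private_count([True, False, True, True]))  # true count 3, plus noise
```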

More broadly, CAISI’s research will look at methods for training more trustworthy AI models, including understanding what it means for humans to trust an AI system and how to build a system with desired properties such as robustness, interpretability and lack of bias in predictions, among other characteristics.

“Then we want to fund research that will allow us to test that these AI systems have these properties,” and how people can best monitor such systems, Papernot said.
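
As a minimal example of what such a property test might look like (the metric and names are illustrative, not CAISI’s actual test suite), the sketch below checks a model’s binary decisions for demographic parity across two groups:

```python
# Illustrative property test: demographic parity of a model's decisions.
import numpy as np

def demographic_parity_gap(preds: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap between groups' positive-decision rates (0 = parity)."""
    return abs(preds[group == 0].mean() - preds[group == 1].mean())

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # model's binary decisions
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # sensitive attribute per person
print(f"parity gap = {demographic_parity_gap(preds, group):.2f}")
# A monitoring pipeline could recompute this on live traffic and alert
# when the gap exceeds an agreed threshold.
```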

Régis said CAISI’s research will be multidisciplinary and interdisciplinary, which is important for Canada’s research community. “Many of the risks and opportunities related to AI require that interdisciplinary analysis and solution approach to fully understand the problem and propose appropriate actions.”

She noted that the recent final report by Canada’s Foreign Interference Commission, led by Justice Marie-Josée Hogue, warned that AI could increasingly fuel disinformation. There are technical solutions and techniques that can identify AI-produced content and disinformation, Régis added.
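
One family of such techniques is statistical watermarking of generated text. The toy sketch below is a simplification under stated assumptions (a single fixed “green list”; production schemes are more involved) rather than a real detector: a generator that secretly favours a key-derived subset of tokens leaves a measurable trace.

```python
# Toy "green-list" watermark detector (illustrative; real schemes use, e.g.,
# per-position green lists and formal significance tests).
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]

def green_list(key: str) -> set[str]:
    """Derive a pseudorandom half of the vocabulary from a secret key."""
    rng = random.Random(hashlib.sha256(key.encode()).digest())
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def green_fraction(tokens: list[str], key: str) -> float:
    """~0.5 for ordinary text, much higher for watermarked generations."""
    greens = green_list(key)
    return sum(t in greens for t in tokens) / len(tokens)

plain = random.Random(1).choices(VOCAB, k=200)                       # unmarked
marked = random.Random(2).choices(sorted(green_list("key")), k=200)  # marked
print(green_fraction(plain, "key"), green_fraction(marked, "key"))   # ~0.5 vs 1.0
```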

Dissemination of CAISI’s research beyond the scientific community will be embedded in the research projects CAISI funds, she said. “There’s a strong focus on impact.”

The research program’s work will generally be publicly available and open source, to encourage transparency and external scrutiny.

CAISI will maintain a connection with Canada’s AI industry through the country’s three national AI institutes (Vector Institute in Toronto, Mila in Montreal and Amii in Edmonton), which each have a team dedicated to engaging with the industry, Régis said.

CAISI’s research, in building on Canada’s existing expertise in AI safety, is expected to produce research results that are useful for the Canadian AI industry, she said. Also, CAISI will select research questions and priorities that are relevant to what’s happening in the industry.

When it comes to international collaboration on AI safety, the federal Advisory Council on Artificial Intelligence’s Public Awareness Group is currently focused on efforts to mitigate AI risks, an area that isn’t being addressed as much by other countries’ AI safety institutes, Papernot said.

There’s still a very low understanding, even among AI developers, of how general purpose AI models operate, he said. “So we need to fill that info gap. One way to do this is [to] increase the expertise [in the research community] so regulators have more access to the relevant expertise.”

International discussions have seen an increasing emphasis on adoption, innovation and public sector AI applications rather than AI safety – as reflected in the agenda for the AI Action Summit in Paris, France, on February 10 and 11.

“If the summit turns into a mere showcase of successful AI projects, its core mission – setting clear boundaries for powerful tech firms and mitigating AI’s societal and environmental dangers – will be sidelined,” Lisa Soder, senior policy researcher at interface, an independent, Berlin-based technology and public policy organization, wrote in an op-ed published by Context.

The co-directors of CAISI’s research program both underscored that AI safety and adoption should go hand-in-hand.

“Safety and adoption should co-exist,” Régis said. “There will be no adoption if people don’t think AI is safe to use.”

Also, there will be setbacks in AI adoption if safety problems arise that haven’t been considered and addressed, she added. “We need to anticipate and act before [safety problems] arise.”

Added Papernot: “Safety is a prerequisite to being able to adopt AI and to sustain the innovation it enables.”

However, the federal government’s Artificial Intelligence and Data Act (AIDA) introduced in 2022 as part of Bill C-27, died last month when Prime Minister Justin Trudeau prorogued Parliament. AIDA set out broad principles for the development of algorithms and AI models.

At the AI Action Summit in Paris, Canada signed the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law.

The convention establishes a common international legal framework that addresses the risks that AI poses to human rights, democratic institutions and the rule of law.

However, the U.S. and the U.K. did not sign the summit’s closing communique, a high-level list of objectives that included reducing digital divides, ensuring AI is “inclusive, transparent, ethical, safe, secure and trustworthy,” and developing AI sustainably.

Funding of compute resources needed to support AI safety research

CAISI’s research program has launched a call for proposals for CIFAR Catalyst Grant research projects that advance fundamental and applied research into AI safety.

Projects will typically be funded at $100,000 for up to one year. Proposals, due by February 27, 2025, are to be submitted through CIFAR’s SurveyMonkey Apply portal.

CAISI also will support networks of researchers working on longer-term problems, such as how to effectively share information about AI systems’ vulnerabilities and potential risks. That part of the program is expected to open in late February and provide up to $700,000 per project over two or three years.

Along with funding research, Papernot said it’s also necessary to continue funding the compute resources required to support CAISI’s research.

The federal government’s $2-billion commitment in Budget 2024 to compute infrastructure and access “is really just keeping us afloat,” he said. “It’s essentially making sure that academics have just enough compute to make that research possible.”

CAISI can fund research that helps reduce the resources that parties outside the AI industry need to audit the behaviour of AI systems, he said.

Cryptography, for example, can be used to shift the cost of testing and monitoring whether an AI system has a specific property onto the AI developer, rather than the regulator or the public.
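
A minimal sketch of that idea, using a hash-based commitment as a stand-in for heavier cryptographic machinery (the names and protocol here are hypothetical, not a CAISI design): the developer publishes a digest of the released model, and any auditor can later verify cheaply that the system being examined is the committed one.

```python
# Illustrative commitment sketch (not a CAISI protocol): the developer bears
# the cost of committing; verification by a regulator or the public is cheap.
import hashlib

def commit(model_weights: bytes) -> str:
    """Developer side: publish this digest when releasing the model."""
    return hashlib.sha256(model_weights).hexdigest()

def verify(model_weights: bytes, published_digest: str) -> bool:
    """Auditor side: one hash instead of re-running expensive evaluations."""
    return hashlib.sha256(model_weights).hexdigest() == published_digest

weights = b"...serialized model parameters..."  # stand-in for a real checkpoint
digest = commit(weights)
print(verify(weights, digest))              # True: deployed model matches
print(verify(b"swapped weights", digest))   # False: substitution detected
```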

Compared with the $27 million in federal funding for CAISI’s research program, AI developers such as Amazon, Google, Microsoft, Meta and others are spending hundreds of millions to billions of dollars to develop and train new AI models and systems.

For example, Amazon CEO Andy Jassy said in a conference call this month with analysts that he expects the e-commerce giant’s capital expenditures to average around US$25 billion per quarter in 2025, with the “vast majority” going toward AI for the company’s Amazon Web Services platform, according to a report by CNBC.

The huge gap between private sector funding for AI development and publicly funded research on AI safety “is certainly a challenge and I don’t think there are any easy solutions to that,” Régis said.

Nevertheless, Canada and other countries need to invest in independent, public research on AI, she said. “Otherwise, the gap will continue to increase.”

When it comes to the outcomes of CAISI’s research, she said it’s important for the research program to create a high-calibre, robust research community on AI safety.

Régis said she also hopes CAISI’s research agenda helps policymakers at the national, regional and international levels make the best decisions and fund AI technology in a way that benefits society.

Papernot pointed out that Canada was the first country in the world to establish a national AI strategy. “So we really want to benefit from all these investments we’ve made as a country in this technology. I see AI safety as an enabler of this innovation.”

He said he would like CAISI to help achieve a state where the general public understands the capabilities of AI systems much better, and to some extent knows how these systems operate and how they can bring positive benefits to society.

He also hopes CAISI helps streamline collaboration between academia and industry, creating more feedback loops to ensure that – with AI technology advancing so quickly – information is constantly updated in a coherent way.

“There really is an urgent societal need to understand more about AI safety,” said Elissa Strome, executive director of the Pan-Canadian AI Strategy. “History has shown us that research is really a critical cornerstone for advancing new ideas, policies and innovation.”

R$

 

