Canadian policymakers have been influential in promoting responsible artificial intelligence (AI) globally, according to experts participating in AICan, the annual meeting of the Pan-Canadian AI Strategy. But as the development, deployment and use of AI systems accelerate, Canada urgently needs new laws and regulations to ensure these systems benefit everyone, they said.
A panel discussion on Canada’s leadership in responsible AI featured Patricia Kosseim, Ontario’s information and privacy commissioner; Deborah Raji, an Ottawa-based research fellow at the Mozilla Foundation; Jacques Rajotte, interim executive director of CEIMIA, a non-profit organization supporting the activities of the Global Partnership on AI (GPAI); and Dr. Golnoosh Farnadi (PhD), a core academic member of Mila, the Quebec Institute for Artificial Intelligence.
The panelists weighed in on what Canada is doing well in AI policy and where it must improve. Kosseim noted that steady leadership by federal, provincial and territorial privacy commissioners, along with strong privacy research capacity, has strengthened Canada’s influence globally. But she said Canada’s laws need to be modernized to address AI’s risks, including its potential impacts on privacy, data protection and human rights: “Our actions have been louder than words. Now it's time for our words to catch up.”
Raji, whose research focuses on algorithmic auditing and evaluation, pointed to Canada’s agile policy development, citing the Algorithmic Impact Assessment tool, which helps federal departments and other institutions measure the impacts of AI systems and ensure they are implemented ethically and responsibly. The tool was developed in response to the Treasury Board of Canada's 2019 Directive on Automated Decision-Making. But Raji said policymakers still have to solve the considerable problem of disclosure: ensuring Canadians know and understand how public institutions are using algorithms to make decisions that could impact their lives in meaningful ways. “At the municipal, provincial and federal level, it can be very difficult for anyone to understand if their data is being collected and used algorithmically, and what kind of algorithms are being applied to the data.”
AI systems use data differently from traditional big data analytics, which has profound implications for policymaking, according to the panelists. For example, AI systems are highly resource-intensive, which introduces distinct power dynamics, said Raji. Farnadi, whose research focuses on algorithmic fairness, said transparency is another issue: public and private institutions using AI tools are often unaware of what data those tools rely on, and whether that data is representative of the general population. If AI models are trained on data that contain human biases, those biases can be amplified, creating the potential for discrimination. And while responsible AI aims to address bias and fairness, Farnadi said we urgently need better solutions: “We are facing systematic discrimination that could be severe and harmful. We need to deal with it today.”
Rajotte cautioned that when AI is used in developing countries, the risk of bias increases because of a lack of input data relevant to those settings. Raji argued that a culture shift is needed. “Instead of a culture focused on collecting as much data as possible, we need to focus on the population AI tools are going to be deployed on.”