Startup aims to make it easier to deploy trustworthy and responsible AI

Lindsay Borthwick
November 24, 2021

A new Canadian startup is using AI to help businesses deploy AI-based systems responsibly.

Toronto-based Armilla AI, which came out of stealth mode last month, has developed an AI governance platform designed to mitigate the risks associated with AI adoption.

While governments around the world grapple with how to regulate AI, Armilla’s CEO Dan Adamson said the private sector should act now to mitigate the risks and maximize the benefits of the technology.

“The industry as a whole has to get better. We have to make sure that there's a lot more disclosure and that these AI models are more robust so all of society benefits,” he said in an interview with Research Money.

Other startups offer tools and services like Armilla’s, but the company describes itself as the world’s first all-in-one quality assurance platform for AI. Its platform automates the testing and verification of machine learning models at every stage of the lifecycle and brings all the parts of the governance process together in one place, making it much more efficient, according to Adamson.

"There's a lot of competition around individual tools that might focus on bias or explainability or the governance process. I think our strength is that we're able to combine all of these aspects. They're all important to create robust models,” he said.

Armilla launched in October with $1.5 million in seed funding from several high-profile investors, including Yoshua Bengio, the founder and scientific director of Mila. Bengio backed 2017’s Montreal Declaration for a Responsible Development of Artificial Intelligence and is a longtime advocate for responsible AI.

The company's co-founders are Adamson, Karthik Ramakrishnan, formerly of Element AI, and Rahm Hafiz. Adamson and Hafiz previously worked together at OutsideIQ, which developed artificial intelligence solutions for the finance and insurance sectors, and at Exiger, which acquired OutsideIQ in 2017.

Armilla's initial focus is testing models for businesses operating in highly regulated sectors, such as finance. But Adamson said Armilla is already working on a diverse set of use cases in financial services, health care, human resources and manufacturing.

“Right to Explanation”

There are already numerous prominent examples of faulty AI and the consequences it can have for people’s lives, especially discrimination caused by algorithmic bias.

In Canada, the University of Toronto’s Citizen Lab has warned that the use of automated decision-making systems in the immigration and refugee system could undermine human rights, and has called for greater transparency about the use of such technologies. The Privacy Commissioner of Canada has also investigated the RCMP’s use of facial recognition technologies and issued draft guidance for police agencies.

In recognition of these risks, policymakers and civil society organizations have introduced a patchwork of initiatives. For example, at the federal level, the Treasury Board of Canada Secretariat issued a directive in 2019 to ensure the responsible and ethical use of automated decision systems, including those using AI, and developed the world's first government-mandated Algorithmic Impact Assessment tool.

However, regulation is still in its infancy and AI systems are being deployed more widely and are becoming more complex than ever before.

Adamson said the Armilla team recognized the urgent need for businesses to invest in building better AI systems. “We saw a bunch of best practices in the industry, we saw things being done very badly, and we saw nothing done great. So we said, ‘Okay, is there an opportunity to create better technology that makes things safer, more efficient, and more robust?’”

He acknowledged that technology is only one part of the solution to the development of trustworthy and responsible AI. But he said tools like Armilla’s can help deliver accountability, including the "right to explainability,” which is the idea that people who are subject to automated decisions have a right to know how those decisions were made. (This right is a provision in the European Union's General Data Protection Regulation (GDPR) and was part of Canada's proposed privacy reforms, Bill C-11.)

Adamson wants policymakers to know that explainable and transparent decisions are now within reach. “It’s important for them to understand that technology is there now," he said.

R$