Building trust in AI: Why governance and ethics matter now

From left: Moderator Hugh Mansfield, President, Bizcom Group, with panelists Dr. Peter Lewis, Canada Research Chair in Trustworthy Artificial Intelligence and Director, Mindful AI Research Institute (MAIRI), Ontario Tech University; Dr. Hossein Rahnama, Founder and CEO, Flybits; Amber Mac, President, AmberMac Media Inc.; and Ontario Tech President and Vice-Chancellor Dr. Steven Murphy, at the Ethical AI, Building Trust panel, held during Ontario Tech University's AI Forum.

The growing presence of artificial intelligence (AI) in workplaces, classrooms and public spaces is bringing greater urgency to questions about its governance and responsible use.

These considerations were explored during Ethical AI, Building Trust, one of the panels at Ontario Tech University’s inaugural AI Forum, held March 27. Bringing together experts from academia and industry, the panel examined how governance frameworks, regulation and human-centred design must evolve to ensure AI systems are worthy of public trust.

Held under the forum’s theme Building Trust: The Strategic Advantage of Human-Centred AI, the discussion reinforced a clear message: trust is not a barrier to innovation, but a condition for it.

How do we define trust in the context of AI?

“Trust is the decision to put yourself in a situation where the outcome that matters to you depends on the actions of somebody else,” said Dr. Peter Lewis, Canada Research Chair in Trustworthy Artificial Intelligence and Director of Ontario Tech’s Mindful Artificial Intelligence Research Institute. “It’s about being vulnerable when you cannot fully understand, predict or control what the other will do.”

He noted that in the context of AI, uncertainty is amplified. AI systems are complex, not always predictable, and not fully within our control, yet they present enough potential value that many are willing to accept the risk.

Rather than encouraging blind confidence, Dr. Lewis emphasized the importance of trustworthiness. “The aim is not to just have more trust,” he said. Instead, trust should be informed by seeking evidence of whether an AI system has demonstrated that it is worthy of trust in a particular situation.

Are we dealing with a lack of trust in AI, or is the problem that we don’t know when to trust AI?

Dr. Hossein Rahnama, Founder and Chief Executive Officer of Flybits, questioned the notion of trusted AI.

“Trust is something that is defined between people,” he said. “If I wear my engineering hat, I think there should be zero trust on machines. Algorithms should be transparent; you must have the ability to audit them.”

Dr. Rahnama said explaining how AI systems work can be difficult, and instead emphasized auditing their decision-making processes. He added that most AI systems today support decisions rather than making them.

“These are built by humans; they’re fine-tuned, structured and created by humans.”

Where are we going as a country with AI regulation and what guardrails do we need to have in place?

Dr. Steven Murphy, President and Vice-Chancellor of Ontario Tech University, said Canada's approach to AI regulation is often described as falling somewhere between the American and European models. While Europe has been seen as conservative in its regulation of AI, the U.S. has prioritized speed to market in the name of innovation.

Dr. Murphy called the perceived regulation-innovation divide a false dichotomy.

“What we need to start thinking about is, how do we innovate in a trustworthy environment? How do we innovate with guardrails around us? It isn't that hard to do. And in fact, the case I make is that there's a real business opportunity for Canadians to be trusted.”

For Amber MacArthur, award-winning podcaster and President of AmberMac Media, meaningful regulation is already past due.

“It’s not a question of if we regulate; it’s how we do it, and how quickly we do it,” she said. “We have a role as Canadians right now to actually introduce regulation that is sensible, that helps our companies thrive.”

What’s a misconception leaders have about AI risks?

Dr. Murphy noted that a key misconception centres on trust and social licence.

Drawing a parallel to Canada’s nuclear sector, he pointed out that trust in nuclear energy is built over time through transparency, public education and clear safeguards.

“We need to be thinking about building social licence,” he said, emphasizing that many people do not trust AI because they do not fully understand it.

Dr. Murphy added that responsibility lies with institutions, not the public. “Why should I trust you and what you have to say? Well, you shouldn’t until I have won your trust,” he said.