Can we trust generative AI to know and tell us when it doesn’t know the answer?

Questions and answers with Ontario Tech AI researcher Dr. Peter Lewis on trust, uncertainty, self-awareness, and being a ‘rational skeptic’

Dr. Peter Lewis, Associate Professor and the Canada Research Chair in Trustworthy Artificial Intelligence, speaks in the Trustworthy AI Lab at Ontario Tech University.

As more and more people and organizations turn to artificial intelligence (AI) for ideas and answers, a leading global expert in trustworthy AI at Ontario Tech University is raising a caution flag for those who assume the information AI provides is always reliable and accurate.

Dr. Peter Lewis, Canada Research Chair in Trustworthy AI, says a quiet AI revolution may be taking place—not in pursuit of perfection, but in the acceptance of AI’s imperfections.

Q. At the heart of this shift is a question that continues to challenge engineers, ethicists and professionals such as clinicians, lawyers and educators alike: can we trust generative AI to ‘be reflective’ and admit when it doesn’t know something?

“With current AI systems, we really can’t. As humans, we inherently know we are not perfect, and we expect that level of humility and acknowledgement from one another. So, we might also expect AI to have this same kind of characteristic, to be considered trustworthy. Perhaps it’s due to media illiteracy or what psychologists call ‘automation bias’, but people may currently have a built-in assumption that everything AI generates is correct.”

Q. What are the risks when AI doesn’t know an answer, yet still creates one? Doesn’t AI have inherent human-like fluency and reasoning? Can’t it tell us when it isn’t confident in its answer?

“When large language models (the driving force behind tools like ChatGPT) perform tasks, particularly in high-impact domains such as finance, law or health care, any errors carry significant consequences. So, a key challenge arises: AI might produce answers, but these platforms arrive at them in a very different way than humans would, and they can be wrong in unusual and surprising ways. You might also expect that an AI system can signal low confidence if and when it is uncertain. But unfortunately, our research shows that AI systems are often overconfident in what they tell us, and are not able to judge their own ability very well. We argue that instead of only minimizing uncertainty in AI, if we want to use modern AI we must learn to work with the uncertainty inherent in how it works, and even integrate that into existing risk frameworks.”

Q. Is it an ethical obligation for AI to be transparent when it doesn’t know the answer with certainty?

“Yes. It’s not just a technical feature; it is a safeguard. Instead of aspiring to infallibility, AI systems need to be reflectively aware of their own blind spots, and able to recommend human oversight when needed. This encourages not only smarter technology, but also more humane design. Ultimately, the ability of an AI model to recognize its own limitations and say ‘I’m not sure’ may be more valuable than its ability to produce an answer at all.”

Q. If we know that it’s an unattainable goal for this form of AI to be correct 100 per cent of the time, how do we calibrate justified levels of trust and encourage a healthy skepticism in high-stakes AI applications?

“AI being correct is important, but 100 per cent certainty is an unattainable goal. It’s a misleading expectation that the machine should always be right. We need to work with AI experts and users to understand and expose the different ways uncertainty shows up in AI models, and align these with domain-specific (e.g. clinical) risk factors. This is a practical way to empower people to make better and more appropriate use of AI in areas like medicine.”

Q. Is there hope for future AI systems that are better at evaluating themselves and reporting their uncertainty more accurately?

“That’s a great question because yes, there is hope. But it isn’t something that will come by building bigger neural network models. In our research, we’re working on new architectures for what we call ‘Reflective AI’. Essentially, we ‘wrap’ the engine of the AI system in a reflective layer that evaluates the output of the large language model against various important criteria rather than just acting on it. This way, the broader system can be more trustworthy in terms of its reported confidence, and also in such considerations as whether it’s acting in a socially expected and responsible way.”
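
To make the ‘wrapping’ idea concrete, here is a minimal illustrative sketch in Python. Everything in it (the ReflectiveWrapper class, the stub generator and the stub consistency check) is a hypothetical stand-in invented for this example, not the lab’s actual Reflective AI implementation; it simply shows how a reflective layer could score a model’s draft answer against criteria and defer to a person when confidence is low.

```python
# Illustrative sketch only: a hypothetical "reflective wrapper" around a language
# model. All names here are invented for illustration, not Dr. Lewis's system.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ReviewedAnswer:
    text: str
    confidence: float          # the wrapper's own estimate, in [0, 1]
    deferred_to_human: bool    # True when the wrapper recommends oversight


class ReflectiveWrapper:
    """Wraps a text generator and reviews its output before acting on it."""

    def __init__(self, generate: Callable[[str], str],
                 checks: List[Callable[[str, str], float]],
                 defer_threshold: float = 0.6):
        self.generate = generate
        self.checks = checks                  # each check scores a draft in [0, 1]
        self.defer_threshold = defer_threshold

    def answer(self, question: str) -> ReviewedAnswer:
        draft = self.generate(question)
        # Score the draft against each criterion (consistency, sourcing, policy, ...)
        scores = [check(question, draft) for check in self.checks]
        confidence = min(scores) if scores else 0.0   # weakest criterion dominates
        if confidence < self.defer_threshold:
            return ReviewedAnswer(
                text="I'm not sure; this should be reviewed by a person.",
                confidence=confidence,
                deferred_to_human=True,
            )
        return ReviewedAnswer(text=draft, confidence=confidence,
                              deferred_to_human=False)


# Toy usage with a stub generator and a stub check.
def stub_generate(question: str) -> str:
    return "Paris is the capital of France."


def stub_consistency_check(question: str, draft: str) -> float:
    # A real check might compare multiple samples or consult a knowledge source.
    return 0.9 if "capital of France" in question else 0.3


wrapper = ReflectiveWrapper(stub_generate, [stub_consistency_check])
print(wrapper.answer("What is the capital of France?"))
print(wrapper.answer("Summarize this patient's chart."))  # low score -> defers
```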

Q. In addition to ethical considerations about trustworthiness in AI, is there a responsibility for AI to be equitable and accessible?

“Absolutely, but this is still an emerging area where there is a lot of work to do. Many AI systems operate as what are known as ‘black boxes’, where we don’t know how they make their decisions or which factors, including potential bias, drive their output. We ought to be putting accessibility front and centre in the development of AI systems, just as we eventually started doing with the World Wide Web a few decades ago. In our research at Ontario Tech, for example, we’ve been working with the sight loss community, and particularly the Canadian National Institute for the Blind (CNIB), on developing accessible versions of explainable AI tools for people with disabilities—and demonstrating why the current state of the technology is nowhere near adequate.”

Q. AI has many cheerleaders, but also many doomsayers. Should people be ‘rational skeptics’ when it comes to AI?

“AI is, to a large extent, a polarizing issue. We’ve got people who are all-in and say it’s revolutionary, and at the other end of the spectrum there are those who adamantly claim AI will be the ‘end of humanity’. What’s often not acknowledged is that both of these opinions are actually based on the assumption that AI is inevitable and hugely powerful. The reality is there are countless grey areas and gaps to fill in: areas where AI is useful if done right, and other areas where talk of revolutionary transformation is mostly hype. What we need to do is chart a way forward that builds room for a ‘rational skeptic’ view of AI, so that we use it where there is real benefit, while not being naïve to its challenges and downsides.”

- Dr. Peter Lewis is an Associate Professor and the Canada Research Chair in Trustworthy Artificial Intelligence in the Faculty of Business and Information Technology at Ontario Tech University.

 
