Thought leadership

Why AI needs explainability

Joakim Edin

Should you trust any AI tool? Our resident Ph.D. at Corti, Joakim Edin, specializes in AI explainability, the process of figuring out how an AI reasons and arrives at an answer. Explainable AI is vital, he says, but current explainability methods need improving.

At Corti, we create AI for better healthcare—co-piloting with doctors during consultations to ensure that all potential diagnostic and treatment pathways are explored before generating an accurate clinical note at the end.

But how do you know you can trust an AI to co-pilot correctly, diagnose accurately, or even summarize a note well—especially in a high-stakes environment?

There are two ways to foster trust in AI, says Joakim Edin, an industrial Ph.D. with Corti and the University of Copenhagen.

“The first is for AI developers to be transparent about how a machine learning model works, what data it was trained on—and also when something goes wrong.”


The second approach is explainability—the focus of Joakim’s research. This means the ability to explain how an AI reasons, which is crucial in a world where AI decisions can have huge financial, security, or personal ramifications. 

However, according to Joakim’s latest research paper, co-authored with Andreas Geert Motzfeldt, Casper L. Christensen, Tuukka Ruotsalo, Lars Maaløe, and Maria Maistro, current methods for explainability may not be as reliable as previously thought—highlighting the need to rethink and prioritize improvements in this area.

First, let’s explain “explainability” 

As machine learning models become more advanced, it gets harder for humans to understand and retrace the steps an algorithm takes to arrive at a result.

For all their predictive insights, many machine learning engines retain what’s known as a “black box”, meaning that their calculation process is impossible to decipher.

Enter explainability. While explainable AI—also known as XAI—is still an emerging concept that requires more consolidated and precise definitions, it largely refers to the idea that a machine learning model’s reasoning process can be explained in a way that makes sense to us as humans.

“For example, if you have an AI scribe recording a consultation, explainability can highlight which phrases or words in a transcript were used to generate the clinical note,” says Joakim.
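
To make that idea concrete, here is a minimal sketch of one common attribution technique, occlusion: each word of the transcript is removed in turn, and the change in the model’s score for the output is recorded; the words whose removal hurts the score most are the ones that mattered. The score_note_sentence function below is a hypothetical stand-in for a real scribe model, not Corti’s system.

```python
# Minimal occlusion-based attribution sketch (illustrative only).
# `score_fn` is any function scoring how strongly a model supports a given
# output (e.g. one sentence of the clinical note) for a given transcript.

def occlusion_attribution(score_fn, words):
    """Return (word, score_drop) pairs: how much the score falls when each word is removed."""
    baseline = score_fn(" ".join(words))
    attributions = []
    for i, word in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])  # transcript with one word removed
        attributions.append((word, baseline - score_fn(reduced)))
    return attributions


if __name__ == "__main__":
    # Hypothetical stand-in for a scribe model's scoring function: it simply
    # rewards transcripts that mention chest pain.
    def score_note_sentence(transcript: str) -> float:
        text = transcript.lower()
        return float(text.count("chest") + text.count("pain"))

    transcript = "Patient reports chest pain since yesterday and denies fever".split()
    ranked = sorted(occlusion_attribution(score_note_sentence, transcript), key=lambda x: -x[1])
    for word, drop in ranked:
        print(f"{word:>10}  {drop:+.2f}")
```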

Why is explainability important? 

AI has the potential to transform many industries for the better, including healthcare. But for organizations to implement these revolutionary new technologies, users need to be able to trust them, and that trust shouldn’t be blind.

Joakim says that users deserve to understand the decision-making processes of the tools they use in their daily work, especially in healthcare, where the stakes are high and patient health is on the line.

“In this age of co-pilots, where you have an AI-human relationship, explainability is more important than ever. It’s crucial that the human understands how the AI reasons,” he says.

Unsurprisingly, McKinsey’s report “The State of AI in 2020” found that improving the explainability of systems led to increased technology adoption.

AI researchers have also recently identified explainability as a necessary feature of trustworthy AI, recognizing its ability to address emerging ethical and legal questions around AI and help developers ensure that systems work as expected—and as promised. 

Explainability can even spot bias 

Explainability is also important for identifying biases embedded in AI, says Joakim. He cites an example where he fed a clinical note to a machine learning model that suggests medical codes, and it predicted a code for reckless sexual behavior even though the note had not mentioned anything sexual.

“It was puzzling. But by applying explainability, I discovered that a reference to a motorcycle had prompted the output, meaning that the model had learned a correlation between motorcycle riding and specific sexual behavior,” says Joakim.

“By being able to identify these kinds of flaws or biases in the AI’s reasoning, you can accelerate its evolution and perfect its accuracy.” 
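
The same erasure idea can double as a quick sanity check once you suspect a spurious feature: delete the suspect phrase and see how far the predicted code’s probability falls. Below is a rough sketch under that assumption; predict_proba is a deliberately biased toy stand-in, not the medical-coding model from Joakim’s example.

```python
# Sketch of a spurious-correlation check via counterfactual deletion (illustrative only).

def spurious_feature_check(predict_proba, note: str, suspect_phrase: str, code: str) -> float:
    """Return how much the probability of `code` drops when `suspect_phrase` is deleted.

    A large drop suggests the model is leaning on that phrase for the prediction."""
    original = predict_proba(note, code)
    counterfactual = predict_proba(note.replace(suspect_phrase, ""), code)
    return original - counterfactual


if __name__ == "__main__":
    # Hypothetical stand-in for a medical-coding model: it (wrongly) associates
    # motorcycles with the code for reckless sexual behavior, mimicking the bias above.
    def predict_proba(note: str, code: str) -> float:
        if code == "reckless_sexual_behavior":
            return 0.9 if "motorcycle" in note.lower() else 0.05
        return 0.5

    note = "Patient injured left knee after falling off his motorcycle."
    drop = spurious_feature_check(predict_proba, note, "motorcycle", "reckless_sexual_behavior")
    print(f"Probability drop after deleting 'motorcycle': {drop:.2f}")
```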


How do we know an explanation is good?

However, as explainability gains importance, it is also becoming more difficult to understand what constitutes a good explanation.

Joakim’s most recent research paper looks at some of the most common ways of evaluating how trustworthy an explanation is. It finds significant flaws that call previous research conclusions into question, before making suggestions for improving these techniques.

While Joakim says that getting into the details would quickly become overwhelming for anyone without a software engineering background, the point of the paper is that many of our current techniques for evaluating explainability may not be reliable and need to be refined.

“The current evaluation methods don’t work that well, and it’s hard to develop techniques that do work well if you don’t know how good the existing ones are. This paper is a big step towards being able to evaluate that.”
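
For readers who do want a taste of the details: two erasure-based scores that are widely used in the literature to test how faithful an explanation is are comprehensiveness (how much the prediction drops when the highest-attributed tokens are removed) and sufficiency (how well those tokens alone support the prediction). The sketch below is a generic illustration of those two metrics with a toy predict_proba stand-in; it is not the evaluation protocol analysed in Joakim’s paper.

```python
# Erasure-based faithfulness metrics (generic sketch, not the paper's exact protocol).

def top_k_indices(attributions, k):
    """Indices of the k tokens with the highest attribution scores."""
    return set(sorted(range(len(attributions)), key=lambda i: -attributions[i])[:k])

def comprehensiveness(predict_proba, tokens, attributions, k):
    """Drop in probability when the top-k attributed tokens are removed.

    Higher means the explanation pointed at tokens the model really relies on."""
    keep = [t for i, t in enumerate(tokens) if i not in top_k_indices(attributions, k)]
    return predict_proba(tokens) - predict_proba(keep)

def sufficiency(predict_proba, tokens, attributions, k):
    """Drop in probability when only the top-k attributed tokens are kept.

    Lower means the highlighted tokens alone are enough to support the prediction."""
    keep = [t for i, t in enumerate(tokens) if i in top_k_indices(attributions, k)]
    return predict_proba(tokens) - predict_proba(keep)


if __name__ == "__main__":
    # Hypothetical model: probability grows with the number of symptom words present.
    def predict_proba(tokens):
        symptoms = {"fever", "cough", "fatigue"}
        return min(1.0, 0.2 + 0.25 * sum(t in symptoms for t in tokens))

    tokens = "patient has fever cough and mild fatigue".split()
    attributions = [0.0, 0.0, 0.9, 0.8, 0.0, 0.1, 0.7]  # an explanation to be evaluated
    print("comprehensiveness:", round(comprehensiveness(predict_proba, tokens, attributions, k=3), 2))
    print("sufficiency:      ", round(sufficiency(predict_proba, tokens, attributions, k=3), 2))
```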

When we can accurately evaluate the quality of explanations, we can develop better techniques to compute them. One exciting research direction involves techniques that look inside the “brain” of the AI.

“So you’re looking at the neurons while the model calculates, to see how it reasons. It would be very unethical to do on humans, but with an AI, researchers can measure every single neuron and when it fires. A lot of research has been done in this field, but it hasn’t been nailed yet,” says Joakim.
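
As a rough illustration of what “measuring every single neuron” can look like in practice, the snippet below uses PyTorch forward hooks to record each layer’s activations in a tiny toy network during one forward pass. It is a generic sketch of the technique, not a description of how Corti’s models are instrumented.

```python
# Recording neuron activations with forward hooks (generic PyTorch sketch).
import torch
import torch.nn as nn

# A small toy network standing in for a much larger model.
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 4),
)

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Store a detached copy of this layer's output ("which neurons fired, and how hard").
        activations[name] = output.detach()
    return hook

# Attach a hook to every layer we want to observe.
for name, module in model.named_modules():
    if isinstance(module, (nn.Linear, nn.ReLU)):
        module.register_forward_hook(make_hook(name))

with torch.no_grad():
    model(torch.randn(1, 8))  # one forward pass on a random input

for name, act in activations.items():
    print(f"layer {name}: shape {tuple(act.shape)}, mean activation {act.mean().item():.3f}")
```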

“In short, explanations are often unreliable with current techniques, and that’s why we’re researching how to make them trustworthy and good.”


Knowledge sharing is the way forward

Ultimately, Joakim says explainability should become a priority among AI developers.

“If a model has 75% accuracy, but you understand how it works, I think that’s worth much more than an AI with 80% accuracy but where you have no idea how it works.” 

That’s why Corti is invested in researching explainability and incorporating it into its technology—and why it takes an open research approach, to ensure everyone can use the findings.

“For AI to be truly useful, it’s important to understand how smart an AI is, and also how stupid it is.”