Google's research chief questions value of 'Explainable AI'

Peter Norvig says output of machine learning systems a more useful probe for fairness

As machine learning and AI become more ubiquitous, there are growing calls for the technologies to explain themselves in human terms.

These technologies are already used to make life-altering decisions, from medical diagnoses to loan limits, yet the inner workings of the underlying machine learning architectures – including deep learning, neural networks and probabilistic graphical models – are incredibly complex and increasingly opaque.

As these techniques improve, often by themselves, revealing their inner workings becomes more and more difficult. They have become a ‘black box’, according to growing numbers of scientists, governments and concerned citizens.

According to some, there is a need for these systems to expose their decision-making process and be ‘explainable’ to non-experts: an approach known as explainable artificial intelligence, or XAI.

But efforts to crack open the black box hit a snag yesterday, as the research director of arguably the world’s biggest AI powerhouse, Google, cast doubt on the value of explainable AI.

After all, Peter Norvig suggested, humans aren’t very good at explaining their decision-making either.

Frontier psychology

Speaking at an event at UNSW in Sydney on Thursday, Norvig – who at NASA developed software that flew on Deep Space 1 – said: “You can ask a human, but, you know, what cognitive psychologists have discovered is that when you ask a human you’re not really getting at the decision process. They make a decision first, and then you ask, and then they generate an explanation and that may not be the true explanation.”

Just as humans make sense of and explain their actions after the fact, a similar approach could be adopted in AI, Norvig explained.

“So we might end up being in the same place with machine learning where we train one system to get an answer and then we train another system to say – given the input of this first system, now it’s your job to generate an explanation.”

Although explainable AI is a relatively new field of study, progress is already being made. Researchers at the University of California and the Max Planck Institute for Informatics published a paper in December on a system that translates the decisions of a machine learning-based image recognition model into human-readable explanations.

Although explanations were justified “by having access to the hidden state of the model”, they “do not necessarily have to align with the system’s reasoning process”, researchers said.
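As a rough illustration of the two-model setup Norvig describes, the sketch below trains a black-box classifier and then fits a separate, shallower model on the black box's own outputs to produce a readable account of its behaviour. This is only one common way of realising post-hoc explanation, not the system from the paper above; the synthetic data and feature names are hypothetical.

```python
# Minimal sketch (assumptions: scikit-learn is available, and synthetic data
# stands in for a real loan dataset). System 1 makes the decision; system 2 is
# trained on system 1's outputs to generate an after-the-fact explanation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic "loan application" data: 5 hypothetical features, binary outcome.
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
feature_names = ["income", "collateral", "credit_history", "debt_ratio", "age"]

# System 1: the black box that actually makes the decision.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# System 2: a shallow surrogate trained to mimic the black box's decisions,
# used only to produce a human-readable rule set describing its behaviour.
explainer = DecisionTreeClassifier(max_depth=3, random_state=0)
explainer.fit(X, black_box.predict(X))

print(export_text(explainer, feature_names=feature_names))
```

Note that the surrogate only approximates the black box, which echoes the researchers' caveat: the explanation it produces need not align with how the original system actually reasoned.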

Besides, Norvig added yesterday: “Explanations alone aren’t enough, we need other ways of monitoring the decision making process.”

Output checks

A more reliable way of checking AI for fairness and bias, Norvig said, was to look not at its inner workings but at its outputs.

“If I apply for a loan and I get turned down, whether it’s by a human or by a machine, and I say what’s the explanation, and it says well you didn’t have enough collateral. That might be the right explanation or it might be it didn’t like my skin colour. And I can’t tell from that explanation,” he said.

“…But if I look at all the decisions that it’s made over a wide variety of cases then I can say you’ve got some bias there – over a collection of decisions that you can’t tell from a single decision. So it’s good to have the explanation but it’s good to have a level of checks.”
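A minimal sketch of that kind of output check follows, using hypothetical data and column names: rather than trusting any single explanation, it aggregates decisions and compares approval rates across groups, which is where the bias Norvig describes would show up.

```python
# Minimal sketch (hypothetical data and column names): aggregate a model's
# decisions and compare approval rates across groups -- a pattern visible only
# over a collection of decisions, never from a single explanation.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [ 1,   1,   1,   0,   1,   0,   0,   0 ],
})

# Approval rate per group.
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Gap between groups: a large difference is a signal worth investigating,
# whatever the individual explanations said.
gap = rates.max() - rates.min()
print(f"approval-rate gap between groups: {gap:.2f}")
```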
