Will removing AI from its impenetrable ‘black box’ and bringing its workings out into the light help organizations sustain their AI adoption?

AI has met with some roadblocks along the way to complete adoption. One of them is simple to explain but hard to overcome: people don’t know how AI works, so they don’t know if they can trust it.

It’s perfectly natural. Take sales as an example. Good sales reps work hard to get to know their clients and their product lines. They develop techniques and soft skills that make them better at their job. And, in time, their years of experience give them a ‘feel’ for what to do in a given situation.

As humans, we like to trust our gut – but we’re willing to be persuaded if the other person gives us good reason to be. When the other ‘person’ is a mysterious computer program and its reasons are absent or unclear, we don’t want to take its advice.

As I said, this is perfectly natural. So how can we work around it?

That’s the question Explainable AI (or XAI) tries to answer.

What Is Explainable AI?

According to Wikipedia, “Explainable AI … is artificial intelligence in which humans can understand the decisions or predictions made by the AI. It contrasts with the ‘black box’ concept in machine learning where even its designers cannot explain why an AI arrived at a specific decision. […] XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions.”

In short, XAI seeks to demystify AI through processes and techniques that let humans understand what goes into its decisions and why a particular action was chosen. It’s worth noting that at times even AI designers and developers can’t explain why or how a certain outcome was reached; it’s easy to see how this deters users from trusting machine-based intelligence.

It’s also quite hard to verify that the system is working correctly if you don’t understand how it’s working. Thus, in addition to increasing end-user adoption, Explainable AI also aims to improve the accuracy and effectiveness of AI. It can also help ensure compliance and regulatory goals are being met; transparency has been called one of the hallmarks of responsible AI.
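To make the idea concrete, here is a minimal sketch in Python of one of the simplest explanation techniques: for a linear model, each feature’s contribution to a prediction is just its coefficient times the feature’s value, so the ‘why’ can be read off directly. The feature names and data below are hypothetical, scikit-learn is assumed to be available, and real XAI tooling is considerably richer than this.

```python
# Minimal sketch: explain a linear model's prediction by listing each
# feature's contribution (coefficient * value). Hypothetical sales-lead data.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["deal_size", "past_purchases", "days_since_contact"]

# Tiny synthetic training set: did the lead convert (1) or not (0)?
X = np.array([
    [50, 3, 10],
    [10, 0, 90],
    [80, 5,  5],
    [20, 1, 60],
])
y = np.array([1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain(sample):
    """Print each feature's contribution to the model's score for one sample."""
    contributions = model.coef_[0] * sample
    for name, value, contribution in zip(feature_names, sample, contributions):
        print(f"{name} = {value}: contributes {contribution:+.3f}")
    print(f"baseline (intercept): {model.intercept_[0]:+.3f}")

explain(np.array([60, 4, 7]))  # explain a single new prediction
```

Output like this is exactly the kind of ‘support or evidence’ the first principle in the next section asks for: the user can see which inputs pushed the score up and which pushed it down.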

NIST’s 4 Principles of Explainable AI

In 2020, the U.S. National Institute of Standards and Technology (NIST) released a report outlining the four key principles of Explainable AI. In a nutshell, they are:

  1. Explanation. The system must be able to provide an explanation (i.e., support or evidence) of its output. It doesn’t have to be a great explanation or strong support, but it has to be there.
  2. Meaningful. The system’s explanations must be understandable to their intended users; an explanation nobody can follow isn’t useful.
  3. Explanation Accuracy. The explanation must be a true reflection of the system’s processes.
  4. Knowledge Limits. The system must identify cases that fall outside what it was designed to do and declare that its answers may not be reliable in those cases. This is designed to limit the impact of misleading or harmful conclusions (a small sketch of this idea follows the list). From the report:
    • “There are two ways a system can reach its knowledge limits. First, the question can be outside the domain of the system. For example, in a system built to classify bird species, a user may input an image of an apple. The system could return an answer to indicate that it could not find any birds in the input image; therefore, the system cannot provide an answer. This is both an answer and an explanation. In the second way a knowledge limit can be reached, the confidence of the most likely answer may be too low […] For a bird classification system, the input image of a bird may be too blurry to determine its species. In this case, the system may recognize that the image is of a bird, but that the image is of low quality. An example output may be: ‘I found a bird in the image, but the image quality is too low to identify it.’”
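To illustrate the Knowledge Limits principle, here is a minimal Python sketch of the low-confidence case: a wrapper that declines to answer, with an explanation, whenever the model’s confidence falls below a threshold. The predict_proba interface follows scikit-learn conventions; the threshold, labels, and wording are hypothetical.

```python
# Minimal sketch of "knowledge limits": refuse to answer when the model's
# confidence is too low, and say so, rather than returning a guess.
import numpy as np

CONFIDENCE_THRESHOLD = 0.75  # hypothetical cutoff

def classify_with_limits(model, sample, labels):
    """Return (label, explanation); label is None when the system declines."""
    probabilities = model.predict_proba(sample.reshape(1, -1))[0]
    best = int(np.argmax(probabilities))
    confidence = probabilities[best]

    if confidence < CONFIDENCE_THRESHOLD:
        return None, (f"Confidence is only {confidence:.0%}, below the "
                      f"{CONFIDENCE_THRESHOLD:.0%} threshold, so no answer is given.")
    return labels[best], f"Predicted '{labels[best]}' with {confidence:.0%} confidence."
```

A production system would also need the first kind of limit NIST describes (detecting inputs outside the system’s domain), which typically calls for a separate out-of-distribution check rather than a simple probability threshold.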

Benefits of Explainable AI

An AI system that lets you know when it’s unable to generate an accurate response and provides explanations of its actions can benefit your organization in the two ways already discussed: greater acceptance by users and easier verification by its creators. This can have a spillover effect on various parts of the AI development process itself:

  • AI tools can be developed and implemented more quickly.
  • AI tools will be easier to train and produce better results.
  • Models can be continuously evaluated and adjusted to improve their performance.
  • Models can be corrected for bias and inaccuracies.
  • AI auditing, compliance, and risk management can be streamlined.

Implementing XAI, as you might deduce from the above list, requires planning and forethought. It’s crucial to look for potential biases and inaccuracies (first in the data, and then in the model). Continuous monitoring is also essential, as is automating lifecycle management. This automation should also look for signs of drift, but manual review and analysis should never be discounted.
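As one concrete illustration of what automated drift monitoring can look like, the sketch below compares the distribution of a single feature in recent production data against the training data using a two-sample Kolmogorov-Smirnov test. The data, feature, and significance threshold are all hypothetical, and SciPy is assumed to be available.

```python
# Minimal sketch of a drift check: has a feature's distribution shifted
# between training time and production? Hypothetical data and threshold.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_values = rng.normal(loc=50, scale=10, size=1000)    # data the model was trained on
production_values = rng.normal(loc=58, scale=12, size=1000)  # data seen in production this week

statistic, p_value = ks_2samp(training_values, production_values)

if p_value < 0.01:
    print(f"Possible drift (KS statistic {statistic:.3f}); flag for manual review.")
else:
    print("No significant drift detected.")
```

A check like this is cheap enough to run on a schedule, but as noted above, it should route its findings to a human for review rather than act on them automatically.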

It’s not too difficult to imagine Explainable AI as the new standard of business AI. It has the potential to eliminate several of the pernicious roadblocks that have barred the way to higher rates of AI acceptance. It will be very interesting to watch how this progresses in the coming months; for those who develop AI tools and those who use them, XAI may just become the next evolution of Artificial Intelligence.

Authored by: Anil Kaul, CEO at Absolutdata, an Infogain Company and Chief AI Officer at Infogain
