Trust Factor: Why is Explainability so Difficult?

One of AI’s most persistent criticisms is that users are asked to blindly trust the system’s outputs without understanding how they were derived. This concern weighs more heavily in some AI applications than in others. In visual analysis, for example, there is typically little concern about the details of how an algorithm recognizes a photograph of a cat or a video of a fire that has just started.

So long as the algorithm does its job well, we implicitly trust the technology and have no problem relying on its results. But medical diagnoses, financial, legal, and hiring decisions, and expensive maintenance or repair recommendations are areas where people, governments, and regulators want to understand exactly how the recommendations came about. This challenge of explainability is an area where AI has struggled since it first saw widespread implementation.

Explainability seems a simple enough concept at a high level: the ability to understand why a thing is the way it is. Why is the sky blue? Anyone with a small child understands the challenge of delivering explainability, sometimes for the most innocuous-seeming questions. The challenge has been an important aspect of human experience for as long as we have been an inquisitive species. But in recent years an important new dimension has arisen, brought to the fore by the emergence of artificial intelligence (AI) systems that deliver conclusions, recommendations, and warnings that often seem to come out of black boxes without apparent explanation.

So much data, so many nodes

Several reasons for this difficulty are endemic to how AI technology works. The first is that machine learning (ML) relies on networks comprising many layers of interconnected nodes, as described in a recent SparkCognition white paper on normal behavior modeling (NBM). Because of the complex interactions among these nodes, deciphering exactly why a specific outcome emerged from the algorithm can be challenging or, frankly, impossible.
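
To make the point concrete, here is a minimal sketch (in Python with NumPy, purely illustrative and not SparkCognition's NBM implementation) of a small three-layer network. Even in this toy model, a single prediction is the product of hundreds of weights interacting through nonlinear functions, so no one weight or input constitutes "the reason" for the output.

```python
import numpy as np

# A toy fully connected network: 10 sensor inputs -> 32 -> 16 -> 1 output.
# The weights here are random and purely illustrative; a trained model would
# have learned values, but the structural point is the same.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(10, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 16)), np.zeros(16)
W3, b3 = rng.normal(size=(16, 1)), np.zeros(1)

def predict(x):
    """One forward pass: the output depends on every input through
    hundreds of weights and two nonlinearities."""
    h1 = np.tanh(x @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    return h2 @ W3 + b3

x = rng.normal(size=10)                 # one reading from 10 sensors
print(predict(x))                       # a single score with no single "cause"
print(W1.size + W2.size + W3.size)      # 848 weights interacted to produce it
```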

Compounding this state of affairs are the frequently enormous data sets that go into many AI algorithms, which often comprise hundreds or even thousands of individual sensor-derived variables. Attributing an outcome to any single variable can be a very complex undertaking.

Given these difficulties, it’s no wonder people and organizations express concern when told that an algorithm has determined that they cannot have a mortgage or that they have a potentially life-threatening disease. Insurance companies, for example, might be loath to pay on a claim for a medical procedure determined to be necessary by an unexplainable algorithm.

Explainability—a global matter

In some parts of the world, this has become such a significant concern that regulators have mandated the ability to explain such outcomes. In Europe, for example, the General Data Protection Regulation (GDPR), which took effect in 2018, states that all people have the right to “meaningful information about the logic behind automated decisions using their data.” Further, a wide-ranging assortment of state- and federal-level legislation governing the use of AI- and ML-enabled decision-making is currently being reviewed and enacted throughout the U.S.

As a result of all this attention, data scientists and AI users have in recent years increasingly focused on developing tools that can deliver explainable AI insights. In practice, such explanations, while often useful for global AI processes such as model development, knowledge discovery, and auditing, are rarely informative about individual decisions. Fortunately, tools do exist that can provide the explanatory insight decision recipients frequently demand.

Tools like data visualization, feature impact diagrams, and heatmaps (the last of these shown in the diagram below) are critical explanatory enhancements to AI/ML analyses. They can help identify the specific components that are deviating from normal behavior and give users a deeper understanding of where a failure is likely to occur in a complex system, be it mechanical, governmental, or human. With such enhancements, AI algorithms can not only provide advance warning that a system component is verging on failure but also offer insight into the source of the anomaly. Such enhanced analyses work equally well whether the system in question is a jet turbine, a mortgage decision, or a human body.

Feature Heatmap
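
To illustrate how a feature heatmap of this kind can be produced, here is a simplified sketch (assuming a plain z-score deviation against a baseline window of normal operation; the NBM approach behind the figure above is more sophisticated) that breaks anomalous behavior down into per-feature deviations and plots them over time:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical sensor data: 200 time steps x 6 sensors, with a drift
# injected into the vibration channel partway through.
rng = np.random.default_rng(1)
sensors = ["temp", "vibration", "pressure", "rpm", "flow", "current"]
data = rng.normal(size=(200, 6))
data[150:, 1] += 3.0

# Baseline statistics from a window assumed to represent normal behavior.
baseline = data[:100]
mu, sigma = baseline.mean(axis=0), baseline.std(axis=0)

# Per-feature deviation from normal: the building block of a feature heatmap.
deviation = np.abs((data - mu) / sigma)

fig, ax = plt.subplots(figsize=(8, 3))
im = ax.imshow(deviation.T, aspect="auto", cmap="magma")
ax.set_yticks(range(len(sensors)))
ax.set_yticklabels(sensors)
ax.set_xlabel("time step")
fig.colorbar(im, ax=ax, label="|z-score| vs. normal baseline")
plt.show()
```

In a view like this, the row that lights up points to the component drifting away from normal behavior, which is the kind of insight the paragraph above describes.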

A real-world example

In a recent case, a fintech startup in Mexico was struggling with excessive numbers of fraudulent transactions, enough to threaten the company’s long-term viability. The startup processes about 24,000 transactions per year, of which roughly 20% were fraudulent. To address the challenge, the company implemented SparkCognition’s ML Studio platform for automated machine learning, with the goal of building a model that could monitor transactions and flag any that were likely to be fraudulent.

The model flags about 4,320 fraudulent transactions each year, which, at an average value of 2,000 pesos per transaction, means the model has saved the startup $457,214 per year. The model also continues to learn and improve over time. In all, the startup saw a 4.5x return on this investment in the first year and a 10x return in the second and, by using this technology, was able to save itself from bankruptcy.
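
For readers who want to trace the arithmetic, the figures above fit together roughly as follows (the peso-to-dollar conversion rate is implied by the stated savings, not given explicitly in the case study):

```python
# Working backward from the numbers reported in the case study.
transactions_per_year = 24_000
fraud_rate = 0.20
flagged_per_year = 4_320
avg_value_mxn = 2_000
reported_savings_usd = 457_214

fraudulent = transactions_per_year * fraud_rate   # 4,800 fraudulent transactions
recall = flagged_per_year / fraudulent            # ~90% of fraud caught
savings_mxn = flagged_per_year * avg_value_mxn    # 8,640,000 pesos
implied_fx = savings_mxn / reported_savings_usd   # ~18.9 pesos per U.S. dollar
print(fraudulent, round(recall, 2), savings_mxn, round(implied_fx, 1))
```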

To more directly address explainability in the application, the next step for this solution is to incorporate natural language processing (NLP) to help understand, extract, and analyze features and information from natural language sources such as call logs, emails, and contracts. This will allow the model to generate even more robust and explainable predictions by working from a broader set of features (a simplified sketch of how such features might be combined follows the list below), such as:

  • Customers’ previous transactions
  • Customers’ previous logins and bank interactions
  • Previously flagged and reviewed transactions
  • Reviewed audit transactions
  • Submitted customer contracts
  • Bank’s whitelists and blacklists
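
As a rough sketch of the idea (a hypothetical scikit-learn pipeline, not the production ML Studio implementation), NLP-derived text features from call logs can be combined with conventional transaction features in a single fraud classifier:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline

# Toy data standing in for real transactions and their call-log text.
df = pd.DataFrame({
    "amount_mxn": [150, 9800, 60, 12000, 300, 8700],
    "prior_flags": [0, 2, 0, 3, 0, 1],
    "call_log": [
        "routine balance inquiry",
        "customer disputes charge urgently requests reversal",
        "standard transfer confirmation",
        "caller could not verify identity asked to raise limit",
        "monthly payment reminder",
        "unverified caller requested password reset",
    ],
})
y = [0, 1, 0, 1, 0, 1]  # 1 = fraudulent

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "call_log"),                    # NLP-derived features
    ("numeric", "passthrough", ["amount_mxn", "prior_flags"]),  # transaction features
])
model = Pipeline([("features", features),
                  ("clf", RandomForestClassifier(random_state=0))])
model.fit(df, y)
print(model.predict_proba(df[:1]))  # fraud probability for one transaction
```

Because each column of the combined feature matrix corresponds to a named transaction attribute or a term drawn from the text, the resulting predictions are easier to explain than ones based on opaque inputs alone.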

Explainability—the key to trust

SparkCognition’s customers are every bit as concerned about the explainability of their decision outcomes as other AI/ML users across the world. To address this need, SparkCognition data scientists have developed and integrated into our product portfolio the capabilities needed to create trust and confidence that our insights can be supported in transparent and understandable ways, such as the data visualization and feature vector-derived heat mapping techniques described above. In addition, explainability is enhanced by human-in-the-loop learning techniques that leverage the domain knowledge of subject-matter experts (SMEs) to improve model performance through an intentional feedback loop between humans and machines. Because their knowledge is integrated into the deployed AI solution, and because the automated solution now handles the tedious aspects of predictive maintenance, those experts can instead apply their skills to higher-value activities.
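
As a rough illustration of what such a feedback loop can look like in practice (a generic human-in-the-loop pattern, not SparkCognition's specific implementation), low-confidence predictions can be routed to an SME for review, and the reviewed labels folded back into the next round of training:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data standing in for historical, already-labeled cases.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)

def route_to_sme(model, X_new, threshold=0.65):
    """Split new cases into those the model handles confidently and
    those queued for subject-matter-expert review."""
    confidence = model.predict_proba(X_new).max(axis=1)
    needs_review = confidence < threshold
    return X_new[needs_review], X_new[~needs_review]

X_new = rng.normal(size=(50, 5))
to_review, auto_handled = route_to_sme(model, X_new)
print(f"{len(to_review)} cases queued for SME review, "
      f"{len(auto_handled)} handled automatically")

# After SMEs label the queued cases, the expanded data set feeds the retrain:
# sme_labels = ...  (provided by the experts)
# model.fit(np.vstack([X_train, to_review]),
#           np.concatenate([y_train, sme_labels]))
```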

To learn more about SparkCognition’s AI offerings and the ways in which we deliver explainable, trustworthy insights, check out our recent NBM white paper and schedule a discussion with one of our data science experts.
