Explaining “Black Box” Neural Networks with Machine Learning. Human analysts' understanding of these models is crucial to speeding machine learning into production. What better way to provide it than with the technology itself?
As Chief Analytics Officer at FICO, I find that many people assume I spend most of my time tucked away in the data science lab, wearing a white coat amid the clatter of chalk against blackboard and fingers against keyboard, a slide rule in my chest pocket. In reality, I spend much of my time with customer audiences worldwide, explaining how artificial intelligence (AI), machine learning (ML), and other leading-edge technologies can be applied to solve today’s business problems.
Neural networks are one of those technologies. They are a cornerstone of machine learning and can dramatically boost the efficacy of an enterprise’s arsenal of analytic tools. Yet countless organizations hesitate to deploy machine learning algorithms because of their popular characterization as a “black box.”
While their mathematical equations are easy for a machine to execute, deriving a human-understandable explanation for each score or output is often difficult. As a result, particularly in regulated industries, machine learning models that could provide significant business value are often never deployed into production.
To overcome this challenge, my recent work includes developing a patent-pending machine learning technique called Interpretable Latent Features. Rather than treating explanation as an afterthought, as many organizations do, the technique puts an explainable model architecture at the forefront. The approach draws on our organization’s three decades of experience using neural network models to solve business problems.
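The patented details are FICO’s, but the core idea can be sketched: constrain each hidden node to a small, named subset of inputs so that every latent feature reads as a simple combination an analyst can inspect. Below is a minimal sketch of that idea in plain numpy; the variable names, the connectivity mask, and the random (untrained) weights are illustrative assumptions, not the actual method.

```python
# Minimal sketch (not FICO's patented method): each hidden node is wired
# to at most two named inputs, so each latent feature stays explainable.
import numpy as np

rng = np.random.default_rng(0)

input_names = ["utilization", "delinquency_count", "tenure_months",
               "recent_inquiries", "payment_ratio"]   # hypothetical inputs
n_inputs, n_hidden = len(input_names), 3

# Sparse connectivity mask: which inputs feed which latent feature.
mask = np.zeros((n_inputs, n_hidden))
mask[[0, 3], 0] = 1   # latent 0: utilization, recent_inquiries
mask[[1, 4], 1] = 1   # latent 1: delinquency_count, payment_ratio
mask[[2], 2] = 1      # latent 2: tenure_months

# Random weights for illustration; in practice they would be trained,
# with the mask applied to the gradients as well.
W1 = rng.normal(size=(n_inputs, n_hidden)) * mask
w2 = rng.normal(size=n_hidden)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def score(x):
    latent = np.tanh(x @ W1)      # interpretable latent features
    return sigmoid(latent @ w2)   # final score

def explain(x):
    """Report each latent feature and the named inputs that drive it."""
    latent = np.tanh(x @ W1)
    for j in range(n_hidden):
        drivers = [input_names[i] for i in np.nonzero(mask[:, j])[0]]
        print(f"latent_{j} = {latent[j]:+.3f}  driven by {drivers}")

x = rng.normal(size=n_inputs)     # one toy applicant record
print(f"score = {score(x):.3f}")
explain(x)
```

Because every hidden node depends on only a handful of named inputs, the reason for any score can be traced through the few latent features that drove it.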
Why Explaining ML Is Important
An explainable multi-layered neural network can be easily understood by an analyst, a business manager, and a regulator. Thanks to that transparency, these machine learning models can be deployed directly. Alternatively, analytic teams can take the latent features the network learns and incorporate them into current model architectures (such as scorecards).
The advantage of leveraging this ML technique is twofold:
- It doesn’t change any of the established workflows for creating and deploying traditional models.
- It improves those models’ performance by suggesting new ML-exposed features to be used on the “traditional rails” (see the sketch after this list).
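To make the second point concrete, here is a hedged sketch of the handoff: latent features computed by a masked network like the one above are appended to the raw inputs of a logistic regression, standing in for a scorecard. The toy data, the untrained weights, and the scikit-learn stand-in are all assumptions, not FICO’s actual pipeline.

```python
# Sketch: feed ML-exposed latent features into a traditional
# scorecard-style model (logistic regression as a stand-in).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_samples, n_inputs, n_hidden = 500, 5, 3

X = rng.normal(size=(n_samples, n_inputs))         # raw input variables
y = (X[:, 0] * X[:, 3] + X[:, 1] > 0).astype(int)  # toy binary target

# Sparse, interpretable latent features (weights untrained here; a real
# pipeline would use the trained network's weights).
mask = np.zeros((n_inputs, n_hidden))
mask[[0, 3], 0] = mask[[1, 4], 1] = mask[[2], 2] = 1
W1 = rng.normal(size=(n_inputs, n_hidden)) * mask
latent = np.tanh(X @ W1)                           # ML-exposed features

# "Traditional rails": the same scorecard workflow, now fed both the
# original variables and the new latent features.
baseline = LogisticRegression().fit(X, y)
augmented = LogisticRegression().fit(np.hstack([X, latent]), y)

print(f"baseline accuracy:  {baseline.score(X, y):.3f}")
print(f"augmented accuracy: {augmented.score(np.hstack([X, latent]), y):.3f}")
```

With untrained weights any lift here is incidental; the point is the workflow. The scorecard toolchain is unchanged, and the latent features arrive as just another set of candidate variables.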