What Is Explainable Machine Learning?

Elisabeth Rosen
Director of Marketing

Machine learning can be a confusing, buzzwordy space, especially for anyone without a technical background. Even if you’ve interacted with a machine learning solution before, you might still find it difficult to navigate. The issue is compounded by the fact that most ML solutions are "black boxes" that are difficult to understand and trust.

For this reason, we’ve seen a number of companies crop up that claim to offer "explainable" machine learning, specifically targeted at the industrial sector. But this is false advertising. They’re still building black boxes—software programs that spit out mysterious answers without revealing how they got there.

Truly explainable machine learning is different. You can think of it as a clear box, in contrast to the black box solution provided by Netflix or Uber. In these cases, simplicity makes sense—you don't need to know the exact algorithm that made Netflix decide you'd enjoy Black Mirror. But the complex, high-stakes world of manufacturing is a different story. When you plug machine learning models into your production process, you should get results that are easy not just to act on, but to understand.

At Fero, we think of explainable machine learning as having three key properties. If you're evaluating any software that claims to optimize your processes with ML, make sure it checks these boxes:

It’s statistical.

Imagine your industrial process as a stovetop pressure cooker in which you're trying to craft the perfect three-bean chili. The software shouldn't just say, "Turn the heat to medium." It should tell you how confident it is that medium heat will yield a delicious outcome, and provide a range of estimates about the results, from flavorful stew to watery mush.
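To make that concrete, here's a minimal sketch of the kind of statistical answer a clear-box tool should give: not just a single recommendation, but an expected outcome plus a range. The quality scores below are made up for illustration, not real process data.

```python
from statistics import mean, stdev

# Hypothetical quality scores (0-100) from past batches cooked at the
# "medium" heat setting. Illustrative numbers only.
medium_heat_scores = [82, 78, 85, 80, 77, 84, 81, 79]

m = mean(medium_heat_scores)   # expected quality at this setting
s = stdev(medium_heat_scores)  # batch-to-batch variation

# A rough 95% range, assuming roughly normal variation around the mean.
low, high = m - 1.96 * s, m + 1.96 * s

print(f"Expected quality {m:.1f}; about 95% of batches land between "
      f"{low:.1f} and {high:.1f}")
```

The point is the shape of the answer: a recommendation ("medium heat"), a best estimate, and an honest range of likely outcomes.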

Netflix doesn't need to give you this kind of information. If you don't like a show the platform recommends, you can just turn it off and watch something else. But in manufacturing, as in chili-making, small mistakes can be gravely disappointing.

It’s causal.

When you teach a software solution like Fero about your process flow, it should be able to distinguish correlation from causation, helping you pin down the root cause of any issue. In the metaphorical pressure cooker, for example, several variables combine to make the optimal chili. The software should be able to deduce which variables actually affect chili quality (increasing the temperature, say) and which merely correlate with it (the steam coming out of the lid is a side effect of the higher temperature, not a cause of better chili).
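A toy simulation makes the distinction visible. In this deliberately simplified, entirely hypothetical model, temperature drives both quality and steam, so steam correlates with quality in the observed data; but intervening on steam alone changes nothing, which is how you know it isn't a cause:

```python
def run_batch(temperature, force_steam=None):
    # Toy deterministic process: temperature drives quality; steam is a
    # side effect of high temperature, not a cause of quality.
    steam = temperature > 70 if force_steam is None else force_steam
    quality = 50 + 0.5 * temperature
    return steam, quality

# Observational data: steamy batches score higher on average...
runs = [run_batch(t) for t in range(50, 100)]
steamy = [q for s, q in runs if s]
calm = [q for s, q in runs if not s]
print(f"Avg quality with steam: {sum(steamy)/len(steamy):.1f}, "
      f"without: {sum(calm)/len(calm):.1f}")

# ...but forcing steam on or off at the same temperature changes nothing.
_, q_on = run_batch(85, force_steam=True)
_, q_off = run_batch(85, force_steam=False)
print(f"Same temperature, steam forced on vs off: {q_on:.1f} vs {q_off:.1f}")
```

A purely correlational model would happily tell you to make more steam; a causal one knows to turn up the heat instead.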

A high-pressure situation

Quality issues can cost as much as 40% of total operating costs, making root cause analysis a particularly vital tool. Performed by human engineers alone, that analysis can take months or even years of effort. Machine learning software, on the other hand, can diagnose in a matter of hours which settings or combinations of settings cause quality issues.

It can be queried or corroborated.

One of the most useful aspects of machine learning is the ability to test a vast array of hypotheticals. You should be able to ask the software questions like "What would have happened if I had produced yesterday’s chili at a higher temperature?" or "What if I produce tomorrow's at a lower temperature?"

By examining all the what-ifs through virtual testing, you'll learn which settings are optimal without having to actually run the process each time. In both manufacturing and dinner production, this can be a valuable time-saver.
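As a sketch of what such a query could look like under the hood, here's a tiny hypothetical example: fit a simple linear model on past runs, then evaluate it at temperatures you never actually tried. Real process models are far richer than a straight line; the data and model here are illustrative stand-ins.

```python
# Past runs as (temperature, quality) pairs -- illustrative numbers only.
past_runs = [(60, 80.0), (70, 85.0), (80, 90.0), (90, 95.0)]

# Ordinary least-squares fit of quality on temperature.
n = len(past_runs)
mean_t = sum(t for t, _ in past_runs) / n
mean_q = sum(q for _, q in past_runs) / n
slope = (sum((t - mean_t) * (q - mean_q) for t, q in past_runs)
         / sum((t - mean_t) ** 2 for t, _ in past_runs))
intercept = mean_q - slope * mean_t

def predict_quality(temperature):
    return intercept + slope * temperature

# "What would have happened at a higher temperature?" -- answered
# without running the cooker.
print(f"Predicted quality at 85 degrees: {predict_quality(85):.1f}")
```

Each what-if becomes a cheap model evaluation rather than an expensive production run.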

Everyone’s talking about AI and ML. But beyond all the hype and buzzwords, we believe explainable machine learning offers manufacturers a valuable tool for creating a sustainable future.

We've helped companies in industries from automotive to steel achieve meaningful waste reductions, save millions of dollars, and boost their environmental sustainability. Interested in joining them? Reach out and request a demo today.
