
4 Roadblocks to Generating ROI Through Machine Learning

By: Fero Labs
April 2022

Plants across sectors have embraced machine learning technology, with managers increasingly seeing it as a method to help them boost volume and profitability in the highly competitive industrial world. Indeed, machine learning has vast potential to reduce manufacturing costs and waste, making production systems both more efficient and more sustainable.

However, fully benefiting from the power of machine learning requires a complex system far beyond creating a handful of models. Machine learning models themselves are only as good as the data that’s put into them. If you put in bad data, you’ll get a bad result; furthermore, as the factory changes, the models must adapt, otherwise they’ll break or provide inaccurate predictions. And if you build false assumptions into the code, you’ll similarly get inaccurate output.

Unless you’re using dedicated, industrial-grade factory optimization software, you’ll need to build a whole complex set of systems around machine learning models to make them production-ready. That’s in addition to what may already be expensive machine learning investments. Without building this system, you will encounter significant roadblocks.

In this article, we’ll address 4 major roadblocks that will prevent you from generating ROI:

1. Preparing industrial data for machine learning analyses
2. Preventing bad predictions for deployed models
3. Factories change over time
4. Process engineers and data scientists aren’t collaborating

Preparing industrial data for machine learning analyses

When many people think of machine learning, they think about building and developing models—in other words, algorithms that take in raw data (generated, in this case, by the plant) and use it to generate some kind of analysis, prediction, or recommendation.

Of course, building models is a fundamental element of any machine learning process. But the process of creating the models is only one step in the complex “MLOps” system that is required to achieve ROI. Alongside code, one also needs to think about a multitude of areas including configuration, feature extraction, and machine resource management.

And let’s not forget data cleaning.

Real-world industrial data is rarely perfect, which makes the machine learning model’s job a challenge. To address this, data scientists typically spend hundreds of hours writing data cleaning code, and that’s only for one-time cleaning. Once a machine learning model is deployed, the cleaning has to happen in real time, on streaming data, which is a far more complex and difficult process. If you skip this step or do it poorly, your model will be trained and evaluated on garbage, its output will also be garbage, and it will yield zero ROI.

If you want to build your own MLOps system, you will need to pay a lot of data scientists to spend a lot of time on the tedious work of data cleaning. In a complex industry like steel, where different grades go into different products, substantial cleaning work is needed to associate specification bounds with products; in the chemical process industry, you will frequently need to make sure that erroneous sensor readings and shutdown periods are properly removed.
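To make this concrete, here is a minimal Python sketch of the kind of one-time cleaning code a data scientist might write to drop shutdown periods, implausible sensor readings, and statistical outliers. The file name, column names, and thresholds are all hypothetical; a deployed system would have to apply equivalent checks row by row on streaming data.

```python
import pandas as pd

# Hypothetical process history with a furnace temperature sensor and a line-speed tag.
df = pd.read_csv("process_history.csv", parse_dates=["timestamp"])

# Drop shutdown periods: assume the line is effectively stopped below a line speed of 5.
running = df["line_speed"] > 5

# Drop physically implausible readings from an erroneous or stuck temperature sensor.
plausible_temp = df["furnace_temp_c"].between(200, 1800)

clean = df[running & plausible_temp].copy()

# Remove remaining statistical outliers with a simple z-score filter per measurement column.
for col in ["furnace_temp_c", "nitrogen_ppm", "manganese_pct"]:
    z = (clean[col] - clean[col].mean()) / clean[col].std()
    clean = clean[z.abs() < 4]
```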

If you’re not looking to build your own system, you may want to take advantage of software that has such functionality built in. At Fero, we automatically remove outlier measurements to filter out false sensor readings and test results. Fero also automatically processes data from different historians and merges them. For example, in steel, raw material spectrometer readings are merged with rolling mill process readings, product specs, and the final test results to build a comprehensive, clean dataset.
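As an illustration of that merging step (not Fero’s actual pipeline), the sketch below joins hypothetical spectrometer, rolling mill, product spec, and test result tables on a heat number to form one dataset, including the per-product bounds mentioned above.

```python
import pandas as pd

# Hypothetical tables keyed by heat number; real historians, tags, and keys will differ.
chemistry = pd.read_csv("spectrometer.csv")    # heat_id, carbon_pct, manganese_pct, ...
mill = pd.read_csv("rolling_mill.csv")         # heat_id, finish_temp_c, coiling_temp_c, ...
specs = pd.read_csv("product_specs.csv")       # product_id, min_tensile_mpa, max_tensile_mpa
tests = pd.read_csv("final_tests.csv")         # heat_id, product_id, tensile_mpa

# One row per heat, combining chemistry, process readings, per-product bounds, and outcomes.
dataset = (
    tests
    .merge(chemistry, on="heat_id", how="inner")
    .merge(mill, on="heat_id", how="inner")
    .merge(specs, on="product_id", how="left")
)
```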

In addition, Fero software’s data cleaning code is optimized for both one-time use (when creating and evaluating the model for the first time) and streaming use. This means that less garbage enters the model, making the ROI of the models that much higher.

Preventing bad predictions for deployed models

As we said before, real industrial data is rarely perfect. Machine learning models are trained on certain values, and when real-world values diverge dramatically from the values provided during training, the system will not be able to adapt accordingly and provide accurate predictions. From a managerial standpoint, you’ll either need to hire a huge engineering team to deal with this or accept a fickle system with frequently inaccurate predictions.

When you first train a machine learning model, it uses all the data available in the factory at that moment. Since factories change dynamically, the future values sent to the model may be different from the past ones. As a result, machine learning models must often evaluate data they’ve never seen before. Let’s say your nitrogen reading comes in unusually high, and this leads to manganese increasing your tensile strength more than it is supposed to. Machine learning models can pick up on this change and recommend a lower manganese addition for that particular heat so that you avoid exceeding your maximum tensile limits.

This is the genius of machine learning—that you can train it on data and then let it “guess” the result. But when the training values are too different from the actual ones, the prediction won’t work.

Fero adds safeguards: it detects when real-world values diverge too far from the training values and prevents them from reaching the model, so they can’t cause bad predictions. Essentially, it’s like having that huge engineering team in one piece of software.
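The core idea behind such a safeguard can be sketched in a few lines. This is a generic illustration of range checking against the training data, not Fero’s implementation; the feature names and the flag_for_review handler are hypothetical.

```python
import pandas as pd

def fit_guard(training_df: pd.DataFrame, margin: float = 0.05) -> dict:
    """Record the training range of each feature, widened by a small margin."""
    bounds = {}
    for col in training_df.columns:
        lo, hi = training_df[col].min(), training_df[col].max()
        pad = (hi - lo) * margin
        bounds[col] = (lo - pad, hi + pad)
    return bounds

def out_of_range(row: pd.Series, bounds: dict) -> list:
    """Return the features whose incoming values fall outside the training range."""
    return [col for col, (lo, hi) in bounds.items() if not lo <= row[col] <= hi]

# Usage sketch: refuse to score a heat whose nitrogen reading is far beyond anything seen
# in training, and route it to an engineer instead of silently producing a bad prediction.
# bounds = fit_guard(X_train)
# violations = out_of_range(incoming_heat, bounds)
# if violations:
#     flag_for_review(incoming_heat, violations)   # hypothetical handler
```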

Factories change over time

If you keep the same model without retraining it, predictive accuracy will decline as the plant produces new types of products or is run in a different way. Retraining is key to keeping predictive accuracy as high as it can be. However, retraining incorrectly can also degrade model performance.

Furthermore, even if you consistently retrain your models, the underlying assumptions may change if your data sources begin to look different.

A factory produces different product mixes over the year. In addition, raw material sourcing practices change and machines deviate from their original calibration. Retraining allows the models to learn the most recent results and relationships, thereby dramatically increasing predictive accuracy within a campaign.
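As a rough illustration of how a team might decide when retraining is due, one common approach is to compare recent production data against the training distribution and retrain when it drifts. The sketch below uses a two-sample Kolmogorov–Smirnov test as the drift signal; it is one possible technique, not a description of Fero’s retraining logic, and the variable names are hypothetical.

```python
import pandas as pd
from scipy.stats import ks_2samp

def drifted_features(train_df: pd.DataFrame, recent_df: pd.DataFrame, alpha: float = 0.01) -> list:
    """Return columns whose recent distribution differs significantly from the training one."""
    drifted = []
    for col in train_df.columns:
        _, p_value = ks_2samp(train_df[col].dropna(), recent_df[col].dropna())
        if p_value < alpha:
            drifted.append(col)
    return drifted

# If key inputs have drifted (new product mix, new raw material source, recalibrated machine),
# trigger a retraining run on data from the most recent campaign.
# if drifted_features(X_train, X_last_campaign):
#     retrain_model(X_last_campaign, y_last_campaign)   # hypothetical retraining entry point
```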

Whenever a process is updated, Fero software automatically searches for schema updates and notifies you of potential changes you might not be aware of. Additionally, Fero’s way of displaying and quantifying uncertainty lets you evaluate whether your models are still performing as expected and helps you debug your data feeds.

Process engineers and data scientists aren’t collaborating

For machine learning investments to be successful, data scientists need to speak to process experts. Any issues in this communication stream will cause the models to perform badly.

In the manufacturing world, data science teams are typically located within a central unit. These folks have to work with the plants to understand the exact characteristics of the product, which requires communication with operations teams and quality engineers at the plant. Any issue in this communication results in bad assumptions, leading to lower predictive accuracy or outright failures.

Any of these roadblocks will prevent you from seeing a return on investment. This is obviously bad. But even more dangerous, we believe at Fero, is that a loss of ROI will, in turn, cause teams at the factories and across the organization to lose trust in machine learning.

Many machine learning solutions marketed to manufacturers are black boxes. Operators and engineers have no way to know what's inside, any more than one can know why Google's image recognition software might erroneously pin a cat as a potato.

This has many drawbacks. A black-box model isn’t built for partial data, nor does it change with time. You can train an image recognition model with a database of cats from the past fifty years, but factory behavior changes dramatically within months. So if you take a decade of data and put it in a model, it’s as if you’ve trained it on two different factories.

Even more problematically, conventional ML is not built to prescribe inputs. In the industrial world, you might want to change melt shop setpoints so that the mechanical properties at the end of the line are sure to meet spec. Most ML methods are not built to answer this question. Perhaps you’ve seen the example of a Tesla "thinking" it’s being attacked by traffic lights, when in fact it’s driving behind a truck carrying traffic lights. With all the car’s intelligence, it can’t tell the difference between a stationary traffic light and one that’s moving on the highway.

One bad analysis simply affects trust in that analysis. But if ML keeps producing bad analyses, teams will lose trust in the technology and miss out on a potential opportunity to boost their competitive value and production quality, which will be fundamental as the industry shifts towards Industry 4.0 and the interconnected factory.