Preventing breakdowns by predicting the condition and maintenance requirements of industrial assets is a massive challenge. The world of data science is full of models that struggle to deliver results in real-world environments. So, what is the best approach?
Theory and practice
In theory, theory and practice are the same. In practice, they’re not. Nowhere is this truer than when trying to translate models of industrial assets into actionable insights that deliver improvements on the shop floor. Academic papers on data science might include analyses that demonstrate how particular algorithms may improve on others by a percentage point or two, but in a factory environment, cutting through noisy signals to uncover any patterns at all can be a challenge.
Yet this is only the first major obstacle that would-be DIY model developers must overcome if they are hoping that their efforts will enable Predictive Maintenance or other business outcomes. Those who do manage to develop a robust model that can work under real-world conditions immediately run into the next big problem: useful models must be deployed, not just developed.
Deployment naturally means running models at scale, but it also means providing an interface that presents results in a friendly way and satisfies users by enabling different groups to prioritize alerts, collect feedback and so on. If you have 20,000 robots working in a major plant, even deploying a user interface to display interactive charts for all of them is far from trivial. In fact, DIY modelers typically find that what they’re actually trying to do is develop their own apps, and that can be extremely resource-intensive and costly.
Ask the experts
For these reasons it is almost always better to team up with a specialist provider, complete with its own data science expertise and the deployment support needed to ensure that shop floor users can easily access the information they need. Companies may think that their own custom models can perform better than generic algorithms supplied by vendors. However, any difference is often marginal and can be far outweighed by the drawbacks of going it alone.
For example, the models used in Senseye’s Predictive Maintenance solution, Senseye PdM, are often on par with custom models and can perform even better. Its unique algorithms turn data into an accurate prediction of the Remaining Useful Life (RUL) of manufacturing assets – a technique known as prognostics.
One reason why Senseye PdM routinely outperforms expectations is that the algorithms treat every machine as unique – even if they are the same make and model. Machines that start out identical will behave and wear differently over time, due to differences in their immediate environment or in the work they are performing. Treating each asset as an individual with a unique ‘behavioral fingerprint’ boosts the accuracy of Senseye PdM’s prognostics considerably and better supports the teams responsible for production assets in maximizing uptime.
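To make the idea of per-asset prognostics concrete, here is a deliberately simple toy: fit each machine's own degradation trend and extrapolate when it will cross a failure threshold. This is an illustrative sketch only, not Senseye's algorithm; the health values, threshold and linear-trend assumption are all invented for illustration.

```python
def estimate_rul(health_readings, failure_threshold):
    """Fit a least-squares line to one asset's per-cycle health readings and
    return the number of future cycles until the trend crosses the threshold.
    Returns None if no upward degradation trend is present."""
    n = len(health_readings)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(health_readings) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, health_readings)) \
        / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    if slope <= 0:  # asset is not degrading; no RUL estimate
        return None
    crossing_cycle = (failure_threshold - intercept) / slope
    return max(0.0, crossing_cycle - (n - 1))  # cycles remaining from "now"

# Two machines of the same make and model, wearing at different rates,
# each get their own fit rather than one shared model:
machine_a = [1.0, 1.1, 1.2, 1.3, 1.4]  # slow, steady wear
machine_b = [1.0, 1.3, 1.6, 1.9, 2.2]  # degrading three times faster
rul_a = estimate_rul(machine_a, 2.0)   # several cycles of life left
rul_b = estimate_rul(machine_b, 2.0)   # already at the threshold
```

The point of the sketch is the structure, not the model: each asset's prediction comes from its own history, which is why two nominally identical machines can receive very different remaining-life estimates.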
As well as delivering the proven performance of tried and tested algorithms, partnering with Senseye removes all the accompanying headaches around robust performance, scaling, deployment, usability and security.
If a potential user has already developed a custom model and would like to use it, Senseye can integrate it into the system via an API. Even if the custom model itself is not integrated into Senseye PdM, the solution can still accept the results from custom models as useful input.
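Feeding an external model's results into a platform typically means serializing each prediction into an agreed payload format. The sketch below shows what packaging one prediction might look like; the field names and schema are hypothetical, invented purely for illustration, and the real contract would come from the vendor's API documentation.

```python
import json

def build_rul_payload(asset_id, rul_hours, confidence, model_name):
    """Serialize one custom-model prediction as a JSON document.
    All field names here are illustrative, not a real API schema."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    return json.dumps({
        "asset_id": asset_id,        # which machine the prediction is for
        "prediction": {
            "rul_hours": rul_hours,  # Remaining Useful Life estimate
            "confidence": confidence,
        },
        "source": model_name,        # identifies the external custom model
    })

payload = build_rul_payload("press-07", 312.5, 0.82, "inhouse-gearbox-v3")
# The resulting document could then be POSTed to an ingestion endpoint
# (e.g. with urllib.request), one prediction per asset per scoring run.
```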
However, it remains much more common for Senseye to deploy its own sophisticated generic algorithms. Senseye’s data scientists focus on dealing with the real world as it is, not how we would like it to be, so the models are extremely robust in even the noisiest data environments.
Where users are aiming to implement prognostics and Predictive Maintenance, this robust approach is especially important when capturing data from failures. In what can be a relatively chaotic moment, it’s vital to extract the meaningful information from beneath the noise so the system can identify an approaching failure and raise an alert before the asset fails again.
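One classic way to pull a meaningful trend out from beneath noise is a robust filter such as a rolling median, which suppresses transient spikes that a simple average would smear into the signal. The sketch below illustrates the principle only; real prognostic pipelines are far more sophisticated, and the window size and readings are arbitrary.

```python
from statistics import median

def rolling_median(signal, window):
    """Smooth a 1-D signal with a centered rolling median.
    Unlike a rolling mean, a single large outlier barely moves the output."""
    half = window // 2
    smoothed = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        smoothed.append(median(signal[lo:hi]))
    return smoothed

# A vibration-like reading with one transient spike at index 2:
raw = [1.0, 1.1, 9.0, 1.2, 1.3, 1.4, 1.5]
clean = rolling_median(raw, 3)  # the spike is suppressed, the trend survives
```

A mean filter over the same data would lift three consecutive points toward the spike, possibly triggering a false alert; the median simply ignores it, which is the kind of robustness that matters when the underlying trend is what signals an approaching failure.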
Even though bringing in outside expertise is the most resource-efficient way of deploying models for condition monitoring and Predictive Maintenance, users play an important role in getting the most from generic data models.
For a start, there’s always a learning curve when deploying a generic model. For instance, Senseye PdM initially takes 14 days to deliver results, building up a ‘fingerprint’ of the unique behavior of each asset under normal operating conditions.
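The learning-period idea can be illustrated with a toy two-phase monitor: accumulate readings under normal operation to form a per-asset baseline, then flag readings that deviate strongly from it. The z-score test and the 3-sigma threshold below are a generic rule of thumb, not Senseye's actual method, and the readings are invented.

```python
from statistics import mean, stdev

class AssetFingerprint:
    """Toy per-asset baseline: learn normal behavior, then flag outliers."""

    def __init__(self, threshold=3.0):
        self.baseline = []        # readings captured during the learning period
        self.threshold = threshold

    def learn(self, reading):
        """Accumulate normal-operation readings while the model is learning."""
        self.baseline.append(reading)

    def is_anomalous(self, reading):
        """Flag readings more than `threshold` standard deviations away from
        this asset's own baseline mean."""
        mu, sigma = mean(self.baseline), stdev(self.baseline)
        return abs(reading - mu) > self.threshold * sigma

fp = AssetFingerprint()
for r in [10.0, 10.2, 9.9, 10.1, 10.0, 9.8]:  # normal operating data
    fp.learn(r)
```

Because the baseline belongs to one asset, the same reading can be perfectly normal for one machine and anomalous for its neighbor, which is the essence of the fingerprint approach described above.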
The in-house expertise and experience of our customers’ teams – including condition monitoring specialists and mechanical engineers – combined with that of our technology experts can feed into this process, enabling Senseye to configure the system upfront to prioritize the data and events that users are most interested in. This speeds up the initial learning process for the algorithms. In the longer term, a system of regular feedback enables the algorithms to build up a picture of which events and trends matter to users and which are irrelevant. This is valuable when deploying generic models that gradually adapt to predict the behavior of each machine more and more accurately over time.
Get it right and the business benefits are extremely impressive. Senseye PdM typically reduces unplanned machine downtime by 50%, increases maintenance staff productivity by 55% and boosts the accuracy of forecasting downtime by 85%.
Typically, these benefits are hard to match with custom algorithms, which is why partnering with a specialist provider is the recommended route to achieving real-world results.