One prediction, many users

“Houston, we have a problem” must have carried a different meaning depending on whether you were an astronaut on board Apollo 13, a member of the astronauts’ families, in mission control or a rocket engineer on the Saturn project. While the example seems obvious, many people have only a vague idea of where to apply predictive maintenance in their business. When we ask whose jobs will be impacted, we very often don’t get beyond “the maintenance engineer”. And how is said maintenance engineer going to use the predictions?

Let’s have a look at some roles/business fields impacted most by predictive maintenance analytics. And let’s start with the above-mentioned maintenance engineer.

Maintenance engineer
Predictive maintenance analytics serves several goals, chief among them: moving unplanned maintenance to planned, better scheduling of maintenance, and improved prioritisation of maintenance activities. What the maintenance engineer really wants is an improved worksheet that tells them what to focus on during upcoming activities. Predictive analytics should therefore drive an ‘intelligent’ worksheet, which combines preventive maintenance with prioritised predictive maintenance activities. The maintenance engineer’s contact with predictive maintenance should involve little more than a revised worksheet.
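As a minimal sketch of what such an ‘intelligent’ worksheet could look like: merge a fixed preventive checklist with predictive alerts into one prioritised list. The task names, risk figures and threshold are all invented for illustration; this is not a real Predikto API.

```python
# Hypothetical sketch: combine preventive tasks with prioritised
# predictive alerts into a single worksheet for the maintenance engineer.

def build_worksheet(preventive_tasks, predictive_alerts, risk_threshold=0.5):
    """Return one ordered worksheet: high-risk predictive items first,
    then the routine preventive tasks."""
    urgent = [
        {"task": a["task"], "source": "predictive", "risk": a["risk"]}
        for a in predictive_alerts
        if a["risk"] >= risk_threshold
    ]
    urgent.sort(key=lambda t: t["risk"], reverse=True)
    routine = [{"task": t, "source": "preventive", "risk": None}
               for t in preventive_tasks]
    return urgent + routine

worksheet = build_worksheet(
    preventive_tasks=["lubricate bearings", "check oil level"],
    predictive_alerts=[
        {"task": "replace gearbox seal", "risk": 0.82},
        {"task": "inspect brake pads", "risk": 0.31},  # below threshold: dropped
    ],
)
```

The engineer still just reads a worksheet; the prioritisation logic stays hidden behind it.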

Maintenance Scheduler
The maintenance scheduler may be impacted a bit more; instead of spreading out maintenance activities based on counters (time, cycles, mileage,…), predictive maintenance schedules are more dynamic and combine the former (e.g. those due to legal requirements) with the predictive information. As a first step, the traditional schedules should be left untouched but activities augmented with predicted visits (improved worksheets). As a second step, the maintenance schedule should be optimised; in fact, predictive analytics isn’t even required for this step, but I’m puzzled that I never see truly optimised maintenance schedules…! A third step would involve spreading out maintenance visits; if need be, even negotiating with the legislator to allow this within certain limits. This third step will almost certainly require collaboration between OEMs, operators and MROs.
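The first step described above can be sketched in a few lines: keep the counter-based schedule untouched, but attach each predicted job to the latest planned visit that still precedes its deadline. Dates and task names are invented for the example.

```python
# Illustrative sketch: augment a fixed counter-based schedule with
# predicted work, without moving any of the planned visits.

from datetime import date

def augment_schedule(planned_visits, predictions):
    """planned_visits: list of (visit_date, [tasks]).
    predictions: list of (deadline, task). Each predicted task is attached
    to the latest planned visit that still precedes its deadline."""
    visits = sorted(planned_visits)
    augmented = {d: list(tasks) for d, tasks in visits}
    for deadline, task in predictions:
        candidates = [d for d, _ in visits if d <= deadline]
        if candidates:
            augmented[max(candidates)].append(task)
    return augmented

schedule = augment_schedule(
    planned_visits=[(date(2024, 3, 1), ["A-check"]),
                    (date(2024, 6, 1), ["B-check"])],
    predictions=[(date(2024, 4, 15), "replace wheel bearing")],
)
# The bearing job is pulled into the 1 March visit, ahead of its mid-April deadline.
```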

Reliability Engineer
The reliability engineer is crucial for two main benefits of predictive maintenance: understanding the past and improving the future. The reliability engineer is not just interested in the predictions as such, but really in why the predictions were made. The improved insight should allow the reliability engineer to find root causes, define behavioural patterns (you need to find them in order to avoid them), propose solutions, etc. Better insight into what causes certain failures, and how well they can be predicted, will also allow the reliability engineer to come up with new maintenance scheduling information.

Fleet Planner
Because predictive maintenance analytics focuses on maintenance, people tend to forget the main goal lies outside maintenance: optimal uptime at the lowest possible cost. In my book, uptime means more than just guaranteeing equipment can be operated; it should really mean it’s fit for the task. When a large industrial robot has one of its grippers broken but the one required for a specific task works fine, that machine is 100% available for that task, even if, from a technical point of view, it’s broken…! Giving fleet planners (machine park planners – let’s use ‘fleet’ as a generic description) deep insight into fleet health allows them to assign the right machine to the right task.

CFO and COO
While the CFO doesn’t generally care about the mechanics of maintenance, they typically do care about the cost of maintenance and about operational risk. Predictive analytics can give insight into both. Maintenance management or the COO may actually be interested in simulating the impact of budget constraints on fleet availability, maintenance effectiveness, etc.

These are just a few examples of the impact of predictive maintenance analytics on different corporate roles; one can easily come up with more. The main lesson of this thought exercise is that predictive analytics impacts the whole organisation: predictions (or information derived from them) should be presented appropriately, so that every role can correctly interpret the results and draw the best conclusions. Whoever has sat through an hour-long discussion between statisticians on the interpretation of a prediction knows what I’m talking about: keep it simple, contextualised and usable!

The steps to predictive maintenance

Predictive maintenance is almost all about data, software, etc. and is therefore, for many maintenance departments, very far from their natural habitat. This scares many off, but in reality it shouldn’t. Modern businesses can no longer function with hard walls between departments. What’s important is that every department knows how what it does influences other departments’ work. This also means that CIOs have to be more business experts than ICT experts. Here’s a quick overview of the main steps towards predictive maintenance.


Machines generate data – more and more of it. Some business leaders are surprised to find out they have a machine park that has been generating data for many years. But because they didn’t find a use for it, the data was just stored or (alarmingly often) simply dumped… Collecting data requires very little infrastructure (most of it is done in the cloud). Sometimes, equipment requires retrofitting with sensors and data communication capabilities; many IoT or M2M solutions exist today, both from the OEMs and from third-party vendors.


Once data is collected, interpreting it and presenting it in a meaningful way turns it into information. In many known cases, this step alone allowed companies to save such huge amounts that, as we’re often told, they can’t believe they didn’t take it sooner. Certain patterns, exceptional behaviours, etc. quickly become apparent through a good representation (our brains are wired to recognise patterns – which also sometimes leads us to making wrong decisions…).


While BI (Business Intelligence) tools could slice and dice through data and help with the representation, prediction is a whole other matter. Why has prediction come to the forefront recently? Well, the first step – collect – has received a big boost through Big Data, and the tools for the second step – represent – have become really powerful at analysing huge amounts of data across multiple dimensions. So people naturally started wondering not just about the past, but about whether they could infer future behaviour from the data they were collecting. A vast library of algorithms is available and we’ve got the computing power. The hard part is really coming up with the appropriate algorithm for every situation – and that situation keeps changing all the time (just read some of our other blogs to see how we cope with this).
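To make the predict step concrete, here is a deliberately tiny sketch: score failure risk from a few engineered features with a hand-set logistic model. Real systems fit these weights from historical data (and, as noted above, the right algorithm varies per situation); the weights and feature names below are invented, and this is not the algorithm any particular vendor uses.

```python
# Toy "predict" step: a logistic score over a few invented features.

import math

WEIGHTS = {"vibration_rms": 2.0, "temp_delta": 1.5, "hours_since_service": 0.01}
BIAS = -4.0

def failure_risk(features):
    """Map a feature dict to a probability-like score in (0, 1)."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

healthy = failure_risk({"vibration_rms": 0.5, "temp_delta": 0.2,
                        "hours_since_service": 40})
worn = failure_risk({"vibration_rms": 1.8, "temp_delta": 1.1,
                     "hours_since_service": 400})
# The worn machine scores far higher than the healthy one.
```

The hard part the text refers to is not this scoring function but choosing and re-fitting the model as conditions change.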


Predictions are good, but what should you do with them? There has to be a framework to help users take the right decisions; for their own requirements, but relative to corporate objectives. Just reacting to a prediction may have huge negative impacts elsewhere in the company. Solutions should therefore be designed with subject matter experts and business experts in order to come up with correct decision support tools (based on the predictions, but also weighing other impacts – operational and financial).
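One way to sketch such a decision rule, under invented cost figures: only recommend an intervention when the risk-weighted cost of doing nothing exceeds the cost of acting now. A real decision support tool would weigh many more operational impacts; this is the bare skeleton.

```python
# Hedged sketch of the "optimise" step: turn a risk score into a
# recommendation by comparing expected costs. All figures are hypothetical.

def recommend(risk, planned_cost, unplanned_cost, downtime_penalty):
    """Compare the cost of a planned fix against the expected cost of an
    unplanned failure (repair cost plus lost production)."""
    expected_failure_cost = risk * (unplanned_cost + downtime_penalty)
    return "intervene" if expected_failure_cost > planned_cost else "monitor"

# A 70% failure risk easily justifies a 5k planned job against a 50k failure:
decision = recommend(risk=0.7, planned_cost=5_000,
                     unplanned_cost=20_000, downtime_penalty=30_000)
```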


From the optimise step, actions are proposed, which then need to be executed. These maintenance actions set in motion a series of events (from calling the mechanic to ordering spare parts). Close integration between the optimisation and execution steps is necessary to avoid launching unnecessary activities (e.g. sending a mechanic when the spare part is not available) and to create an alternative sequence of events when needed.
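The integration point above can be sketched as a simple guard in the execute step: never dispatch a mechanic before the spare part is confirmed available, and fall back to an alternative sequence otherwise. Inventory contents and part numbers are invented.

```python
# Illustrative execute-step check: sequence the work order around
# spare-part availability instead of dispatching blindly.

def plan_work_order(part_id, inventory):
    """Return the ordered list of actions for a predicted repair."""
    if inventory.get(part_id, 0) > 0:
        return ["reserve_part", "dispatch_mechanic", "perform_repair"]
    # Alternative sequence: order the part, dispatch only once it arrives.
    return ["order_part", "await_delivery", "dispatch_mechanic", "perform_repair"]

actions = plan_work_order("seal-42", inventory={"seal-42": 3})
```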

As one can see, IT permeates the predictive maintenance environment. But it should remain very clear: maintenance expertise still sits at the core of the solution; IT is merely an enabler. In order to avoid it becoming too IT-oriented, solutions should be designed with the end user in mind. The reliability analyst, for instance, will have very different needs from the mechanic. Tuning these solutions to user needs requires strong involvement of all roles during the design and training phases of a predictive maintenance project. Training should also extend to upper management, who need to be informed on what to expect (both in general and in particular with regard to their equipment/data/use/…).

Looks complex? Well, it isn’t. Today, solutions exist for each of these steps. Most crucial for successfully implementing a predictive maintenance approach is bringing together the right partners – but that’s a discussion point for another blog.

Your data might not be your data.

Previous Predikto bloggers have emphasized the centrality of data in the successful adoption of predictive maintenance initiatives.

By leveraging the full range of equipment operation, environmental and maintenance data, operators can better align the “right” resources to the appropriate mission requirements and/or environmental conditions. They can also more easily transition to effective condition-based maintenance programs, which can reduce the financial impact of scheduled maintenance as well as minimize the “fire drills” associated with an unplanned performance degradation or interruption.

Similarly, advantages accrue to manufacturers who can capitalize on “nuggets of gold” in their data.  Better insight into operational performance can help the engineering team gain a more granular, objective understanding of the performance of their products in sustained operation across a broad range of conditions (vs. a controlled test environment) and allow them to further improve product performance and reliability.  Also, delivery of sensor or telematics-enabled products can enable suppliers to move to “power by the hour” business relationships: if the economics are structured properly, the vendor is incented to deliver reliable service and enjoy a long term, predictable revenue stream, and the customer can reduce or eliminate investment in spare parts inventories or the imperative to staff a large maintenance department.

One of the “7 Habits” Stephen Covey made famous was “first things first”. The first step a prospective predictive maintenance practitioner can take is to understand the availability of their data and secure access to it. Failure to take this one step has derailed or delayed more than one project. For example, a large equipment manufacturer had to suspend a major project upon learning that the telematics data they expected to use wasn’t theirs, but belonged to the telematics equipment supplier. In another situation, a major international transportation provider, operating state-of-the-art equipment laden with sensors collecting volumes of operational data, was stunned to learn that their ability to extract and analyze the data was impeded by the equipment supplier, whose onboard system only made a handful of cryptic fault codes available when a maintenance event occurred (and prevented access to the message bus).

The effective application of Predictive Analytics in support of Condition-Based Maintenance holds great potential.  Make sure you own your data.   

A scarce resource

McKinsey, in a study, foresees that demand for data science jobs in the United States will exceed 490,000 by 2018. However, only 200,000 data scientists are projected to be available by then… Globally, demand is projected to exceed supply by more than 50 percent over the same period.

At the same time, 99% of CEOs indicate that big data analytics is important to their strategies (KPMG study). Beyond big data analytics, the rise of predictive analytics (PdA) creates the need for very advanced data scientists able to model complex concepts. Not surprising, then, that Harvard Business Review describes data science as “the sexiest job of the 21st century”.

The data scientist’s job consists of amassing vast amounts of data, sorting through it and, eventually, making sense of it. To do this, they don’t just use old BI (Business Intelligence) tools and techniques but rather rely on the latest statistical technologies, such as neural nets, deep learning, Bayesian statistics, etc. Data science is more than just science, and its practitioners are sometimes referred to as “part analyst, part artist.” To be proficient at data science requires a combination of talent, education, creativity and perseverance. It also requires skills in various domains such as math, analytics, statistics and computer science – and, to make sense of all that data, some level of domain expertise. So even though the numbers mentioned above already point to quite a shortage, the problem may be even worse in areas that require specific domain expertise, or at least a thorough understanding of the problems one tries to solve.

Facing this conundrum, one is offered a variety of solutions: ignore it and hope it goes away, throw lots of money at acquiring the (rare) resource, or come up with a smarter solution. The latter is exactly what Predikto has done by automating much of the data scientist’s job. Pure automation will not get us very far, but smart automation with advanced machine learning does! This way, Predikto can tackle predictive analytics challenges with a (much) smaller team and focus on the really important matters. Indeed, while it’s good – very good, even – to deliver any kind of predictions (many projects relying on data scientists fail, for a variety of reasons), it’s even more important to be able to correctly interpret these forecasts. Only then can they be turned into actions; without that, they’re no more than a science project.

Faced with the scarcity of arguably one of the most important resources in our industry (computing power is another – and it’s not scarce), Predikto chose to find ways to lower its reliance on that scarce resource. Achieving that successfully is what makes us unique!

What about unplanned?

Everybody’s looking at process inefficiencies to improve maintenance but there’s lower hanging – and bigger – fruit to focus on first: unplanned events!

Maintenance has pretty simple goals: guarantee and increase equipment uptime, and do so at the lowest possible cost. Let’s take a quick look at how unplanned events influence these three goals.

Guarantee uptime

When production went through the evolutions of JIT (Just In Time), Lean and other optimisation schemes, schedules got ever tighter and deviations from the plan ever more problematic. WIP (Work In Progress) has to be limited as much as possible, for understandable reasons. However, this has the side-effect of also limiting buffers, which means that when any cog in the mechanism locks up, the whole thing stops. Therefore, maintenance is under increasing pressure to guarantee uptime, at least during planned production time. Operational risk is something investors increasingly look at when evaluating big-ticket investments or during M&A due diligence, and for good reason; it’s like investing in a top athlete – don’t just pick the fastest runner, pick the one who can run fast consistently!

Failures are bound to happen, so the name of the game is to foresee these events in order to remediate them beforehand: planned, and preferably outside of production time.

Increase Uptime

The more you are able to increase (guaranteed) uptime, the more output you can generate from your investment. Unplanned events are true output killers; not just because they stop the failing machine, but also because they may cause a waterfall of other equipment – depending on the failing machine’s output – to come to a halt. Unplanned events should therefore a) be avoided and b) dealt with in the fastest possible manner. The latter means having technicians and parts at hand, which can be a very expensive proposition (like insurance policies: always too expensive until you need them). To avoid unplanned failures, we have therefore introduced preventive maintenance (for cheap or cyclical events) and condition-based or predictive maintenance. Capturing machine health and deciding when to pre-emptively intervene in order to avoid unplanned failures is a pretty young science, but one that shows the highest potential for operational and financial gains in the field of maintenance.

Lower maintenance cost

By now most people know that unplanned maintenance costs a multiple of planned maintenance; a factor of three to nine (depending on the industry) is generally accepted as a ballpark figure. It therefore keeps surprising me that most investments have traditionally been made in optimising planned maintenance. Agreed, it is easier to grasp how to increase efficiencies for planned maintenance, but we have by now reached a level where returns on extra investments in this field are diminishing.

Enter unplanned maintenance: it can either be avoided (by increasing equipment reliability) or foreseen (in which case it can be prevented). Increasing equipment reliability has not always been the goal of OEMs. In the traditional business model, they made a good buck from selling spare parts, and they therefore had to carefully balance staying ahead of the competition against pricing themselves out of the market (reliability comes at a cost). Mind you, this was more an economic balancing act than a deliberate “let’s make equipment fail” decision. Now, however, with uptime-based contracts, OEMs are incentivised to improve equipment reliability.

Unfortunately, unplanned failures still occur, and due to tighter planning and higher equipment utilisation requirements, their costs have increased! Therefore, in order to lower maintenance costs, we have to lower the number of unplanned events. The only practical way is to become better at foreseeing these events, so that interventions can be planned before they occur. The simple plan: gather data, turn it into information, make predictions and take action to avoid these events. And voilà – three to nine times more money saved than if we focused on planned events!
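The arithmetic behind that factor is worth making explicit. In this back-of-the-envelope sketch the event count and the baseline per-event cost are invented; only the 3-9x multiplier range comes from the text.

```python
# Illustrative savings from converting unplanned events into planned ones.

PLANNED_EVENT_COST = 1_000  # hypothetical baseline cost of a planned job

def annual_savings(avoided_unplanned_events, cost_multiplier):
    """Each unplanned event avoided becomes a planned one, saving the
    difference between the two cost levels."""
    unplanned_cost = PLANNED_EVENT_COST * cost_multiplier
    return avoided_unplanned_events * (unplanned_cost - PLANNED_EVENT_COST)

# Converting 20 unplanned events a year, at both ends of the cited range:
low_end = annual_savings(20, 3)   # 20 * (3,000 - 1,000) = 40,000
high_end = annual_savings(20, 9)  # 20 * (9,000 - 1,000) = 160,000
```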

Life can be simple.

What’s happening to my train (and by extension, any equipment)?

At regular intervals, company managers are asked to provide their forecast for the next period(s). While some dread this exercise – and find it a tough ask to put such a forecast together – others can almost pull the numbers up on request. Why? Because they have good visibility on current and past performance, on current and past conditions, and are well informed on forecasted evolutions in the market. Why is it so difficult for the others? In short: lack of visibility. So many companies put KPIs in place but then forget to make those KPIs easy to monitor.
For equipment, remote monitoring can be put in place, and it is even more important for moving assets than for static ones (RCM: Remote Condition Monitoring), as reaction times to an unplanned event are de facto longer due to the remoteness of the equipment. I was therefore, to say the least, surprised to find out that most rail companies still have very limited visibility on their assets’ whereabouts, let alone the assets’ condition! With increasing pressure on punctuality and efficiency, things have to change rapidly. Some process improvements have been put in place, some signalling has been upgraded and, finally, some cross-European initiatives have been launched (such as ETCS – European Train Control System – and ERTMS – European Rail Traffic Management System). In some cases, the lack of punctuality has been ‘solved’ by… adapting the timetables to the observed performance of the trains (really!)…
I’m often asked why visibility is so important. One finding illustrates it best:
Only a small part of total downtime is actual repair time! Obvious as this seems, time and time again when executives check the numbers, they’re flabbergasted by this finding. It turns out they’ve all been investing tons of money in improving the repair process, while very often forgetting to get the diagnosis and the parts/mechanics logistics in order. A rail operator once told me that in order to get better remote diagnostics, in case of a failure they asked the train driver to ‘take a picture of the control screen with his smartphone and send it back to central’. It may sound funny, but I actually think this was a great idea! And, in effect, it cut a big chunk off their diagnosis time.
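A quick worked breakdown makes the point. The minute values below are invented for illustration; the claim they illustrate is only that the wrench-on-machine slice is small.

```python
# Hypothetical decomposition of one unplanned downtime event (minutes).

downtime = {
    "detection_and_diagnosis": 90,
    "parts_and_mechanic_logistics": 150,
    "actual_repair": 60,
}

total = sum(downtime.values())
repair_share = downtime["actual_repair"] / total
# Here the actual repair is only 20% of total downtime; the other 80% is
# exactly where better diagnosis and logistics pay off.
```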
How do we take things one step further? Once we have (digital) visibility, we can use this data to make predictions on assets’ condition. With this information, it is possible to avoid (some, not all) remote failures, which leads to less downtime, higher punctuality and service levels, lower maintenance costs, etc. How so? A number of conditions need to be fulfilled:
– the predictions need to be actionable (i.e. they should tell you what to do, not just give you an abstract statistical forecast)
– the predictions should be based on accurate and up-to-date data
– the predictions should be accurate enough to warrant an intervention
– the predictions should provide enough lead time to get the repair done (planned, mechanic and part in place, etc.)
Given these conditions, RCM can lead to CBM (Condition-Based Maintenance). While RCM is a mature technology, it is not yet generalised; very often, trains are quite capable of capturing their condition through sensors feeding an on-board diagnostics system, but all too often the capability to offload this data from the train is lacking. CBM as an approach to maintenance is equally mature but even less widespread, mainly due to the lack of data and processes. However, it is well known that CBM increases equipment performance, which, in rail, results in increased punctuality, fewer breakdowns and, ultimately, higher capacity.
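The four conditions listed above can be sketched as a filter that turns raw predictions into actionable alerts: only fresh, confident predictions with enough lead time get through. The thresholds and asset names are invented for illustration.

```python
# Hypothetical filter implementing the conditions for actionable predictions:
# up-to-date data, enough accuracy to warrant an intervention, and enough
# lead time to plan the repair.

def actionable_alerts(predictions, min_confidence=0.8,
                      max_data_age_h=24, min_lead_time_h=48):
    alerts = []
    for p in predictions:
        if (p["data_age_h"] <= max_data_age_h            # accurate, up-to-date data
                and p["confidence"] >= min_confidence     # accurate enough to act on
                and p["lead_time_h"] >= min_lead_time_h): # time to plan the repair
            alerts.append({"asset": p["asset"], "action": p["action"]})
    return alerts

alerts = actionable_alerts([
    {"asset": "unit-7", "action": "replace compressor valve",
     "confidence": 0.92, "data_age_h": 3, "lead_time_h": 120},
    {"asset": "unit-9", "action": "inspect axle",
     "confidence": 0.55, "data_age_h": 3, "lead_time_h": 120},  # too uncertain
])
```

Note that each surviving alert carries an action, not just a statistical forecast, which is the first condition in the list.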

Context is King to Operationalize Predictive Analytics


Companies have invested significantly in Big Data solutions and capabilities. They usually start by adding more sensors to their equipment, or perhaps by bringing all of their historical data into a Big Data repository like Hadoop. With that, they have taken the first step towards a “Big Data”-driven solution. The challenge is that “tackling” the data does not, by itself, bring any tangible value. This is why Predikto focuses so much of our R&D and technology on the “Action” related to the analytics.

Once data has been tackled, the next step is to perform some kind of data crunching or analytics to derive insight and, hopefully, perform an “Action” that brings real value to the business. Predikto is laser-focused on Predictive Analytics for Industrial Fleet Maintenance and Operations – “Moving Unplanned to Planned”. We spend a lot of time figuring out what “Action” our customer will take from the configuration of the Predikto Enterprise platform software. Up to now, I have not mentioned Context. So, why is Context King?

The reason is that once our platform and the power of Predictive Analytics are able to provide an actionable warning that a piece of equipment is at high risk of failure, Context becomes the next huge hurdle. The first reactions from a user of our software are “Why should I care about this Predikto warning?”, “Why is this relevant?”, “Why is this warning critical?”, “Why should I trust a warning from a machine learning algorithm?”, etc… You get the point.

This has driven Predikto to invest heavily in technologies and capabilities that give the maintenance expert or equipment engineer “Context” as to why they should care and why a warning is important. Users can easily drill through all of their maintenance records, asset usage history, diagnostic codes, sensor data, and any other data loaded into our platform. “Context” is King in order to empower the subject matter expert to confirm or validate a warning prior to automatically performing the “Action” our software recommends.
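The idea of attaching Context to a warning can be sketched as a payload that bundles the evidence an expert needs. The field names, data stores and figures below are invented for illustration, not the actual Predikto platform schema.

```python
# Hypothetical "Context" payload: everything a subject matter expert needs
# to judge a warning, pulled from the data already loaded in the platform.

def build_context(asset_id, warning, maintenance_log, fault_codes, usage):
    """Bundle a warning with the flagged asset's recent history."""
    return {
        "warning": warning,
        "recent_maintenance": [w for w in maintenance_log
                               if w["asset"] == asset_id][-3:],
        "recent_fault_codes": [f for f in fault_codes
                               if f["asset"] == asset_id][-5:],
        "usage_hours": usage.get(asset_id, 0),
    }

ctx = build_context(
    "loco-12",
    warning={"component": "traction motor", "risk": 0.87},
    maintenance_log=[{"asset": "loco-12", "job": "brush replacement"}],
    fault_codes=[{"asset": "loco-12", "code": "TM-014"}],
    usage={"loco-12": 18_400},
)
```

The expert sees the warning and its evidence in one view, instead of an unexplained score from a machine learning algorithm.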

Next time you are rolling out a Big Data solution, focus on the key activities a user will take after you have tackled the data. What automated action recommendation can I give experts, and what Context can I provide to help them make a more informed and confident decision?