We typically expect statements like “there’s a 20% chance of part A failing over the coming two weeks” from a predictive analytics solution. More important than the prediction itself, though, is the interpretation of that statement and what it means for operations, maintenance, etc.
Predictions are at the core of predictive maintenance applications. Understanding predictions, and by extension applying them, is not a given. The four-axis framework laid out in this blog should allow any executive not only to fully grasp the impact of prediction-driven decisions but also to make sure the whole organisation grasps predictive concepts. This framework not only covers the risk-based decision process but does so in plain language, no math degree required (which should be a relief to most of us).
In plain language, the accuracy of a prediction measures how often the prediction turned out to be correct after the fact. So, if a prediction says a part will fail and accuracy is 20%, only one in five such predictions turns out to be correct: you’re over-predicting by a factor of five. For the maintenance teams this means they’ll have to inspect/replace five parts in order to prevent one failure. While this doesn’t sound impressive, there are cases where you still want to go ahead and act. Which is why we introduce a second axis – Criticality.
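The arithmetic behind “inspect five parts to prevent one failure” can be sketched in a few lines (a minimal illustration; the function name and numbers are mine, not part of any product):

```python
def inspections_per_prevented_failure(accuracy: float) -> float:
    """At a given prediction accuracy, how many inspections does the
    maintenance team perform, on average, per failure actually prevented?"""
    if not 0 < accuracy <= 1:
        raise ValueError("accuracy must be in (0, 1]")
    # If only 1 in N predictions is correct, N inspections prevent 1 failure.
    return 1 / accuracy

# A 20% accurate prediction means roughly 5 inspections per prevented failure.
print(inspections_per_prevented_failure(0.20))  # 5.0
```

The same one-liner makes it easy to show stakeholders how quickly the inspection burden grows as accuracy drops.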
If we take the prediction from above and look at two different situations: in case A, that 20%-accuracy prediction concerns the in-flight entertainment system. While a hassle (and for some airlines a no-go part), we can safely assume that most airlines wouldn’t act on a prediction for this non-critical part at such low accuracy. However, if in case B the same prediction is made about the landing gear, it warrants rather more attention. An example from everyday life: “there’s a 20% chance you caught the flu” versus “there’s a 20% chance you caught SARS” – no doubt the two statements would elicit different reactions, both from the patient and from the doctor! Therefore, please take note that not only operational criticality is important, but also safety (in some sectors more so than in others).
Which is better? You made one failure prediction and it was correct, but there were 10 failures in total (high accuracy but low coverage). Or you made 30 failure predictions and caught all 10 that actually occurred, but at the cost of 20 unnecessary (a posteriori) interventions (high coverage but lower accuracy). Answer: it depends. Criticality, for instance, plays a big role in determining which is better. More than that, it will depend on where you are in the project. During phase-in, many clients choose to focus on high accuracy and not so much on coverage. The reason is straightforward: any failure caught pre-emptively is a win, and sticking with highly accurate predictions builds trust throughout the organisation in introducing risk-based concepts for maintenance. In order to make a really educated decision, a fourth axis needs to be introduced. I call it Effort.
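In data-science terms these two axes are usually called precision and recall; here is a minimal sketch (using only the numbers from the example above) of how the two strategies score:

```python
def precision_and_coverage(predictions_made: int,
                           correct_predictions: int,
                           total_failures: int) -> tuple[float, float]:
    """Accuracy (precision): share of predictions that were correct.
    Coverage (recall): share of actual failures that were predicted."""
    precision = correct_predictions / predictions_made
    coverage = correct_predictions / total_failures
    return precision, coverage

# Strategy A: 1 prediction, 1 correct, 10 failures in total.
print(precision_and_coverage(1, 1, 10))    # (1.0, 0.1) – high accuracy, low coverage
# Strategy B: 30 predictions, 10 correct, 10 failures in total.
print(precision_and_coverage(30, 10, 10))  # high coverage, lower accuracy
```

Neither score is “the” answer; as argued above, criticality and project phase decide which trade-off to favour.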
It’s actually very tempting to call this fourth axis Cost, but that would be an over-simplification; we’ll consider Cost to be merely the financial component of Effort. Please note that, should you choose to actually plot Effort on an axis, net effort should be taken into account, i.e. (in its simplest form) the cost of predictive maintenance versus the cost of a failure*. If the net cost is negative, I shouldn’t act. Really? What about criticality? If a part is predicted to fail and that failure could lead to bodily harm, surely I should act. Well, if that is the case, it should be represented in the “cost of failure” – which takes us back to why I prefer referring to Effort instead of Cost. Equally, very low accuracy (e.g. 5%) can lead to a lot of dissatisfaction within the maintenance teams, because most of the inspections they do end in NFF (No Fault Found). If such an inspection is as simple as reading out a log file, the Effort is very different than if it requires dismantling an engine. Net Effort is therefore both crucial AND very hard to get right.
Understanding and applying the four axes mentioned above is crucial for the operational deployment of predictive analytics for maintenance. Executives should educate and train themselves to become comfortable with these concepts, and make sure the whole organisation understands them. You’re using a CMMS (Computerised Maintenance Management System)? Great! Keep using it. Predictive analytics merely provides an extra, smart layer on top of your operational systems in order to drive actions. In the field, processes don’t really change (frequencies typically do), but at the decision-taking level it’s like putting on coloured glasses: we need to start looking at aftermarket processes from a risk-based point of view. And while that scares many people – “because they don’t understand statistics” – the four-axis approach described above should demystify things quite a bit.
* Just do a quick internal check-up by asking what a failure costs in your company; most of the time these figures are hard to come by, and when someone does have them, they’re typically greatly underestimated. I have yet to come across a case where predictive maintenance has no positive ROI… other than a business not being capable of deploying PdM (i.e. lack of data, wrong processes,…)