Forecasting is only useful if the result is applicable… and to the point! I may have perfectly forecast the weather, but if I’m going to spend the whole day inside, the forecast has no impact on me. The same holds for industrial applications. Current interest in predictive analytics is producing plenty of pilot projects, most of which focus on how well the forecast performs. To the non-initiated this means: out of every 100 failures, did you catch 99? That’s where the trouble starts:

  • let’s say I don’t catch 99 but just 30: does that mean I failed? Obviously, this depends on the cost of the avoided failures and on the cost of avoiding them. Catching 30 could still prove very worthwhile.
  • now let’s say I do catch 99, but to do so I actually forecast 129. Do the 30 extra inspections/repairs void the value of catching the 99? Again, it depends. If the marginal cost of these 30 extra interventions exceeds the cost of the 99 failures, one may be better off not acting at all. However, we need to be very careful when evaluating costs and benefits, as we have rarely seen the correct numbers being used (i.e. the full cost to the organisation, customer aggravation, …). A back-of-the-envelope sketch of this trade-off follows below.
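
Using the numbers above, here is a minimal back-of-the-envelope sketch in Python of that trade-off. The per-failure and per-intervention costs are invented purely for illustration:

```python
def net_value(failures_avoided, interventions, cost_per_failure, cost_per_intervention):
    """Net benefit of a forecasting programme: value of the failures it
    prevents minus the cost of every intervention it triggers,
    including the unnecessary ones."""
    return failures_avoided * cost_per_failure - interventions * cost_per_intervention

# Hypothetical costs: an unplanned failure costs 10,000; an inspection/repair 500.
COST_FAILURE, COST_INTERVENTION = 10_000, 500

# Scenario 1: catch only 30 of the 100 failures, triggering 30 interventions.
print(net_value(30, 30, COST_FAILURE, COST_INTERVENTION))    # 285000

# Scenario 2: catch 99 of the 100, but trigger 129 interventions.
print(net_value(99, 129, COST_FAILURE, COST_INTERVENTION))   # 925500
```

With these figures the 99-catch scenario wins comfortably; flip the cost ratio (say, interventions at 12,000 each) and both scenarios go negative, so not acting at all wins – which is exactly why using the correct numbers matters so much.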

[Cartoon: a cycling team car stocked with spare bikes but carrying no spare for the car itself (Forecasting_Cartoon2)]

In the case pictured above, the direct cost of not having a spare for the car is not huge; however, if a rider loses the race because the car broke down and his spare bike couldn’t reach him in time, the derivative cost is very high!

There are two approaches to predictive analytics: big bang or step by step. The former is typically launched when a company is in such trouble that it simply can’t afford to wait, and understands that this may be the weapon to beat the competition. The step-by-step approach typically looks (or at least should look) at two criteria to define a first project: impact and feasibility – let’s find the 10 parts that cause me the most pain AND for which I have good enough data to implement a condition-based or predictive approach to maintenance. A sketch of such a screen is shown below.
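
As a purely illustrative sketch of that "impact AND feasibility" screen (the part names, pain figures, and the 0.7 data-quality threshold are all invented), the shortlisting could look like this:

```python
# Hypothetical inventory of candidate parts: annual "pain" (cost of
# unplanned failures) and a 0-1 score for how usable the available
# sensor/maintenance data is.
parts = [
    {"name": "gearbox",   "annual_pain": 120_000, "data_quality": 0.9},
    {"name": "pump seal", "annual_pain": 200_000, "data_quality": 0.3},
    {"name": "bearing",   "annual_pain": 90_000,  "data_quality": 0.8},
    {"name": "valve",     "annual_pain": 40_000,  "data_quality": 0.95},
]

# Keep only parts whose data can realistically support a model
# (feasibility), then rank by pain (impact) and take the top 10.
MIN_DATA_QUALITY = 0.7
shortlist = sorted(
    (p for p in parts if p["data_quality"] >= MIN_DATA_QUALITY),
    key=lambda p: p["annual_pain"],
    reverse=True,
)[:10]

for p in shortlist:
    print(p["name"], p["annual_pain"])
```

Note that the highest-pain part (the pump seal here) drops out because its data can’t yet support a model: impact alone is not enough to pick a first project.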

That way, not only would the team have had a surplus of bike spares on board, it would at least have had one for the car!