Introducing the New Predikto Maintain!

At Predikto, we take your IoT data and turn it into predictions. We convert unplanned failures into preventative maintenance events. We are very good at data transformation and analysis. We tear it up at modeling and machine learning. We have a cadre of super-smart data scientists, engineers, and subject matter experts.

But we are too smart for our own good. We created an app that is 1) beige (really?) and 2) doesn’t go the final mile to turn predictions into actions. And you can’t use your data to make a decision if you can’t understand the predictive output. Even if that prediction is mathematically sound.

For all the people out there who have ever felt this way about math books:

Predikto heard you. Then they hired me. And bam! Now we have a new app that normal humans can use.

Redoing your app from the ground up is scary. Why are you making such a radical shift? What was wrong with the old app? I am sure we’ve all read those business cases about how Domino’s took a big risk by admitting their pizza stunk and then totally redoing it. But, it worked out for them. Now I can order pizza from an app while driving. Which is totally a good idea.

This time, it wasn’t about old vs. new or good vs. bad. Our old app was quite nice. I just have a personal vendetta against beige (as do many bitter parents, and what is this perfume?), and against people who don’t think normal humans can understand science. As a writing professor once told me, “If your reader doesn’t understand your message, they aren’t stupid; you wrote it wrong.”

Without further ado – here is the new Predikto Maintain. It’s still got our algorithm-mastering, machine-learning, big-brain MAX™ making amazing predictions. But now, it’s more understandable to mere mortals.

I know, it’s blue. Because we can’t be going too crazy here. Blue it is.

The heart of Predikto is our predictive models. But our end users think about them in three different ways. So now, we offer up three different methods to examine what our Predikto MAX models are doing, and more importantly, what they are telling you to do.

  • Let’s start with the system engineers who actually have to go do things during the day (like fix locomotives or quay cranes or aircraft engines). We let you know when our MAX algorithms predict a critical component will fail. And now, on the same screen, you can see what has happened to this asset, and this point of failure, in the past – and how long you have to fix it. You can drill down into the events from that asset, reading more about work orders and sensor data relevant to the situation. You can quickly select what to do about this prediction (go fix this, close this) and move on with your day.
  • Moving on to the data scientists, who need to monitor their company’s data, and their overall system health. We’ve developed a Data Science screen where your data folks can geek out to their heart’s content. We’ve got a method to monitor the success of the ETL, a way to check in on model performance, methods to perform data integrity checks, QA checks for data feeds, and much, much more…
  • Finally, for the decision-makers, we show you Predikto’s Value Add. Now you can see exactly how many unplanned maintenance events have been prevented, and how many man-hours (and more importantly, how much money) our system has saved. And you’ll notice how quickly this all adds up to customer success.

All in all, a great, blue package. And this is just the tip of the iceberg. Soon, we’ll have even more information about our predictions, our models, and your system health. Maps! Drill downs! Sliders! The display options are endless. I promise this: no 1990s MySpace pages, and certainly no beige.

I predict the future is bright. And actionable.

Predikto is featured in ARC Advanced Analytics and Machine Learning Guide

Predikto has been featured in the latest “Advanced Analytics and Machine Learning Planning Guide” by ARC. Predikto is a powerful and innovative technology that helps asset-intensive industrial customers deploy advanced analytics and machine learning algorithms to predict failures in their equipment.

Founded in 1986, ARC Advisory Group is the leading technology research and advisory firm for industry, infrastructure, and cities. ARC stands apart due to its in-depth coverage of both information technologies (IT) and operational technologies (OT) and associated business trends.

The Planning Guide is designed to help organizations navigate the buying process for advanced analytics.

With an emphasis on predictive solutions, cognitive intelligence, and machine learning, this planning guide will provide useful ways of thinking about analytics. It will clarify the different analytics modes, from the enterprise to the edge. The report will:

  • Explain key concepts needed to navigate the buying process
  • Help you understand how to select a solution that fits your business
  • Detail recent market entrants
  • Provide insight into how traditional technology providers with analytics are positioned in the market
  • Build business case consensus

Predikto Named by Gartner as a Cool Vendor in IoT Analytics


Predikto has been included in the “Cool Vendors in IoT Analytics, 2017” report by Gartner, Inc.

It is an honor to be included in the Gartner Cool Vendor report. This recognition validates our ability to bring machine learning algorithm development software, at massive scale, to help improve equipment uptime. By automating close to 80% of the process of creating machine learning algorithms, we empower our customers to complement existing condition-based maintenance rule systems with the power of machine learning at massive scale. Our approach includes Predikto Maintain, software that bridges the gap between machine learning algorithm output and the information Maintenance and Reliability Engineers need to do their jobs more effectively and reduce unplanned breakdowns.

The Gartner report “Cool Vendors in IoT Analytics,” by analysts Svetlana Sicular, Jim Hare, Saniye Burcu Alaybeyi, Shubhangi Vashisth, and Simon F Jacobson was published April 27, 2017.

Gartner Disclaimer
Gartner does not endorse any vendor, product or service depicted in our research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

For additional information or a product demonstration
Please contact us at

Predikto closes $4 Million investment – adds Jim Gagnard to BOD

Last month we announced a $4M VC-led round to help us expand. This latest round was led by Fidelis Capital, with participation from our prior investors, TechOperators. We also added Jim Gagnard, ex-CEO of Smart Signal (acquired by GE), to our BOD as our latest independent board member. Investors in this round also included a strategic investor with deep global expertise in Aerospace and Defense.

Looking back at our 4-year history, I would like to take this opportunity to reflect on our past, paint a picture of where we are today, and explain the road ahead for Predikto.

We started with the idea that automated insights from large amounts of industrial equipment data could disrupt maintenance, operations, logistics, PLM, and aftermarket capabilities for large industrial players. Organizations spend millions adding sensors, improving business processes, and deploying software to improve how their businesses are run, but in the end, humans are still heavily relied upon to extract insights from data. With the explosion of new sensors and IoT, more data means more challenges in gaining actionable insights from that data. Our goal is to “automate” machine learning and predictive analytics algorithm development as much as possible and provide targeted solutions to the problems our customers face. If we could automatically create lemonade from lemons, we were onto something big. We raised our first $3.6M of funding in December of 2014. This enabled us to expand our core team, which primarily included Robert, Will, and me. We were able to bring in the right talent: people with the scars that come from building large-scale, complex, cloud-based software applications (Roy joined to lead our Engineering team). We also invested in Sales & Marketing at that time. We were going to market with a wide net, chasing anything that moved. After gaining a deeper understanding of our market, our customers, and how they “do business”, we felt it was critical to focus on one vertical and then expand from there.

About 18 months ago we decided to focus on Rail. We had some early customers and a lot of new opportunities in Rail globally. We also had a few channel partners who would help us expand in the rail vertical. We hired 4 team members who came from Rail, and this brought credibility, expertise, and a much deeper understanding of our customers and prospects that we had lacked from the start (Greg joined to lead our Services / Solutions team). Our success enabled us to expand our use cases, deepen our experience, and gain more scars. It also made it possible to expand beyond Rail and open up the dialog with prospects in Aviation, Shipping, and now Wind Turbines. We invested heavily in productizing and improving the Predikto Enterprise Platform for fast data ingestion and ETL. We also continued to invest in automating and expanding the capabilities of Predikto MAX. This has made it possible to do more with less (people and time). In the last 6 months, we have been focused on the latest release of Predikto Maintain, which we believe is a huge step forward in helping asset-intensive organizations operationalize, in the real world, how a maintenance engineer translates the predictive output of very advanced and sophisticated machine learning algorithms into maintenance notifications or actionable warnings. The context and supporting information provided by the Predikto Maintain application are critical in helping a Maintenance Engineer feel comfortable and answer questions like:

 – Why should I believe this Predikto MAX prediction that my motor is going to fail?
 – How much time do I have until the asset or component is most likely to fail?
 – What are the risk and business impacts of my action (or lack of action)?

Today, we are expanding in multiple accounts globally across Freight Rail, Commuter Rail, Shipping (terminals), and Aviation.  Our platform and messaging are working. The space is still trying to sift through all the cluttered messaging from big and small competitors.  Our approach is to let our results speak for themselves.

So now what? This latest round of funding is meant to expand the team and ensure that current customer expansions and deployments go well. We have a backlog of work and deployments for the year that is pushing us to continue investing in our software, so that partners and customers can expand their deployments of Predikto software on their own. We are also expanding into Azure later this year due to demand from customers (we are in AWS EU and North America today). Our team is solid and we continue to add key resources on a monthly basis. I am proud of what we are doing and the direction we are going. We are landing huge new customers who are leaders in their space. They have invested in Data Science and advanced analytics, and they feel they need our help to get to the next phase of improving asset uptime with our analytics capabilities.


Predictive maintenance is not for me…


… because that’s not how it’s done in our market
Well, what shall I say? Prepare for the tsunami! While it’s true that switching (fully or partially) to predictive maintenance requires a transformation of how business is done, one thing is for sure: in every (!) market, there’s bound to be a player who moves first. And predictive maintenance has such disruptive potential that the first-mover advantage may be much larger than many executives suspect. The truth in the statement lies in the fact that many stakeholders, not least the mechanics, need to be taught how to deal with the prescriptions resulting from predictive analytics. And this – like most change processes – will require determination, drive, and a lot of training to make the transition successful. I believe it was Steve Jobs who once said something along the lines of ‘People don’t know they need something until I show it to them’. The same goes for predictive maintenance; it’s not a question of whether the market (any market!) is moving that way, it’s about when, and who will be leading the pack.
… because we don’t have data
Aaah, the ‘garbage in, garbage out’ argument. Obviously this is a true statement. But ‘we don’t have data’ typically points to different issues than the statement suggests; most often it means either ‘there is data, but we don’t know where’ (it’s often spread across many different systems: ERP, CRM, MRO, MMO, etc.), or ‘we have no clue what you need’. Predictive analytics is associated with Big Data and, while more data is often better, it doesn’t have to be the case. I’ve seen many, surprisingly good, predictions from very limited data sets. So what’s key here? Work with partners who can easily ingest data from different sources, in different formats, etc. Also work with partners who can point you towards the data you need; you’d be surprised at how much of it you already have.
… because we don’t have sensors
A refinement of the previous objection, and this one slightly more to the point. Many predictive maintenance applications – but not all – require equipment status data. However, as we’ve seen from the first objection, a full roll-out of a predictive maintenance approach may have to overcome organisational hurdles and take time. Therefore, start with the low-hanging fruit and gradually get buy-in from the organisation through early successes. What’s more, upon inspection, we often find plenty of sensors, either unbeknownst to the client or in situations where the client doesn’t have access to the sensor data (in many cases, they do have access to the data but can’t interpret it – a case where automated data analytics can help!).
… because we don’t have a business case
This is worrying for a number of reasons; not least because companies should at least have investigated the potential impact of new technologies, business concepts, etc. on their business. The lack of a business case may come from a lack of understanding of predictive maintenance and, unlike the honest admission in a later objection, a denial of that lack of understanding. As we know by now, one of the main (but not only) benefits of predictive maintenance is moving unplanned maintenance to planned maintenance. According to cross-industry consensus, unplanned maintenance is 3x-9x more expensive than planned maintenance. The impact of moving one to the other is obvious! However, many companies fail to track what’s planned and what’s unplanned and therefore have no idea of the potential savings that predictive analytics could generate. Lacking this KPI, it also seems impossible to assess other impacts such as customer satisfaction, employee satisfaction, reliability improvements, etc.
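A rough business case only needs that one KPI plus the cost ratio. As a minimal sketch (all figures below are hypothetical placeholders, not customer data), the arithmetic looks like this:

```python
# Illustrative only: estimate savings from shifting unplanned maintenance
# events to planned ones, using the 3x-9x cost ratio cited above.
# Event counts and per-event costs here are made-up placeholders.

def estimate_savings(events_shifted, planned_cost, unplanned_multiplier):
    """Savings = events moved * (unplanned cost - planned cost)."""
    unplanned_cost = planned_cost * unplanned_multiplier
    return events_shifted * (unplanned_cost - planned_cost)

# Say 40 unplanned events a year are converted, at $5,000 per planned event:
low = estimate_savings(40, 5_000, 3)   # conservative end of the 3x-9x range
high = estimate_savings(40, 5_000, 9)  # optimistic end
print(f"Estimated annual savings: ${low:,} - ${high:,}")
# prints "Estimated annual savings: $400,000 - $1,600,000"
```

Even the conservative end usually dwarfs the cost of starting to track the planned/unplanned split.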
… because our maintenance organisation is already stretched out
Let’s say we have a prediction for a failure and want to move it to planned maintenance (the event still takes place, but its nature changes). A commonly heard objection is ‘but my maintenance planning is already full’. Well, what would you do when the failure occurs? In that case, a maintenance event would still take place, only it would either upset the maintenance planning or be executed by a specific ‘unplanned maintenance’ intervention team. Either way, time and money can be saved by predicting the event and executing the maintenance during planned downtime. As a matter of fact, moving unplanned to planned actually frees up maintenance time and resources (to be totally correct: after an initial workload increase to catch up to the zero point). The opposite is called ‘the vicious circle of maintenance’: unplanned events disrupt maintenance planning, which is therefore often not fully or perfectly executed, which in turn generates more unplanned events, and so on.
… because we don’t trust the predictions
Humans are not wired for statistics! While we have a natural propensity for pattern recognition, this talent also tends to fool us into wrong conclusions… Any predictive maintenance initiative (or, by extension, any predictive analytics project) should be accompanied by good training to make sure predictions are interpreted correctly. Even better, business apps tuned to individual job requirements, which translate the prediction into circumstantial actions, should be developed and deployed. At the current state of affairs, starting with a limited-scope project (call it a pilot or POC) should allow the delivery team to validate the presence of enough, and good enough, data, build the business case, etc.
… because we don’t understand predictive maintenance
Well, at least they’re being honest! Companies that are aware of their lack of understanding are halfway down the path towards the cure…
Understanding predictive maintenance starts with understanding predictions (see above), but then also seeing how this affects your business: operationally, financially, commercially, etc. Predictive, and by extension prescriptive, maintenance has far-reaching impact, both internal and external (non-predictive approaches will have a hard time competing with well-implemented predictive maintenance organisations!).
These are by far not all the objections we encounter, but they give a general idea of how and why many organisations are scared of the currently ongoing change. As with so many technological or business advances, it’s often easier to formulate objections than to stick one’s neck out and back the new approach. However, as history has shown us (in general, and with regard to maintenance, as shown by the evolution from reactive > proactive > condition based), maintenance is inevitably evolving this way. Getting it right is hard, but not getting it is expensive! We’ll inevitably get front-runners and a flock of successful followers, but many of the late starters will simply be run over by more efficient and effective competitors. Business is changing at an ever-faster pace and executives have to be ever more flexible at adapting to the changing environment. Make sure you get one or more trustworthy partners to introduce you to the concepts and tools for predictive maintenance. At this time of writing, the front-runners are already out of the starting blocks!

Dreaded NFF

NFF (No Fault Found) is an often-used KPI which may get a whole new meaning with the introduction of predictive maintenance. Let’s go back to the origin of this KPI. Some two decades ago it gained attention as companies increasingly focused on customer satisfaction: people found out that many so-called ‘bad parts’ injected into the reverse supply chain tested perfectly and were therefore flagged NFF. There has been an ongoing struggle between the field organisation’s and the reverse supply chain’s goals. Field service did all it could to increase the number of interventions per engineer per day, even at the expense of removing too many parts, under the motto that time is more expensive than parts. Removed parts then get injected into the reverse supply chain, where they typically get sent to a hub to be checked and subsequently repaired. To the reverse logistics/repair organisation, NFF parts create ‘unnecessary’ activity: parts needing to be checked without any demonstrable problem being uncovered. These parts then need to be re-qualified: documented, repackaged,… Therefore, NFF is really bad for the observed performance of that organisation.

Back to predictive analytics: whereas CBM (Condition Based Maintenance) will call for a maintenance activity based on the condition of the equipment/part – and therefore do so when there’s demonstrable cause for concern – predictive maintenance will ideally generate a warning with a longer lead time. Often before any signs of wear and tear become apparent! Provided certain conditions are met (see previous posts on criticality/accuracy/coverage/effort), parts will therefore be removed which will technically test NFF! Because the removal of these parts prevents a much costlier event, this is not a problem per se, but it will require rethinking not only internal and external processes but also KPIs! If we want adoption of these predictive approaches to maintenance and operations, KPIs need to be rethought to reflect the optimised nature of these actions. We can’t allow anybody to be penalised for applying the optimal approach!

Predictive (or, by extension, prescriptive) maintenance has huge potential for cost savings and, as we’ve seen before (see previous blog entries), these savings should be looked at from a holistic point of view. Some costs may actually go up in order to bring down overall costs. Introducing such methodologies therefore also demands a lot of attention to process changes and to how people’s performance is measured. The good news is that the introduction of predictive maintenance can be gradual; i.e. start with those areas that offer high confidence in the predictions and high return. Nothing helps adoption better than proven use cases!

Today’s focus area for operations: increase uptime!

As other domains such as procurement, supply chain, production planning, etc. get increasingly lean, attention focuses on the few remaining areas where large gains can be expected from increased efficiency. Fleet uptime, or machine park uptime, is the focus area today. Indeed, investors increasingly look at asset utilisation to determine whether an operation is run efficiently or not. As we know, in the past many mistakes were made by focusing on acquisition cost at the expense of quality. This has led to a lot of disruption with regard to equipment uptime which, in turn, renders inefficient any of the lean initiatives mentioned above. So, what are the important factors determining uptime? We’ll look at the two most important ones:

– reducing the number of failures

– reducing time to repair (TTR)

Reducing the number of failures sounds pretty obvious: purchase better equipment and you’re set. Sure, but how do you know the equipment is better? Sometimes it’s easily measurable; e.g. I’ve known a case where steel screws were replaced by titanium ones. Although the latter were maybe five times more expensive, their total cost on the machine may have been less than $1,000, whereas one failure caused by a steel screw cost $25,000. Taking an integrated business approach to purchasing saved a lot of money over the lifetime of the equipment. In other cases, the extra quality is hard to measure and one has to trust the supplier. This ‘trust’ can be captured by SLAs, warranty contracts, or even a fully servicized approach (where the supplier gets paid if and when the equipment functions according to a preset standard).

The number of failures can also be reduced by improving maintenance; pretty straightforward for run-of-the-mill things such as clogging oil filters, etc. One just sets a measurement by which a trigger is set off and performs the cleaning or replacement. This is what happens with your car; every 15,000 miles or so, certain things get replaced, whatever their status. The low price of both the parts involved and the intervention allows for such an approach. Things become more complex when different schedules need to be executed on complex equipment: allowing all of the triggers to work independently (engine, landing gear, hydraulics, etc. on a plane, for instance) may cause maintenance requirements almost every day. At least some of these need to be synchronised and, ideally, the whole maintenance schedule should be optimised. Mind you, optimisation doesn’t necessarily mean minimising the number of interventions! It should rather focus on minimising the impact on operational requirements.
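The synchronisation idea can be sketched very simply. This is an illustrative toy, not any real scheduler: bundle tasks whose due dates fall close together, so several independent triggers share one intervention instead of causing near-daily stoppages. The task list and window size are invented for the example.

```python
# Illustrative sketch: greedily bundle maintenance due dates that fall within
# a shared tolerance window, so each bundle becomes a single intervention.

def group_interventions(due_days, window):
    """Bundle sorted due dates (days from now) into windows of `window` days."""
    groups = []
    for day in sorted(due_days):
        # Join the current bundle if we're within `window` days of its start;
        # otherwise open a new bundle (i.e. a new planned intervention).
        if groups and day - groups[-1][0] <= window:
            groups[-1].append(day)
        else:
            groups.append([day])
    return groups

# Hypothetical due dates for engine, hydraulics, landing gear checks, etc.
tasks = [12, 14, 15, 40, 43, 90]
print(group_interventions(tasks, window=5))  # → [[12, 14, 15], [40, 43], [90]]
```

Six independent triggers collapse into three interventions; a real optimiser would additionally weigh each bundle against the operational schedule, which is exactly the point made above.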

In order to further reduce the number of failures, wouldn’t it be great if we could prevent those events that occur less often? This involves predicting the event and prescribing an action in order to minimise its impact on production. This is exactly the focus of prescriptive maintenance; combining predictions (resulting from predictive analytics) with cause/effect/cost analysis to come up with the most appropriate course of action. Ideally, if maintenance is prescribed, it enters the same optimisation logic as described above. Remember, the goal is to optimise asset utilisation.

Reducing TTR is too often overlooked, or approached only through process standardisation. However, many studies have shown that TTR is highly impacted by the time it takes to diagnose the problem and the time to get the technician/parts on site – especially in the case of moving equipment. Predictive analytics may help reduce both: the first, by providing the technician with a list of the systems/parts most at risk at any moment in time; and the second, by making sure the ‘risky’ parts are available. There’s nothing worse than having to set in motion an unprepared chain of actors (technical department, supplier, tier 1,…) to track down a hard-to-find part. This is even worse when the failing machine slows down or halts an entire production chain…
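The ‘most at risk’ list described above is, at its simplest, just model output sorted by failure probability. A minimal sketch (part names and probabilities are entirely made up for illustration):

```python
# Illustrative only: turn per-part failure probabilities (as a predictive
# model might emit them) into a ranked checklist for the technician and a
# pre-staging hint for the warehouse. All numbers are hypothetical.

risk_scores = {
    "traction motor": 0.72,
    "oil filter": 0.15,
    "compressor": 0.48,
    "brake valve": 0.05,
}

# Highest predicted failure probability first: start diagnosis here,
# and make sure these parts are on the shelf before the visit.
ranked = sorted(risk_scores.items(), key=lambda kv: kv[1], reverse=True)
for part, prob in ranked:
    print(f"{part}: {prob:.0%}")
```

The technician starts diagnosis at the top of the list, and the same ranking tells inventory which parts to stage near the asset, attacking both components of TTR at once.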

Poor ROA (Return On Assets) is often a trigger for takeovers because the buyer is confident they can easily improve the situation. It’s one of the telltale signs of a poorly run or suboptimal operation and has to be avoided at all cost. If your sights are not yet set on this domain, chances are other people’s are!