You’re going to need a bigger boat!


With the recent 42nd anniversary of the film 'Jaws', everyone keeps quoting the tagline above, 'you're going to need a bigger boat', and that led me to consider how far we have come: from engineering calculations (to solve problems), to data and data science, and now to Big Data, the Internet of Things (IoT) and its Industrial counterpart, and the evolving roles of the engineer and the data scientist.

Back in my day, not quite 42 years ago (but close), we did calculations by hand, on paper, with a calculator, often requiring engineering 'iterations' to get the right design flow (or whatever). Yes, we made mistakes (probably too many to worry about!). Then we had spreadsheets (to speed up iterations with fewer mistakes), then static process simulations to speed up design and test alternatives, then dynamic process simulations that let us 'predict' the performance of the plant and so optimize the design. From there we integrated with control systems to develop control room operator training, and from there moved to 3D virtual reality headsets that let the user 'walk' around the plant, integrating control room and field operations.

And so to predictive analytics, where engineering meets data science. Predictive analytics has been around for quite some time, and the 'engineering' approach has been to answer the question everyone asks after a process or equipment failure, an outage, or, worse still, an accident: 'Why couldn't we see this problem coming?' Usually you can, but you have to be looking at all the variables, all the time, and apply your engineering knowledge and skills to spot the problem well in advance.

Clearly, you can assign lots of engineers and operators to look at all the data all of the time, or you can use computer software to help. Early solutions looked at the 'time series' data only, since deviations from 'normal' are relatively easy to spot and 'trending' solutions can alert and notify the user. But, as with my earlier engineering progression, users then want more, just as they did from spreadsheets to 3D VR. Once you have predictions based on time series analytics, you want to add context: CMMS and maintenance data, operations logs, whether the weather plays a part, and so on. With each added level of complexity the problem to be solved gets much harder, and your time series algorithm, even if it is a really good one, just doesn't stack up.
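As an illustration only, that early 'deviation from normal' style of time series monitoring can be reduced to a minimal sketch like the one below; the sensor name, window, and threshold are hypothetical, and real trending tools are considerably more involved.

```python
# A minimal sketch (not Predikto's method) of "deviation from normal" monitoring:
# flag samples that drift too far from a rolling baseline of recent behaviour.
import pandas as pd

def flag_deviations(series: pd.Series, window: int = 60, n_sigmas: float = 3.0) -> pd.Series:
    """Return a boolean mask marking samples more than n_sigmas from the rolling mean."""
    baseline = series.rolling(window, min_periods=window).mean()
    spread = series.rolling(window, min_periods=window).std()
    return (series - baseline).abs() > n_sigmas * spread

# Hypothetical usage with a made-up sensor tag:
# alerts = flag_deviations(df["bearing_temp_C"])
# df[alerts]  # candidate points to notify an operator about
```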

So you need to look at the problem from different angles and find the best way to solve the really big problem, which is: 'How do I take all my relevant data, format it in a way that makes sense, contextualize it in a way that lets me begin to make sense of it, and then use all of that data to predict what's going to happen?' That is a big problem that will need a big computer ('gonna need a bigger computer') and complex algorithms to solve it.

Cloud computing has been around for a while, as has ETL (Extract, Transform, Load), and that is what impressed me so much about Predikto and persuaded me to join: the ability to take all the data (as much as anyone would need or want), ETL it, put it in the cloud, let our 'MAX' predictive engine chew through it (just once, over 3-4 weeks), and then apply every algorithm a data scientist might know (and a few of our own). And then, daily, to optimize those algorithms for accuracy and applicability, and to optimize the types of features that help the predictions, depending on the varying conditions of the process or asset (or weather, etc.) that affect it daily, hourly, and so on.
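To make that 'apply every algorithm and keep optimizing' idea concrete, here is a hedged, greatly simplified sketch of automated model comparison; it is not the MAX engine, and the candidate models and scoring choice are placeholders.

```python
# A simplified, hypothetical illustration of trying several algorithms and
# keeping whichever scores best under cross-validation. Not the MAX engine.
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def pick_best_model(X, y):
    candidates = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(n_estimators=200),
        "gradient_boosting": GradientBoostingClassifier(),
    }
    scores = {
        name: cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
        for name, model in candidates.items()
    }
    best = max(scores, key=scores.get)
    return candidates[best].fit(X, y), scores

# Re-running this selection on a schedule as new data arrives is one crude way
# to keep the deployed model tuned to current conditions.
```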

Now that is really, REALLY clever stuff, and it is what our customers have been asking for, for over 25 years in the business.

Big Data, IIoT, Industrie 4.0 and everything that brings these together, combined with what we are doing at Predikto: now that's the future. I am honored to be part of Mario and Robert's team. Watch this space, stay on this track (sic); the future is in Predikto and it is HERE!

By Paul Seccombe.

Paul joined Predikto in 2017 after serving as Solutions Leader for GE Predix Oil & Gas across Europe and the Middle East. Before that he spent many years at SmartSignal. Paul holds a PhD in Biochemical Engineering from the University of Wales and is based in London.

Digital Transformations : From Analysis Paralysis to Execution Mode


I have never been more excited about the future of Predikto. We started four years ago with the vision of "Moving Unplanned to Planned". We wanted to help large industrials "harness the power of predictive analytics to optimize operational performance". We are enabling this with:

  1. Our software platform including Predikto MAX which automates machine learning algorithm generation at a massive scale
  2. Our unique approach to data preparation optimized for Machine Learning

So why am I so excited? We have seen a big shift in the past 12 months: organizations going from "analysis paralysis" to "let's start to execute". We love it when prospects get what we have built. This was not the case three or four years ago. It takes a sophisticated organization to be ready to capitalize on our technology. We look for:

Clear Strategy

A clear strategy, with executive support, that incorporates AI, predictive analytics, and analytics/digital transformation, backed by investments that will enable the organization to increase revenues or cut costs by leveraging its own data. A recent IDC report found that 80% of senior executives say investing in digital transformation is critical to future success, and that investments in digital transformation initiatives will reach USD 2.2 trillion by 2019, 60% more than this year.

Organizational Readiness

Organizational readiness is another key trait of prospects and customers who are ready for our technology. Most have hired a Chief Digital Officer from outside to change the way they tackle innovation and digital transformation. They have dedicated teams with the authority and budgets to run multiple pilots with companies big and small, to learn how new technology can bring tangible value to their organization. They are learning to move fast and fail fast. The best ones are learning from startups and aligning their key initiatives with true disruptors. If you are waiting for Ginni to sell you IBM Watson to solve all your problems, you are in for a rude awakening in 18 months. We actually look for prospects who have already hired IBM and failed. IBM, please send me your list of pilot customers from the past three years for predictive maintenance projects.

Technical Transformation

Technical transformation means different things in different industry verticals. Some customers did not have access to their own equipment sensor data because the OEM kept it; they had to invest in new hardware to tap into the sensor data inside trains. Others had data stored in siloed on-premise historians, and it was a challenge to get their IT security organization to push that data to the cloud. Others are still trying to figure out which cloud provider to go with. If you are still wondering, there are only two you should consider: AWS and Microsoft Azure.

We are finding that the prospects who are moving and executing have figured out all three components of their digital transformation. IDC also states that by 2019, 40% of all digital transformation initiatives, and 100% of all effective IoT efforts, will be supported by cognitive/AI capabilities. I am excited about our future and about the AI and machine learning software we have built to bring value to large industrial transportation companies looking to move from unplanned to planned, using a data-driven approach to complement their engineering-based condition monitoring.

The Missing Link in Why You’re Not Getting Value From Your Data Science


by Robert Morris, Ph.D.

DECEMBER 28, 2016

Recently, Kalyan Veeramachaneni of MIT published an insightful piece in the Harvard Business Review entitled "Why You're Not Getting Value from Your Data Science." The author argued that businesses struggle to see value from machine learning/data science solutions because most machine learning experts tend not to build and design models around business value. Rather, machine learning models are built around nuanced tuning and subtle, yet complex, performance enhancements. Further, experts tend to make broad assumptions about the data that will be used in such models (e.g., consistent and clean data sources). With these arguments, I couldn't agree more.

 

WHY IS THERE A MISSING LINK?

At Predikto, I have overseen many deployments of our automated predictive analytics software across Industrial IoT (IIoT) verticals, including the transportation industry. In many cases, our initial presence at a customer stems in part from the limited short-term value gained from an internal (or consulting) human-driven data science effort where the focus had been on just what Kalyan mentioned: the "model" rather than how to actually get business value from the results. Many companies aren't seeing a return on their investment in human-driven data science.

There are many reasons why experts don't build business objectives into their analytics from the outset, largely a disjunction between academic expertise, habit, and operations management (not to mention the immense diversity of focus areas within the machine learning world, which is a separate topic altogether). This is particularly relevant for large industrial businesses striving to cut costs by preventing unplanned operational downtime. Unfortunately, one of the most difficult parts of deploying machine learning solutions geared toward business value is actually delivering and demonstrating that value to customers.

WHAT IS THE MISSING LINK?

In the world of machine learning, over 80% of the work revolves around cleaning and preparing data for analysis, which comes before the sexy machine learning part (see this recent Forbes article for survey results supporting this claim). The remaining 20% involves tuning and validating results from one or more machine learning models. Unfortunately, this calculation fails to account for the most important element of the process: extracting value from the model output.

In business, the goal is to gain value from predictive model accuracy (another subjective topic area worthy of its own dialog). We have found that this is the most difficult aspect of deploying predictive analytics for industrial equipment. In my experience, the breakdown of effort required from beginning (data prep) to end (demonstrating business value) is really more like:

40%: Cleaning and preparing the data

10%: Creating and validating a well-performing machine learning model (or models)

50%: Demonstrating business value by operationalizing the output of the model

The latter 50% is something that is rarely discussed in machine learning conversations (with the aforementioned exception). Veeramachaneni is right. It makes a lot of sense to keep models simple if you can, cast a wide net to explore more problems, don’t assume you need all of the data, and automate as much as you can. Predikto is doing all of these things. But again, this is only half the battle. Once you have each of the above elements tackled, you still have to:

Provide an outlet for near-real-time performance auditing. In our market (heavy industry), customers want proof that the models work with their historical data, with their "not so perfect" data today, and with their data in the future. The right solution provides fully transparent and consistent access to detailed auditing data from top to bottom: what data are used, how models are developed, and how the output is being used. This is not only about trust; it is about a continuous improvement process.

Provide an interface for users to tune output to fit operational needs and appetites. Tuning the output (not the model) is everything. Users want to set their own thresholds for each output and have the option to return to a previous setting on the fly should operating conditions change. One person's red alert is not the same as another's, and all of this may be different tomorrow.

Provide a means for taking action from the model output (i.e., the predictions). The users of our predictive output are fleet managers and maintenance technicians. Even with highly precise, high-coverage machine learning models, the first thing they all ask is, "What do I do with this information?" They need an easy-to-use, configurable interface that lets them take a prediction notification, originating from a predicted probability, to a business action in a single click. For us, that action is often the creation of an inspection work order in an effort to prevent a predicted equipment failure.
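As an illustration only (not the Predikto Maintain interface or API), the last two points, user-tunable thresholds and a one-click path from probability to action, might reduce to something like the sketch below; the threshold values, asset name, and work-order helper are hypothetical.

```python
# Hypothetical sketch: map predicted failure probabilities to business actions
# using thresholds each user can tune. Names and helpers are illustrative only.
from dataclasses import dataclass

@dataclass
class AlertPolicy:
    notify_at: float = 0.60      # probability at which this user wants a notification
    work_order_at: float = 0.85  # probability at which an inspection work order is raised

def act_on_prediction(asset_id: str, probability: float, policy: AlertPolicy) -> str:
    """Turn one prediction into the action this user has configured."""
    if probability >= policy.work_order_at:
        # create_inspection_work_order(asset_id)  # hypothetical call into an EAM system
        return f"work order created for {asset_id} (p={probability:.2f})"
    if probability >= policy.notify_at:
        return f"notification sent for {asset_id} (p={probability:.2f})"
    return "no action"

# One fleet manager's red alert is another's noise, so each user keeps their own
# AlertPolicy and can revert to an earlier one when operating conditions change.
print(act_on_prediction("locomotive-042", 0.91, AlertPolicy()))
```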

Predikto has learned by doing, and iterating. We understand how to get value from machine learning output, and it’s been a big challenge. This understanding led us to create the Predikto Enterprise Platform®, Predikto MAX® [patent pending], and the Predikto Maintain® user interface. We scale across many potential use cases automatically (regardless of the type of equipment), we test countless model specifications on the fly, we give some control to the customer in terms of interfacing with the predictive output, and we provide an outlet for them to take action from their predictions and show value.

As to the missing 50% discussed above, we tackle it directly with Predikto Maintain® and we believe this is why our customers are seeing value from our software.


Robert Morris, Ph.D. is Co-founder and Chief Science/Technology Officer at Predikto, Inc. (and former Associate Professor at University of Texas at Dallas).

What about unplanned?

Everybody’s looking at process inefficiencies to improve maintenance but there’s lower hanging – and bigger – fruit to focus on first: unplanned events!

Maintenance has pretty simple goals: guarantee equipment uptime, increase it, and do so at the lowest possible cost. Let's take a quick look at how unplanned events influence these three goals.

Guarantee uptime

When production went through the evolutions of JIT (Just In Time), Lean, and other optimisation schemes, schedules got ever tighter and deviations from the plan ever more problematic. WIP (Work In Progress) has to be limited as much as possible, for understandable reasons. However, this has the side-effect of also limiting buffers, which means that when any cog in the mechanism locks up, the whole thing stops. Maintenance therefore comes under increasing pressure to guarantee uptime, at least during planned production time. Operational risk is something investors increasingly look at when evaluating big-ticket investments or during M&A due diligence, and for good reason; it's like investing in a top athlete: don't just pick the fastest runner, pick the one who can run fast consistently!

Failures are bound to happen, so the name of the game is to foresee these events in order to remediate them beforehand: planned, and preferably outside of production time.

Increase Uptime

The more you are able to increase (guaranteed) uptime, the more output you can generate from your investment. Unplanned events are true output killers, not just because they stop the failing machine but also because they may cause a cascade of other equipment, dependent on the failing machine's output, to come to a halt. Unplanned events should therefore a) be avoided and b) dealt with in the fastest possible manner. The latter means having technicians and parts at hand, which can be a very expensive matter (like insurance policies: always too expensive until you need them). To avoid unplanned failures we have therefore introduced preventive maintenance (for cheaper or cyclical events) and condition-based or predictive maintenance. Capturing machine health and deciding when to pre-emptively intervene in order to avoid unplanned failures is a pretty young science, but one that shows the highest potential for operational and financial gains in the field of maintenance.

Lower maintenance cost

By now most people know that unplanned maintenance costs a multiple of planned maintenance; a factor of three to nine (depending on the industry) is generally accepted as a ballpark figure. It therefore keeps surprising me that most investment has traditionally gone into optimising planned maintenance. Agreed, efficiency gains in planned maintenance are easier to grasp, but we have by now reached a level where returns on extra investment in this field are diminishing.

Enter unplanned maintenance: it can either be avoided (by increasing equipment reliability) or foreseen (in which case it can be prevented). Increasing equipment reliability has not always been the goal of OEMs. In the traditional business model they made a good buck from selling spare parts, so they had to carefully balance how to stay ahead of the competition without pricing themselves out of the market (reliability comes at a cost). Mind you, this was more an economic balancing act than a deliberate "let's make equipment fail" decision. Now, however, with uptime-based contracts, OEMs are incentivised to improve equipment reliability.

Unfortunately, unplanned failures still occur, and due to tighter planning and higher equipment utilisation requirements, the cost of these failures has increased. Therefore, to lower maintenance costs, we have to lower the number of unplanned events. The only practical way is to become better at foreseeing these events so that interventions can be planned before they occur. The simple plan is: gather data, turn it into information, make predictions, and take action to avoid these events. And voilà, three to nine times more money saved than if we focused on planned events!

Life can be simple.

Context is King to Operationalize Predictive Analytics


Companies have invested significantly in Big Data solutions and capabilities. They usually start by adding more sensors to their equipment, or perhaps by bringing all of their historical data into a Big Data repository like Hadoop. They have taken the first step towards a "Big Data" driven solution. The challenge is that "tackling" the data does not, by itself, bring any tangible value. This is why Predikto focuses so much of our R&D and technology on the "Action" that follows the analytics.

Once the data has been tackled, the next step is to perform some kind of data crunching or analytics to derive insight and, hopefully, to take an "Action" that brings real value to the business. Predikto is laser-focused on predictive analytics for industrial fleet maintenance and operations, "Moving Unplanned to Planned". We spend a lot of time figuring out what "Action" our customer will take from the configuration of the Predikto Enterprise platform software. Up to now, I have not mentioned Context. So, why is Context King?

The reason is that once our platform and the power of predictive analytics provide an actionable warning that a piece of equipment is at high risk of failure, Context becomes the next huge hurdle. The first reactions from users of our software are "Why should I care about this Predikto warning?", "Why is this relevant?", "Why is this warning critical?", and "I don't trust this warning from a machine learning algorithm." You get the point.

This has driven Predikto to invest heavily in technologies and capabilities that give the maintenance expert or equipment engineer the "Context" for why they should care and why a warning is important. Users can easily drill through all of their maintenance records, asset usage history, diagnostic codes, sensor data, and any other data that has been loaded into our platform. "Context" is King because it empowers the subject matter expert to confirm or validate the warning before the "Action" our software recommends is performed.
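As a rough, hypothetical illustration of that drill-down (not a description of Predikto's platform), assembling context around a warning can be as simple as pulling every record about the flagged asset in a window leading up to the prediction; the table and column names below are invented.

```python
# Hypothetical sketch: gather recent history for an asset that triggered a
# predictive warning, so an expert can judge whether the warning is credible.
import pandas as pd

def context_for_warning(asset_id: str, warning_time: pd.Timestamp,
                        maintenance: pd.DataFrame, fault_codes: pd.DataFrame,
                        lookback_days: int = 30) -> dict:
    """Return the records about this asset from the lookback window before the warning."""
    start = warning_time - pd.Timedelta(days=lookback_days)
    in_window = lambda df: df[(df["asset_id"] == asset_id) &
                              (df["timestamp"].between(start, warning_time))]
    return {
        "recent_maintenance": in_window(maintenance),  # work orders, repairs
        "recent_fault_codes": in_window(fault_codes),  # diagnostic trouble codes
    }
```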

Next time you are rolling out a Big Data solution, focus on the key activities a user will take after you have tackled the data. Ask: what automated action recommendation can I give experts, and what context can I provide to help them make a more informed and confident decision?

Data. The “other” four-letter word.

At Predikto, we work with customers who are OEMs, large-scale equipment operators, as well as some smaller operations. The volume of data they push to us ranges from a few megabytes per week to dozens of terabytes per month. Regardless of their transmission volume, every customer is tantalized by the prospect of what deploying predictive maintenance and predictive analytics solutions can do for their bottom line.

In many cases, corporations hesitate to trust their own available data because the data are perceived as incomplete, or otherwise "dirty." In other words, they feel their data could be more valuable and question its utility in its initial form. A consistent theme within many organizations is the tendency to hyper-focus on perceived shortcomings of operational data when in fact there is incredible value just below the surface.

This acknowledgement is actually a good thing. First, there are always shortcomings in operations data, and results are only as good as the data from which they are derived. Second, it is evidence that a company has done its homework and really put a lot of thought into its data situation.

However, the tendency to dismiss data as more deficient than valuable often lacks context. Predikto takes data from any source. Data may come directly from sensors that take measurements many times per second. Data might come from maintenance records that are pushed to an EAM client only once every 24 hours. Other times, data from important sensors might be transmitted only when a fault code from some other system is triggered, which can be really sparse and staggered.

The fact of the matter is that one would be mistaken to dismiss data as dirty or lacking in value too quickly. Data that come in piecemeal can be quite valuable if properly combined with other factors. This contextualization is essential to bringing value to supposedly dirty data.
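As a hedged, simplified illustration of combining sources that arrive at very different rates (high-frequency sensor readings, daily maintenance pushes, sparse fault codes), one common pattern is an as-of join that attaches the most recent sensor context to each sparse event; the column names and values here are invented.

```python
# Hypothetical sketch: align sparse fault-code events with the latest available
# sensor reading, so 'piecemeal' data gains context instead of being discarded.
import pandas as pd

sensors = pd.DataFrame({   # imagine a high-frequency feed, downsampled here for brevity
    "timestamp": pd.date_range("2017-01-01", periods=5, freq="1min"),
    "vibration_rms": [0.11, 0.12, 0.35, 0.36, 0.13],
})
faults = pd.DataFrame({    # emitted only when another system trips, so sparse and staggered
    "timestamp": [pd.Timestamp("2017-01-01 00:02:30")],
    "fault_code": ["F-204"],
})

# Attach the most recent sensor reading available at the moment each fault appeared.
enriched = pd.merge_asof(faults.sort_values("timestamp"),
                         sensors.sort_values("timestamp"),
                         on="timestamp", direction="backward")
print(enriched)
```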

Life Expectancy of an Algorithm

In our field of predictive analytics for asset health and failures, and I presume the same goes for other fields, there is a major misconception about what an algorithm represents. Speak with business leaders, and more worryingly with practitioners too, about PdA, and in most cases the conversation will turn to "and then we build the algorithm that describes the behaviour of...". Here is some bad news for these people: many, if not most, assets cannot be described with a single algorithm. No sooner does an algorithm enter production than it starts to age, and as with most things technology-related, it ages fast and at ever faster speeds.

I was listening to Peter Hinssen talk about innovation and disruption the other night and he mentioned "the end of average". As an example he used the pharmaceutical industry, where drugs are made to be applicable in as many cases as possible; today, it is a volume business. What about tomorrow? There are signs that certain drugs will be tailored to both the patient and the disease, allowing faster and more effective treatment with fewer side effects.

Having some experience in the printing industry (for those who know this industry, I was on the extreme end: printing at 600 lpi), I can tell you that a printing press can behave very differently according to the circumstances. Not only will the printed result differ, but the reliability of the machine also depends on the operator, the ink, the atmosphere, and so on. To make matters even more complex, this also changes over time due to the ageing of the machine, the impact of maintenance, and more.

The only way to deal with this variety is to continuously re-evaluate the algorithm used to predict machine behaviour. "Algorithm" should no longer refer to a static formula but rather to whatever is applicable today (or even at this precise moment). Averaging algorithms simply cannot generate good enough results in most cases (feel free to try!). This creates a problem for approaches that rely heavily on data scientists to analyse the problem: how fast can they adapt to the ever-changing requirements for the algorithm, and at what expense? Bandwidth, speed, flexibility and cost should be key considerations, alongside forecasting effectiveness, when launching a predictive analytics project.
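As a hedged, toy illustration of "continuously re-evaluating the algorithm" (not a description of any particular product), a rolling retrain-and-score loop over time-ordered data might look like the sketch below; the window sizes and model choice are arbitrary.

```python
# Toy sketch: retrain on a sliding window of recent data and score on the period
# that follows, so the model in production always reflects recent behaviour and
# a falling score signals that the current 'algorithm' is ageing.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def rolling_retrain(X, y, train_size=5000, test_size=1000):
    """Yield (window_start, auc) for each retraining window over time-ordered data."""
    for start in range(0, len(X) - train_size - test_size, test_size):
        train = slice(start, start + train_size)
        test = slice(start + train_size, start + train_size + test_size)
        if len(set(y[test])) < 2:   # skip windows where AUC is undefined
            continue
        model = RandomForestClassifier(n_estimators=100).fit(X[train], y[train])
        auc = roc_auc_score(y[test], model.predict_proba(X[test])[:, 1])
        yield start, auc
```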

The four forces of disruption are now being listed as: data – networks – intelligence – automation. Predictive analytics has in it the elements to be all of those – imagine the disruptive potential! However, while we see a lot of data, networks and at least some intelligence, automation is often still just a concept. Automating data science is key to successful predictive analytics projects that will not only work as a proof-of-concept but will also be able to evolve to cover real life situations.

The Algorithm is dead, long live the algorithm!