A predictive maintenance example

A prediction doesn’t mean that something will happen! A prediction merely says something may happen. Obviously, the more accurate that prediction gets, the closer it comes to certainty that something will happen. Yet we often misinterpret accuracy or confidence in a prediction: whether something has a 20% chance of failing or a 90% chance of failing, we tend to confuse the consequence of the failure with the chance of it occurring. In both cases, when the failure occurs, the consequence is the same; it is only the frequency with which such failures happen that changes.

What I describe above is one of the reasons why managers often fail to come up with a solid business case for predictive analytics. Numbers – and especially risk-based numbers – all too often scare people off when they’re really not that hard to understand. Obviously, the underlying predictive math is hard, but the interpretation from a business point of view is much simpler than most people dare to appreciate. We’ll illustrate this with an example. Company A is in the business of sand; it could hardly be simpler than that. Its business consists of unloading barges, transporting and sifting the sand internally and then loading it onto trucks for delivery. To do this, it needs cranes (to unload the ships), conveyor belts and more cranes (to load the trucks). Some of these assets are more expensive (the ship-loading cranes) or more static (the conveyor belts) than others (the truck-loading cranes). This has led to a purchasing policy that focuses on getting the best cranes available for offloading the ships (tying down a ship because a crane is broken is very expensive), is slightly less stringent on the conveyor belts (if one breaks, at least the sand is already on the yard and the failure can be worked around with the mobile cranes) and is downright hedged for the cheaper mobile cranes by buying overcapacity. This happens quite often: the insurance strategy changes both with the value of the assets and with their criticality to the operations. Please also note that criticality goes up as alternatives diminish… A single asset is typically less critical (from an operational point of view) when it is part of a fleet of 100 than when it alone performs a specific task.

All these assets are subject to downtime, both planned and unplanned. We’ll focus on the unplanned downtime. When a fixed ship-loading crane fails, the ship either can’t be off-loaded any more or it has to be moved within reach of another such crane (if one is available). Either way, the offloading is interrupted and the failure not only incurs repair costs (time to diagnose, get the part and fix the problem – parts – people) but also delays the ship’s departure, which may result in additional direct charges or costs because the bay becomes available later for incoming ships. When a conveyor belt breaks down, there’s the choice of waiting for it to be repaired or finding an alternative, such as loading the sand onto trucks and hauling it to the processing plant. Both situations come at a high cost. Moreover, both the cranes and the conveyors may cause delays for the sifting plant, which is probably the most expensive asset on site and whose utilisation must be maximised! For the truck-loading cranes, the solution was to add one extra crane for every 10 in the fleet. That overcapacity should ensure ‘always on’ but comes at the cost of buying spare assets.

Let’s now mix in some numbers. Let’s say a ship-loading crane costs €5,000,000, a conveyor costs €500,000 and a mobile crane costs €250,000. The company has three ship docks with one crane each, 6 conveyors and a fleet of 20 mobile cranes, putting the total asset value at €23,000,000. If we take a conservative estimate that 6% of the ARV (Asset Replacement Value) is spent on maintenance, this installed base costs €1,380,000 to maintain every year. Let’s further assume that 50% of the interventions are planned and 50% are unplanned. We know that unplanned maintenance is 3-9 times more expensive than planned, so for this example we’ll take the middle figure of 6x. If there are 200 maintenance events per year (100 planned, 100 unplanned), we can solve €1,380,000 = 100x + 100·6x, where x is the cost of a single planned event. Result: one planned maintenance event costs roughly €1,970 and one unplanned event costs roughly €11,830; of the total maintenance cost, roughly €197,000 is spent on planned maintenance whereas a whopping €1,183,000 is due to unplanned downtime! Company A has done all it can to optimise the maintenance processes but can’t seem to drive down the costs further and has therefore decided this is simply part of doing business.
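For readers who prefer to see the arithmetic spelled out, here is a minimal sketch of the calculation above, using only the made-up figures from this example:

```python
# Back-of-the-envelope split of the maintenance budget into planned and
# unplanned spend, using the made-up figures from this example.

asset_value = 3 * 5_000_000 + 6 * 500_000 + 20 * 250_000   # EUR 23,000,000
annual_maintenance = 0.06 * asset_value                     # 6% of ARV per year

events = 200                 # total maintenance events per year
planned_share = 0.5          # half planned, half unplanned
cost_multiplier = 6          # an unplanned event costs ~6x a planned one

# Solve: planned_events * x + unplanned_events * 6x = annual_maintenance
planned_events = events * planned_share
unplanned_events = events * (1 - planned_share)
x = annual_maintenance / (planned_events + cost_multiplier * unplanned_events)

print(f"planned event:    ~EUR {x:,.0f}")
print(f"unplanned event:  ~EUR {cost_multiplier * x:,.0f}")
print(f"planned total:    ~EUR {planned_events * x:,.0f}")
print(f"unplanned total:  ~EUR {unplanned_events * cost_multiplier * x:,.0f}")
```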

Meanwhile, on the other side of town… Company B is a direct competitor of Company A. For the sake of this example, we’ll even make it an exact copy of Company A but for one difference: it has embarked on a project to reduce the number of unplanned downtime events. They came to the same conclusion: for the 200 maintenance events, the best way to lower the costs was if they could magically transform unplanned maintenance into planned maintenance. They did some research and found that, well, they could – at least for some of them. Here’s the deal: if we can forecast a failure with enough lead time, we can prevent it from happening by planning maintenance on the asset (or the component that is forecast to fail), either when other maintenance is already planned or during times when the asset is not required for production. The maintenance event still takes place, but a planned prevent-fix costs roughly €1,970 compared to a break-fix costing roughly €11,830 – that’s a difference of roughly €9,860 per event!

The realisation that the difference between a break-fix and a prevent-fix was roughly €9,860 per event allowed them to avoid the greatest pitfall of predictive maintenance. Any project requiring a major shift in mindset is bound to face headwind. In predictive analytics, most of the pushback comes from people not understanding risk-based decision making or not seeing the value of introducing the new approach. The first relates to the fact that many people still believe that predictions should be spot-on. Once they realise this is impossible, they often (choose to) ignore the fact that sensitivity can be tuned to increase precision, albeit at a cost: higher precision means less coverage (if we want higher prediction confidence, we can get it, but out of all failures we’ll catch a smaller portion). “If you can’t predict all failures, then what’s the point?” is an often-heard objection.

Company B did its homework, though, and concluded that it could live with a high enough prediction accuracy at a 20% catch rate. The accuracy at this (rather low) catch rate meant that for every 11 predictions, 10 actually prevented a failure and 1 was a false positive (these figures are made up for this example). Let’s look at the economics: a 20% catch rate means that of 100 unplanned downtime events, 20 could be prevented, resulting in a saving of 20 x €9,860 = €197,200. However, the prediction accuracy also means that to catch these 20, they actually had to perform 22 planned activities; the 2 extra events cost 2 x €1,970 = €3,940. The resulting savings were therefore €197,200 – €3,940 = €193,260: roughly 14% of the total maintenance budget!
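Again as a sketch, the savings calculation above can be reproduced in a few lines (all figures are the made-up ones from this example):

```python
# Savings from converting a fraction of unplanned events into planned ones,
# using the made-up figures from this example.

planned_cost = 1_970        # cost of a prevent-fix (planned event)
unplanned_cost = 11_830     # cost of a break-fix (unplanned event)
annual_budget = 1_380_000   # total annual maintenance spend

unplanned_events = 100      # unplanned events per year
catch_rate = 0.20           # share of failures caught in advance (coverage)
precision = 10 / 11         # 10 correct predictions for every 11 raised

caught = unplanned_events * catch_rate          # 20 prevented failures
interventions = caught / precision              # 22 planned activities needed
false_positives = interventions - caught        # 2 unnecessary visits

savings = caught * (unplanned_cost - planned_cost) - false_positives * planned_cost
print(f"net savings: ~EUR {savings:,.0f} "
      f"({savings / annual_budget:.0%} of the maintenance budget)")
```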


What’s more, there are fringe benefits: avoiding unplanned downtime results in better planning, which ultimately results in higher availability with the same asset base. Stock-listed companies know how important ROCE (Return On Capital Employed) is when investors compare opportunities, but even private companies should take note: financial institutions use these kinds of KPIs to decide whether to extend credit and at what rate (it plays a major role in determining a company’s risk profile). Another fringe benefit – and not a small one – concerns the fleet sizing for the mobile cranes (remember they bought 10% extra machines purely as a buffer for unplanned events): the fleet can be adjusted downward for the same production capacity because downtime during planned utilisation will be down by 20%. Even if they play it very safe and downsize by just one crane, that’s a €250,000 one-time saving plus an annual benefit of 6% of that: €15,000!

Company B is gradually improving flow by avoiding surprises; a 20% impact can’t go unnoticed and has a major effect on employee morale. They also did their homework very well and passed (part of) the reduced operational costs on to their clients. Meanwhile, at Company A, employees constantly feel like they’re running after the facts and can’t understand how Company B manages to undercut them on price and still throw a heck of a party to celebrate their great year!

The next efficiency frontier?

Mountains of consulting dollars have been invested in business process optimisation, manufacturing process optimisation, supply chain optimisation, etc. Now is the time to bring everything together, and with all these processes optimised, the utilisation rate of our whole production apparatus keeps climbing. When all goes well, this means more gets done per invested dollar, making CFOs and investors happy through better ROA (Return On Assets). However efficient, this increasing load on the machine park comes at a price: less wriggle room when something unexpected happens. In the past, when companies had excess capacity, that capacity not only served to absorb demand variability; it also came in very handy when machines broke down, because demand could be re-routed to other equipment.
There’s nowhere left to hide now, so here are a number of options one can consider in order to avoid major disruptions:

  • increase preventive maintenance: this may or may not help. The law of diminishing returns applies, especially as preventive maintenance tends to focus on normal wear and tear and parts with foreseeable degradation behaviour. A better approach is to improve predictive maintenance; don’t overdo it where there’s no benefit, but try to identify additional quick wins. Your best suppliers will be a good source of information. As for suppliers that can’t help; well, you can guess what I think of those.
  • improve the production plan: too many companies still approach production planning purely reactively and lack optimisation capabilities. Machine changes, lots of stop-and-go, etc. all add to the fragility of the whole production apparatus (not to mention they typically influence output quality negatively as well).
  • improve flow: I’m still perplexed by the number of hiccups in production lines because ‘things bumped into each other’. Crossing flows of unfinished parts is still a major cause of disruption (and a major focus point for top performers such as Toyota). Ask most plant managers why machines are in a certain place and they either “don’t remember” or will say “that’s the place where they needed the machine first” or even “that was the only place we had left”. Way too rarely do plant layouts get reconsidered. Again, the best-in-class do this as often as once a year!
  • shift responsibilities: if you can’t (or won’t) be good at maintenance, then outsource it! Get a provider that can improve your maintenance and ideally can work towards your goal, which is usually not to have shinier machines but to get more and better output. If you really decide you don’t care about machine ownership at all, consider performance- or output-based contracts.
  • get better machines: sounds trivial, but current purchasing approaches often fail to capture the ‘equipment quality’ axis and forget to look at lifetime cost in light of output. Just two months ago I heard of a company buying excavators from a supplier because for every three machines, they got one for free. This was presented as an assurance that the operator would never run out of machine capacity. In this case, it had the opposite effect: the buyer wondered why the supplier needed to throw in an extra machine if the machines were as reliable as claimed.
  • connect your machines: this is a very interesting step. It means recognising that machines will eventually fail, but at least making sure you get maximum visibility on what failed and where. Most of the time spent resolving equipment failures is spent… waiting! Waiting for the mechanic to arrive, waiting for the right part, etc.
  • add predictive analytics: predictive analytics not only allows you to prevent failures from happening; relating to the previous point, it adds why to the what/where axis. Determining why something failed or will fail is crucial in optimising production output. Well-implemented predictive analytics improves production continuity by avoiding unplanned incidents (through predictive maintenance) but also allows for more efficient (faster) and more effective (better machine uptime) maintenance.

So which of these steps should we take? Frankly, all of them. Maybe not all at once, and (hopefully) some of them may already have been implemented. The key is to have a plan. Where are we now, what are our current problems, what are we facing…? Formulating the problem is half the solution. Then – and this may surprise some – work top-down. Start with the end goal, your “ultimate production apparatus”, and work your way back to determine how to get there. All too often, people start with the simplest steps without having looked at the end goal, and after two or three steps they find out they need to backtrack because they took a wrong turn earlier in the process.

At any step, whether it’s purchasing equipment, installing sensors or anything else, look at whether your supplier understands your goals and is capable of integrating into “the bigger plan”. The next efficiency frontier is APM: Asset Performance Management. Not individually, but from a holistic point of view. While individual APM metrics are interesting for spotting rogue equipment, only the overall APM metrics matter for the bottom line: did we deliver on time, was the quality on par, at what cost…

Predictive Maintenance – a framework for executives

We typically expect statements like “there’s a 20% chance of part A failing over the coming two weeks” from a predictive analytics solution. More important than the prediction though, is the interpretation of that statement and what it means to operations, maintenance, etc.
Predictions are at the core of predictive maintenance applications. Understanding and, by extension, applying predictions is not a given. The four-axis framework laid out in this blog should allow any executive not only to fully grasp the impact of prediction-driven decisions but also to make sure the whole organisation grasps predictive concepts. The framework covers the risk-based decision process, and it does so in plain language, no math degree required (which should be a relief to most of us).
Accuracy
In plain language, the accuracy of a prediction measures how often the prediction turned out to be correct after the fact. So, if a prediction says a part will fail and accuracy is 20%, you’re over-predicting by a factor of five. For the maintenance teams this means they’ll have to inspect/replace five parts in order to prevent one failure. While this doesn’t sound impressive, there are cases where you still want to go ahead and act. Which is why we introduce a second axis – Criticality.
Criticality
If we take the prediction from above and look at two different situations: in case A, that 20%-accuracy prediction is about in-flight entertainment. While a hassle (and for some airlines a no-go part), we can safely assume that most airlines wouldn’t act on a prediction for this non-critical part at such a low accuracy. However, if in case B the same prediction is made about the landing gear, it may warrant somewhat more attention. An example from everyday life: there’s a 20% chance you caught the flu versus there’s a 20% chance you caught SARS – no doubt the two statements would elicit different reactions, both from the patient and from the doctor! Therefore, please note that not only operational criticality is important, but also safety (in some sectors more so than in others).
Coverage
Which is better? You made 1 failure prediction and it was correct but there were 10 in total (high accuracy but low coverage). Or, you made 30 failure predictions and caught all 10 that actually occurred but at the cost of 20 unnecessary (a posteriori) interventions (high coverage but lower accuracy). Answer: it depends. For instance, criticality plays a big role in determining which is better. Even more than that, it’ll depend on where you are with the project. During phase-in, many clients choose to focus on high accuracy and not so much on coverage. The reason is straightforward: any failure caught pre-emptively is a win and sticking with highly accurate predictions builds trust throughout the organisation about introducing risk-based concepts for maintenance. In order to make a really educated decision, a fourth axis needs to be introduced. I call it effort.
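To make the trade-off concrete, here is a minimal sketch computing accuracy and coverage for the two situations described above (the terms map onto what statisticians call precision and recall):

```python
# Accuracy (precision) versus coverage (recall) for the two situations above.

def accuracy_and_coverage(predictions_made, correct_predictions, actual_failures):
    """Accuracy: share of predictions that turned out to be correct.
    Coverage: share of actual failures that were predicted."""
    accuracy = correct_predictions / predictions_made
    coverage = correct_predictions / actual_failures
    return accuracy, coverage

# Case 1: one prediction, correct, but 10 failures occurred in total.
print(accuracy_and_coverage(1, 1, 10))    # accuracy 100%, coverage 10%
# Case 2: 30 predictions caught all 10 failures, with 20 false alarms.
print(accuracy_and_coverage(30, 10, 10))  # accuracy ~33%, coverage 100%
```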
Effort
It’s actually very tempting to call this fourth axis Cost, but that would be an over-simplification. We’ll just consider Cost to be Financial Effort. Please note that, should you choose to really plot effort on an axis, net effort should be taken into account; i.e. (in a very simple form) the cost of a failure minus the cost of the predictive maintenance action*. If that net figure is negative, I shouldn’t act. Really? What about criticality? If a part is predicted to fail and such a failure could lead to bodily harm, surely I should act. Well, if such is the case, this should be represented in the “cost of failure”, which takes us back to why I prefer referring to Effort instead of Cost. Equally, very low accuracy (e.g. 5%) could lead to a lot of dissatisfaction within the maintenance teams because most of the inspections they carry out lead to NFF (No Fault Found). If such an inspection is as simple as reading out a log file, the Effort is different than if it requires dismantling an engine. Net Effort is therefore both crucial AND very hard to get right.
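One deliberately simplified way to put Net Effort into numbers is an expected-cost comparison. The sketch below is illustrative only; the costs and probabilities are hypothetical, and in practice the ‘failure cost’ has to absorb criticality (safety, operational impact), which is exactly what makes it hard to get right:

```python
# A deliberately simplified 'net effort' check: act on a prediction only if the
# expected cost of acting is lower than the expected cost of doing nothing.
# All figures are hypothetical; 'failure_cost' should include criticality
# (safety, operational impact), not just the repair bill.

def should_act(p_failure: float, intervention_cost: float, failure_cost: float) -> bool:
    expected_cost_if_we_act = intervention_cost          # we inspect/replace anyway
    expected_cost_if_we_wait = p_failure * failure_cost  # failure may or may not occur
    return expected_cost_if_we_act < expected_cost_if_we_wait

# Non-critical part: 20% accurate prediction, modest failure cost -> don't act.
print(should_act(0.20, intervention_cost=2_000, failure_cost=8_000))    # False
# Critical part: same accuracy, but the failure cost reflects a grounded aircraft.
print(should_act(0.20, intervention_cost=2_000, failure_cost=500_000))  # True
```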
Understanding and applying the four axes mentioned above is crucial for the operational deployment of predictive analytics for maintenance. Executives should educate and train themselves to become comfortable with these concepts, and make sure the whole organisation understands them. You’re using a CMMS (Computerised Maintenance Management System)? Great! Keep using it. Predictive analytics only provides an extra, smart layer on top of your operational systems in order to drive actions. In the field, processes don’t really change (though frequencies typically do), but at the decision-taking level it’s like putting on coloured glasses: we need to start looking at aftermarket processes from a risk-based point of view. And while that scares many people – “because they don’t understand statistics” – the four-axis approach described above should demystify things quite a bit.
* Just do a quick internal check-up by asking what the cost of a failure is in your company; most of the time these figures are hard to come by and, if someone has them, they’re typically greatly underestimated. I have yet to come across a case where predictive maintenance has no positive ROI… other than a business not being capable of deploying PdM (i.e. lack of data, wrong processes,…)

One prediction, many users

“Houston, we have a problem” must have carried a different meaning depending on whether you were an astronaut on board Apollo XIII, the astronaut’s family, in mission control or a rocket engineer on the Saturn project. While the example seems obvious, many people have only a vague idea of where to apply predictive maintenance in their business. When we ask whose jobs will be impacted, we very often don’t get beyond “the maintenance engineer”. And how is said maintenance engineer going to use the predictions?

Let’s have a look at some roles/business fields impacted most by predictive maintenance analytics. And let’s start with the above-mentioned maintenance engineer.

Maintenance engineer
The goals of predictive maintenance analytics are multiple, but some of the main ones are: moving unplanned to planned, better scheduling of maintenance, and improved prioritisation of maintenance activities. What the maintenance engineer really wants is an improved worksheet that tells him what to focus on during the upcoming activities. Predictive analytics should therefore drive an ‘intelligent’ worksheet, which combines preventive maintenance with prioritised predictive maintenance activities. The maintenance engineer’s contact with predictive maintenance should involve little more than a revised worksheet.
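As an illustration of what such an ‘intelligent’ worksheet could look like (the fields and figures below are hypothetical), it is essentially the preventive tasks merged with the prediction-driven tasks and sorted so the riskiest, most critical items come first:

```python
# A hypothetical 'intelligent worksheet': preventive tasks merged with
# prediction-driven tasks, sorted so the engineer sees the riskiest items first.

preventive = [
    {"asset": "crane-02", "task": "lubricate hoist", "risk": 0.05, "critical": False},
    {"asset": "belt-04",  "task": "tension check",   "risk": 0.05, "critical": False},
]
predicted = [
    {"asset": "crane-01", "task": "inspect gearbox", "risk": 0.60, "critical": True},
    {"asset": "belt-02",  "task": "replace bearing", "risk": 0.35, "critical": True},
]

worksheet = sorted(preventive + predicted,
                   key=lambda t: (t["critical"], t["risk"]), reverse=True)
for task in worksheet:
    print(f'{task["asset"]:10s} {task["task"]:18s} risk={task["risk"]:.0%}')
```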

Maintenance Scheduler
The maintenance scheduler may be impacted a bit more; instead of spreading out maintenance activities based on counters (time, cycles, mileage,…), predictive maintenance schedules are more dynamic and combine the former (e.g. due to legal requirements) with the predictive information. As a first step, the traditional schedules should be left untouched but activities augmented with predicted visits (improved worksheets). As a second step, the maintenance schedule should be optimised; in fact, predictive analytics isn’t even required for this step, yet I’m puzzled that I never see truly optimised maintenance schedules…! A third step would involve spreading out maintenance visits; if need be, even negotiating with the legislator to allow this within certain limits. This third step will almost certainly require collaboration between OEMs, operators and MROs.

Reliability Engineer
The reliability engineer is crucial for two main benefits of predictive maintenance: understanding the past and improving the future. The reliability engineer is not just interested in the predictions as such, but really in why the predictions were made. The improved insight should allow the reliability engineer to find root causes, define behavioural patterns (you need to find them in order to avoid them), propose solutions, etc. Better insight into what causes certain failures and how well they can be predicted will also allow the reliability engineer to come up with new maintenance scheduling information.

Fleet Planner
Because predictive maintenance analytics focus on maintenance, people tend to forget the main goal lies outside maintenance: optimal uptime at the lowest possible cost. In my book, uptime means more than just guaranteeing equipment can be operated; it should really mean it’s fit for the task. When a large industrial robot has one of its grippers broken but the one required for a specific task works fine, that machine is 100% available for that task, even if from a technical point of view it’s broken…! Giving fleet planners (machine park planners – let’s use fleet as a generic description) deep insight into fleet health allows them to assign the right machine to the right task.

CFO
While the CFO doesn’t generally care about the mechanics of maintenance, they typically do care about the cost of maintenance and about operational risk. Predictive analytics can give insight into both. Maintenance management or the COO may actually be interested in simulating the impact of budget constraints on fleet availability, maintenance effectiveness, etc.

These are just a few examples of the impact of predictive maintenance analytics on different corporate roles; one can easily come up with more. The main lesson of this thought exercise is that PdA impacts the whole organisation; predictions (or derived information) should therefore be presented appropriately so that every role can correctly interpret the results and draw the best conclusions. Whoever has sat through an hour-long discussion between statisticians on the interpretation of a prediction knows what I’m talking about: keep it simple, contextualised and usable!

The steps to predictive maintenance

Predictive maintenance is almost all about data, software, etc. and is therefore, for many maintenance departments, very far from their natural habitat. This scares off many, but in reality it shouldn’t. Modern businesses can no longer function with hard walls between departments. What’s important is that every department knows how what it does influences other departments’ work. This also means that CIOs have to be business experts more than ICT experts. Here’s a quick overview of the main steps towards predictive maintenance.

Collect

Machines generate data – more and more of it. Some business leaders are surprised to find out their machine park has been generating data for many years; only, because they didn’t find a use for it, the data was just stored or (alarmingly often) simply dumped… Collecting data requires very little infrastructure (most of it is done in the cloud). Sometimes equipment requires retrofitting with sensors and data communication capabilities; many IoT or M2M solutions exist today, both from the OEMs and from third-party vendors.
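As a minimal sketch of the collect step, assuming a retrofitted sensor that can be polled locally (the read-out is simulated here), readings can simply be logged with a timestamp; a real deployment would push them to a historian, an IoT platform or the cloud:

```python
# Minimal 'collect' sketch: poll a (simulated) sensor and append timestamped
# readings as JSON lines. Real retrofits would push to an IoT platform instead.
import json
import random
import time
from datetime import datetime, timezone

def read_vibration_mm_s() -> float:
    # Placeholder for an actual sensor read-out (e.g. over Modbus or OPC UA).
    return round(random.gauss(2.5, 0.4), 2)

with open("crane-01_vibration.jsonl", "a") as log:
    for _ in range(5):                      # a handful of samples for the sketch
        record = {
            "asset": "crane-01",
            "ts": datetime.now(timezone.utc).isoformat(),
            "vibration_mm_s": read_vibration_mm_s(),
        }
        log.write(json.dumps(record) + "\n")
        time.sleep(1)
```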

Represent

Once data is collected, interpreting it and presenting it in a meaningful way turns it into information. Many cases are known where this step alone saves companies such huge amounts that, we’re often told, they can’t believe they didn’t take it sooner. Certain patterns, exceptional behaviours, etc. quickly become apparent through a good representation (our brains are wired to recognise patterns – which also sometimes leads us to take wrong decisions…).
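Assuming readings like the ones logged in the collect sketch above, even a simple per-day aggregation already makes drifts and outliers visible (a hypothetical pandas sketch):

```python
# Minimal 'represent' sketch: turn raw readings into a per-day view in which
# drifting averages or outliers become obvious at a glance.
import pandas as pd

# Readings from the collect step (JSON lines with asset, ts, vibration_mm_s).
df = pd.read_json("crane-01_vibration.jsonl", lines=True)
df["ts"] = pd.to_datetime(df["ts"])

daily = (df.set_index("ts")["vibration_mm_s"]
           .resample("1D")
           .agg(["mean", "max", "count"]))
print(daily)
```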

Predict

While BI (Business Intelligence) tools could slice and dice through data and help with the representation, prediction is a whole other matter. Why has prediction come to the forefront recently? Well, the first step – collect – has gotten a big boost through Big Data, and the tools behind the second step – represent – have become really powerful at analysing huge amounts of data across multiple dimensions. So people naturally started wondering not just about the past but whether they could infer future behaviour from the data they were collecting. A vast library of algorithms is available and we’ve got the computing power. The hard part is really to come up with the appropriate algorithm for every situation – and that situation keeps changing all the time (just read some of our other blogs to see how we coped with this).
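Purely as an illustration (not how any particular product works, and real projects stand or fall with feature engineering, leakage checks and validation), a generic classifier trained on historical features and failure labels is enough to produce failure-probability estimates; the data below is synthetic:

```python
# Illustrative 'predict' sketch: train a generic classifier on historical
# per-period features and failure labels, then output failure probabilities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000
# Synthetic features: mean vibration, peak temperature, hours since last service.
X = np.column_stack([
    rng.normal(2.5, 0.5, n),
    rng.normal(70, 8, n),
    rng.uniform(0, 500, n),
])
# Synthetic label: failures become more likely with vibration and service interval.
p_fail = 1 / (1 + np.exp(-(1.5 * (X[:, 0] - 2.5) + 0.01 * (X[:, 2] - 250))))
y = rng.random(n) < p_fail

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]   # failure probability per asset-period
print("highest-risk asset-periods:", np.argsort(proba)[::-1][:5])
```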

Optimise

Predictions are good, but what should you do with them? There has to be a framework to help users make the right decisions: for their own requirements, but relative to corporate objectives. Just reacting to a prediction may have huge negative impacts somewhere else in the company. Solutions should therefore be designed with subject matter experts and business experts in order to come up with correct decision support tools (based on the predictions but also weighing other impacts – operational and financial).
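A minimal sketch of such a decision-support layer: rank the predicted risks, but only recommend action where the expected avoided cost outweighs the cost of intervening. All probabilities and costs below are hypothetical:

```python
# Minimal 'optimise' sketch: turn raw failure probabilities into recommendations
# by weighing expected failure cost against the cost of intervening.
# Costs and probabilities are hypothetical.

alerts = [
    {"asset": "crane-01", "p_fail": 0.60, "failure_cost": 50_000,  "fix_cost": 2_000},
    {"asset": "belt-03",  "p_fail": 0.15, "failure_cost": 8_000,   "fix_cost": 2_000},
    {"asset": "crane-07", "p_fail": 0.05, "failure_cost": 250_000, "fix_cost": 4_000},
]

for a in alerts:
    expected_saving = a["p_fail"] * a["failure_cost"] - a["fix_cost"]
    a["expected_saving"] = expected_saving
    a["action"] = "plan intervention" if expected_saving > 0 else "monitor"

for a in sorted(alerts, key=lambda a: a["expected_saving"], reverse=True):
    print(f'{a["asset"]:10s} {a["action"]:18s} expected saving EUR {a["expected_saving"]:>9,.0f}')
```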

Operationalise

From the optimise step, actions are proposed, which then need to be executed. These maintenance actions set in motion a series of events (from calling the mechanic to ordering spare parts). Close integration between the optimisation and execution steps is necessary to avoid launching unnecessary activities (e.g. sending a mechanic when the spare part is not available) and to trigger an alternative sequence of events instead.
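A sketch of the kind of guard described above, with hypothetical stubs standing in for the inventory and work-order systems: the mechanic is only dispatched once the spare part is confirmed to be in stock; otherwise the part is ordered first:

```python
# Sketch of the 'operationalise' guard described above: only raise a work order
# for the mechanic once the spare part is confirmed in stock; otherwise order
# the part first. The inventory/work-order functions are hypothetical stubs.

def part_in_stock(part_number: str) -> bool:
    stock = {"BRG-6205": 3, "GBX-SEAL-12": 0}      # stand-in for an ERP lookup
    return stock.get(part_number, 0) > 0

def create_work_order(asset: str, part_number: str) -> None:
    print(f"work order created: replace {part_number} on {asset}")

def order_part(part_number: str) -> None:
    print(f"purchase order created for {part_number}; work order deferred")

def operationalise(asset: str, part_number: str) -> None:
    if part_in_stock(part_number):
        create_work_order(asset, part_number)
    else:
        order_part(part_number)

operationalise("crane-01", "BRG-6205")      # mechanic gets dispatched
operationalise("belt-02", "GBX-SEAL-12")    # part ordered first, no wasted visit
```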

As one can see, IT permeates the predictive maintenance environment. But it should remain very clear: maintenance expertise still sits at the core of the solution; IT is merely an enabler. In order to avoid the solution becoming too IT-oriented, it should be designed with the end user in mind. The reliability analyst will have very different needs than the mechanic, for instance. Tuning these solutions towards user needs requires strong involvement of all roles during the design and training phases of a predictive maintenance project. Training should also extend to upper management, who need to be informed about what to expect (both in general and in particular with regard to their equipment/data/use/…).

Looks complex? Well, it isn’t. Today, solutions exist for each of these steps. Most crucial for successfully implementing a predictive maintenance approach is bringing together the right partners – but that’s a discussion point for another blog.

Your data might not be your data.

Previous Predikto bloggers have emphasized the centrality of data in the successful adoption of predictive maintenance initiatives.

Leveraging the full range of equipment operation, environmental and maintenance data, operators can better align the “right” resources to the appropriate mission requirements and/or environmental conditions, and more easily transition to effective implementation of condition-based maintenance programs, which can reduce the financial impact of scheduled maintenance as well as minimize the “fire drills” associated with an unplanned performance degradation or interruption.

Similarly, advantages accrue to manufacturers who can capitalize on “nuggets of gold” in their data.  Better insight into operational performance can help the engineering team gain a more granular, objective understanding of the performance of their products in sustained operation across a broad range of conditions (vs. a controlled test environment) and allow them to further improve product performance and reliability.  Also, delivery of sensor or telematics-enabled products can enable suppliers to move to “power by the hour” business relationships: if the economics are structured properly, the vendor is incented to deliver reliable service and enjoy a long term, predictable revenue stream, and the customer can reduce or eliminate investment in spare parts inventories or the imperative to staff a large maintenance department.

One of the “7 Habits” Stephen Covey made famous was “first things first”.   The first step a prospective predictive maintenance practitioner can take is to understand the availability of their data and secure access to it.   Failure to take this one step has derailed or delayed more than one project.  For example, a large equipment manufacturer had to suspend a major project upon learning the telematics data they expected to use wasn’t theirs, but belonged to the telematics equipment supplier.   In another situation, a major international transportation provider, operating state of the art equipment laden with sensors collecting volumes of operational data, was stunned to learn their ability to extract and analyze the data was impeded by the equipment supplier, whose onboard system only made a handful of cryptic fault codes available when a maintenance event occurred (and prevented access to the message bus).  

The effective application of Predictive Analytics in support of Condition-Based Maintenance holds great potential.  Make sure you own your data.   

A scarce resource

McKinsey, in this study – http://www.mckinsey.com/business-functions/business-technology/our-insights/big-data-the-next-frontier-for-innovation – projects that demand for data science jobs in the United States will exceed 490,000 by 2018. However, only 200,000 data scientists are projected to be available by then… Globally, by the same time, this demand is projected to exceed supply by more than 50 percent.

At the same time, 99% of CEOs indicate that big data analytics is important to their strategies (KPMG study). Beyond big data analytics, the rise of predictive analytics (PdA) creates the need for very advanced data scientists able to model complex concepts. Not surprising, then, that Harvard Business Review describes data science as “the sexiest job of the 21st century” (https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century/).

The data scientist’s job consists of amassing vast amounts of data, sorting through it and, eventually, making sense of it. To do this, they don’t just use old BI (Business Intelligence) tools and techniques but rather rely on the latest statistical technologies, such as neural nets, deep learning, Bayesian statistics, etc. Data science is more than just science and its practitioners are sometimes referred to as “part analyst, part artist”. Being proficient at data science requires a combination of talent, education, creativity and perseverance. It also requires skills in various domains such as math, analytics, statistics and computer science, and, to make sense of all that data, some level of domain expertise as well. So, even though the numbers mentioned above already indicate quite a shortage, the problem may be even worse in areas that require specific domain expertise or at least a thorough understanding of the problems one tries to solve.

Facing this conundrum, one is offered a variety of solutions: run and hope it goes away, throw lots of money at it in order to acquire the (rare) resource, or come up with a smarter solution. The latter is exactly what Predikto has done by automating much of the data scientist’s job. Pure automation will not get us very far, but smart automation with advanced machine learning does! This way, Predikto can afford to tackle predictive analytics challenges with a (much) smaller team and focus on the really important matters. Indeed, while it’s good, very good even, to deliver any kind of predictions (many projects relying on data scientists fail – for a variety of reasons), it’s even more important to be able to correctly interpret those forecasts. Only then can they be turned into actions; without that, they’re no more than a science project.

Faced with the scarcity of arguably one of the most important resources in our industry (computing power is another – and it’s not scarce), Predikto chose to find ways to lower its reliance on that scarce resource. Achieving that successfully is what makes us unique!