At Predikto, we work with customers who are OEMs, large-scale equipment operators, as well as some smaller operations. The volume of data they push to us ranges from a few megabytes per week to dozens of terabytes per month. Regardless of their transmission volume, every customer is tantalized by the prospect of what deploying predictive maintenance and predictive analytics solutions can do for their bottom line.
In many cases, corporations are hesitant to trust their own available data because the data are perceived as incomplete, or otherwise “dirty.” In other words, they feel that their data could be more valuable and question the utility of the data in its initial form. In fact, a consistent theme within many organizations is the tendency to hyper-focus on perceived shortcomings in operational data when there is incredible value just below the surface.
This acknowledgement is actually a good thing. First, there are always shortcomings in operations data, and results are only as good as the data from which they are derived. Second, it is evidence that a company has done its homework and put real thought into its data situation.
However, the tendency to dismiss data as more deficient than valuable often lacks context. For instance, Predikto takes data from any source. Data may come directly from sensors that take measurements many times per second. Or data might come from maintenance records that are pushed to an EAM client only once every 24 hours. Other times, data from important sensors might be transmitted only when a fault code from some other system is triggered, leaving them sparse and staggered.
The fact of the matter is that one would be mistaken to dismiss data as dirty or lacking in value too quickly. Indeed, data that come in piecemeal can be quite valuable when properly combined with other factors. This contextualization is essential to extracting value from supposedly dirty data.
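As a minimal sketch of what this contextualization can look like in practice, the snippet below aligns high-frequency sensor readings with sparse, once-a-day maintenance records using a time-based join. The column names, timestamps, and work-order values are all hypothetical, and pandas is just one of many tools that could do this; the point is only that staggered feeds become more valuable once placed on a common timeline.

```python
import pandas as pd

# Hypothetical sensor readings (shown hourly here for brevity;
# real feeds may arrive many times per second)
sensors = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2017-03-01 00:00", "2017-03-01 01:00", "2017-03-01 02:00",
        "2017-03-01 03:00", "2017-03-01 04:00", "2017-03-01 05:00",
    ]),
    "vibration_mm_s": [1.1, 1.3, 1.2, 2.8, 2.9, 3.1],
})

# Hypothetical maintenance records pushed from an EAM system once a day
maintenance = pd.DataFrame({
    "timestamp": pd.to_datetime(["2017-02-28 00:00", "2017-03-01 03:00"]),
    "last_work_order": ["WO-101", "WO-102"],
})

# merge_asof attaches, to each sensor reading, the most recent
# maintenance record at or before that reading's timestamp
combined = pd.merge_asof(
    sensors.sort_values("timestamp"),
    maintenance.sort_values("timestamp"),
    on="timestamp",
)
print(combined)
```

Each sensor row now carries the maintenance context in effect at that moment, so a rise in vibration can be read against the last work order rather than in isolation.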