Socializing Big Data through BRPs

September 11, 2013


To start, let’s consider a basic distinction between two types of organizational processes. Following Sig over at Thingamy, organizations run two basic kinds of processes: easily repeatable processes (ERPs) and barely repeatable processes (BRPs).

ERPs: Processes that handle resources, from human (hiring, firing, payroll and more) to parts and products through supply chains, distribution and production.

BRPs: Typically exceptions to the ERPs, anything that involves people in non-rigid flows through education, health, support, government, consulting or the daily unplanned issues that happen in every organisation.

As I noted in Social Learning and Exception Handling, BRPs produce business exceptions and take up almost all of the time employees spend at work. Interestingly, much of the writing I see on Big Data is about making ERPs more efficient or about predicting when a BRP is likely to occur. In other words, both goals are really about making the coordination of organizational effort more efficient and/or effective.

How organizations coordinate their activities is essential to the way they function. What makes sense for the organization’s internal processes may not make sense in its ecosystem, and vice versa. These are distinctions that analysts of Big Data often fail to consider.

For example, in The Industrial Internet: The Future is Healthy, Brian Courtney notes the following about the use of sensors in industrial equipment and the benefits of storing the resulting readings at big data scale.

Data science is the study of data. It brings together math, statistics, data engineering, machine learning, analytics and pattern matching to help us derive insights from data. Today, industrial data is used to help us determine the health of our assets and to understand if they are running optimally or if they are in an early stage of decay. We use analytics to predict future problems and we train machine learning algorithms to help us identify complex anomalies in large data sets that no human could interpret or understand on their own [my emphasis].

The rationale behind using data science to interpret equipment health is so we can avoid unplanned downtime. Reducing down time increases uptime, and increased uptime leads to increases in production, power, flight and transportation. It ensures higher return on assets, allowing companies to derive more value from investment, lowering total cost of ownership and maximizing longevity.

In other words, Courtney’s analysis assumes that the big data generated by sensors constantly measuring key indicators on a piece of equipment ensures less downtime and more uptime, which in turn increases production, power, flight and transportation. Yet the implied causal relationship doesn’t hold in all cases, especially those involving barely repeatable processes (BRPs) that produce business exceptions. It is in BRPs that the real usefulness of big data shows itself, but not on its own. As danah boyd and Kate Crawford note in Critical Questions for Big Data, “Managing context in light of Big Data will be an ongoing challenge.”
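To make the kind of anomaly detection Courtney describes a little more concrete, here is a minimal sketch. It assumes Python, scikit-learn, and invented sensor readings, none of which come from Courtney’s piece; it is an illustration of the general technique, not his method. An isolation forest can flag statistically unusual readings across a large set of sensor data, but deciding what each flagged reading means and what to do about it is exactly the kind of exception handling that remains a BRP.

```python
# Minimal sketch of sensor-anomaly detection (assumes scikit-learn;
# the equipment, readings, and thresholds here are hypothetical).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated hourly readings from one piece of equipment:
# temperature (deg C) and vibration (mm/s).
normal = rng.normal(loc=[70.0, 2.0], scale=[2.0, 0.3], size=(1000, 2))
faulty = rng.normal(loc=[85.0, 5.0], scale=[2.0, 0.5], size=(10, 2))
readings = np.vstack([normal, faulty])

# Fit an isolation forest to flag statistically unusual readings.
model = IsolationForest(contamination=0.01, random_state=42)
labels = model.fit_predict(readings)  # -1 = anomaly, 1 = normal

anomalies = readings[labels == -1]
print(f"Flagged {len(anomalies)} of {len(readings)} readings as anomalous")

# The algorithm only surfaces the exceptions; interpreting them and
# deciding on a response is the barely repeatable part.
```

The point of the sketch is that the automated step ends where the exception begins: the model can narrow thousands of readings down to a handful worth looking at, but the context needed to act on them still has to come from people.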
