Big Data has returned to the spotlight as an end-all solution to supply chain problems, but using data to solve issues has proven far more elusive than collecting it. Or, as the Financial Times put it in 2014, for many “Big Data has arrived, but big insights have not.”
In practice, applying Big Data has been a long, multi-step journey: a mass, rapid influx of new data forces companies to develop systems to process it and, only later, to derive insight from it. Amid the hype, many shoot for the stars in the hope that more data will immediately provide insights.
For some, it may. But applying Big Data for the supply chain requires a deeper sense of purpose.
After all, supply chain managers are already drowning in information to take in and report. If they want to use Big Data to derive big insights, it may help to understand the infrastructure and technology that allowed the concept to emerge in the first place.
What is Big Data?
A 2014 survey of 43 data scientists conducted by the University of California, Berkeley revealed even the definition of Big Data is contentious. Each respondent provided a different answer, debating whether the concept was a process, a tool or a result.
The confusion seemingly boils down to a question of scope: Is the “Big Data revolution” just about the information being collected, or does it include the tools needed to process and apply the new information? The answer, it seems, may depend on the stage at which executives are applying it.
A look at the most-often cited definition of the term, Gartner’s, may provide more clarity. Gartner claims “Big Data is high-volume, high-velocity and/or high-variety information assets that demand cost-effective, innovative forms of information processing that enable enhanced insight, decision making, and process automation (emphasis added).”
By this definition, Big Data as a concept requires three distinct layers before application: more data, processing systems, and analytics. If Big Data only recently entered the supply chain management spotlight, it may be because the technology only recently reached that last layer and began to deliver insights.
Volume, Velocity, Variety: The transition from data to Big Data
Every point of data is an interaction: an item is picked off a shelf, a customer leaves a website, an online review is written, a damaged product is returned. These interactions are present everywhere in the supply chain, but they are not always collected in a meaningful way.
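To make the idea concrete, here is a minimal sketch, in Python, of how one such interaction might be captured as a structured event record. The schema and field names are hypothetical, chosen only for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SupplyChainEvent:
    """One interaction in the supply chain, captured as a structured record."""
    event_type: str      # e.g. "item_picked", "review_written", "product_returned"
    sku: str             # identifier of the product involved
    location: str        # warehouse, store or website where it happened
    timestamp: datetime  # when the interaction occurred

# A damaged product coming back through the returns channel:
event = SupplyChainEvent(
    event_type="product_returned",
    sku="SKU-1042",
    location="DC-Chicago",
    timestamp=datetime.now(timezone.utc),
)
print(event)
```

Collected one at a time, records like this are trivia; collected by the millions, they become the raw material of Big Data.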
In fact, until recently, data at that level of granularity could not be collected, stored or transmitted in any meaningful way. Think of the pedometer, for example. The first versions of these fitness devices could track steps taken and display them on a screen, leaving the information for the wearer to absorb or record independently, and nothing more.
The rapid advance of the Internet, the Cloud and then the Internet of Things changed all of that, giving rise to the first layer of Big Data: high-volume, high-velocity and high-variety, colloquially known as the “three V’s.”
These innovations allowed previously untapped data to be collected. The connectivity of the Internet created an endless stream of new interactions between people and products – establishing correlations where none could previously be seen. The seemingly unlimited storage of the Cloud increased access to data, and provided a place to archive uncollected information. Meanwhile, the Internet of Things bridged the divide between the physical and digital worlds, enabling businesses to automatically collect data from products at a granular level, and more.
In turn, the world was handling, creating and transmitting data at higher rates than ever before, straining technological infrastructure. As a result, technology kept advancing not just to store the data, but eventually to process it for various applications.
Information processing: The rise of analytics platforms
Businesses are no strangers to data; supply chain managers have been producing reports, tracking trends and forecasting for decades. So when data exploded into Big Data, companies were quick to rise to the challenge of collecting it for future use.
“What the CIOs and IT organizations were asked to do, early part of this decade – probably even the latter half of the last decade – was ‘hey there's a lot of value in data, let's actually keep on collecting data,’” Suresh Acharya, head of JDA Labs, told Supply Chain Dive.
But even if a pedometer generates bits and bytes each second, the information created remains unpalatable unless it is stored with previous data to be analyzed over time.
Therein came the need for information processing systems more powerful than spreadsheets. Many of these are now known by their three-letter acronyms (e.g. ERP, CRM, TMS or WMS), but their purpose is similar: to collect, store and simplify information for the average user. Such processors became so ubiquitous that it is now common for a company to boast nine or ten distinct systems supporting supply chain management in a single plant.
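The core job these systems perform can be sketched in a few lines: rolling a raw stream of interactions up into a summary a manager can act on. A minimal illustration in Python, with made-up sample data standing in for what an ERP or WMS would actually hold:

```python
from collections import defaultdict
from datetime import date

# Raw pick events as (sku, day, quantity) tuples -- hypothetical sample data.
picks = [
    ("SKU-1042", date(2017, 3, 1), 4),
    ("SKU-1042", date(2017, 3, 1), 2),
    ("SKU-2210", date(2017, 3, 1), 7),
    ("SKU-1042", date(2017, 3, 2), 5),
]

# Roll the event stream up into a daily total per SKU,
# the kind of simplification a system-generated report provides.
daily_totals = defaultdict(int)
for sku, day, qty in picks:
    daily_totals[(sku, day)] += qty

for (sku, day), total in sorted(daily_totals.items()):
    print(f"{day}  {sku}: {total} units picked")
```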
“Now, when supply chain people think of data, they tend to start to blur the lines a little bit with traditional IT,” Adam Mussomeli, principal of supply chain and manufacturing operations at Deloitte Consulting, told Supply Chain Dive. But it was this technical shift, perhaps, that unlocked the top layer of Big Data: insight generation.
Insight and decision-making: The next frontier
There’s a new wave of data processors on the market promising to reap the benefits of Big Data for supply chains.
Supply chain solutions companies often offer to integrate the various systems of the previous generation, letting companies visualize data sets at every corporate level and gain the granularity and analytical capacity they want from Big Data.
Yet, Big Data is not only the ability to process more information, but the ability to innovate, automate and use data for enhanced decision-making. The toolkit is meant to be applied, not simply possessed.
A look back at our pedometer example may help illustrate the difference between having a software solution and actively applying Big Data. At first, the pedometer could only track information, making it a data generator. If connected to the Cloud and transmitting to a data processor, the device could be said to help generate Big Data. But it was never a Big Data device, because it never actively helped a user make decisions.
Meanwhile, the Fitbit, which tracks steps, heart rate and other biometrics, can analyze and apply the data it collects to guide the wearer toward better health habits; for example, it alerts users when they have been sitting too long and reminds them to take a walk.
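That jump from recording data to prompting a decision can be as simple as a rule applied over recent readings. A minimal sketch, assuming a hypothetical step count and threshold rather than any real device logic:

```python
from typing import Optional

def inactivity_alert(steps_last_hour: int, threshold: int = 250) -> Optional[str]:
    """Nudge the wearer when recent activity drops below a chosen threshold."""
    if steps_last_hour < threshold:
        return "You've been sitting a while: time for a short walk."
    return None

print(inactivity_alert(steps_last_hour=40))   # prints the reminder
print(inactivity_alert(steps_last_hour=600))  # prints None: no nudge needed
```

The data collection is the same in both devices; the difference is the decision layered on top of it.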
The technology to apply Big Data to supply chain management is here, and many companies have begun to reap the benefits. These companies rely on machine learning technology to automatically run reports, alert executives to disruptions and, in some cases, independently suggest changes to optimize processes.
Or not. Various case studies suggest supply chain professionals can unlock the top layer without AI or machine learning technology. But it requires a well-thought-out process: the right data and processing power must be in place, alongside a clear problem to solve and an algorithm to solve it.
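As one illustration of that kind of process, a plain statistical rule (no machine learning involved) can flag a likely supplier disruption from historical lead times. A sketch with hypothetical data, where the problem is "spot late shipments" and the algorithm is a simple two-standard-deviation threshold:

```python
import statistics

# Historical supplier lead times in days -- hypothetical sample data.
lead_times = [5, 6, 5, 7, 6, 5, 6, 14, 5, 6]

mean = statistics.mean(lead_times)
stdev = statistics.stdev(lead_times)

# Flag any shipment more than two standard deviations above the mean.
for i, days in enumerate(lead_times, start=1):
    if days > mean + 2 * stdev:
        print(f"Shipment {i}: {days} days vs. a {mean:.1f}-day average, possible disruption")
```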
Regardless, Big Data is here to stay and supply chain managers should embrace it. It's no surprise the jobs in highest demand, according to Glassdoor, have something to do with data science. Nor is it a coincidence that Supply Chain Management is the 18th best job in the U.S., according to that ranking.