If you are wondering what all the fuss is about over ‘big data’, then relax – you are one of a growing band of skeptics asking the very same question.
After all, data in large quantities is nothing new – we have been storing and processing it for years. The focus today seems to be on doing it in real time and, more specifically, on using it to improve the customer experience (whatever that means).
Maybe it has simply taken us time to finally realize the potential of the data we have already been collecting – and not just for regulatory or marketing reasons. By manipulating and molding that data every which way, we can now justify almost anything, and therein lies one of big data’s perception problems.
The perception that a simple query will yield magical results is another misconception that has fooled senior executives. Good analysis needs good analysts, or ‘data scientists’ – and they, in turn, need good guidance from management as to what is actually wanted from the data.
In most cases, a query written for a deep dive into the data could be stored away for future use, but more often it is used once and then forgotten. It seems we have to get the most out of our newfound tools just to justify the investment in them.
Yet, whilst your big data teams are twisting and teasing that endless stream of zeroes and ones to find out if Mr Bloggs likes Corn Flakes or porridge for breakfast, some of your less sexy but equally valuable applications are using that very same data day in and day out to keep your business running, and profitable.
I use the word ‘application’ deliberately, because these are software programs designed for a specific purpose, with just enough flexibility to tweak a query and format the output without impacting the performance of the servers the data resides on, or of the network itself.
Pure business support systems (BSS) have been working this way for years, with billing a good example. The collection of millions of raw call and data session records, their rating and sorting into accounts, and the production of the bills themselves involves massive amounts of data – really big data – and it has been this way for years.
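The rate-and-sort step described above can be sketched in a few lines. This is a minimal illustration only, not a real BSS: the account IDs, record layout and flat per-minute tariff are all hypothetical.

```python
# Minimal sketch of rating raw call records and sorting them into accounts,
# as a billing run does. All names and rates are hypothetical examples.
from collections import defaultdict

RATE_PER_MINUTE = 0.05  # hypothetical flat tariff

def rate_record(record):
    """Price one raw call record by its duration in seconds."""
    minutes = record["duration_sec"] / 60
    return round(minutes * RATE_PER_MINUTE, 4)

def bill_accounts(records):
    """Aggregate rated records into per-account totals."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["account"]] += rate_record(rec)
    return dict(totals)

cdrs = [
    {"account": "A100", "duration_sec": 600},  # 10 min -> 0.50
    {"account": "A100", "duration_sec": 120},  #  2 min -> 0.10
    {"account": "B200", "duration_sec": 300},  #  5 min -> 0.25
]
print(bill_accounts(cdrs))  # {'A100': 0.6, 'B200': 0.25}
```

A production system does the same thing millions of times over, in real time, against tariff tables rather than a single constant – which is exactly why these pipelines count as big data.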
Today the emphasis is on doing all of this in real time, adding complexity and speed to the massive volumes. And while all this is going on, other applications are monitoring all the data again for anomalies that may indicate fraudulent activity or revenue leakage. These are the true masters of big data and they are all basically applications.
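The anomaly monitoring mentioned above can be as simple as flagging usage that deviates sharply from an account's own recent pattern. A minimal sketch, with hypothetical figures and a hypothetical threshold:

```python
# Minimal sketch of usage-anomaly flagging of the kind used in fraud and
# revenue-leakage monitoring. Figures and threshold are illustrative only.
from statistics import mean, pstdev

def flag_anomalies(daily_minutes, threshold=3.0):
    """Return indices of days whose usage exceeds mean + threshold * std dev."""
    mu = mean(daily_minutes)
    sigma = pstdev(daily_minutes)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(daily_minutes)
            if (v - mu) / sigma > threshold]

usage = [30, 28, 35, 31, 29, 33, 400]  # sudden spike on the last day
print(flag_anomalies(usage, threshold=2.0))  # [6]
```

Real fraud systems are far more sophisticated, but the principle is the same: repetitive, high-volume screening of data the business already holds.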
They work best for repetitive, high-volume processing of mission-critical activities. There is no room for error, let alone for ad hoc queries that might impact the real task of keeping the money flowing – all of it.
But how big is big data – really? According to Slater Victoroff, writing for TechCrunch, many people and companies simply lie about it. “Companies brag about the size of their datasets the way fishermen brag about the size of their fish. They claim access to endless terabytes of information. I’ve found it’s a good rule of thumb to assume a company has one one-thousandth of the data they say they do.”
Victoroff claims, and rightfully so, that “even big companies only use a tiny fraction of the data they collect.” He then reels off a number of platforms and applications that manage specific tasks admirably without the need for big data overheads.
If you are a telco, you already have all the information at hand to determine how profitable a tariff plan or customer is; or if a customer is even worth keeping; or if they are thinking of churning; or what effect they have on other customers via social media; or even what it costs to provide a service to each of them – and all of this using applications and existing data.
The whole marketing argument for investing in big data to profile individual customers (an almost impossible task to achieve accurately) pales into insignificance when the real business information is so readily at hand, accessible via the humble app.