Big data is getting bigger

Tinniam V. Ganesh
13 Dec 2011
 
It is here that big data enters the picture. Big data enables the management of the 3 V's of data, namely volume, velocity and variety. As mentioned above, the volume of data is growing at an exponential rate and should exceed 200 exabytes by 2015. The rate at which data is generated, or its velocity, is also growing phenomenally, given the variety and the number of devices that are connected to the network. Besides, there is tremendous variety to the data: it can be structured, semi-structured or unstructured, and logs alone can arrive as plain text, CSV, XML, JSON and so on. It is these 3 V's that make big data techniques the most suitable for crunching this enormous proliferation of data at the velocity at which it is generated.
 
Big data: Big data, or analytics, deals with the algorithms that analyze petabytes of data and identify key patterns in them. The patterns so identified can then be used to make important predictions about the future. For example, big data has been used by energy companies to identify the best locations for positioning their wind turbines; pinpointing the precise location requires that petabytes of data be crunched rapidly and the appropriate patterns identified. There are many other applications of big data, from identifying brand sentiment on social media and inferring customer behavior from click exhaust, to identifying optimal power usage by consumers.
 
The key difference between big data and traditional processing methods lies in the volume of data that has to be processed and the speed with which it has to be processed. As mentioned before, the 3 V's of volume, velocity and variety make traditional methods unsuitable for handling this data. In this context, besides the key algorithms of analytics, another player is extremely important in big data – Hadoop. Hadoop is a processing framework that achieves tremendous parallelization of the task at hand.
 
The Hadoop ecosystem – Hadoop has its origins in Google's work on the Google File System (GFS) and the MapReduce programming paradigm.
 
HDFS and MapReduce: Hadoop, in essence, is the Hadoop Distributed File System (HDFS) plus the MapReduce paradigm. A Hadoop cluster is made up of anywhere up to thousands of distributed commodity servers. Data is stored in HDFS in blocks of 64 MB or 128 MB, and each block is replicated on two or more servers to maintain redundancy. Since Hadoop runs on regular commodity servers, which are prone to failure, fault tolerance is included by design. The MapReduce paradigm breaks a job into multiple tasks which are executed in parallel. First, the "map" part processes the input data and emits a set of key-value pairs. The "reduce" part then scans these pairs and generates a consolidated output. For example, the "map" part could count the occurrences of different words in different sets of files and emit each word and its count as a pair; the "reduce" part would then sum up the counts for each word from the individual "map" outputs and provide the total occurrences of that word across all the files.
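To make the word-count example concrete, here is a minimal sketch along the lines of the standard Apache Hadoop word-count example, written against Hadoop's Java MapReduce API. The class names and the command-line arguments (args[0] and args[1] as HDFS input and output paths) are illustrative choices, not part of the article above; the sketch only shows the shape of a map step and a reduce step, not a production job.

// WordCount.java – a sketch of the word-count example on Hadoop's Java MapReduce API
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // The "map" part: for every line of input, emit the pair (word, 1) for each word found.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);   // emit the key-value pair (word, 1)
      }
    }
  }

  // The "reduce" part: for every word, sum the counts emitted by all the mappers.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);   // emit (word, total count across all input files)
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // optional local aggregation on each mapper node
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));     // input directory in HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1]));   // output directory in HDFS
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Hadoop splits the input files stored in HDFS into blocks and runs one map task per split in parallel across the cluster; it then groups the emitted pairs by word, so that each reduce task receives all the counts for its share of the words and produces the consolidated totals.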
 
