The brave new frontiers of computing

Tinniam V Ganesh/IBM India
21 Mar 2014

The von Neumann architecture and the sequential processing of Turing machines have been the basis of ‘classical’ computers for the last six decades. The juggernaut of technology has seen the semiconductor industry inexorably churn out ever faster and denser processors, substantiating Gordon Moore’s observation that the transistor density of chips doubles roughly every 18 months, now famously known as Moore’s law. Today’s processors carry in excess of a billion transistors, and we are approaching the physical limit on how many transistors a chip can hold. There is now an imminent need to look at alternative paradigms to crack problems of the internet age which classical computing cannot solve.

In the last decade or so, three new and radically different paradigms have surfaced which hold tremendous promise:

i) Deep learning
ii) Quantum computing
iii) Genetic programming

These techniques hold enormous potential and may offer solutions to problems which would take classical computers anywhere from a few years to a few decades to solve.

Deep Learning: Deep Learning is a new area of Machine Learning research, whose objective is to bring Machine Learning closer to one of its original goals, namely Artificial Intelligence. Deep Learning is based on multi-level neural networks called deep neural networks. It works on large sets of unlabeled data, learning lower-level patterns from which it builds higher-level representations, much the same way the human brain does.

Deep learning tries to mimic the human brain. In the visual cortex, for example, signals flow through a sequence of areas, and the feature hierarchy represents the input at increasing levels of abstraction, with more abstract features further up the hierarchy defined in terms of lower-level ones. Deep Learning rests on the same premise: that humans organize ideas hierarchically and compose more abstract concepts from simpler ones.

Deep Learning algorithms generally require powerful processors and work on enormous amounts of data to learn key features. Their defining characteristic is that the input is passed through several non-linearities before the output is generated.
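
To make the idea of stacked non-linearities concrete, here is a minimal sketch of a forward pass in Python with NumPy. The layer sizes, the ReLU activation and the random weights are illustrative assumptions, not a description of any particular system.

    import numpy as np

    def relu(x):
        # The non-linearity applied between layers
        return np.maximum(0.0, x)

    def forward(x, weights, biases):
        # Each layer is a linear transform followed by a non-linearity;
        # stacking several lets later layers build more abstract
        # features out of the simpler ones computed earlier.
        activation = x
        for W, b in zip(weights, biases):
            activation = relu(activation @ W + b)
        return activation

    # Illustrative shapes: a 4-feature input through three layers.
    rng = np.random.default_rng(0)
    sizes = [4, 8, 8, 3]
    weights = [0.1 * rng.normal(size=(m, n)) for m, n in zip(sizes, sizes[1:])]
    biases = [np.zeros(n) for n in sizes[1:]]

    print(forward(rng.normal(size=4), weights, biases))  # 3 output values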

About three years ago, researchers at Google Brain ran a deep learning algorithm on 10 million still images extracted from YouTube videos, using thousands of processors. The system was able to infer, entirely on its own, that a large proportion of the images contained cats. A seemingly trivial result, but of great significance, since the algorithm reached it without being given any labels or other input!
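
The essence of that experiment, learning useful features from unlabeled data, can be sketched with a toy autoencoder: a network trained only to reconstruct its input, so its hidden layer is forced to discover structure in the data on its own. Everything below (the random data, sizes and learning rate) is a made-up miniature; Google Brain’s system was incomparably larger.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 16))        # stand-in for unlabeled data

    n_hidden, lr = 4, 0.01
    W_enc = 0.1 * rng.normal(size=(16, n_hidden))
    W_dec = 0.1 * rng.normal(size=(n_hidden, 16))

    for step in range(500):
        H = np.tanh(X @ W_enc)            # encode: learned features
        X_hat = H @ W_dec                 # decode: reconstruction
        err = X_hat - X                   # note: no labels anywhere
        # Gradient descent on the mean squared reconstruction error
        grad_dec = (H.T @ err) / len(X)
        grad_enc = (X.T @ ((err @ W_dec.T) * (1.0 - H**2))) / len(X)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc

    print(np.mean(err**2))                # falls as training proceeds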

An interesting article in Nature discusses how deep learning has proved useful for several scientific tasks, including handwriting recognition, speech recognition, natural language processing, and the analysis of three-dimensional images of brain slices.

The importance of Deep Learning has not been lost on tech titans like Google, Facebook, Microsoft and IBM, all of which have taken steps to stay ahead in this race.
