
Open source is accelerating AI innovation

24 Mar 2015

Neural-networks-based deep learning is the artificial intelligence (AI) architecture paradigm of the moment.

Over the last five years it has broken established image-classification benchmark records and set new ones, and it is enabling start-up businesses to find better ways to perform intelligence-assisted tasks.
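To ground what these networks compute, here is a minimal sketch of a single fully connected layer's forward pass in plain NumPy. The function name, shapes, and ReLU choice are illustrative, not taken from any particular framework:

```python
import numpy as np

def dense_forward(x, W, b):
    """One fully connected layer: affine transform followed by ReLU."""
    return np.maximum(0.0, x @ W + b)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))   # batch of 4 inputs, 8 features each
W = rng.standard_normal((8, 3))   # weights mapping 8 features to 3 units
b = np.zeros(3)                   # one bias per output unit

y = dense_forward(x, W, b)
print(y.shape)  # (4, 3)
```

Deep networks stack many such layers, which is why the numerical libraries and GPU acceleration discussed below matter so much.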

This progress in AI is being vastly accelerated by the sharing of knowledge and source code in the open source community. The major open source deep learning code libraries are Caffe (C++ with a Python interface), Minerva (Python and C++), Theano (Python), and Torch (Lua).

All of the landmark academic papers on deep learning are freely available. Open source is showing its power to accelerate innovation, and Python is proving to be one of the most popular languages in this field.

Open source Python is the language of choice in science and engineering

Java is a popular language in enterprise application development, but in science and engineering it is Python that practitioners most often turn to today. Java was never a natural fit for numerical analysis and struggled against entrenched Fortran as well as C++ and C#, but in recent years Python has pushed venerable Fortran aside. Python has access to all the numerical libraries scientists and engineers require, such as the Basic Linear Algebra Subprograms (BLAS), which serve as building blocks for numerically intensive calculations. The choice of Python for the Caffe, Minerva, and Theano deep learning libraries indicates how far the language has progressed within this community. The speed with which the open source science and engineering community can evolve the language and its libraries is a key reason for Python's ascendancy.
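The BLAS point can be sketched in one short example: NumPy's dense matrix product dispatches to whatever BLAS implementation is installed on the system (the values below are illustrative):

```python
import numpy as np

# The @ operator (np.matmul) delegates dense matrix-matrix products
# to the system BLAS (the *gemm family of routines).
a = np.arange(6.0).reshape(2, 3)
b = np.arange(12.0).reshape(3, 4)
c = a @ b  # general matrix multiply, backed by BLAS
print(c.shape)  # (2, 4)
```

This delegation is why a high-level scripting language can match compiled code on the numeric kernels that dominate scientific workloads.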

Nvidia’s cuDNN library offers pre-built GPU-ready numeric blocks for deep learning frameworks

Nvidia’s programmable graphic processing units (GPUs) are a significant enabler of deep learning, helping to reduce training time from a month to a day or less. The Nvidia CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives that can be integrated into the higher-level frameworks, including Caffe, Theano, and Torch. It emphasizes performance, ease of use, and low memory overhead.
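The convolution routine is one such primitive. As a hedged sketch of what cuDNN accelerates on the GPU, here is a naive CPU version of 2D convolution (strictly, cross-correlation, as deep learning frameworks compute it) in NumPy; the function name and shapes are illustrative:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2D cross-correlation with 'valid' padding -- the
    operation cuDNN provides as a highly tuned GPU primitive."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Sum of the elementwise product over each sliding window
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)
kernel = np.ones((2, 2))
print(conv2d_valid(image, kernel).shape)  # (3, 3)
```

Replacing loops like these with tuned GPU kernels is where the month-to-a-day training speedups come from.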

Michael Azoff is a principal analyst for IT infrastructure solutions at Ovum. For more information, visit www.ovum.com.
