Intel Announces ‘Nervana’ Neural Network Processor


Science-fiction authors and modern technology mega-corporations agree on one thing: artificial intelligence is the future. Everyone from Google to Facebook is designing artificial neural networks to tackle big problems like computer vision and speech synthesis. Most of these projects run on existing computer hardware, but Intel has something big on the way. The chip maker has announced what it bills as the first dedicated neural network processor, the Intel Nervana Neural Network Processor (NNP).

A neural network is designed to process data and solve problems in a way that loosely resembles the brain. It consists of layers of artificial neurons, each of which transforms its inputs and passes the result down the line to the next layer in the network. At the end, you have an output shaped by all the transformations applied along the way, which can crack problems that would be impractical to solve by brute-force computation. These systems learn over time by training on large batches of data. This is how Google's DeepMind honed the AlphaGo system that defeated the best human Go players in the world.
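The layer-by-layer flow described above can be sketched in a few lines of Python. The layer sizes, random weights, and activation function here are arbitrary illustrations, not any real model:

```python
import numpy as np

# Minimal sketch of a two-layer feedforward network.
# All sizes and weights are made up for illustration.
rng = np.random.default_rng(0)

def relu(x):
    # Simple activation: pass positive values, zero out the rest.
    return np.maximum(x, 0.0)

# Weights: 4 inputs -> 3 hidden neurons -> 2 outputs.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 2))

def forward(x):
    # Each layer transforms its input and hands the result
    # to the next layer in the network.
    hidden = relu(x @ W1)
    return hidden @ W2

x = np.array([1.0, 0.5, -0.2, 0.3])
output = forward(x)
print(output.shape)  # (2,)
```

Training would then adjust `W1` and `W2` based on how far `output` lands from the desired answer, which is the part that benefits from large batches of data.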

The Nervana NNP is designed from the ground up with this type of computing in mind. It's what's known as an application-specific integrated circuit (ASIC), so it's not useful for general computing tasks. However, if you're trying to run or train a neural network, Nervana could be many times faster than existing hardware. The chip is built to excel at matrix multiplication, convolutions, and the other mathematical operations that dominate neural network workloads.
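Those two operations are worth seeing concretely. A dense layer is just a matrix multiplication, and a convolutional layer slides a small kernel over its input; both are sketched below with made-up numbers purely for illustration:

```python
import numpy as np

def dense(x, W):
    # A dense (fully connected) layer is a matrix multiplication.
    return x @ W

def conv1d_valid(signal, kernel):
    # Naive "valid" 1-D convolution (cross-correlation, as most
    # deep-learning frameworks define it): slide the kernel along
    # the signal and take a dot product at each position.
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel
                     for i in range(len(signal) - k + 1)])

x = np.array([1.0, 2.0])
W = np.array([[0.5, -1.0, 0.0],
              [1.0,  0.5, 2.0]])
print(dense(x, W))                   # [2.5 0.  4. ]

signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([1.0, 0.0, -1.0])
print(conv1d_valid(signal, kernel))  # [-2. -2. -2.]
```

A chip that accelerates exactly these inner products, rather than general-purpose instructions, is the bet Intel is making with an ASIC.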

Interestingly, there’s no cache on the chip like you’d find on a CPU. Instead, Nervana uses a software-defined memory management system that can tune data movement to the needs of the neural network. Intel has also implemented its own numerical format, called Flexpoint. It’s less precise than standard floating-point math, but Intel says that’s no problem for neural networks. They’re naturally resistant to noise, and in some cases noise in the data can even aid training. The lower precision also shrinks each value, helping the chip do more work in parallel, so the overall system can achieve higher bandwidth and lower latency.
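Intel hasn't published Flexpoint's full details, but the general idea behind such formats is block fixed-point: every value in a tensor shares one scale (exponent) and keeps only a small integer mantissa. The sketch below illustrates that general idea, not the actual Flexpoint specification:

```python
import numpy as np

def quantize_shared_exponent(x, mantissa_bits=8):
    # Block fixed-point sketch: all values in the array share a
    # single scale, with individual low-bit integer mantissas.
    # This mimics the *idea* behind formats like Flexpoint; the
    # real format's details differ and are not public in full.
    scale = np.max(np.abs(x)) / (2 ** (mantissa_bits - 1) - 1)
    if scale == 0:
        return x
    return np.round(x / scale) * scale

x = np.array([0.013, -0.5, 0.25, 0.9])
xq = quantize_shared_exponent(x)
# Rounding error is bounded by half the shared step size,
# which acts like a small amount of noise in the network.
print(np.max(np.abs(x - xq)))
```

The small rounding error behaves like injected noise, which is exactly the perturbation neural networks tolerate well, while the 8-bit mantissas take far less silicon and bandwidth than 32-bit floats.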

Intel is not alone in its quest to speed up neural networks. Google has developed cloud-based silicon called Tensor Processing Units, and Nvidia is pushing its GPUs as a solution to neural network processing. Facebook has gotten on board with Intel’s hardware and made some contributions to the design. Intel says the Nervana NNP will ship by the end of 2017.
