Intel tunes its mega-chip for machine learning

July 1, 2016
Intel wants to take on Google's Tensor Processing Unit and Nvidia's GPUs in machine learning with improvements to its Xeon Phi mega-chips.

The company will add new features to Xeon Phi to tune it for machine learning, said Nidhi Chappell, director of machine learning at Intel. Machine learning, a much-hyped technology, allows software to be trained so it can handle tasks like image recognition and data analysis more efficiently.

Intel didn't disclose when the new features will arrive, but the next version of Xeon Phi is due by 2018. Intel is already behind its chip rivals in machine learning, so it may have to accelerate the next Xeon Phi release.

Last week, Intel released its latest Xeon Phi chip, Knights Landing, which has up to 72 cores. The chip is still being tested for machine learning, but the company believes it can outperform GPUs on certain machine-learning models.

Xeon Phi can serve as a primary CPU or as a co-processor, but for now, with its wide vector processors and souped-up Atom cores, it is designed more for supercomputing than for machine learning.
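Why does that hardware profile still matter for machine learning? Because the dense matrix multiplications at the heart of neural-network training map directly onto wide vector units. As a rough, hypothetical illustration (not an Intel benchmark), a single NumPy call fans out across every core and vector lane when NumPy is linked against a tuned math library such as Intel's MKL:

    import time
    import numpy as np

    # Dense matrix multiplication dominates neural-network training.
    # With NumPy linked against a tuned BLAS (for example Intel MKL),
    # this one call is spread across all cores and their vector units.
    n = 4096
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)

    start = time.time()
    c = a.dot(b)
    elapsed = time.time() - start

    # Roughly 2*n**3, or about 137 billion, floating-point operations;
    # throughput depends entirely on the BLAS backend underneath.
    print("%dx%d matmul took %.3fs" % (n, n, elapsed))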

Xeon Phi reduces the need to offload machine learning to co-processors, Chappell said. Many machine-learning tasks today run on GPUs, and Google announced its TPU in May.

Xeon Phi can also be paired with a fast interconnect to speed up machine learning, Chappell said. Intel is working to link Xeon Phi with its Omni-Path interconnect, which moves data between servers at very high speed.
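The interconnect matters because training spread across many servers forces every node to exchange gradient updates with its peers at each step, putting network speed on the critical path. Here is a minimal sketch of that synchronization step, written against the generic mpi4py library rather than any Intel-specific software (the gradient values are random placeholders):

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD

    # Each node computes a gradient on its own slice of the training
    # data; random numbers stand in for a real gradient here.
    local_grad = np.random.rand(1000000).astype(np.float32)

    # Allreduce sums the gradients from every node and returns the
    # result to all of them. This collective operation is exactly what
    # a fast fabric between servers accelerates.
    global_grad = np.empty_like(local_grad)
    comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
    global_grad /= comm.Get_size()

Run under an MPI launcher, for example "mpiexec -n 4 python sync.py", each of the four processes ends up holding the same averaged gradient.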

Intel will also add support for more open-source machine-learning software, Chappell said. Intel is building machine-learning models around Caffe, an open-source deep-learning framework. Xeon Phi chips could ultimately support TensorFlow, Google's open-source machine-learning software.
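In practice, "support" means the framework's math kernels run well on the chip; the model code itself stays hardware-agnostic. For instance, here is a bare-bones TensorFlow image classifier in the graph-and-session style the library used at the time. This is purely illustrative, not code Intel or Google published, and it trains on random stand-in data:

    import numpy as np
    import tensorflow as tf

    # A minimal softmax classifier for 28x28-pixel images (784 inputs,
    # 10 classes), written against TensorFlow's 1.x-era graph API.
    x = tf.placeholder(tf.float32, [None, 784])
    y_true = tf.placeholder(tf.float32, [None, 10])

    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    logits = tf.matmul(x, W) + b

    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y_true,
                                                logits=logits))
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(100):
            # Random images and one-hot labels stand in for a dataset.
            xs = np.random.rand(64, 784).astype(np.float32)
            ys = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, 64)]
            sess.run(train_step, feed_dict={x: xs, y_true: ys})

The same script runs unchanged on a CPU, a GPU, or a Xeon Phi; what changes is which hardware TensorFlow dispatches the matrix math to.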

Intel believes its chips could be used in areas like medicine and image processing, where learning models help make sense of data. For example, a chip could help identify a disease, or classify an image by analyzing its pixels.

The goal is to reduce the time it takes to train a specific model and make calculations faster, Chappell said.

Agam Shah
