Alternative processors tapped to fulfill supercomputing's need for speed

November 16, 2015
As world powers compete to build the fastest supercomputers, more attention is being paid to alternative processing technologies as a way to add more horsepower to such systems.

One thing is clear: Power and space constraints are making it prohibitively expensive to build blazing-fast CPU-only supercomputers. That's where powerful coprocessors step in -- they work in conjunction with CPUs to handle complex calculations in a power-efficient manner.
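As a rough illustration of that division of labor (a minimal CUDA sketch for this article, not code from any system mentioned here), the host CPU prepares the data, hands it to the coprocessor for the parallel arithmetic, and copies the result back:

    #include <cuda_runtime.h>
    #include <stdio.h>

    // Kernel executed on the coprocessor (GPU): each thread handles one element.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        // Host (CPU) side: allocate and initialize the input data.
        float *x = (float *)malloc(bytes), *y = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // Copy the work into the coprocessor's memory.
        float *dx, *dy;
        cudaMalloc((void **)&dx, bytes);
        cudaMalloc((void **)&dy, bytes);
        cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, y, bytes, cudaMemcpyHostToDevice);

        // Launch thousands of threads at once: the bulk of the arithmetic runs on the GPU.
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

        // Bring the result back to the CPU.
        cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", y[0]);  // expect 4.0

        cudaFree(dx); cudaFree(dy); free(x); free(y);
        return 0;
    }

The heavy floating-point work happens in the kernel running on the accelerator, while the CPU mostly coordinates and moves data -- which is why coprocessors can add so many flops for comparatively little power.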

Coprocessors are an important topic at this week's SC15 supercomputing conference in Austin, Texas. According to the Top500 list of the fastest supercomputers, released on Monday, 104 systems use coprocessors, up from 90 on the previous list, released in July.

The world's fastest computer, China's Tianhe-2, pairs conventional Xeon CPUs with Intel's specialized Xeon Phi coprocessors, which combine many small x86 cores with wide vector units. Of the 104 coprocessor-equipped systems, 70 use Nvidia graphics processors, which also accelerate graphical simulations.

There's also growing interest in FPGAs (field-programmable gate arrays), which excel at specific tasks. Microsoft uses FPGAs to deliver faster Bing search results and believes they could help in high-performance computing. The company said last week that it would make servers based on its Project Catapult design, which pairs FPGAs with CPUs, available to researchers through the Texas Advanced Computing Center in Austin.

The number of supercomputers with coprocessors will continue to grow as more applications are written to take advantage of them, said Nathan Brookwood, principal analyst at Insight 64.

Coprocessors execute highly parallel workloads faster than conventional CPUs, Brookwood said. Running many tasks simultaneously on coprocessors has helped systems reach their performance targets, and that trend will continue.

The immediate goal is to break the 100-petaflop performance barrier, which countries and companies hope to reach by 2018. The U.S. Department of Energy plans to deploy a 180-petaflop supercomputer, code-named Aurora, by 2019. China's Tianhe-2 supercomputer today offers a theoretical peak performance of 54.9 petaflops and a measured Linpack performance of 33.86 petaflops, much of it delivered by its Xeon Phi coprocessors.
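Dividing the measured Linpack figure by the theoretical peak gives the sustained efficiency implied by those two numbers (simple arithmetic on the values above, written out in LaTeX):

    \[
      \frac{R_{\max}}{R_{\text{peak}}} = \frac{33.86\ \text{petaflops}}{54.9\ \text{petaflops}} \approx 0.62
    \]

In other words, the machine sustains roughly 62 percent of its peak rate on the Linpack benchmark.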

GPUs have been used in supercomputers for many years, and Nvidia and AMD are adding more memory and larger frame buffers to their GPUs. FPGAs are more specialized and could find use in machine learning and neural networks, Brookwood said.

"Whether FPGAs are going to be a general trend -- it's too soon to say," Brookwood said.

Faster computers give nations bragging rights about technological progress. Such systems are also vital to national economic, security, scientific and environmental programs. Supercomputers are used to develop advanced weapons, simulate economic models and predict the weather.

But boosting computing performance while reducing power consumption is becoming a challenge in conventional computers. Research is ongoing in alternative computing technologies to push supercomputing into the future. Last week the Los Alamos National Laboratory purchased a quantum computer from D-Wave Systems for research into the future of computing.

Beyond CPUs and coprocessors, other technologies that could boost overall performance are being discussed at the show. Supercomputers could adopt photonics technology, in which light is used to move data faster between CPUs, memory and storage. Also being discussed at the conference are new memory technologies like HMC (Hybrid Memory Cube), which increases capacity and bandwidth by stacking memory dies.

Agam Shah
