Cerebras Systems and the federal Department of Energy’s National Energy Technology Laboratory are out with a big announcement today: they claim that the company’s CS-1 system is more than 10,000 times faster than a graphics processing unit (GPU).
Good morning world! We are proud to announce the Cerebras CS-1 system, a purpose-built high performance AI compute solution which houses the Wafer Scale Engine, the world’s largest chip – https://t.co/9Qo2827Vrh #waferscale #buildbigchips #DeepLearning #AI #MachineLearning
— Cerebras Systems (@CerebrasSystems) November 19, 2019
On a practical level, this means AI neural networks that previously took months to train can now train in minutes on the Cerebras system. Cerebras makes the world’s biggest chip, the Wafer Scale Engine (WSE). Chipmakers normally slice a wafer from a 12-inch-diameter ingot of silicon to process in a chip factory. Once processed, the wafer is cut into dozens of separate chips that can be used in electronic hardware.
“CS-1 system uses … wafer-size chip, which has 1.2 trillion transistors, … also 200 times faster than the Joule Supercomputer, which is No. 82 … in the world. … Cerebras …uses 20 kilowatts of power. Joule … consumes 450 kilowatts of power.”https://t.co/cHtw4CtHN5
— Robin Hanson (@robinhanson) November 21, 2020
Cerebras showed that a single wafer-scale Cerebras CS-1 can outperform one of the fastest supercomputers in the US by more than 200 X. The problem was to solve a large, sparse, structured system of linear equations of the sort that arises in modeling physical phenomena https://t.co/kvSV8urXuf
— C.R. Campos (@bcolorb) November 20, 2020
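The workload referenced in the tweet above — a large, sparse, structured system of linear equations arising from a physical model — can be illustrated in miniature. The sketch below solves a small tridiagonal system of the kind produced by a 1D Poisson discretization, using Jacobi iteration; it is a toy stand-in for that class of problem, not the actual code benchmarked on the CS-1.

```python
# Toy example: solve A x = b where A is the n x n tridiagonal matrix with
# 2 on the diagonal and -1 on the off-diagonals (a 1D Poisson stencil),
# via Jacobi iteration. Sparse and structured, like the problem class
# described above — but microscopic by comparison.

def jacobi_tridiagonal(n, b, iters=5000):
    x = [0.0] * n
    for _ in range(iters):
        x_new = [0.0] * n
        for i in range(n):
            left = x[i - 1] if i > 0 else 0.0
            right = x[i + 1] if i < n - 1 else 0.0
            x_new[i] = (b[i] + left + right) / 2.0
        x = x_new
    return x

n = 8
b = [1.0] * n
x = jacobi_tridiagonal(n, b)

# Residual: how far A x is from b after iterating.
residual = max(
    abs(2 * x[i]
        - (x[i - 1] if i > 0 else 0.0)
        - (x[i + 1] if i < n - 1 else 0.0)
        - b[i])
    for i in range(n)
)
print(residual < 1e-6)
```

Iterative solvers like this sweep the grid repeatedly, and each sweep depends on the previous one — part of why such problems are dominated by communication and memory latency rather than raw arithmetic on conventional clusters.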
Yet Cerebras, founded by SeaMicro founder Andrew Feldman, takes that wafer and makes a single, massive chip out of it. Each piece of the chip, dubbed a core, is interconnected to the other cores in a sophisticated way. The interconnections are designed to keep all the cores running at high speed so the transistors can work together as one.
Cerebras’ CS-1 system uses the WSE wafer-size chip, which has 1.2 trillion transistors, the basic on-off electronic switches that are the building blocks of silicon chips. Intel’s first 4004 processor in 1971 had 2,300 transistors, and the Nvidia A100 80GB chip, announced yesterday, has 54 billion transistors.
Los Altos, California-based #STARTUP..and federal Department of Energy’s National Energy #Technology Laboratory announced that the co’s CS-1 system is more than 10,000 times faster than a graphics processing unit (GPU).
*..#AI neural networks that..https://t.co/CWxgWtKOcY #CHIP pic.twitter.com/S33TAwGDvo
— STARTinfoUP (@moueller1961) November 17, 2020
Feldman noted that the CS-1 can finish calculations faster than real time, meaning it can start the simulation of a power plant’s reaction core when the reaction starts and finish the simulation before the reaction ends.
“These dynamic modeling problems have an interesting characteristic,” Feldman said. “They scale poorly across CPU and GPU cores. In the language of the computational scientist, they do not exhibit ‘strong scaling.’ This means that beyond a certain point, adding more processors to a supercomputer does not yield additional performance gains.”
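Feldman’s point about poor strong scaling can be illustrated with Amdahl’s law: with a fixed problem size, if only a fraction f of the work parallelizes, adding processors eventually stops helping. A sketch, using f = 0.95 as an illustrative value rather than a measurement of any real workload:

```python
# Amdahl's law for strong scaling: with a fixed problem size where a
# fraction f of the work parallelizes, speedup on p processors is
#     S(p) = 1 / ((1 - f) + f / p)
# The serial fraction (1 - f) caps the speedup at 1 / (1 - f) no matter
# how many processors are added — the behavior Feldman describes.

def amdahl_speedup(f, p):
    return 1.0 / ((1.0 - f) + f / p)

f = 0.95  # illustrative parallel fraction, not a measured value
for p in [1, 10, 100, 1000, 10000]:
    print(p, round(amdahl_speedup(f, p), 2))
```

With f = 0.95, going from 100 to 10,000 processors adds only a few more x of speedup, since the result can never exceed 1 / (1 − 0.95) = 20x.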