This design can run a large neural network more efficiently than banks of GPUs wired together. Manufacturing and running the chip is a challenge, however, requiring new methods of etching silicon features and a novel water system to keep the chip cool. Cerebras also had to solve another engineering problem: how to efficiently get data into and out of the chip. While regular chips have their own onboard memory, Cerebras built an off-chip memory box called MemoryX. It also developed software that allows a neural network to be stored partially in that off-chip memory, with only the computations transferred to the silicon chip, as well as a hardware-and-software system called SwarmX that wires everything together.

Demler also says that the actual performance of the chip, in terms of speed and efficiency as well as cost, is still unknown; Cerebras has not published benchmark results. There is a lot of engineering in the new MemoryX and SwarmX technology, he acknowledges. “But, just like the processor, it’s highly specialized stuff; this only makes sense to train the very largest models.”

So far, Cerebras chips have been adopted by labs with supercomputing-scale needs. Early customers include Argonne National Laboratory, Lawrence Livermore National Laboratory, pharmaceutical companies such as GlaxoSmithKline and AstraZeneca, and what Feldman calls “military intelligence” organizations. The Cerebras chip can also be used for work beyond neural networks: these labs run massive, parallel mathematical operations, and, Demler says, they are always looking for more computing power. Kanter says he believes in data-centric machine learning, in which larger data sets enable the creation of larger models with more parameters.
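The MemoryX idea described above, keeping most of a model's parameters in an external memory and bringing them onto the chip only when needed, can be illustrated with a toy sketch. The code below is not Cerebras software; it is a minimal Python illustration, with a hypothetical OffChipParameterStore and run_layer_on_chip standing in for the external memory box and the wafer-scale chip, of how weights held off-chip could be streamed to an accelerator one layer at a time while the computation itself stays on the chip.

import numpy as np

class OffChipParameterStore:
    """Stand-in for an external memory box (MemoryX-like) holding all weights."""
    def __init__(self, layer_shapes):
        rng = np.random.default_rng(0)
        self.weights = [rng.standard_normal(shape) for shape in layer_shapes]

    def fetch(self, layer_index):
        # In a real system this would be a high-bandwidth transfer to the chip.
        return self.weights[layer_index]

def run_layer_on_chip(activations, weights):
    """Stand-in for the compute that happens on the wafer-scale chip."""
    return np.maximum(activations @ weights, 0.0)  # matmul + ReLU

def forward_pass(inputs, store, num_layers):
    activations = inputs
    for i in range(num_layers):
        w = store.fetch(i)                               # stream one layer's weights in
        activations = run_layer_on_chip(activations, w)  # compute stays on the chip
    return activations

if __name__ == "__main__":
    shapes = [(64, 128), (128, 128), (128, 10)]
    store = OffChipParameterStore(shapes)
    x = np.ones((4, 64))
    print(forward_pass(x, store, len(shapes)).shape)  # (4, 10)

The point of this sketch is that only one layer's weights need to be resident on the chip at any moment, so the size of the model being trained is bounded by the external store rather than by on-chip memory.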
