A California-based start-up has unveiled what it says is the world’s largest computer chip.
The Wafer Scale Engine, designed by Cerebras Systems, is slightly bigger than a standard iPad.
The firm says a single chip can drive complex artificial intelligence (AI) systems in everything from driverless cars to surveillance software.
However, one expert suggested that the innovation would prove impractical to install in many data centres.
Why is the development important?
Computer chips have generally become smaller and faster over the years.
Dozens are typically manufactured on a single silicon “wafer”, which is then cut apart to separate them from each other.
The most powerful desktop CPUs (central processing units) have about 30 processor cores – each able to handle its own set of calculations simultaneously.
GPUs (graphics processing units) tend to have more cores, albeit less powerful ones.
This has traditionally made them the preferred option for artificial intelligence tasks that can be broken down into many parts and run simultaneously, because the outcome of any one calculation does not determine the input to another.
Examples include speech recognition, image processing and pattern matching. The most powerful GPUs have as many as 5,000 cores.
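The independence described above is what makes these workloads a good fit for many-core chips. A minimal sketch (with a made-up `score` function standing in for any per-item calculation, such as scoring one image):

```python
from concurrent.futures import ThreadPoolExecutor

def score(sample):
    # Each call depends only on its own input, so results
    # can be computed in any order - or all at once.
    return sum(sample)

samples = [[1, 2], [3, 4], [5, 6]]

# Sequential version:
sequential = [score(s) for s in samples]

# Parallel version: same answers, because no calculation
# feeds into another.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(score, samples))

assert sequential == parallel
```

Because the two versions must agree, the work can be spread across however many cores are available – 30, 5,000 or, in Cerebras’ case, 400,000.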
But Cerebras’ new chip has 400,000 cores, all linked to each other by high-bandwidth connections.
The firm suggests this gives it an advantage at handling complex machine learning challenges with less lag and lower power requirements than combinations of the other options.
Cerebras claims the Wafer Scale Engine will reduce the time it takes to process some complex data from months to minutes.
Its founder and chief executive Andrew Feldman said the company had “overcome decades-old technical challenges” that had limited chip size.
“Reducing training time removes a major bottleneck to industry-wide progress,” he said.
Cerebras has started shipping the hardware to a small number of customers.
It has not yet revealed how much the chips cost.
What are the disadvantages?
While the chips process information much faster, Dr Ian Cutress, senior editor at the news site AnandTech, said the advances in technology would come at a cost.
“One of the advantages of smaller computer chips is they use a lot less power and are easier to keep cool,” he explained.
“When you start to deal with bigger chips like this, companies need specialist infrastructure to support them, which will limit who can use it practically.
“That’s why it’s suited for artificial intelligence development as that’s where the big dollars are going at the moment.”
Is this the first AI chip?
Cerebras is far from the first company to develop chips to power AI systems.
In 2016, Google developed TPU (tensor processing unit) chips to power software including its language translation app, and now sells the technology to third parties.
The following year, China’s Huawei announced that its smartphone Kirin chips had gained an NPU (neural processing unit) to help speed up the calculation of matrix multiplications – a type of mathematics commonly involved in AI tasks.
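Matrix multiplication is central to AI because a neural-network layer is, at its core, inputs multiplied by a grid of learned weights. A toy illustration in plain Python (the shapes and numbers here are invented for demonstration; real workloads run vast numbers of far larger multiplications):

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

# One sample with two features, fed through a layer
# mapping two inputs to three outputs.
inputs  = [[1.0, 2.0]]
weights = [[0.5, -1.0, 0.0],
           [0.25, 0.5, 1.0]]
outputs = matmul(inputs, weights)  # [[1.0, 0.0, 2.0]]
```

Dedicated units such as NPUs and TPUs accelerate exactly this operation in hardware rather than looping over it as above.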
But not all such efforts have been successful.
In the early 1980s, the US company Trilogy received hundreds of millions of dollars in funding to create its own super-chip.
However, the processors got too hot in testing and were less powerful than initially thought.
Plagued by technical and personal challenges, the company gave up on the project five years later.