Intel is beginning to experiment with so-called neuromorphic chips, which attempt to more closely mimic how a real brain functions.
Coming out of the chip giant’s research lab, the new chip, dubbed the Intel Loihi test chip, consists of 128 computing cores. Each core has 1,024 artificial neurons, giving the chip a total of more than 130,000 neurons and 130 million synaptic connections.
Based on neuron count, the Loihi chip is slightly more complex than a simple lobster's brain. The human brain, by comparison, consists of more than 80 billion neurons. In other words, the chip has a long way to go before it begins to match the complexity of the human brain.
Similar to how neuroscientists think the brain functions, the Loihi chip transmits data through patterns of pulses (or spikes) between neurons.
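Intel hasn't publicly detailed Loihi's neuron model, but the standard textbook illustration of spiking communication is the leaky integrate-and-fire neuron: it accumulates input over time and communicates only through discrete pulses. The sketch below is purely illustrative; the constants and names are assumptions, not Intel's design.

```python
# Illustrative only: a leaky integrate-and-fire (LIF) neuron, the textbook
# model of spiking communication. Loihi's actual neuron model is Intel's
# own design; the constants here are assumptions for the sketch.

def simulate_lif(input_current, threshold=1.0, leak=0.9, steps=50):
    """Return the time steps at which the neuron fires a spike."""
    potential = 0.0
    spikes = []
    for t in range(steps):
        # Membrane potential leaks toward zero, then integrates new input.
        potential = potential * leak + input_current[t]
        if potential >= threshold:
            spikes.append(t)      # emit a spike (a discrete pulse)
            potential = 0.0       # reset after firing
    return spikes

# A steady sub-threshold input produces a sparse, periodic spike train:
print(simulate_lif([0.15] * 50))  # -> [10, 21, 32, 43]
```

Note how information travels as the *timing* of spikes rather than as a continuous value, which is the core difference from conventional chip designs.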
Intel said the chip can adapt and learn on the go. Current cutting-edge machine learning systems rely on deep learning, which requires training models on giant sets of data using huge clusters of computers. The Loihi chip doesn’t need all that intensive training and is “self-learning,” Intel said.
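Intel hasn't published the learning rules Loihi uses. A common local, on-chip learning rule in neuromorphic designs is spike-timing-dependent plasticity (STDP), which adjusts a synapse based on the relative timing of spikes rather than on an offline training pass. The hypothetical sketch below illustrates that general idea, not Intel's implementation.

```python
# A minimal sketch of spike-timing-dependent plasticity (STDP), a common
# local learning rule in neuromorphic hardware. Intel hasn't disclosed
# Loihi's exact rules; this only illustrates "learning on the go"
# without a separate offline training phase.
import math

def stdp_update(weight, pre_spike_t, post_spike_t, lr=0.05, tau=20.0):
    """Adjust one synapse's weight from a single pre/post spike pair."""
    dt = post_spike_t - pre_spike_t
    if dt > 0:    # pre fired before post: strengthen (causal pairing)
        weight += lr * math.exp(-dt / tau)
    elif dt < 0:  # post fired before pre: weaken (anti-causal pairing)
        weight -= lr * math.exp(dt / tau)
    return min(max(weight, 0.0), 1.0)  # keep the weight in [0, 1]

w = 0.5
w = stdp_update(w, pre_spike_t=12, post_spike_t=15)  # causal pair: w rises
print(round(w, 3))  # -> 0.543
```

Because the update depends only on spike timings the synapse itself observes, learning can happen continuously on the device, with no giant dataset or compute cluster required.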
Intel researchers think the chip could be used for devices out in the world that need to learn in real-time: autonomous drones and cars adapting to what’s going on in the environment; cameras looking for a missing person; or stoplights automatically adjusting to traffic conditions.
The spiking nature of the simulated neurons will make the chip run more efficiently than a traditional chip design, Intel said.
“The brain doesn’t communicate as often as you’d think,” said Narayan Srinivasa, senior principal engineer and chief scientist at Intel Labs, in an interview. “This chip doesn’t consume energy until there’s a spike.”
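To see why sparse spiking can save energy, consider an event-driven model in which work is done only when a spike arrives, so cost scales with spike traffic rather than with the total number of neurons. This toy sketch, with all names and numbers made up for illustration, is not how the hardware accounts for energy, but it captures the intuition behind Srinivasa's remark.

```python
# Event-driven processing: work happens only when a spike event arrives,
# so cost tracks spike traffic, not neuron count. Purely illustrative;
# real hardware energy accounting is far more involved.
from collections import deque

def run_event_driven(spike_events, fanout, potentials, threshold=1.0):
    """Process a queue of (neuron_id, weight) spike deliveries."""
    queue = deque(spike_events)
    ops = 0
    while queue:
        target, weight = queue.popleft()
        ops += 1                               # one update per event only
        potentials[target] += weight
        if potentials[target] >= threshold:
            potentials[target] = 0.0           # fire and reset
            queue.extend(fanout.get(target, []))  # deliver to downstream
    return ops

# 1,000 idle neurons, two initial spikes: only a handful of updates occur.
pots = {i: 0.0 for i in range(1000)}
fan = {0: [(1, 0.6)], 1: [(2, 0.6)]}
print(run_event_driven([(0, 0.6), (0, 0.6)], fan, pots))  # -> 3
```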
Intel wouldn’t specify exactly how efficient the chip will be, as the test chip isn’t ready yet. But the company broadly claimed the hardware would be up to 1,000 times more energy-efficient than a chip typically used for training an artificial intelligence system.
Intel expects to receive the first test chip, built on its 14-nanometer process technology, in November. Once it has that first chip, Intel plans to make it available to universities and researchers focused on AI in the first half of 2018.
Even though Intel doesn’t have an actual chip in hand yet, the company has done limited testing of the hardware using a field-programmable gate array (or FPGA), a type of chip that can be reprogrammed on the fly for particular use cases. With the FPGA, Intel has tested applications like path planning (say, finding the most efficient routes from one location to various other locations on a map) and dictionary learning, as sketched below.
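Intel hasn’t described how Loihi approaches path planning. One common neuromorphic formulation propagates a spike “wavefront” outward from a start node, so that each node’s first-spike time encodes its shortest distance. The sketch below mimics that idea with an ordinary breadth-first search over a hypothetical road map; it is an illustration of the technique, not Intel’s algorithm.

```python
# A spike "wavefront" view of path planning: a wave spreads from the start
# node, and each node's first-spike time equals its shortest hop distance.
# This breadth-first sketch mimics the idea in conventional code.
from collections import deque

def spike_wavefront(graph, start):
    """Return each node's first-spike time (shortest hop count) from start."""
    arrival = {start: 0}
    wave = deque([start])
    while wave:
        node = wave.popleft()
        for neighbor in graph[node]:
            if neighbor not in arrival:        # a node spikes only once
                arrival[neighbor] = arrival[node] + 1
                wave.append(neighbor)
    return arrival

# A small hypothetical road map: shortest routes from intersection "A".
roads = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
         "D": ["B", "C", "E"], "E": ["D"]}
print(spike_wavefront(roads, "A"))  # {'A': 0, 'B': 1, 'C': 1, 'D': 2, 'E': 3}
```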
Srinivasa said Intel has been looking at ideas around neuromorphic computing for the past three years, but the company is hardly the first to explore the idea. Most notably, IBM Research has been working for years on a neuromorphic chip it calls TrueNorth, which similarly tries to take advantage of spiking neurons. The TrueNorth chip contains 4,096 cores and 5.4 billion transistors, while drawing only 70 milliwatts of power. The chip simulates a million neurons and 256 million synapses, larger than Intel’s first-generation Loihi test chip. TrueNorth is roughly able to simulate the equivalent of a bee’s brain.
“It’s trying to get as close to the brain as possible, within the limits of today’s inorganic silicon technology,” said IBM chief scientist Dharmendra Modha, who heads the TrueNorth project, in an interview last year.
Some AI experts have expressed skepticism of IBM’s approach. In a 2014 post when IBM announced TrueNorth, Yann LeCun, an early pioneer of deep learning and head of Facebook’s AI research group, wrote that the chip would have difficulty running tasks like image recognition using a deep learning model called convolutional neural networks. In a follow-up 2016 paper, however, IBM said it was able to demonstrate how convolutional networks could run quickly and accurately on its neuromorphic chip.
Srinivasa admitted that Intel’s chip wouldn’t do well with some deep learning models.
“We exploit time,” Srinivasa said. “That’s absent in deep learning.”
Whether or not Intel’s neuromorphic chip experiment ever goes anywhere, its unveiling highlights Intel’s interest in moving beyond the traditional central processing unit (or CPU), a market the company dominates in PCs and data centers. Now, with the growing importance of AI, Intel is trying to embrace other types of chip designs. In 2015, it acquired FPGA maker Altera for $16.7 billion, and last year it acquired AI chip startup Nervana for $400 million. Rival chipmaker Nvidia currently dominates the AI market with its graphics processors.
Srinivasa said it will likely be at least another three to five years before the Loihi chip makes it out of the research lab.