A new ‘high-rise’ chip breaks computing’s data bottleneck in two ways
One day soon we may live in smart houses that cater to our habits and needs, or ride in autonomous cars that rely on embedded sensors to provide safety and convenience. But today’s electronic devices may not be able to handle the deluge of data such applications portend because of limitations in their materials and design, according to the authors of a Stanford-led experiment recently published in Nature.
To begin with, silicon transistors are no longer improving at their historic rate, which threatens to end the promise of smaller, faster computing known as Moore’s Law. A second and related reason is computer design, say senior authors and Stanford professors Subhasish Mitra and H.-S. Philip Wong. Today’s computers rely on separate logic and memory chips. These are laid out in two dimensions, like houses in a suburb, and connected by tiny wires, or interconnects, that become bottlenecked with data traffic.
Now, the Stanford team has created a chip that breaks this bottleneck in two ways: first, by using nanomaterials not based on silicon for both logic and memory, and second, by stacking these computation and storage layers vertically, like floors in a high-rise, with a plethora of elevator-like interconnects between the “floors” to eliminate delays. “This is the largest and most complex nanoelectronic system that has so far been made using the materials and nanotechnologies that are emerging to leapfrog silicon,” said Mitra.
The team, whose other Stanford members include professors Roger Howe and Krishna Saraswat, integrated over 2 million non-silicon transistors and 1 million memory cells, in addition to on-chip sensors for detecting gases – a proof of principle for other tasks yet to be devised. “Electronic devices of these materials and three-dimensional design could ultimately give us computational systems 1,000 times more energy-efficient than anything we can build out of silicon,” Wong said.
First author Max Shulaker, who performed this work while a PhD candidate at Stanford, is now an assistant professor at MIT and core member of its Microsystems Technology Laboratories. He explained in a single word why the team had to use emerging nanotechnologies and not conventional silicon technologies to achieve the high-rise design: heat. “Building silicon transistors involves temperatures of over 1,000 degrees Celsius,” Shulaker said. “If you try to build a second layer on top of the first, you’ll damage the bottom layer. This is why chips today have a single layer of circuitry.”
The magic of the materials
The new prototype chip is a radical change from today’s chips because it uses multiple nanotechnologies that can be fabricated at relatively low heat, Shulaker explained. Instead of relying on silicon-based transistors, the new chip uses carbon nanotubes, or CNTs, to perform computations. CNTs are sheets of 2-D carbon formed into nanocylinders. The new Nature paper builds on prior ground-breaking work by this team in developing the world’s first all-CNT computer.
The memory component of the new chip also relies on new processes and materials that the team has improved. Called resistive random-access memory (RRAM), it is a type of nonvolatile memory that stores data by changing the resistance of a solid dielectric material; because the information is held as a resistance state, it isn’t lost when the power is turned off.
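As a rough illustration of that principle, here is a toy software model of a single resistive cell. This is a sketch for readers, not code from the paper, and the resistance values are invented; a real cell stores a bit by forming or rupturing a conductive filament in the dielectric, which the model abstracts into a simple state change.

```python
# Toy model of one RRAM cell (an illustration, not the authors' code).
# A bit is stored as the resistance state of the dielectric, so the value
# persists with no power applied. Resistance values here are invented.

class RRAMCell:
    HIGH_RES = 1_000_000  # ohms: high-resistance "reset" state, bit 0
    LOW_RES = 1_000       # ohms: low-resistance "set" state, bit 1

    def __init__(self):
        self.resistance = self.HIGH_RES  # assume cells start in the reset state

    def write(self, bit):
        # In a real cell, a voltage pulse forms or ruptures a conductive
        # filament in the dielectric; here that becomes a state change.
        self.resistance = self.LOW_RES if bit else self.HIGH_RES

    def read(self):
        # Reading senses the resistance without erasing the stored state.
        return 1 if self.resistance < (self.HIGH_RES + self.LOW_RES) / 2 else 0

cell = RRAMCell()
cell.write(1)
assert cell.read() == 1  # the bit survives "power off": no refresh needed
```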
The key in this work is that CNT circuits and RRAM memory can be fabricated at temperatures below 200 degrees Celsius. “This means they can be built up in layers without harming the circuits beneath,” Shulaker said. “This truly is a remarkable feat of engineering,” said Barbara De Salvo, scientific director at CEA-LETI, France, an international expert who was not connected with this project.
The RRAM and carbon nanotube layers are built vertically over one another, creating a new, dense 3-D computer architecture with interleaved layers of logic and memory. By inserting dense vertical wiring between these layers, the 3-D architecture promises to address the communication bottleneck. “In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips,” Saraswat said.
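A back-of-the-envelope model makes the interconnect argument concrete. The sketch below is illustrative only; the lane counts and per-wire data rate are assumptions, not figures from the paper. It compares moving the same block of data over a conventional narrow chip-to-chip bus versus thousands of short vertical wires between stacked layers.

```python
# Toy comparison (invented numbers) of data-transfer time over a narrow
# chip-to-chip bus versus many parallel vertical interconnects.

def transfer_time(bits, lanes, bits_per_lane_per_sec):
    """Seconds to move `bits` of data over `lanes` parallel wires."""
    return bits / (lanes * bits_per_lane_per_sec)

DATA = 8 * 1024**2 * 8   # 8 MiB of sensor readings, in bits
LANE_RATE = 1e9          # 1 Gbit/s per wire (assumed)

# 2-D layout: logic and memory on separate chips joined by a 64-wire bus.
two_d = transfer_time(DATA, lanes=64, bits_per_lane_per_sec=LANE_RATE)

# 3-D layout: memory stacked on logic with ~64,000 vertical wires (assumed).
three_d = transfer_time(DATA, lanes=64_000, bits_per_lane_per_sec=LANE_RATE)

print(f"2-D bus:  {two_d * 1e3:8.3f} ms")
print(f"3-D vias: {three_d * 1e3:8.3f} ms")
print(f"speedup:  {two_d / three_d:.0f}x")
```

The point is not the particular numbers but the scaling: transfer time falls in proportion to the number of parallel wires, and vertical stacking can increase that number by orders of magnitude.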
To demonstrate the potential of the technology, the researchers placed over a million carbon nanotube-based sensors on the surface of the chip, which they used to detect and classify ambient gases.
Because sensing, data storage and computing are layered on top of one another, the chip could measure all of the sensors in parallel and write the readings directly into its memory, generating enormous bandwidth with no risk of a bottleneck: the 3-D design makes it unnecessary to move data between chips. In fact, even though Shulaker built the chip using the limited capabilities of an academic fabrication facility, the peak bandwidth between vertical layers of the chip could potentially approach or exceed the peak memory bandwidth of the most sophisticated silicon-based technologies available today.
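In software terms, that data flow looks something like the hypothetical sketch below: each sensor’s reading lands directly in the memory sitting above it, and processing happens in place, with no off-chip transfer step. The sensor model and threshold are invented for illustration; the real chip does all of this in hardware.

```python
# Schematic sketch of the sense-and-store pipeline (hypothetical; the real
# chip samples its sensors in parallel in hardware, not in a Python loop).

import random

NUM_SENSORS = 1_000_000  # order of magnitude reported for the prototype

def sample_sensor(i):
    """Stand-in for one CNT gas sensor; returns a resistance-like reading."""
    return random.random()

# Each reading is written straight into the RRAM co-located with its
# sensor; there is no serialized hop to a separate memory chip.
on_chip_memory = [sample_sensor(i) for i in range(NUM_SENSORS)]

# On-chip logic then processes the stored readings in place, e.g. flagging
# cells whose reading crossed an (invented) gas-response threshold.
responding = sum(1 for r in on_chip_memory if r > 0.99)
print(f"{responding:,} of {NUM_SENSORS:,} sensors above threshold")
```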
System benefits
The new design provides several simultaneous benefits for future computing systems.
“The new 3-D computer architecture provides dense and fine-grained integration of computing and data storage, drastically overcoming the bottleneck from moving data between chips,” Mitra said. “As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information.”
Energy efficiency is another benefit. “Logic made from carbon nanotubes will be ten times more energy efficient than today’s logic made from silicon,” Wong said. “RRAM can also be denser, faster and more energy-efficient than the memory we use today.”
Thanks to the ground-breaking approach embodied by the Nature paper, the work is getting attention from leading scientists who are not directly connected with the research. Jan Rabaey, a professor of electrical engineering and computer sciences at the University of California, Berkeley, said 3-D chip architecture is such a fundamentally different approach that it may have other, more futuristic benefits to the advance of computing. “These [3-D] structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets,” Rabaey said, adding, “The approach presented by the authors is definitely a great first step in that direction.”
This work was funded by the Defense Advanced Research Projects Agency, National Science Foundation, Semiconductor Research Corporation, STARnet SONIC and member companies of the Stanford SystemX Alliance.
This story is a revised version of a press release by MIT News correspondent Helen Knight.