Friday, 23 February 2018 17:44

Neural networks everywhere

New chip reduces neural networks’ power consumption by up to 95 percent, making them practical for battery-powered devices. 
Image: Chelsea Turner/MIT

Most recent advances in artificial-intelligence systems such as speech- or face-recognition programs have come courtesy of neural networks, densely interconnected meshes of simple information processors that learn to perform tasks by analyzing huge sets of training data.

But neural nets are large, and their computations are energy intensive, so they’re not very practical for handheld devices. Most smartphone apps that rely on neural nets simply upload data to internet servers, which process it and send the results back to the phone.

Now, MIT researchers have developed a special-purpose chip that increases the speed of neural-network computations by three to seven times over its predecessors, while reducing power consumption by 94 to 95 percent. That could make it practical to run neural networks locally on smartphones or even to embed them in household appliances.

“The general processor model is that there is a memory in some part of the chip, and there is a processor in another part of the chip, and you move the data back and forth between them when you do these computations,” says Avishek Biswas, an MIT graduate student in electrical engineering and computer science, who led the new chip’s development.

“Since these machine-learning algorithms need so many computations, this transferring back and forth of data is the dominant portion of the energy consumption. But the computation these algorithms do can be simplified to one specific operation, called the dot product. Our approach was, can we implement this dot-product functionality inside the memory so that you don’t need to transfer this data back and forth?”

Biswas and his thesis advisor, Anantha Chandrakasan, dean of MIT’s School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science, describe the new chip in a paper that Biswas presented during the International Solid-State Circuits Conference.

Back to analog

Neural networks are typically arranged into layers. A single processing node in one layer of the network will generally receive data from several nodes in the layer below and pass data to several nodes in the layer above. Each connection between nodes has its own “weight,” which indicates how large a role the output of one node will play in the computation performed by the next. Training the network is a matter of setting those weights.

A node receiving data from multiple nodes in the layer below will multiply each input by the weight of the corresponding connection and sum the results. That operation — the summation of multiplications — is the definition of a dot product. If the dot product exceeds some threshold value, the node will transmit it to nodes in the next layer, over connections with their own weights.
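
In software, that per-node computation amounts to a few lines of arithmetic. A minimal sketch in Python (the input values, weights, and threshold below are illustrative, not taken from the researchers' work):

    def node_output(inputs, weights, threshold=0.0):
        """One node's computation: a dot product followed by a threshold test."""
        # Multiply each input by the weight of its connection, then sum the results.
        total = sum(x * w for x, w in zip(inputs, weights))
        # Pass the value along only if it exceeds the threshold.
        return total if total > threshold else 0.0

    # Three illustrative inputs arriving from the layer below.
    print(node_output([0.5, -1.2, 0.8], [0.9, 0.1, -0.4]))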

A neural net is an abstraction: The “nodes” are just weights stored in a computer’s memory. Calculating a dot product usually involves fetching a weight from memory, fetching the associated data item, multiplying the two, storing the result somewhere, and then repeating the operation for every input to a node. Given that a neural net will have thousands or even millions of nodes, that’s a lot of data to move around.

But that sequence of operations is just a digital approximation of what happens in the brain, where signals traveling along multiple neurons meet at a “synapse,” a junction between neurons. The neurons’ firing rates and the electrochemical signals that cross the synapse correspond to the data values and weights. The MIT researchers’ new chip improves efficiency by replicating the brain more faithfully.

In the chip, a node’s input values are converted into electrical voltages and then multiplied by the appropriate weights. Summing the products is simply a matter of combining the voltages. Only the combined voltages are converted back into a digital representation and stored for further processing.

The chip can thus calculate dot products for multiple nodes — 16 at a time, in the prototype — in a single step, instead of shuttling between a processor and memory for every computation.
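
In software terms, the effect is like replacing 16 separate dot-product loops with a single matrix-vector product. The NumPy sketch below is only a digital analogy with made-up sizes; in the chip itself the summation happens in the analog voltage domain:

    import numpy as np

    rng = np.random.default_rng(0)
    inputs = rng.standard_normal(64)          # one layer's input values
    weights = rng.standard_normal((16, 64))   # weights for 16 nodes in the next layer

    # A single operation yields all 16 dot products, rather than
    # shuttling each weight and input between memory and processor.
    outputs = weights @ inputs
    print(outputs.shape)  # (16,)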

All or nothing

One of the keys to the system is that all the weights are either 1 or -1. That means they can be implemented within the memory itself as simple switches that either close a circuit or leave it open. Recent theoretical work suggests that neural nets trained with only two weight values should lose little accuracy — somewhere between 1 and 2 percent.

Biswas and Chandrakasan’s research bears that prediction out. In experiments, they ran the full implementation of a neural network on a conventional computer and the binary-weight equivalent on their chip. Their chip’s results were generally within 2 to 3 percent of the conventional network’s.
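
The principle can be illustrated by taking a full-precision weight vector, keeping only the sign of each weight, and comparing the two dot products. The toy comparison below is a sketch of that idea, not the experiment reported in the paper:

    import numpy as np

    rng = np.random.default_rng(1)
    inputs = rng.standard_normal(256)
    full_weights = rng.standard_normal(256)

    # Binarize: every weight becomes +1 or -1, rescaled by the mean magnitude
    # so the two dot products are on a comparable scale.
    binary_weights = np.sign(full_weights)
    scale = np.abs(full_weights).mean()

    full = inputs @ full_weights
    approx = scale * (inputs @ binary_weights)
    print(full, approx)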

"This is a promising real-world demonstration of SRAM-based in-memory analog computing for deep-learning applications,” says Dario Gil, vice president of artificial intelligence at IBM. "The results show impressive specifications for the energy-efficient implementation of convolution operations with memory arrays. It certainly will open the possibility to employ more complex convolutional neural networks for image and video classifications in IoT [the internet of things] in the future."

Larry Hardesty | MIT News Office
February 13, 2018

Assistant professor in EECS and DMSE is developing materials with novel structures and useful applications, including renewable energy and information storage.


Jennifer Rupp's current ceramics research applications range from battery-based storage for renewable energy, to energy-harvesting systems, to devices used to store data during computation. Photo courtesy of Jennifer Rupp.

Ensuring that her research contributes to society’s well-being is a major driving force for Jennifer Rupp.

“Even if my work is fundamental, I want to think about how it can be useful for society,” says Rupp, the Thomas Lord Assistant Professor of Materials Science and Engineering and an assistant professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT.

Since joining the Department of Materials Science and Engineering in February 2017, she has been focusing not only on the basics of ceramics processing techniques but also on how to further develop those techniques to design new practical devices as well as materials with novel structures. Her current research applications range from battery-based storage for renewable energy, to energy-harvesting systems, to devices used to store data during computation.

Rupp first became intrigued with ceramics during her doctoral studies at ETH Zurich.

“I got particularly interested in how they can influence structures to gain certain functionalities and properties,” she says. During this time, she also became fascinated with how ceramics can contribute to the conversion and storage of energy. The need to transition to a low-carbon energy future motivates much of her work at MIT. “Climate change is happening,” she says. “Even though not everybody may agree on that, it’s a fact.”

One way to tackle the climate change problem is by capitalizing on solar energy. Sunlight falling on the Earth delivers roughly 170,000 terawatts of power, about 10,000 times the world’s average rate of energy consumption. “So we have a lot of solar energy,” says Rupp. “The question is, how do we profit the most from it?”
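
As a rough check (using an outside estimate of roughly 18 terawatts for the world's average rate of primary energy use, a figure not given in the article):

    \frac{170{,}000\ \mathrm{TW}}{18\ \mathrm{TW}} \;\approx\; 9{,}400 \;\approx\; 10^{4}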

To help convert that solar energy into a renewable fuel, her team is designing a ceramic material that can be used in a solar reactor in which incoming sunlight is controlled to create a heat cycle. During the temperature shifts, the ceramic material incorporates and releases oxygen. At the higher temperature, it loses oxygen; at the lower temperature, it regains the oxygen. When carbon dioxide and water are flushed into the solar reactor during this oxidation process, a split reaction occurs, yielding a combination of carbon monoxide and hydrogen known as syngas, which can be converted catalytically into ethanol, methanol, or other liquid fuels.
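
Written for a generic oxygen-storing oxide MO_x (the article does not name the specific ceramic, so this is only a schematic of the two-step cycle it describes), the reactions are:

    \mathrm{MO}_x \;\xrightarrow{\text{high } T}\; \mathrm{MO}_{x-\delta} + \tfrac{\delta}{2}\,\mathrm{O}_2
    \mathrm{MO}_{x-\delta} + \delta\,\mathrm{CO}_2 \;\xrightarrow{\text{lower } T}\; \mathrm{MO}_x + \delta\,\mathrm{CO}
    \mathrm{MO}_{x-\delta} + \delta\,\mathrm{H}_2\mathrm{O} \;\xrightarrow{\text{lower } T}\; \mathrm{MO}_x + \delta\,\mathrm{H}_2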

While the challenges are many, Rupp says she feels bolstered by the humanitarian ethos at MIT. “At MIT, there are scientists and engineers who care about social issues and try to contribute with science and their problem-solving skills to do more,” she says. “I think this is quite important. MIT gives you strong support to try out even very risky things.”

In addition to continuing her work on new materials, Rupp looks forward to exploring new concepts with her students. During the fall of 2017, she taught two recitation sections of 3.091 (Introduction to Solid-State Chemistry), a class that has given thousands of MIT undergraduates a foundation in chemistry from an engineering perspective. This spring, she will begin teaching a new graduate elective on ceramics processing and engineering. It will delve into making ceramic materials not only at the conventional large scale but also as nanofabricated structures and small-scale systems for devices that can store and convert energy, compute information, or sense carbon dioxide and other environmental pollutants.

To further engage with students, Rupp has proposed an extracurricular club for them to develop materials science comic strips. The first iteration is available on Instagram (@materialcomics) and it depicts three heroes who jump into various structures to investigate their composition and, naturally, to have adventures. Rupp sees the comics as an exciting avenue to engage the nonscientific community as a whole and to illustrate the structures and compositions of various everyday materials.

“I think it is important to create interest in the topic of materials science across various ages and simply to enjoy the fun in it,” she says.

Rupp says MIT is proving to be a stimulating environment. “Everybody is really committed and open to being creative,” she says. “I think a scientist is not only a teacher or a student; a scientist is someone of any age, of any rank, someone who simply enjoys unlocking creativity to design new materials and devices.”

This article appears in the Autumn 2017 issue of Energy Futures, the magazine of the MIT Energy Initiative.

Kelley Travers | MIT Energy Initiative
MIT News Office, February 9, 2018

Tuesday, 24 October 2017 14:12

Selective memory

Scheme would make new high-capacity data caches 33 to 50 percent more efficient.


In a traditional computer, a microprocessor is mounted on a “package,” a small circuit board with a grid of electrical leads on its bottom. The package snaps into the computer’s motherboard, and data travels between the processor and the computer’s main memory bank through the leads.

As processors’ transistor counts have gone up, the relatively slow connection between the processor and main memory has become the chief impediment to improving computers’ performance. So, in the past few years, chip manufacturers have started putting dynamic random-access memory — or DRAM, the type of memory traditionally used for main memory — right on the chip package.

The natural way to use that memory is as a high-capacity cache, a fast, local store of frequently used data. But DRAM is fundamentally different from the type of memory typically used for on-chip caches, and existing cache-management schemes don’t use it efficiently.
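
As a software analogy only (on-package DRAM caches are managed in hardware, and the scheme below is not the researchers' design), a cache behaves like a small, bounded store that keeps recently used items close at hand and evicts the rest. A minimal Python sketch of that idea:

    from collections import OrderedDict

    class LRUCache:
        """A tiny least-recently-used cache: fast lookups for hot data, bounded size."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.items = OrderedDict()

        def get(self, key):
            if key not in self.items:
                return None              # miss: fetch from the slower main store
            self.items.move_to_end(key)  # mark as recently used
            return self.items[key]

        def put(self, key, value):
            self.items[key] = value
            self.items.move_to_end(key)
            if len(self.items) > self.capacity:
                self.items.popitem(last=False)  # evict the least recently used entry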

At the recent IEEE/ACM International Symposium on Microarchitecture, researchers from MIT, Intel, and ETH Zurich presented a new cache-management scheme that improves the data rate of in-package DRAM caches by 33 to 50 percent.


Read more at the MIT News Office.

Larry Hardesty | MIT News Office
October 22, 2017