Selective memory

Scheme would make new, high-capacity data caches 33 to 50 percent more efficient.

In a traditional computer, a microprocessor is mounted on a “package,” a small circuit board with a grid of electrical leads on its bottom. The package snaps into the computer’s motherboard, and data travels between the processor and the computer’s main memory bank through the leads.

As processors’ transistor counts have gone up, the relatively slow connection between the processor and main memory has become the chief impediment to improving computers’ performance. So, in the past few years, chip manufacturers have started putting dynamic random-access memory — or DRAM, the type of memory traditionally used for main memory — right on the chip package.

The natural way to use that memory is as a high-capacity cache, a fast, local store of frequently used data. But DRAM is fundamentally different from the type of memory typically used for on-chip caches, and existing cache-management schemes don’t use it efficiently.
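
To make the idea of a cache concrete, here is a toy simulator: a minimal sketch in Python (not the researchers' scheme; every name and parameter below is invented for illustration) of a small set-associative cache with least-recently-used replacement, the kind of fast, local store described above.

    from collections import OrderedDict

    class Cache:
        """A tiny set-associative cache model with LRU replacement."""

        def __init__(self, num_sets=64, ways=4, line_size=64):
            self.num_sets = num_sets    # number of sets (rows) in the cache
            self.ways = ways            # lines per set (associativity)
            self.line_size = line_size  # bytes per cache line
            # Each set maps tag -> line; insertion order doubles as LRU order.
            self.sets = [OrderedDict() for _ in range(num_sets)]
            self.hits = 0
            self.misses = 0

        def access(self, address):
            """Look up one byte address; return 'hit' or 'miss'."""
            line = address // self.line_size  # which memory line holds the byte
            index = line % self.num_sets      # which set that line maps to
            tag = line // self.num_sets       # distinguishes lines sharing a set
            s = self.sets[index]
            if tag in s:
                s.move_to_end(tag)            # mark as most recently used
                self.hits += 1
                return "hit"
            # Miss: fetch from (simulated) main memory; evict the LRU line if full.
            if len(s) >= self.ways:
                s.popitem(last=False)         # drop the least recently used line
            s[tag] = None                     # placeholder for the fetched data
            self.misses += 1
            return "miss"

    # Repeatedly touching a small working set shows why caches help:
    cache = Cache()
    for _ in range(1000):
        for addr in range(0, 4096, 64):       # a 4 KB working set; fits easily
            cache.access(addr)
    print(f"hits={cache.hits} misses={cache.misses}")  # nearly all hits after warm-up

After the working set fits, almost every later access hits; the promise of a high-capacity in-package DRAM cache is to deliver that effect for working sets far too large for conventional on-chip caches.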

At the recent IEEE/ACM International Symposium on Microarchitecture, researchers from MIT, Intel, and ETH Zurich presented a new cache-management scheme that improves the data rate of in-package DRAM caches by 33 to 50 percent.

Read more at the MIT News Office.

Larry Hardesty | MIT News Office
October 22, 2017