When you’re dealing with big data, supercomputers, and massive servers, a one percent increase in energy and price efficiency can mean massive savings. You can imagine, then, how big of a deal it is that MIT researchers have found a way to replace traditional RAM with flash memory, because “it’s about a tenth as expensive, and it consumes a tenth as much power.”
RAM is where a computer stores the data it's actively working with. The advantage is that traditional DRAM can be read and written at incredible speeds, far faster than flash memory, which is itself orders of magnitude faster than hard disk drives. The disadvantage is that capacity is limited, and DRAM can't hold data unless an electrical charge is constantly supplied to it, which isn't great for power consumption.
Until now, standard operating procedure has been to use DRAM as much as possible in order to keep latency low. Since large amounts of RAM are cost-prohibitive, both in initial cost and in energy use, not all of the necessary information can be stored there. The rest is kept on disk-based drives, which are cheaper and boast huge capacities but slow access times. Only sporadically, typically less than ten percent of the time, does the computer have to reach for that information, but each trip to the disk is a hitch in its giddy-up, and a big one: it's the biggest contributor to slower processing times.
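The reason those rare disk accesses dominate is easy to see with a weighted-average latency calculation. The numbers below are rough, illustrative orders of magnitude chosen for this sketch, not measured figures from the MIT work:

```python
# Hypothetical latencies (illustrative orders of magnitude only):
DRAM_NS = 100           # ~100 ns per DRAM access
DISK_NS = 10_000_000    # ~10 ms per disk access

def effective_latency_ns(miss_rate):
    """Average access time when a fraction `miss_rate` of requests
    fall through to the disk instead of being served from DRAM."""
    return (1 - miss_rate) * DRAM_NS + miss_rate * DISK_NS

# Even a 5% miss rate is completely dominated by the disk term:
# the average access is thousands of times slower than DRAM alone.
print(effective_latency_ns(0.05))
```

With these made-up numbers, a system that touches disk just one access in twenty averages roughly 500 microseconds per access, which is why the occasional disk read, not DRAM speed, sets the pace.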
The MIT researchers found that in a number of common supercomputing workloads, NAND flash, like the kind you might find in your PC's solid-state drive, could act as the RAM for the entire process. Even though its read and write speeds are slower overall, the computer never has to access a disk-based drive, eliminating the longest latency period. The researchers found that as long as a process would otherwise access the disk drive more than five percent of the time, the NAND architecture is a comparable choice for speed, but a much better one for energy management.
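That break-even point can be sketched with the same weighted-average idea: an all-flash design wins once the DRAM-plus-disk average latency exceeds flash's flat latency. The constants below are hypothetical round numbers for illustration; the article's own threshold is five percent, and real break-even depends on actual hardware latencies:

```python
# Hypothetical latencies (illustrative only; real figures vary widely):
DRAM_NS  = 100          # ~100 ns per DRAM access
FLASH_NS = 50_000       # ~50 us per NAND flash access
DISK_NS  = 10_000_000   # ~10 ms per disk access

def dram_plus_disk(miss_rate):
    """Average latency when DRAM serves most accesses but misses go to disk."""
    return (1 - miss_rate) * DRAM_NS + miss_rate * DISK_NS

def flash_only():
    """Average latency when everything lives in flash: slower per access,
    but the disk never enters the picture."""
    return FLASH_NS

# Solve (1 - m)*DRAM + m*DISK = FLASH for the break-even miss rate m:
break_even = (FLASH_NS - DRAM_NS) / (DISK_NS - DRAM_NS)
print(break_even)
```

With these made-up numbers the break-even miss rate works out to about half a percent, so any workload going to disk more often than that would run at least as fast on flash alone, while using far less power than a large bank of DRAM.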
NAND-based flash memory has been a huge boon to data access in both mobile and performance applications. It far outpaces traditional disk drives in speed and power consumption, but it's not without its own issues. With price and availability no longer barriers, flash drives are becoming the norm in systems from smartphones to servers. Their limited speed compared to conventional RAM means they haven't yet found their way into desktop or mobile system memory, but this new development from MIT could start to change that.
The researchers tested the system across three different algorithms: searching for similar images in a huge database, running Google's PageRank, and serving Memcached, a caching system commonly used by large websites. They found that with specially designed chips and software, they could stay competitive with the speeds of traditional DRAM-based systems while cutting power usage and price across the board.
For now, this is basically a proof of concept, but the system wouldn't be too hard to implement in real-world systems. This is the bleeding edge of supercomputing and memory architecture, and the sort of work that may see consumer application down the road, especially as PCIe SSDs become more common.