Over the last few months there’s been increasing interest in extending existing memory technologies, as promising 2.5D solutions like HBM2 have largely failed to appear in-market (at least, not at price points consumers can afford). Micron has been instrumental in extending the lifespan of GDDR5X and in bringing up the upcoming GDDR6. The company recently published a blog post detailing how far it has pushed GDDR5X and how it intends to approach GDDR6. The memory manufacturer believes there’s substantial headroom left in both standards, with GDDR5X/GDDR6 reaching data rates of up to 16Gbps per pin by 2019, double GDDR5’s current maximum.
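To put those per-pin figures in context, here’s a quick sketch of how per-pin data rate translates into aggregate memory bandwidth. The 256-bit bus width is a hypothetical example for illustration (a common width on upper-midrange GPUs), not a figure from Micron’s post:

```python
# Back-of-the-envelope GDDR bandwidth math (illustrative figures only).
# Aggregate bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 bits-per-byte.

def total_bandwidth_gbs(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Return aggregate bandwidth in gigabytes per second."""
    return per_pin_gbps * bus_width_bits / 8

# GDDR5 at 8Gbps per pin on a hypothetical 256-bit bus:
print(total_bandwidth_gbs(8, 256))   # 256.0 GB/s
# GDDR5X/GDDR6 at 16Gbps per pin on the same 256-bit bus:
print(total_bandwidth_gbs(16, 256))  # 512.0 GB/s
```

Doubling the per-pin rate doubles total bandwidth without widening the bus, which is exactly why squeezing more speed out of each pin is attractive.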
Micron has announced that it has hit 16Gbps signaling in its labs. While GDDR5X at that data rate won’t debut at any point in the near future, it’s still a significant achievement for a memory standard once expected to serve as a short-term stopgap between mainstream GDDR5 and next-generation 2.5D memory technologies. GDDR6 is, in many ways, similar to GDDR5X, as shown in the table below:
The primary differences between GDDR5X and GDDR6 are that the latter uses a dual-channel architecture and a different FBGA package and pitch. It’s not clear exactly what advantage the dual-channel approach confers, and Micron is a bit cagey on that point, but it appears to give the company better granularity in certain configurations.
It does not equate to the way we talk about dual- or quad-channel architectures on the CPU side of the semiconductor business, where adding more memory channels is the standard way to improve overall memory bandwidth. GPUs use an array of memory controllers and an entirely different bus structure than CPUs. The advent of dual-channel GDDR6 shouldn’t be read as a doubling of memory bandwidth over “single-channel” GDDR5/5X.
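One way to see what the dual-channel split actually changes is to look at access granularity rather than bandwidth. Per the published JEDEC organizations, a GDDR5X device presents one 32-bit channel while a GDDR6 device presents two independent 16-bit channels, both with a 16n prefetch; treat the numbers below as an illustrative sketch rather than a definitive account of Micron’s design goals:

```python
# Sketch: two narrower channels change the minimum access size, not the
# raw per-device bandwidth (total pins and data rate are unchanged).

def access_granularity_bytes(channel_width_bits: int, prefetch: int) -> int:
    """Minimum bytes transferred per access on one channel."""
    return channel_width_bits * prefetch // 8

# GDDR5X: one 32-bit channel, 16n prefetch -> 64-byte minimum accesses.
print(access_granularity_bytes(32, 16))  # 64
# GDDR6: two independent 16-bit channels, 16n prefetch each -> 32-byte accesses.
print(access_granularity_bytes(16, 16))  # 32
```

Smaller, independently addressable accesses could be the “better granularity in certain configurations” the article alludes to: two requests can be in flight to one chip at once, each moving half as much data.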
Micron’s Kristopher Kido, who wrote the blog post in question, expects that the company will introduce GDDR6 by 2018. That aligns well with SK Hynix, which is planning its own introductions for the same time period. And it strongly suggests that, far from being built for a single client the way GDDR5X effectively was for Nvidia, the new memory standard will show up in hardware from both Team Green and Team Red.
HBM2 was supposed to deliver significant improvements in bandwidth and power consumption. But while GPUs like AMD’s Radeon R9 Nano (built on first-generation HBM) made it clear those benefits absolutely exist, the standard’s failure to breach the consumer market points to long-term pricing problems that clearly aren’t expected to be resolved any time soon.
Now read: How L1 and L2 CPU Caches Work, and Why They’re an Essential Part of Modern Chips