Researchers have discovered a new method of integrating optical interconnects at the chip level. If successful, such an approach could allow for a significant increase in overall performance, not to mention power savings.
Light-based computing has several intrinsic properties to recommend it. First and foremost, it’s fast. Switching an optical transistor with another optical transistor has a theoretical speed measured in femtoseconds (10⁻¹⁵ s), as compared to the pokey nanoseconds (10⁻⁹ s) we measure performance in today.
The problem with using light to switch light is that it’s also extremely power inefficient and typically functions best over longer distances. Hybrid devices that combine optics and electronics, using electronics for logic and switching and light for actually carrying information, have been difficult to build due to significant differences in scale, as well as the energy losses incurred when converting from light to electricity and back again.
The researchers used a new type of more efficient photonic crystal, allowing them to create both electrical-to-optical and optical-to-electrical devices. The team built both an electro-optical modulator that transmitted data at 40Gb/s and a photoreceiver that operated at 10Gb/s. Power consumption was dramatically lower than in previous hybrid devices, at just 42 attojoules per bit.
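To put those figures in perspective, energy per bit multiplied by data rate gives the power a link would draw while running flat out. A minimal sketch of that arithmetic, using the article's 42 attojoules per bit and 40Gb/s numbers (the function name is illustrative, not from any real API):

```python
# Back-of-the-envelope check: power drawn by an optical link at a given
# energy-per-bit and data rate, using the figures quoted in the article.

def link_power_watts(energy_per_bit_j: float, bit_rate_bps: float) -> float:
    """Power (W) = energy per bit (J) x bits per second."""
    return energy_per_bit_j * bit_rate_bps

ATTOJOULE = 1e-18  # 10^-18 joules

# A 42 aJ/bit modulator driven at 40 Gb/s
power = link_power_watts(42 * ATTOJOULE, 40e9)
print(f"{power * 1e6:.2f} microwatts")  # ~1.68 uW per link
```

At well under two microwatts per link, even thousands of such interconnects would add a negligible amount to a chip's power budget, which is why the energy-per-bit figure matters so much.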
At these speeds and power consumption levels, hybrid optical/electrical systems could potentially be used in future devices to provide interconnects between chips — for example, when maintaining cache coherence between multi-core CPUs. But taking advantage of this capability would also require chips to get bigger. The optical hardware simply can’t be shrunk to the same scale as conventional logic transistors.
There’s no chance of this technology being used to build a full-scale chip; a Core i7 implemented using current optical technology would measure 48 m², which is far too large for the standard ATX form factor. But the idea that making components larger might ultimately allow us to improve performance isn’t crazy.
With Moore’s law transistor density scaling ending and Dennard scaling long since dead, the power efficiency and performance improvements from switching to optical interconnects would presumably be larger than anything still to be eked out from smaller process nodes. That’s particularly likely to be true if you consider that this technology is still years from adoption — and we’ll be well past 5nm by the time any plausible solution could come to market.
- Here’s why we don’t have light-based computing just yet
- IBM to demonstrate first on-package silicon photonics
- DARPA laser scanning: Bending light with a microchip