Intel Demos PCIe 5.0 on Upcoming Sapphire Rapids CPUs – ExtremeTech


Intel and Synopsys have jointly announced the first PCIe 5.0 IP interoperability test. The goal of this type of testing is to demonstrate Intel’s commitment to future high-speed interfaces, as well as to create reference platforms for early PCIe 5.0 certification.

It’s also a sign that PCIe 5.0 could show up on motherboards in as little as 12 months, though I think 2022 is a bit more likely than late 2021. If Intel launches Rocket Lake at the end of Q1 2021, as expected, it’s not clear the company would then follow up with Alder Lake in the October / November time frame. Typically, Intel likes to wait a bit longer than that between product cycles. This, in turn, means that the Sapphire Rapids platform, where Intel is expected to debut both PCIe 5.0 and technologies like DDR5, would arrive sometime in 2022. There are also rumors that AMD might not move to DDR5 until around this time, so the timeframe is consistent (even if we don’t know whether the rumor itself is accurate).

Just as PCIe 4.0 doubled PCIe 3.0 bandwidth, PCIe 5.0 is expected to double PCIe 4.0. That works out to roughly 4GB/s for an x1 link, 15.75GB/s for an x4 link, and 63GB/s for an x16 link, all in each direction. This is an interesting set of developments with potentially long-term ramifications.
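The arithmetic behind those figures is simple: per-lane transfer rate times encoding efficiency, divided by eight bits per byte, times the lane count. The sketch below reproduces the math, assuming the published spec rates and treating PCIe 6.0's FLIT-mode efficiency as roughly 1.0 for simplicity; real-world throughput will be lower once protocol overhead is counted.

```python
# Per-direction PCIe bandwidth from the published per-lane transfer rates.
# Encoding efficiency: gens 3-5 use 128b/130b; gen 6 uses PAM4 + FLIT mode,
# simplified here to ~1.0 efficiency.

PCIE_GENS = {
    # gen: (transfer rate in GT/s per lane, encoding efficiency)
    3: (8.0, 128 / 130),
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
    6: (64.0, 1.0),
}

def bandwidth_gbps(gen: int, lanes: int) -> float:
    """Approximate per-direction bandwidth in GB/s for a link."""
    rate, efficiency = PCIE_GENS[gen]
    return rate * efficiency / 8 * lanes  # GT/s -> GB/s (8 bits per byte)

for lanes in (1, 4, 16):
    print(f"PCIe 5.0 x{lanes}: {bandwidth_gbps(5, lanes):.3f} GB/s per direction")
```

Running it gives ~3.94GB/s for x1, ~15.75GB/s for x4, and ~63GB/s for x16, matching the numbers above; doubling the rate for gen 6 yields ~8GB/s per lane.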

Microsoft and Sony have both demonstrated that an SSD with comparatively low performance (compared with DRAM) can dramatically accelerate game performance and loading times. If effective PCIe performance continues to scale, it implies this trend of using fast SSD storage in lieu of RAM could have legs beyond a single console generation. PCIe 6.0 is supposedly targeting a 2021 release date, which means we could see interfaces with support for up to 8GB/s per lane by 2024 – 2025. A PCIe 6.0 x4 link would offer 4x the bandwidth of a PCIe 4.0 x4 connection, allowing for even faster storage configurations and a potential extension of the benefits we’re already seeing today.

There are two things to keep in mind regarding the potential impact future PCIe standards could have on computing. First, while PCIe 5.0 and PCIe 6.0 hit bandwidth levels equivalent to what we expect from modern DRAM, PCI Express’ latencies are much, much higher than RAM’s, even if the two can sustain equivalent bandwidth. That’s not a problem for NAND flash, which also has much higher latency than DRAM, but it’s part of why NAND connected via PCIe can’t completely replace DRAM, no matter how much bandwidth is provided.

Second, the power consumption from these standards could be formidable. AMD’s PCIe 4.0 motherboards draw more power than previous platforms, and while we don’t know how Intel will compare, it stands to reason that PCIe 5.0 and PCIe 6.0 will increase power consumption to some degree.

It’s also interesting to consider what this massive bandwidth boost could mean for AI. Generally speaking, the goal with AI is to keep workloads as close to the chip as possible — moving data across PCIe is a great way to waste tremendous amounts of power. Still, there are likely to be at least some workloads where the ability to leverage this kind of bandwidth would be useful. It will be interesting to see how Nvidia evolves technologies like NVLink if PCIe starts jumping by leaps and bounds.

One thing these improvements are unlikely to change is overall GPU performance. Tests repeatedly show that PCIe improvements have only a small impact on graphics cards, and if we truly quadruple available bandwidth in the next 4-5 years, we’ll be moving much faster than any GPU can match. Expect the biggest impacts in markets like storage and AI.
