One of the critical deficits Intel has against its competition in the server space is core count – other companies are enabling more cores by one of two routes: smaller cores, or individual chiplets connected together. At its Architecture Day 2021, Intel disclosed features of its next-gen Xeon Scalable platform, one of which is the move to a tiled architecture. Intel is set to combine four tiles/chiplets through its fast embedded bridges, leading to better CPU scalability at higher core counts. As part of the disclosure, Intel also expanded on its new Advanced Matrix Extensions (AMX) technology, CXL 1.1 support, DDR5, PCIe 5.0, and an Accelerator Interfacing Architecture that may lead to custom Xeon CPUs in the future.

What is Sapphire Rapids?

Built on an Intel 7 process, Sapphire Rapids (SPR) will be Intel’s next-generation Xeon Scalable server processor for its Eagle Stream platform. Using its latest Golden Cove processor cores which we detailed last week, Sapphire Rapids will bring together a number of key technologies for Intel: Acceleration Engines, native half-precision FP16 support, DDR5, 300-Series Optane DC Persistent Memory, PCIe 5.0, CXL 1.1, a wider and faster UPI, its newest bridging technology (EMIB), new QoS and telemetry, HBM, and workload specialized acceleration.

Set to launch in 2022, Sapphire Rapids will be Intel’s first modern CPU product to take advantage of a multi-die architecture, one that aims to minimize latency and maximize bandwidth thanks to its Embedded Multi-Die Interconnect Bridge (EMIB) technology. This allows for more high-performance cores (Intel hasn’t said how many just yet), with the focus on ‘metrics that matter for its customer base, such as node performance and data center performance’. Intel is calling SPR the ‘Biggest Leap in DC Capabilities in a Decade’.

The headline benefits are easy to rattle off. PCIe 5.0 is an upgrade over the previous generation Ice Lake’s PCIe 4.0, and on the memory side we move from eight 64-bit channels of DDR4 to eight 64-bit channels of DDR5. But the bigger improvements are in the cores, the accelerators, and the packaging.
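For a sense of scale, here is a quick back-of-the-envelope calculation of what those platform changes mean for raw bandwidth. The DDR5-4800 transfer rate assumed below is our own illustrative figure rather than a number Intel quoted at Architecture Day; a minimal sketch in C:

    /* Rough peak-bandwidth math for the platform changes above.
     * DDR5-4800 is an assumed transfer rate, for illustration only. */
    #include <stdio.h>

    int main(void) {
        const double mt_per_s = 4800e6;  /* assumed DDR5 transfer rate */
        const int    channels = 8;       /* 64-bit DDR5 channels       */
        const double bytes    = 8.0;     /* 64 bits per transfer       */
        double dram_gbs = channels * bytes * mt_per_s / 1e9;

        /* PCIe 5.0: 32 GT/s per lane, 128b/130b encoding, per direction */
        double lane_gbs = 32.0 * (128.0 / 130.0) / 8.0;
        double x16_gbs  = 16 * lane_gbs;

        printf("Peak DRAM bandwidth:   %.1f GB/s\n", dram_gbs); /* ~307 GB/s */
        printf("PCIe 5.0 x16, one way: %.1f GB/s\n", x16_gbs);  /* ~63 GB/s  */
        return 0;
    }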

Golden Cove: A High-Performance Core with AMX and AIA

By using the same core design for its enterprise platform (Sapphire Rapids) and its consumer platform (Alder Lake), Intel gets some of the same synergies we saw back in the early 2000s, the last time it shared a core design across both markets. We covered Golden Cove in detail in our Alder Lake architecture deep dive; however, here’s a quick recap:

The new core, according to Intel, will offer a +19% IPC gain in single-threaded workloads compared to Cypress Cove, which was Intel’s backport of Ice Lake. This comes down to some big core changes, including:

  • 16B → 32B length decode
  • 4-wide → 6-wide decode
  • 5K → 12K branch targets
  • 2.25K → 4K μop cache
  • 5 → 6 wide allocation
  • 10 → 12 execution ports
  • 352 → 512-entry reorder buffer

The goal of any core is to process more things faster, and the newest generation tries to do it better than before. A lot of Intel’s changes make sense, and those wanting the deeper details are encouraged to read our deep dive.

There are some major differences between the consumer version of this core in Alder Lake and the server version in Sapphire Rapids. The most obvious one is that the consumer version does not have AVX-512, whereas SPR will have it enabled. SPR also has a 2 MB private L2 cache per core, whereas the consumer model has 1.25 MB. Beyond this, we’re talking about Advanced Matrix Extensions (AMX) and a new Accelerator Interface Architecture (AIA).
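Because the same Golden Cove core ships with different features enabled on desktop and server, software has to probe for AVX-512 and AMX at runtime rather than assume they are present. Below is a minimal sketch of that check using CPUID leaf 7; the bit positions follow Intel’s published CPUID enumeration, and are worth double-checking against the SDM before relying on them:

    /* Minimal runtime feature probe for AVX-512F and the AMX bits in
     * CPUID leaf 7, sub-leaf 0. Compile with GCC or Clang on x86-64. */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
            return 1;

        int avx512f  = (ebx >> 16) & 1;   /* AVX-512 Foundation       */
        int amx_bf16 = (edx >> 22) & 1;   /* AMX tile BF16 operations */
        int amx_tile = (edx >> 24) & 1;   /* AMX tile architecture    */
        int amx_int8 = (edx >> 25) & 1;   /* AMX tile INT8 operations */

        printf("AVX-512F %d, AMX-TILE %d, AMX-BF16 %d, AMX-INT8 %d\n",
               avx512f, amx_tile, amx_bf16, amx_int8);
        return 0;
    }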

So far in Intel’s CPU cores we have scalar operation (normal) and vector operation (AVX, AVX2, AVX-512). The next stage up from that is a dedicated matrix solver, or something akin to a tensor core in a GPU. This is what AMX does, by adding a new expandable register file with dedicated AMX instructions in the form of TMUL instructions.

AMX uses eight large two-dimensional tile registers (up to 1 KB each) for its basic data operands, and through memory references, the TMUL instructions will operate on tiles of data held in those registers. TMUL is supported through a dedicated engine co-processor built into the core (of which each core has one), and the basis behind AMX is that TMUL is only one such co-processor. Intel has designed AMX to be wider-ranging than simply this – in the event that Intel goes deeper with its silicon multi-die strategy, at some point we could see custom accelerators being enabled through AMX.
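To make the programming model concrete, here is a minimal sketch of an INT8 tile multiply using the AMX intrinsics that GCC and Clang expose behind -mamx-tile -mamx-int8. The 16-row by 64-byte tile shape is simply the maximum of the first AMX palette, and on Linux the OS must first be asked to enable tile state (an arch_prctl permission request, omitted here for brevity); treat this as an illustration, not tuned code:

    /* Minimal AMX/TMUL sketch: one 16x16 INT32 tile accumulated from two
     * 16x64-byte INT8 tiles. Build with -mamx-tile -mamx-int8. */
    #include <immintrin.h>
    #include <stdint.h>
    #include <string.h>

    /* 64-byte tile configuration block for palette 1. */
    struct tile_config {
        uint8_t  palette_id;
        uint8_t  start_row;
        uint8_t  reserved[14];
        uint16_t colsb[16];   /* bytes per row for each tile register */
        uint8_t  rows[16];    /* rows for each tile register          */
    };

    static int8_t  A[16][64];   /* left INT8 tile                     */
    static int8_t  B[16][64];   /* right INT8 tile (VNNI-interleaved) */
    static int32_t C[16][16];   /* INT32 accumulator tile             */

    void int8_tile_matmul(void) {
        struct tile_config cfg;
        memset(&cfg, 0, sizeof(cfg));
        cfg.palette_id = 1;
        for (int t = 0; t < 3; t++) { cfg.colsb[t] = 64; cfg.rows[t] = 16; }

        _tile_loadconfig(&cfg);   /* program the tile shapes               */
        _tile_loadd(1, A, 64);    /* tmm1 <- A (stride 64 bytes)           */
        _tile_loadd(2, B, 64);    /* tmm2 <- B                             */
        _tile_zero(0);            /* tmm0 <- 0                             */
        _tile_dpbssd(0, 1, 2);    /* TMUL: tmm0 += tmm1 . tmm2 (INT8->INT32) */
        _tile_stored(0, C, 64);   /* write the INT32 results back          */
        _tile_release();          /* return tile state to init             */
    }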

Intel confirmed that we shouldn’t see any frequency dips worse than AVX – there are new fine-grained power controllers per core for when vector and matrix instructions are invoked.

This feeds quite nicely into discussing AIA, the new accelerator interface. Typically when using add-in accelerator cards, commands must navigate between kernel and user space, set up memory, and direct any virtualization between multiple hosts. The way Intel describes its new Acceleration Engine interface is akin to talking to a PCIe device as if it were simply an accelerator on board the CPU, even though it is attached through PCIe.

Initially, Intel will have two pieces of AIA-capable hardware.

Intel QuickAssist Technology (QAT) is one we’ve seen before, as it was showcased inside special variants of the Skylake Xeon chipset (which required a PCIe 3.0 x16 link) as well as on an add-in PCIe card – this version will support up to 400 Gb/s symmetric cryptography, or up to 160 Gb/s compression plus 160 Gb/s decompression simultaneously, double the previous version.

The other is Intel’s Data Streaming Accelerator (DSA). Intel has had documentation about DSA on the web since 2019, describing it as a high-performance data copy and transformation accelerator for streaming data from storage and memory, or to other parts of the system, through a DMA-remapping hardware unit/IOMMU. DSA has been a request from specific hyperscaler customers, who are looking to deploy it within their own internal cloud infrastructure, and Intel is keen to point out that some customers will use DSA, some will use Intel’s new Infrastructure Processing Unit, and some will use both, depending on what level of integration or abstraction they are interested in. Intel told us that DSA is an upgrade over the Crystal Beach DMA engine that was present on the Purley (SKL+CLX) platforms.
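As a rough illustration of what AIA-style user-space work submission looks like for DSA, the sketch below builds a 64-byte memory-move descriptor and submits it to a work queue portal with a single MOVDIR64B store. The device path, the descriptor layout from the Linux idxd uapi header, and the flag choices are all assumptions for this example, and an already-configured dedicated work queue is assumed too:

    /* Hedged sketch: submit one DSA memory-move descriptor from user space
     * via the Linux idxd driver. Assumes a dedicated work queue is already
     * configured and exposed as /dev/dsa/wq0.0. Build with -mmovdir64b. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <x86intrin.h>          /* _movdir64b, _mm_pause            */
    #include <linux/idxd.h>         /* struct dsa_hw_desc, DSA_OPCODE_* */

    int dsa_copy(void *dst, const void *src, uint32_t len) {
        int fd = open("/dev/dsa/wq0.0", O_RDWR);     /* assumed WQ node */
        if (fd < 0) return -1;

        /* Dedicated work queues expose a write-only submission portal. */
        void *portal = mmap(NULL, 0x1000, PROT_WRITE,
                            MAP_SHARED | MAP_POPULATE, fd, 0);
        if (portal == MAP_FAILED) { close(fd); return -1; }

        struct dsa_completion_record comp __attribute__((aligned(32)));
        struct dsa_hw_desc desc;
        memset(&comp, 0, sizeof(comp));
        memset(&desc, 0, sizeof(desc));

        desc.opcode          = DSA_OPCODE_MEMMOVE;
        desc.flags           = IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CRAV;
        desc.src_addr        = (uintptr_t)src;
        desc.dst_addr        = (uintptr_t)dst;
        desc.xfer_size       = len;
        desc.completion_addr = (uintptr_t)&comp;

        _movdir64b(portal, &desc);   /* one 64-byte store submits the job */

        /* Poll the completion record the device writes when done. */
        while (__atomic_load_n(&comp.status, __ATOMIC_ACQUIRE) == 0)
            _mm_pause();

        munmap(portal, 0x1000);
        close(fd);
        return (comp.status == DSA_COMP_SUCCESS) ? 0 : -1;
    }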

On top of all this, Sapphire Rapids also supports AVX512_FP16 instructions for half-precision, mostly for AI workloads as part of its DLBoost strategy (Intel was quite quiet on DLBoost during the event). These FP16 commands can also be used as part of AMX, alongside INT8 and BF16 support. Intel now also supports CLDEMOTE for cache-line management.
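As a quick illustration of those two additions, the hedged sketch below mixes them: a half-precision fused multiply-add loop using the AVX512_FP16 intrinsics, with a CLDEMOTE hint on each just-written cache line. It assumes a compiler with -mavx512fp16 -mcldemote support and is meant only to show the intrinsics:

    /* AVX512_FP16 y = a*x + y with a CLDEMOTE hint on each finished line.
     * Tail handling for n not a multiple of 32 is omitted. */
    #include <immintrin.h>

    void fp16_axpy(_Float16 *y, const _Float16 *x, _Float16 a, int n) {
        __m512h va = _mm512_set1_ph(a);
        for (int i = 0; i + 32 <= n; i += 32) {      /* 32 half floats = 64 B */
            __m512h vx = _mm512_loadu_ph(&x[i]);
            __m512h vy = _mm512_loadu_ph(&y[i]);
            vy = _mm512_fmadd_ph(va, vx, vy);        /* half-precision FMA */
            _mm512_storeu_ph(&y[i], vy);
            /* Hint: this core is done with the line; demote it toward the
             * shared LLC so a consumer on another core can pick it up. */
            _mm_cldemote(&y[i]);
        }
    }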

A Side Word about CXL

Throughout the presentations of Sapphire Rapids, Intel has been keen to highlight it will support CXL 1.1 at launch. CXL is a connectivity standard designed to handle much more than what PCIe does – aside from simply acting as a data transfer from host to device, CXL has three branches to support, known as IO, Cache, and Memory. As defined in the CXL 1.0 and 1.1 standards, these three form the basis of a new way to connect a host with a device. 

Naturally it was our expectation that all CXL 1.1 devices would support all three of these standards. It wasn’t until Hot Chips, several days later, that we learned Sapphire Rapids is only supporting part of the CXL standard, specifically CXL.io and CXL.cache, but CXL.memory would not be part of SPR. We're not sure to what extent this means SPR isn't CXL 1.1 compliant, or what it means for CXL 1.1 devices - without CXL.mem, as per the diagram above, all Intel loses is Type-2 support. Perhaps this is more of an indication that the market around CXL is better served by CXL 2.0, which will no doubt come in a later product.

On the next page, we look at Intel's new tiled architecture for Sapphire Rapids.


94 Comments


  • Silver5urfer - Tuesday, August 31, 2021 - link

    Finally something innovative from Intel after years of abandoning HEDT and Xeon leadership. I would give credit to Intel here because there’s no big.LITTLE BS scam. Mirroring the die mask design and then stitching the tiles together with EMIB on such large silicon – damn, it looks super complicated vs AMD’s chiplet strategy.

    Now for the worst part: the I/O controller and memory controller. That is going to be an issue for this design. Intel’s mesh had power problems and inferior x86 performance on SKL Xeon, and Ice Lake solved that issue, but the core problem is that AMD consolidated the EPYC 7000-series Zen-based chiplets around a central I/O die and memory controller design, eliminating the NUMA node performance hit. With smaller trace paths due to EMIB this looks great, but it will still have the hopping issues of Zen-style chiplet processors.

    So an SPR-based HEDT LGA4xxx socket is coming, but when? 2022? Zen 3 Threadripper Chagall / Genesis Peak is coming this year. And Zen 3-based V-Cache Milan EPYC will be coming next year once the factories start producing them, and they will trickle down to AM4 socket processors. SPR needs to prove a lot; Zen 4 is dropping soon with 96C and beastly IPC on top of a 12-channel memory design on TSMC 5N.

    IPC is a whole other big equation, BUT most important is how the Intel 7 / 10nm design compares vs the TSMC 7N-based EPYC in terms of clock potential and efficiency-to-performance ratios. Especially the fact that Intel had to cut the x86 cores down to those small, inferior SKL-class crap cores on the LGA1700 socket to keep up with the rising power consumption of their x86 processor designs. This one maxes out at 56C apparently, with each tile at 14C – a big shame all those 14C couldn’t make it to LGA1700; they would have been perfect for desktop compute, but for those stupid thin-and-light BGA junk sales they axed it and shoved those efficiency designs into the desktop LGA platform.
  • Kamen Rider Blade - Tuesday, August 31, 2021 - link

    The NUMA Domain is going to be interesting with the 4x Memory Controllers split on each die having to cross domain boundaries.

    And there appear to be 2x cross-tile interconnects on each tile that hold a CHA (Caching and Home Agent) and an LLC (Last Level Cache) to handle resolution of coherency across multiple processors.

    Each core has a tiny CHA to check against the larger dedicated CHA & LLC to improve scalability according to the number of cores, memory controllers, or the sub-NUMA clustering mode. The address space is interleaved across different CHAs, which act like a single logical agent.

    So that’s an interesting solution to their data locality issues when multi-threading or having cross-core communication.
  • repoman27 - Tuesday, August 31, 2021 - link

    Why do you presume there will be any NUMA domain boundaries on-package? The whole point of going with EMIB and tiles vs. conventional chiplets on an organic substrate is that the EMIBs essentially act as "long wires" bridging the metal layers of the individual dies and extending the mesh without having to pipe the data over a separate link with a SerDes / PHY at each end.
  • JayNor - Tuesday, August 31, 2021 - link

    The leaked Emerald Rapids slides show CXL.mem. With the info that Sapphire Rapids doesn't implement CXL.mem, that finally makes sense.

    CXL.mem isn't mandatory, according to the ServeTheHome article "Compute Express Link or CXL What it is and Examples" from May 21, 2021.
  • JayNor - Tuesday, August 31, 2021 - link

    The CXL.mem feature in Emerald Rapids can be seen in the slides in the adoredtv transcription article, "intel-rapidly-getting-worse", from June 29, 2021.
  • Wereweeb - Tuesday, August 31, 2021 - link

    Not what I wanted to see, but EMIB is a pretty cool piece of tech.
  • Blastdoor - Wednesday, September 1, 2021 - link

    Itanium, Optane, Knights Landing, and Intel's original 10nm plans were all cool pieces of tech, too. Yet they lost to less cool tech like AMD64, flash, GPUs, and TSMC 7nm.

    After reading all this I'm left wondering if Intel designed this thing to showcase how cool EMIB is rather than EMIB enabling the optimal design (aka, a misalignment between cart and horse).

    I look forward to seeing how this super cool glue compares to the combo of AMD's Elmer's plus a big slab of SRAM plus high-yielding multi-use chiplets.

    Let's also see how yields and watts look.
  • wira6444 - Tuesday, August 31, 2021 - link

    I thought Intel hated "GLUE"?
  • Kamen Rider Blade - Tuesday, August 31, 2021 - link

    They changed their mind.
  • Oxford Guy - Thursday, September 2, 2021 - link

    'Truth as convenience' is one of the defining qualities of corporate reasoning. Reply
