CXL Consortium

Compute Express Link™ 1.1 Specification: Now Available to Members

Updated: Mar 20, 2020

By Debendra Das Sharma and Siamak Tavallaei

CXL Consortium™ Technical Task Force Co-Chairs

The Compute Express Link™ (CXL) is an open industry-standard interconnect offering coherency and memory semantics using high-bandwidth, low-latency connectivity between the host processor and devices such as accelerators, memory buffers, and smart I/O devices. CXL is based on the PCI Express® (PCIe®) 5.0 physical layer infrastructure, as shown in Figure 1.

Figure 1: Compute Express Link has the benefit of supporting either a standard PCIe Device or a CXL Device on the same Link

CXL technology is designed to address the growing needs of high-performance computational workloads by supporting heterogeneous processing and memory systems for applications in Artificial Intelligence (AI), Machine Learning (ML), communication systems, and High-Performance Computing (HPC). These applications deploy a diverse mix of scalar, vector, matrix, and spatial architectures through CPU, GPU, FPGA, smart NICs, and other accelerators. The coherency and memory semantics of CXL will enable power-efficient high performance in these heterogeneous environments.

CXL achieves these objectives by supporting dynamic multiplexing between a rich set of protocols that includes I/O (CXL.io, based on PCIe), caching (CXL.cache), and memory (CXL.mem) semantics, as shown in Figure 2. CXL maintains a unified, coherent memory space between the CPU (host processor) and any memory on the attached CXL device. This allows both the CPU and device to share resources for higher performance and reduced software stack complexity. Moreover, since the CPU is primarily responsible for coherency management, this approach can reduce device cost and complexity, as well as the overhead traditionally associated with coherency management across an I/O link.

Figure 2: CXL maintains a unified, coherent memory space between the CPU and any memory on the attached CXL Device

Following shortly on the heels of the CXL 1.0 specification, we are proud to announce the availability of the CXL 1.1 specification. The CXL 1.1 specification is fully backward-compatible with the CXL 1.0 specification. In addition to errata fixes for the CXL 1.0 specification, the CXL 1.1 specification includes a new compliance chapter defining how interoperability testing between the host processor and an attached CXL device can be performed.

Our approach to compliance testing is novel in that users can perform in-system testing without special hardware or dedicated test equipment such as an exerciser. In the CXL specification, we defined a set of architected registers to automate in-system testing. Each test defines the assumptions reflected in the setup, a set of steps to run the test, and clearly identified pass/fail criteria. There are seven areas of testing: Application/Transaction Layer, Link Layer, Arb/Mux, Physical Layer, Registers, Reset flows, and RAS (Reliability, Availability, and Serviceability). For example, to test the coherency flows in the CXL.cache Transaction Layer, we use the concept of false sharing, where a device and the host processor access a set of addresses set up by the testing software running in the system. The test program, running in the system under test, checks data consistency. Each set of accesses in a test can be run a finite number of times or can run until the test process is terminated manually.
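The false-sharing idea above can be sketched as follows. This is an illustrative simulation only, not code from the specification: the function name, patterns, and the single-process stand-in for the host and device are all assumptions. Two agents take turns writing distinct patterns to the same cache-line-sized region, and the test verifies that each agent's most recent write is observed.

```python
def coherency_test(shared, host_pattern, device_pattern, iterations=1000):
    """Hypothetical sketch of a false-sharing coherency check.

    In a real CXL compliance run, the 'device' accesses would come from
    the attached CXL device via the architected test registers; here both
    agents are simulated in one process to illustrate the check.
    """
    for _ in range(iterations):
        shared[:] = host_pattern            # host writes its pattern
        if shared != host_pattern:          # device must observe it
            return False
        shared[:] = device_pattern          # device writes its pattern
        if shared != device_pattern:        # host must observe it
            return False
    return True

# One 64-byte "cache line" shared between the two agents.
line = [0] * 64
print(coherency_test(line, [0xAA] * 64, [0x55] * 64))
```

The pass/fail criterion is simply whether every read returns the most recently written pattern; in the real mechanism, the iteration count (finite or unbounded) is controlled through the test registers.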

Similarly, to test the producer-consumer ordering model of the Application layer, we have an automated mechanism where the host and/or the device reads one range of memory and writes a different range, as specified in the configuration registers. Checking is automated in the test to ensure the producer-consumer ordering rules are followed. Tests can run anywhere from one iteration to an unbounded number of iterations, which helps with debugging and provides adequate coverage with something like an overnight run. This built-in compliance approach will help enable very high-quality components in the CXL ecosystem.
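The producer-consumer ordering rule being checked is the classic one: if the consumer observes the flag, it must also observe all of the producer's earlier data writes. A minimal sketch of that check, with the device role simulated by a thread (all names here are illustrative, not from the specification):

```python
import threading

def producer(mem, data_range, flag_idx, value):
    """Write the data region first, then set the flag."""
    for i in data_range:
        mem[i] = value
    mem[flag_idx] = 1                     # publish: flag write comes last

def consumer(mem, data_range, flag_idx, value):
    """Poll the flag; once it is set, every data write must be visible."""
    while mem[flag_idx] != 1:
        pass
    return all(mem[i] == value for i in data_range)

# 64-entry data region plus one flag slot.
mem = [0] * 65
t = threading.Thread(target=producer, args=(mem, range(64), 64, 7))
t.start()
ok = consumer(mem, range(64), 64, 7)
t.join()
print(ok)
```

In the actual compliance flow, the ranges to read and write come from the configuration registers and the checking is done by the automated test logic rather than application code; the sketch only illustrates the ordering invariant itself.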

The CXL Consortium is working on the next generation of CXL specification to address additional usage models beyond the CXL 1.0 and CXL 1.1 specifications while maintaining full backward compatibility.

For more information on CXL technology, check out our Resource Library:

· White Paper: Introduction to Compute Express Link

· Presentation: Compute Express Link (CXL): A Coherent Interface for Ultra-High-Speed Transfers

· Webinar: Introduction to Compute Express Link (CXL) Webinar Presentation

· Webinar: Exploring Coherent Memory and Innovative Use Cases

If you are a CXL adopter member interested in voicing your feedback on future specification installments, we invite you to join the CXL Consortium as a contributor. If you haven't yet joined the CXL Consortium but are interested in shaping future generations of the CXL interconnect, please join the Consortium.
