- CXL Consortium
CXL™ 2.0 Specification: Memory Pooling – Questions from the Webinar Part 2
By Mahesh Wagh and Rick Sodke
The recent CXL™ Consortium webinar provided a deep dive into the memory pooling features of the CXL 2.0 specification and introduced the standardized fabric manager for inventory and resource allocation to enable easier adoption and management of CXL-based switch and fabric solutions.
In the previous webinar Q&A blog post, we answered questions from attendees about data security and CXL fabric management. This post provides answers to questions about switching, multi-logical devices (MLD), and other memory pooling topics.
In the case of memory pooling without a switch, how is enumeration done? All devices are at Bus 0, Device 0. Is the host supposed to be aware that it is addressing multiple devices?
Each port presents a fully compliant SLD device to the host. Enumeration is exactly the same as for a direct-attached SLD device.
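Because each SLD enumerates like an ordinary PCIe endpoint, the host's probe logic needs nothing CXL-specific. The sketch below illustrates the standard PCIe presence check (Vendor ID of 0xFFFF means no device) against a simulated config space; the helper names and ID values are hypothetical, not from the specification.

```c
#include <stdint.h>

/* Simulated config-space dword 0 (Device ID << 16 | Vendor ID) for two
 * Device 0, Function 0 slots; 0xFFFFFFFF models the all-ones value a
 * host reads back from an unpopulated slot. The IDs are made up. */
static const uint32_t sim_cfg_dword0[2] = {
    0x12341E98u,  /* hypothetical vendor/device ID of a CXL SLD */
    0xFFFFFFFFu   /* empty slot */
};

static uint32_t cfg_read32(unsigned slot, unsigned offset) {
    return (offset == 0) ? sim_cfg_dword0[slot] : 0;
}

/* Standard PCIe presence check: a Vendor ID of 0xFFFF means no device. */
int device_present(unsigned slot) {
    uint16_t vendor = (uint16_t)(cfg_read32(slot, 0) & 0xFFFFu);
    return vendor != 0xFFFF;
}
```

On real hardware the `cfg_read32` step would be an ECAM or CF8/CFC configuration read through the root port; everything else is unchanged from ordinary PCIe enumeration.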
In the slide "Memory Expansion with CXL 2.0 switching," are D1-D6 CXL.mem devices or a memory subsystem (a DRAM controller + DIMMs, for example)?
D1-D6 are individual CXL Type 3 Memory Controllers.
Is Microchip's newest PCIe switch a CXL switch? Is any other vendor providing a CXL switch? How many CXL ports do we expect from the switch?
We cannot comment on member company products, features, or timelines. Please contact them directly.
What will be the approximate latency (in ns) introduced by a single switch layer?
We can't comment on the latency for a device implementation.
Multi-Logical Device Questions
When an MLD link tries to enter low power for either the CXL.io or CXL.mem vLSM, does the related vLSM have to get ALL of the logical devices' agreement to enter L1?
An MLD link can enter a low power state when CXL.io and CXL.mem data link layers have reached a low power state. The data link layers are shared between all Logical Devices.
Should each CXL MLD have its own configuration space (i.e., a device with 16 Logical Devices will have 16 Device 0, Function 0 configuration register sets + 1 for the FMLD)? Can each Logical Device itself contain multiple physical and virtual functions?
Each Logical Device in an MLD represents a CXL device with CXL.io configuration space, CXL.io memory space, and CXL.mem memory space. Each Logical Device can be multi-function or support SR-IOV.
In the multi-ported, direct-attached MLD devices, did you say that each upstream port is an SLD endpoint?
A direct attached multi-ported Type 3 memory controller can only present an SLD device to each host. Configuration and Control of the interaction between the two SLD devices is Vendor Defined.
Is it allowed for an MLD device to have a single function (in CXL.io) to represent/control all the advertised logical devices?
An MLD is made up of Logical Devices, each of which supports CXL.io and CXL.mem. A Logical Device is bound to a host, and the host is responsible for enumerating and configuring the Logical Device.
In this example, Device 2 may produce less bandwidth for H1 than D1. Who manages bandwidth?
An MLD device's bandwidth is a shared resource between the hosts. Bandwidth allocation is a parameter configured by the Fabric Manager and communicated to the host through its Logical Device.
Have you thought about how the dynamic reallocation works with non-volatile memory?
CXL supports secure erase of non-volatile memory prior to reallocation.
Is there a single Physical Layer for all logical devices in an MLD?
There is one physical layer between the switch and the MLD Component. The link is owned by the FM in the switch and by the FM-owned Logical Device in the Type 3 Pooled Memory Controller.
When do we need the hot-add memory mechanism?
Managed Hot Add is used for BINDing an SLD or Logical Device of an MLD. Dynamic addition of memory within an MLD uses CDAT update notification.
Other Memory Pooling Questions
Is pooling only specific to Type-3 devices? You would not use memory in a Type 2 device in a pool?
Pooling is limited to CXL Type-3 devices only.
Do memory pooling and interleaving work together?
Memory pooling and address interleaving are entirely separate features. Each Logical Device in a Type 3 Pooled Memory Controller individually supports interleaving as configured by the host.
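The interleaving a host configures can be pictured as simple modulo address math: consecutive granularity-sized chunks of host address space rotate across the interleave set. The sketch below is a simplified illustration assuming power-of-two ways and granularity; the actual CXL 2.0 HDM decoder programs these through encoded register fields, which are omitted here.

```c
#include <stdint.h>

/* For a host physical address offset within an interleaved range, pick
 * which of `ways` devices the access targets and the device physical
 * address (DPA) offset within that device. */
typedef struct {
    unsigned target;   /* index of the interleave target (device) */
    uint64_t dpa;      /* offset within that device's memory */
} decode_t;

decode_t hdm_decode(uint64_t hpa_off, unsigned ways, uint64_t gran) {
    decode_t d;
    d.target = (unsigned)((hpa_off / gran) % ways);
    d.dpa    = (hpa_off / (gran * ways)) * gran + (hpa_off % gran);
    return d;
}
```

For example, with 4-way interleave at 256 B granularity, host offsets 0, 256, 512, and 768 land on devices 0 through 3 respectively, each at DPA offset 0.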
Is there an API for performing a memory wipe function upon deallocation?
CXL Specification 2.0 Section 9.7.1 describes the methods to erase or clear the memory contents.
Do you view ISVs requiring new certification requirements for this technology, or will/should it be transparent to them?
Generally, no certification is required for using this technology. A specific system may have additional certification requirements that are above and beyond the scope of the CXL Consortium. Please connect with the consortium for more details.
Can the same DPA be shared by multiple hosts, i.e., mapped into multiple hosts' HPA ranges?
CXL 2.0 does not support sharing of device physical memory, so any memory location can only be mapped to one host's HDM.
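This exclusivity rule can be pictured as a Fabric Manager allocation table in which each block of pooled device memory has at most one owning host at a time. The structure and function names below are hypothetical, purely to illustrate the constraint; they are not a real FM API.

```c
/* Toy allocation table: every block of pooled device memory is bound to
 * at most one host, mirroring the CXL 2.0 rule that a DPA range appears
 * in only one host's HDM. */
#define NBLOCKS 8
#define UNBOUND (-1)

static int owner[NBLOCKS];

void fm_init(void) {
    for (int i = 0; i < NBLOCKS; i++) owner[i] = UNBOUND;
}

/* Bind a block to a host; fails if another host already owns it. */
int fm_bind(int block, int host) {
    if (block < 0 || block >= NBLOCKS) return -1;
    if (owner[block] != UNBOUND && owner[block] != host) return -1;
    owner[block] = host;
    return 0;
}

/* Release a block back to the pool (e.g., after secure erase). */
void fm_unbind(int block) {
    if (block >= 0 && block < NBLOCKS) owner[block] = UNBOUND;
}
```

A second host's bind attempt fails until the first host releases the block, which is exactly the behavior the one-host-per-DPA rule requires.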
Are you showing the hosts, switch and devices on the same PCIe bus or are you showing the hosts, switches and devices on a network?
CXL builds on PCI Express and all devices attached to a host are represented in a tree topology.
What would be the latency of access for CXL memory?
We can't comment on the latency of a specific device implementation, but in general the latency for a host accessing CXL-attached DDR memory needs to be similar to the host accessing local DDR memory. CXL-based storage-class memory will have slower access times.