xref: /openbmc/linux/drivers/cxl/Kconfig (revision 515bddf0)
# SPDX-License-Identifier: GPL-2.0-only
menuconfig CXL_BUS
	tristate "CXL (Compute Express Link) Devices Support"
	depends on PCI
	select PCI_DOE
	help
	  CXL is a bus that is electrically compatible with PCI Express, but
	  layers three protocols on that signalling (CXL.io, CXL.cache, and
	  CXL.mem). The CXL.cache protocol allows devices to hold cachelines
	  locally, the CXL.mem protocol allows devices to be fully coherent
	  memory targets, and the CXL.io protocol is equivalent to PCI Express.
	  Say 'y' to enable support for the configuration and management of
	  devices supporting these protocols.

if CXL_BUS

config CXL_PCI
	tristate "PCI manageability"
	default CXL_BUS
	help
	  The CXL specification defines a "CXL memory device" sub-class in the
	  PCI "memory controller" base class of devices. Devices identified by
	  this class code provide support for volatile and / or persistent
	  memory to be mapped into the system address map (Host-managed Device
	  Memory (HDM)).

	  Say 'y/m' to enable a driver that will attach to CXL memory expander
	  devices enumerated by the memory device class code for configuration
	  and management primarily via the mailbox interface. See Chapter 2.3
	  Type 3 CXL Device in the CXL 2.0 specification for more details.

	  If unsure say 'm'.

config CXL_MEM_RAW_COMMANDS
	bool "RAW Command Interface for Memory Devices"
	depends on CXL_PCI
	help
	  Enable the CXL RAW command interface.

	  The CXL driver ioctl interface may assign a kernel ioctl command
	  number for each specification-defined opcode. At any given point in
	  time the number of opcodes that the specification defines, and that
	  a device may implement, may exceed the kernel's set of associated
	  ioctl function numbers. The mismatch is either by omission (the
	  specification is too new) or by design. When prototyping new
	  hardware, or developing / debugging the driver, it is useful to be
	  able to submit any possible command to the hardware, even commands
	  that may crash the kernel due to their potential impact on memory
	  currently in use by the kernel.

	  If developing CXL hardware or the driver say Y, otherwise say N.

config CXL_ACPI
	tristate "CXL ACPI: Platform Support"
	depends on ACPI
	default CXL_BUS
	select ACPI_TABLE_LIB
	help
	  Enable support for host managed device memory (HDM) resources
	  published by a platform's ACPI CXL memory layout description.  See
	  Chapter 9.14.1 CXL Early Discovery Table (CEDT) in the CXL 2.0
	  specification, and CXL Fixed Memory Window Structures (CEDT.CFMWS)
	  (https://www.computeexpresslink.org/spec-landing). The CXL core
	  consumes these resources to publish the root of a cxl_port decode
	  hierarchy to map regions that represent System RAM, or Persistent
	  Memory regions to be managed by LIBNVDIMM.

	  If unsure say 'm'.

config CXL_PMEM
	tristate "CXL PMEM: Persistent Memory Support"
	depends on LIBNVDIMM
	default CXL_BUS
	help
	  In addition to typical memory resources a platform may also advertise
	  support for persistent memory attached via CXL. This support is
	  managed via a bridge driver from CXL to the LIBNVDIMM
	  subsystem. Say 'y/m' to enable support for enumerating and
	  provisioning the persistent memory capacity of CXL memory expanders.

	  If unsure say 'm'.

config CXL_MEM
	tristate "CXL: Memory Expansion"
	depends on CXL_PCI
	default CXL_BUS
	help
	  The CXL.mem protocol allows a device to act as a provider of "System
	  RAM" and/or "Persistent Memory" that is fully coherent as if the
	  memory were attached to the typical CPU memory controller. This is
	  known as Host-managed Device Memory (HDM).

	  Say 'y/m' to enable a driver that will attach to CXL.mem devices for
	  memory expansion and control of HDM. See Chapter 9.13 in the CXL 2.0
	  specification for a detailed description of HDM.

	  If unsure say 'm'.

config CXL_PORT
	default CXL_BUS
	tristate

config CXL_SUSPEND
	def_bool y
	depends on SUSPEND && CXL_MEM

config CXL_REGION
	bool "CXL: Region Support"
	default CXL_BUS
	# For MAX_PHYSMEM_BITS
	depends on SPARSEMEM
	select MEMREGION
	select GET_FREE_REGION
	help
	  Enable the CXL core to enumerate and provision CXL regions. A CXL
	  region is defined by one or more CXL expanders that decode a given
	  system-physical address range. For CXL regions established by
	  platform firmware this option enables memory error handling to
	  identify the devices participating in a given interleaved memory
	  range. Otherwise, platform-firmware managed CXL is enabled simply by
	  being placed in the system address map and does not need a driver.

	  If unsure say 'y'.

config CXL_REGION_INVALIDATION_TEST
	bool "CXL: Region Cache Management Bypass (TEST)"
	depends on CXL_REGION
	help
	  CXL Region management and security operations potentially invalidate
	  the content of CPU caches without notifying those caches to
	  invalidate the affected cachelines. The CXL Region driver attempts
	  to invalidate caches when those events occur.  If that invalidation
	  fails the region will fail to enable.  Cache invalidation can fail
	  when the CPU does not provide a cache invalidation mechanism; for
	  example, use of wbinvd is restricted to bare metal x86. However, for
	  testing purposes, toggling this option can disable that data
	  integrity safety and proceed with enabling regions when there might
	  be conflicting contents in the CPU cache.

	  If unsure, or if this kernel is meant for production environments,
	  say N.

endif
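
# Illustrative .config fragment (a sketch, not part of this Kconfig file):
# one plausible selection of the options above for a CXL memory-expander
# system, with everything buildable as a module left as 'm'. Availability
# of each option still depends on the PCI, ACPI, LIBNVDIMM, SPARSEMEM and
# SUSPEND prerequisites declared above.
#
#	CONFIG_CXL_BUS=m
#	CONFIG_CXL_PCI=m
#	CONFIG_CXL_ACPI=m
#	CONFIG_CXL_PMEM=m
#	CONFIG_CXL_MEM=m
#	CONFIG_CXL_PORT=m
#	CONFIG_CXL_SUSPEND=y
#	CONFIG_CXL_REGION=y
#	# CONFIG_CXL_MEM_RAW_COMMANDS is not set
#	# CONFIG_CXL_REGION_INVALIDATION_TEST is not set
#
# The two unset entries are debug / bring-up aids that the help text above
# recommends leaving disabled on production kernels.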