menu "Xen driver support"
	depends on XEN

config XEN_BALLOON
	bool "Xen memory balloon driver"
	default y
	help
	  The balloon driver allows the Xen domain to request more memory from
	  the system to expand the domain's memory allocation, or alternatively
	  return unneeded memory to the system.

config XEN_SELFBALLOONING
	bool "Dynamically self-balloon kernel memory to target"
	depends on XEN && XEN_BALLOON && CLEANCACHE && SWAP && XEN_TMEM
	default n
	help
	  Self-ballooning dynamically balloons available kernel memory driven
	  by the current usage of anonymous memory ("committed AS") and
	  controlled by various sysfs-settable parameters. Configuring
	  FRONTSWAP is highly recommended; if it is not configured, self-
	  ballooning is disabled by default. If FRONTSWAP is configured,
	  frontswap-selfshrinking is enabled by default but can be disabled
	  with the 'tmem.selfshrink=0' kernel boot parameter; and
	  self-ballooning is enabled by default but can be disabled with the
	  'tmem.selfballooning=0' kernel boot parameter. Note that systems
	  without a sufficiently large swap device should not enable
	  self-ballooning.

config XEN_BALLOON_MEMORY_HOTPLUG
	bool "Memory hotplug support for Xen balloon driver"
	default n
	depends on XEN_BALLOON && MEMORY_HOTPLUG
	help
	  Memory hotplug support for the Xen balloon driver allows expanding
	  the memory available to the system above the limit declared at
	  system startup. It is very useful on critical systems which require
	  long uptimes without rebooting.

	  Memory can be hotplugged in the following steps:

	  1) target domain: ensure that the memory auto-online policy is in
	     effect by checking the
	     /sys/devices/system/memory/auto_online_blocks file (it should
	     read 'online'),
	  2) control domain: xl mem-max <target-domain> <maxmem>
	     where <maxmem> is >= the requested memory size,

	  3) control domain: xl mem-set <target-domain> <memory>
	     where <memory> is the requested memory size; alternatively,
	     memory can be added by writing the proper value to
	     /sys/devices/system/xen_memory/xen_memory0/target or
	     /sys/devices/system/xen_memory/xen_memory0/target_kb on the
	     target domain.

	  Alternatively, if memory auto-onlining was not requested at step 1,
	  the newly added memory can be manually onlined in the target domain
	  by doing the following:

	  for i in /sys/devices/system/memory/memory*/state; do \
	    [ "`cat "$i"`" = offline ] && echo online > "$i"; done

	  or by adding the following line to the udev rules:

	  SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /sys$devpath/state'"

config XEN_BALLOON_MEMORY_HOTPLUG_LIMIT
	int "Hotplugged memory limit (in GiB) for a PV guest"
	default 512 if X86_64
	default 4 if X86_32
	range 0 64 if X86_32
	depends on XEN_HAVE_PVMMU
	depends on XEN_BALLOON_MEMORY_HOTPLUG
	help
	  Maximum amount of memory (in GiB) that a PV guest can be
	  expanded to when using memory hotplug.

	  A PV guest can have more memory than this limit if it is
	  started with a larger maximum.

	  This value is used to allocate enough space in internal
	  tables needed for physical memory administration.

config XEN_SCRUB_PAGES_DEFAULT
	bool "Scrub pages before returning them to system by default"
	depends on XEN_BALLOON
	default y
	help
	  Scrub pages before returning them to the system for reuse by
	  other domains. This makes sure that any confidential data
	  is not accidentally visible to other domains. It is more
	  secure, but slightly less efficient. This can be controlled with
	  the xen_scrub_pages=0 parameter and
	  /sys/devices/system/xen_memory/xen_memory0/scrub_pages.
	  This option only sets the default value.
	  If in doubt, say yes.

config XEN_DEV_EVTCHN
	tristate "Xen /dev/xen/evtchn device"
	default y
	help
	  The evtchn driver allows a userspace process to trigger event
	  channels and to receive notification of an event channel
	  firing.
	  If in doubt, say yes.

config XEN_BACKEND
	bool "Backend driver support"
	default XEN_DOM0
	help
	  Support for backend device drivers that provide I/O services
	  to other virtual machines.

config XENFS
	tristate "Xen filesystem"
	select XEN_PRIVCMD
	default y
	help
	  The xen filesystem provides a way for domains to share
	  information with each other and with the hypervisor.
	  For example, by reading and writing the "xenbus" file, guests
	  may pass arbitrary information to the initial domain.
	  If in doubt, say yes.

config XEN_COMPAT_XENFS
	bool "Create compatibility mount point /proc/xen"
	depends on XENFS
	default y
	help
	  The old xenstore userspace tools expect to find "xenbus"
	  under /proc/xen, but "xenbus" is now found at the root of the
	  xenfs filesystem. Selecting this causes the kernel to create
	  the compatibility mount point /proc/xen if it is running on
	  a xen platform.
	  If in doubt, say yes.

config XEN_SYS_HYPERVISOR
	bool "Create xen entries under /sys/hypervisor"
	depends on SYSFS
	select SYS_HYPERVISOR
	default y
	help
	  Create entries under /sys/hypervisor describing the Xen
	  hypervisor environment. When running native or in another
	  virtual environment, /sys/hypervisor will still be present,
	  but will have no xen contents.

config XEN_XENBUS_FRONTEND
	tristate

config XEN_GNTDEV
	tristate "userspace grant access device driver"
	depends on XEN
	default m
	select MMU_NOTIFIER
	help
	  Allows userspace processes to use grants.

config XEN_GNTDEV_DMABUF
	bool "Add support for dma-buf grant access device driver extension"
	depends on XEN_GNTDEV && XEN_GRANT_DMA_ALLOC && DMA_SHARED_BUFFER
	help
	  Allows userspace processes and kernel modules to use a Xen-backed
	  dma-buf implementation. With this extension, grant references to
	  the pages of an imported dma-buf can be exported for use by another
	  domain, and grant references coming from a foreign domain can be
	  converted into a local dma-buf for local export.

config XEN_GRANT_DEV_ALLOC
	tristate "User-space grant reference allocator driver"
	depends on XEN
	default m
	help
	  Allows userspace processes to create pages with access granted
	  to other domains. This can be used to implement frontend drivers
	  or as part of an inter-domain shared memory channel.

config XEN_GRANT_DMA_ALLOC
	bool "Allow allocating DMA capable buffers with grant reference module"
	depends on XEN && HAS_DMA
	help
	  Extends the grant table module API to allow allocating DMA capable
	  buffers and mapping foreign grant references on top of them.
	  The resulting buffer is similar to one allocated by the balloon
	  driver in that proper memory reservation is made
	  ({increase|decrease}_reservation) and VA mappings are updated if
	  needed.
	  This is useful for sharing foreign buffers with HW drivers which
	  cannot work with scattered buffers provided by the balloon driver,
	  but require DMAable memory instead.

config SWIOTLB_XEN
	def_bool y
	select SWIOTLB

config XEN_TMEM
	tristate
	depends on !ARM && !ARM64
	default m if (CLEANCACHE || FRONTSWAP)
	help
	  Shim to interface in-kernel Transcendent Memory hooks
	  (e.g. cleancache and frontswap) to Xen tmem hypercalls.

config XEN_PCIDEV_BACKEND
	tristate "Xen PCI-device backend driver"
	depends on PCI && X86 && XEN
	depends on XEN_BACKEND
	default m
	help
	  The PCI device backend driver allows the kernel to export arbitrary
	  PCI devices to other guests. If you select this to be a module, you
	  will need to make sure no other driver has bound to the device(s)
	  you want to make visible to other guests.

	  The "passthrough" parameter allows you to specify how you want the
	  PCI devices to appear in the guest. You can choose the default (0),
	  where the PCI topology starts at 00.00.0, or (1) for passthrough if
	  you want the PCI device topology to appear the same as in the host.

	  The "hide" parameter (only applicable if the backend driver is
	  compiled into the kernel) allows you to bind the PCI devices to
	  this module instead of the default device drivers. The argument is
	  the list of PCI BDFs:
	  xen-pciback.hide=(03:00.0)(04:00.0)

	  If in doubt, say m.

config XEN_PVCALLS_FRONTEND
	tristate "XEN PV Calls frontend driver"
	depends on INET && XEN
	default n
	select XEN_XENBUS_FRONTEND
	help
	  Experimental frontend for the Xen PV Calls protocol
	  (https://xenbits.xen.org/docs/unstable/misc/pvcalls.html). It
	  sends a small set of POSIX calls to the backend, which
	  implements them.

config XEN_PVCALLS_BACKEND
	bool "XEN PV Calls backend driver"
	depends on INET && XEN && XEN_BACKEND
	default n
	help
	  Experimental backend for the Xen PV Calls protocol
	  (https://xenbits.xen.org/docs/unstable/misc/pvcalls.html). It
	  allows PV Calls frontends to send POSIX calls to the backend,
	  which implements them.

	  If in doubt, say n.
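
The BDF-list syntax of the "hide" parameter above can be illustrated with a small shell sketch (the helper name hide_arg is hypothetical; the sample BDFs 03:00.0 and 04:00.0 are the ones from the help text):

```shell
# Illustrative helper (hide_arg is not a real tool): build the
# xen-pciback.hide= kernel command-line argument from a list of PCI
# BDFs, wrapping each BDF in parentheses as the driver expects.
hide_arg() {
    out=""
    for bdf in "$@"; do
        out="$out($bdf)"
    done
    printf 'xen-pciback.hide=%s\n' "$out"
}

# Produces the same form as the example in the help text:
hide_arg 03:00.0 04:00.0
# → xen-pciback.hide=(03:00.0)(04:00.0)
```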

config XEN_SCSI_BACKEND
	tristate "XEN SCSI backend driver"
	depends on XEN && XEN_BACKEND && TARGET_CORE
	help
	  The SCSI backend driver allows the kernel to export its SCSI
	  devices to other guests via a high-performance shared-memory
	  interface. Only needed for systems running as XEN driver domains
	  (e.g. Dom0) and if guests need generic access to SCSI devices.

config XEN_PRIVCMD
	tristate
	depends on XEN
	default m

config XEN_STUB
	bool "Xen stub drivers"
	depends on XEN && X86_64 && BROKEN
	default n
	help
	  Allow the kernel to install stub drivers, to reserve space for Xen
	  drivers (i.e. memory hotplug and cpu hotplug) and to block native
	  drivers from loading, so that real Xen drivers can be modular.

	  To enable Xen features like cpu and memory hotplug, select Y here.

config XEN_ACPI_HOTPLUG_MEMORY
	tristate "Xen ACPI memory hotplug"
	depends on XEN_DOM0 && XEN_STUB && ACPI
	default n
	help
	  This is Xen ACPI memory hotplug.

	  Currently Xen only supports ACPI memory hot-add. If you want
	  to hot-add memory at runtime (the hot-added memory cannot be
	  removed until the machine stops), select Y/M here, otherwise
	  select N.

config XEN_ACPI_HOTPLUG_CPU
	tristate "Xen ACPI cpu hotplug"
	depends on XEN_DOM0 && XEN_STUB && ACPI
	select ACPI_CONTAINER
	default n
	help
	  Xen ACPI cpu enumerating and hotplugging.

	  For hotplugging, currently Xen only supports ACPI cpu hot-add.
	  If you want to hot-add a cpu at runtime (the hot-added cpu cannot
	  be removed until the machine stops), select Y/M here.

config XEN_ACPI_PROCESSOR
	tristate "Xen ACPI processor"
	depends on XEN && XEN_DOM0 && X86 && ACPI_PROCESSOR && CPU_FREQ
	default m
	help
	  This ACPI processor uploads Power Management information to the Xen
	  hypervisor.

	  To do that the driver parses the Power Management data and uploads
	  said information to the Xen hypervisor.
	  Then the Xen hypervisor
	  can select the proper Cx and Pxx states. It also registers itself
	  as the SMM so that other drivers (such as the ACPI cpufreq scaling
	  driver) will not load.

	  To compile this driver as a module, choose M here: the module will
	  be called xen_acpi_processor. If you do not know what to choose,
	  select M here. If the CPUFREQ drivers are built in, select Y here.

config XEN_MCE_LOG
	bool "Xen platform mcelog"
	depends on XEN_DOM0 && X86_64 && X86_MCE
	default n
	help
	  Allow the kernel to fetch MCE errors from the Xen platform and
	  convert them into the Linux mcelog format for mcelog tools.

config XEN_HAVE_PVMMU
	bool

config XEN_EFI
	def_bool y
	depends on (ARM || ARM64 || X86_64) && EFI

config XEN_AUTO_XLATE
	def_bool y
	depends on ARM || ARM64 || XEN_PVHVM
	help
	  Support for auto-translated physmap guests.

config XEN_ACPI
	def_bool y
	depends on X86 && ACPI

config XEN_SYMS
	bool "Xen symbols"
	depends on X86 && XEN_DOM0 && XENFS
	default y if KALLSYMS
	help
	  Exports hypervisor symbols (along with their types and addresses)
	  via the /proc/xen/xensyms file, similar to /proc/kallsyms.

config XEN_HAVE_VPMU
	bool

endmenu
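
The manual onlining loop from the XEN_BALLOON_MEMORY_HOTPLUG help above can be tried safely against a scratch directory standing in for /sys/devices/system/memory (the SYSMEM variable and the mktemp scratch path are assumptions for illustration; on a real target domain the sysfs path itself would be used):

```shell
# Scratch stand-in for /sys/devices/system/memory (path is illustrative).
SYSMEM=$(mktemp -d)
mkdir -p "$SYSMEM/memory0" "$SYSMEM/memory1"
echo offline > "$SYSMEM/memory0/state"
echo online  > "$SYSMEM/memory1/state"

# Equivalent of the one-liner in the help text: flip offline memory
# blocks to online, leave already-online blocks untouched.
for i in "$SYSMEM"/memory*/state; do
    if [ "$(cat "$i")" = offline ]; then
        echo online > "$i"
    fi
done

cat "$SYSMEM/memory0/state"
# → online
```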