# xref: /openbmc/linux/drivers/xen/Kconfig (revision 5fbdc103)
menu "Xen driver support"
	depends on XEN

config XEN_BALLOON
	bool "Xen memory balloon driver"
	default y
	help
	  The balloon driver allows the Xen domain to request more memory from
	  the system to expand the domain's memory allocation, or alternatively
	  return unneeded memory to the system.

config XEN_SELFBALLOONING
	bool "Dynamically self-balloon kernel memory to target"
	depends on XEN && XEN_BALLOON && CLEANCACHE && SWAP && XEN_TMEM
	default n
	help
	  Self-ballooning dynamically balloons available kernel memory driven
	  by the current usage of anonymous memory ("committed AS") and
	  controlled by various sysfs-settable parameters.  Configuring
	  FRONTSWAP is highly recommended; if it is not configured, self-
	  ballooning is disabled by default but can be enabled with the
	  'selfballooning' kernel boot parameter.  If FRONTSWAP is configured,
	  frontswap-selfshrinking is enabled by default but can be disabled
	  with the 'noselfshrink' kernel boot parameter; and self-ballooning
	  is enabled by default but can be disabled with the 'noselfballooning'
	  kernel boot parameter.  Note that systems without a sufficiently
	  large swap device should not enable self-ballooning.

config XEN_BALLOON_MEMORY_HOTPLUG
	bool "Memory hotplug support for Xen balloon driver"
	default n
	depends on XEN_BALLOON && MEMORY_HOTPLUG
	help
	  Memory hotplug support for the Xen balloon driver allows expanding
	  the memory available to the system above the limit declared at
	  system startup.  It is very useful for critical systems which
	  require long uptimes without rebooting.

	  Memory can be hotplugged in the following steps:

	    1) dom0: xl mem-max <domU> <maxmem>
	       where <maxmem> is >= requested memory size,

	    2) dom0: xl mem-set <domU> <memory>
	       where <memory> is the requested memory size; alternatively,
	       memory can be added by writing a proper value to
	       /sys/devices/system/xen_memory/xen_memory0/target or
	       /sys/devices/system/xen_memory/xen_memory0/target_kb on domU,

	    3) domU: for i in /sys/devices/system/memory/memory*/state; do \
	               [ "`cat "$i"`" = offline ] && echo online > "$i"; done

	  Memory can be onlined automatically on domU by adding the following
	  line to the udev rules:

	  SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /sys$devpath/state'"

	  In that case step 3 should be omitted.

config XEN_SCRUB_PAGES
	bool "Scrub pages before returning them to system"
	depends on XEN_BALLOON
	default y
	help
	  Scrub pages before returning them to the system for reuse by
	  other domains.  This makes sure that any confidential data
	  is not accidentally visible to other domains.  It is more
	  secure, but slightly less efficient.
	  If in doubt, say yes.

config XEN_DEV_EVTCHN
	tristate "Xen /dev/xen/evtchn device"
	default y
	help
	  The evtchn driver allows a userspace process to trigger event
	  channels and to receive notification of an event channel
	  firing.
	  If in doubt, say yes.

config XEN_BACKEND
	bool "Backend driver support"
	depends on XEN_DOM0
	default y
	help
	  Support for backend device drivers that provide I/O services
	  to other virtual machines.

config XENFS
	tristate "Xen filesystem"
	default y
	help
	  The xen filesystem provides a way for domains to share
	  information with each other and with the hypervisor.
	  For example, by reading and writing the "xenbus" file, guests
	  may pass arbitrary information to the initial domain.
	  If in doubt, say yes.

config XEN_COMPAT_XENFS
	bool "Create compatibility mount point /proc/xen"
	depends on XENFS
	default y
	help
	  The old xenstore userspace tools expect to find "xenbus"
	  under /proc/xen, but "xenbus" is now found at the root of the
	  xenfs filesystem.  Selecting this causes the kernel to create
	  the compatibility mount point /proc/xen if it is running on
	  a xen platform.
	  If in doubt, say yes.

config XEN_SYS_HYPERVISOR
	bool "Create xen entries under /sys/hypervisor"
	depends on SYSFS
	select SYS_HYPERVISOR
	default y
	help
	  Create entries under /sys/hypervisor describing the Xen
	  hypervisor environment.  When running native or in another
	  virtual environment, /sys/hypervisor will still be present,
	  but will have no xen contents.

config XEN_XENBUS_FRONTEND
	tristate

config XEN_GNTDEV
	tristate "userspace grant access device driver"
	depends on XEN
	default m
	select MMU_NOTIFIER
	help
	  Allows userspace processes to use grants.

config XEN_GRANT_DEV_ALLOC
	tristate "User-space grant reference allocator driver"
	depends on XEN
	default m
	help
	  Allows userspace processes to create pages with access granted
	  to other domains. This can be used to implement frontend drivers
	  or as part of an inter-domain shared memory channel.

config SWIOTLB_XEN
	def_bool y
	depends on PCI
	select SWIOTLB

config XEN_TMEM
	bool
	default y if (CLEANCACHE || FRONTSWAP)
	help
	  Shim to interface in-kernel Transcendent Memory hooks
	  (e.g. cleancache and frontswap) to Xen tmem hypercalls.

config XEN_PCIDEV_BACKEND
	tristate "Xen PCI-device backend driver"
	depends on PCI && X86 && XEN
	depends on XEN_BACKEND
	default m
	help
	  The PCI device backend driver allows the kernel to export arbitrary
	  PCI devices to other guests. If you select this to be a module, you
	  will need to make sure no other driver has bound to the device(s)
	  you want to make visible to other guests.

	  The parameter "passthrough" allows you to specify how you want the
	  PCI devices to appear in the guest. You can choose the default (0)
	  where the PCI topology starts at 00.00.0, or (1) for passthrough if
	  you want the PCI device topology to appear the same as in the host.

	  The "hide" parameter (only applicable if the backend driver is
	  compiled into the kernel) allows you to bind the PCI devices to this
	  module from the default device drivers. The argument is the list of
	  PCI BDFs: xen-pciback.hide=(03:00.0)(04:00.0)

	  If in doubt, say m.
endmenu