1=========================
2CPU hotplug in the Kernel
3=========================
4
5:Date: December, 2016
6:Author: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
7          Rusty Russell <rusty@rustcorp.com.au>,
8          Srivatsa Vaddagiri <vatsa@in.ibm.com>,
9          Ashok Raj <ashok.raj@intel.com>,
10          Joel Schopp <jschopp@austin.ibm.com>
11
12Introduction
13============
14
Modern advances in system architectures have introduced advanced error
reporting and correction capabilities in processors. A couple of OEMs also
support NUMA hardware which is hot pluggable, where physical node
insertion and removal require support for CPU hotplug.
19
Such advances require that CPUs available to a kernel can be removed, either
for provisioning reasons or for RAS purposes, to keep an offending CPU off
the system's execution path. Hence the need for CPU hotplug support in the
Linux kernel.
24
A more novel use of CPU hotplug support is its use today in suspend/resume
support for SMP. Dual-core and HT support means that even a laptop runs an SMP
kernel, which previously did not support these methods.
28
29
30Command Line Switches
31=====================
32``maxcpus=n``
  Restrict boot time CPUs to *n*. If you have four CPUs, for example, using
  ``maxcpus=2`` will boot only two. You can choose to bring the other CPUs
  online later; see the combined example at the end of this list.
36
37``nr_cpus=n``
  Restrict the total number of CPUs the kernel will support. If the number
  supplied here is lower than the number of physically available CPUs, then
  the remaining CPUs can not be brought online later.
41
42``additional_cpus=n``
43  Use this to limit hotpluggable CPUs. This option sets
44  ``cpu_possible_mask = cpu_present_mask + additional_cpus``
45
46  This option is limited to the IA64 architecture.
47
48``possible_cpus=n``
49  This option sets ``possible_cpus`` bits in ``cpu_possible_mask``.
50
51  This option is limited to the X86 and S390 architecture.
52
53``cpu0_hotplug``
  Allow CPU0 to be shut down.
55
56  This option is limited to the X86 architecture.
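
As a combined (hypothetical) example: booting an eight-CPU machine with
``maxcpus=2 nr_cpus=4`` brings up only two CPUs at boot, allows at most four
CPUs to be used at any time, and leaves the remaining four CPUs unusable for
the lifetime of that boot.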
57
58CPU maps
59========
60
61``cpu_possible_mask``
  Bitmap of possible CPUs that can ever be available in the
  system. This is used to allocate some boot time memory for per_cpu variables
  that aren't designed to grow/shrink as CPUs are made available or removed.
  Once set during the boot-time discovery phase, the map is static, i.e. no
  bits are added or removed at any time. Trimming it accurately for your
  system needs up front can save some boot time memory.
68
69``cpu_online_mask``
  Bitmap of all CPUs currently online. It is set in ``__cpu_up()``
  after a CPU is available for kernel scheduling and ready to receive
  interrupts from devices. It is cleared when a CPU is brought down using
  ``__cpu_disable()``, before which all OS services including interrupts are
  migrated to another target CPU.
75
76``cpu_present_mask``
  Bitmap of CPUs currently present in the system. Not all
  of them may be online. When physical hotplug is processed by the relevant
  subsystem (e.g. ACPI), the map can change and a bit is either added to or
  removed from it, depending on whether the event is a hot-add or a
  hot-remove. There are currently no locking rules. Typical usage is to init
  topology during boot, at which time hotplug is disabled.
83
You really don't need to manipulate any of the system CPU maps. They should
be read-only for most use. When setting up per-cpu resources almost always use
``cpu_possible_mask`` or ``for_each_possible_cpu()`` to iterate. The macro
``for_each_cpu()`` can be used to iterate over a custom CPU mask.
88
Never use anything other than ``cpumask_t`` to represent a bitmap of CPUs.
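
As an illustration, here is a minimal sketch which follows these rules; the
per-CPU variable ``Y_count`` and the helper functions are hypothetical::

  #include <linux/cpumask.h>
  #include <linux/percpu.h>

  static DEFINE_PER_CPU(unsigned long, Y_count);

  static void Y_reset_counters(void)
  {
          unsigned int cpu;

          /* the possible CPUs cover every CPU that may ever become available */
          for_each_possible_cpu(cpu)
                  per_cpu(Y_count, cpu) = 0;
  }

  static unsigned long Y_sum_counters(void)
  {
          unsigned long sum = 0;
          unsigned int cpu;

          /* only CPUs which are currently online contribute to the sum */
          for_each_online_cpu(cpu)
                  sum += per_cpu(Y_count, cpu);
          return sum;
  }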
90
91
92Using CPU hotplug
93=================
94
95The kernel option *CONFIG_HOTPLUG_CPU* needs to be enabled. It is currently
96available on multiple architectures including ARM, MIPS, PowerPC and X86. The
97configuration is done via the sysfs interface::
98
99 $ ls -lh /sys/devices/system/cpu
100 total 0
101 drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu0
102 drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu1
103 drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu2
104 drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu3
105 drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu4
106 drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu5
107 drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu6
108 drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu7
109 drwxr-xr-x  2 root root    0 Dec 21 16:33 hotplug
110 -r--r--r--  1 root root 4.0K Dec 21 16:33 offline
111 -r--r--r--  1 root root 4.0K Dec 21 16:33 online
112 -r--r--r--  1 root root 4.0K Dec 21 16:33 possible
113 -r--r--r--  1 root root 4.0K Dec 21 16:33 present
114
The files *offline*, *online*, *possible*, *present* represent the CPU masks.
Each CPU folder contains an *online* file which controls the logical on (1) and
off (0) state. To logically shut down CPU4::
118
119 $ echo 0 > /sys/devices/system/cpu/cpu4/online
120  smpboot: CPU 4 is now offline
121
Once the CPU is shut down, it will be removed from */proc/interrupts* and
*/proc/cpuinfo*, and should no longer be visible in the output of the *top*
command. To bring CPU4 back online::
125
126 $ echo 1 > /sys/devices/system/cpu/cpu4/online
127 smpboot: Booting Node 0 Processor 4 APIC 0x1
128
The CPU is usable again. This should work on all CPUs. CPU0 is often special
and excluded from CPU hotplug. On X86 the kernel option
*CONFIG_BOOTPARAM_HOTPLUG_CPU0* has to be enabled in order to be able to
shut down CPU0. Alternatively, the kernel command line option *cpu0_hotplug*
can be used. Some known dependencies of CPU0:
134
135* Resume from hibernate/suspend. Hibernate/suspend will fail if CPU0 is offline.
136* PIC interrupts. CPU0 can't be removed if a PIC interrupt is detected.
137
138Please let Fenghua Yu <fenghua.yu@intel.com> know if you find any dependencies
139on CPU0.
140
141The CPU hotplug coordination
142============================
143
144The offline case
145----------------
146
Once a CPU has been logically shut down, the teardown callbacks of registered
hotplug states will be invoked, starting with ``CPUHP_ONLINE`` and terminating
at state ``CPUHP_OFFLINE``. This includes:
150
151* If tasks are frozen due to a suspend operation then *cpuhp_tasks_frozen*
152  will be set to true.
153* All processes are migrated away from this outgoing CPU to new CPUs.
154  The new CPU is chosen from each process' current cpuset, which may be
155  a subset of all online CPUs.
* All interrupts targeted to this CPU are migrated to a new CPU.
* Timers are also migrated to a new CPU.
* Once all services are migrated, the kernel calls an architecture-specific
  routine ``__cpu_disable()`` to perform architecture-specific cleanup.
160
161Using the hotplug API
162---------------------
163
It is possible to receive notifications once a CPU goes offline or comes
online. This might be important for certain drivers which need to perform some
kind of setup or cleanup based on the number of available CPUs::
167
168  #include <linux/cpuhotplug.h>
169
170  ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "X/Y:online",
171                          Y_online, Y_prepare_down);
172
173*X* is the subsystem and *Y* the particular driver. The *Y_online* callback
174will be invoked during registration on all online CPUs. If an error
175occurs during the online callback the *Y_prepare_down* callback will be
176invoked on all CPUs on which the online callback was previously invoked.
After the registration has completed, the *Y_online* callback will be invoked
once a CPU is brought online and *Y_prepare_down* will be invoked when a
CPU is shut down. All resources which were previously allocated in
*Y_online* should be released in *Y_prepare_down*.
The return value *ret* is negative if an error occurred during the
registration process. Otherwise a positive value is returned which is the
allocated hotplug state for dynamically allocated states
(*CPUHP_AP_ONLINE_DYN*). It will return zero for predefined states.
185
The callbacks can be removed by invoking ``cpuhp_remove_state()``. In case of a
187dynamically allocated state (*CPUHP_AP_ONLINE_DYN*) use the returned state.
188During the removal of a hotplug state the teardown callback will be invoked.
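
Putting the pieces together, a registration and removal sequence for a
dynamically allocated state might look like the sketch below. The names are
the hypothetical ones used above and the module boilerplate is omitted::

  #include <linux/cpuhotplug.h>
  #include <linux/init.h>

  static enum cpuhp_state Y_hp_online;

  static int Y_online(unsigned int cpu)
  {
          /* per-CPU setup of driver Y goes here */
          return 0;
  }

  static int Y_prepare_down(unsigned int cpu)
  {
          /* undo what Y_online() did for this CPU */
          return 0;
  }

  static int __init Y_init(void)
  {
          int ret;

          ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "X/Y:online",
                                  Y_online, Y_prepare_down);
          if (ret < 0)
                  return ret;

          /* remember the dynamically allocated state for the removal */
          Y_hp_online = ret;
          return 0;
  }

  static void __exit Y_exit(void)
  {
          /* invokes Y_prepare_down on each online CPU and removes the state */
          cpuhp_remove_state(Y_hp_online);
  }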
189
190Multiple instances
191~~~~~~~~~~~~~~~~~~
192
If a driver has multiple instances and each instance needs to perform the
callback independently then it is likely that a *multi-state* should be used.
First a multi-state needs to be registered::
196
  ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "X/Y:online",
198                                Y_online, Y_prepare_down);
199  Y_hp_online = ret;
200
``cpuhp_setup_state_multi()`` behaves similarly to ``cpuhp_setup_state()``,
except that it prepares the callbacks for a multi-state and does not invoke
them. This is a one-time setup.
204Once a new instance is allocated, you need to register this new instance::
205
206  ret = cpuhp_state_add_instance(Y_hp_online, &d->node);
207
208This function will add this instance to your previously allocated
209*Y_hp_online* state and invoke the previously registered callback
210(*Y_online*) on all online CPUs. The *node* element is a ``struct
211hlist_node`` member of your per-instance data structure.
212
213On removal of the instance::
214
215  cpuhp_state_remove_instance(Y_hp_online, &d->node)
216
217should be invoked which will invoke the teardown callback on all online
218CPUs.
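
A sketch of the per-instance handling could look like this; ``struct
Y_instance`` and the wrapper functions are hypothetical, while ``Y_hp_online``
is the value returned by ``cpuhp_setup_state_multi()`` above::

  #include <linux/cpuhotplug.h>
  #include <linux/list.h>

  struct Y_instance {
          struct hlist_node node;
          /* per-instance data of driver Y */
  };

  static enum cpuhp_state Y_hp_online;

  static int Y_add_instance(struct Y_instance *d)
  {
          /* runs the registered Y_online callback for this instance
           * on all online CPUs */
          return cpuhp_state_add_instance(Y_hp_online, &d->node);
  }

  static int Y_del_instance(struct Y_instance *d)
  {
          /* runs the registered teardown callback for this instance
           * on all online CPUs */
          return cpuhp_state_remove_instance(Y_hp_online, &d->node);
  }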
219
220Manual setup
221~~~~~~~~~~~~
222
Usually it is handy to invoke the setup and teardown callbacks on registration
or removal of a state, because the operation typically needs to be performed
once a CPU goes online (offline) and during the initial setup (shutdown) of the
driver. However, each registration and removal function is also available with
a ``_nocalls`` suffix which does not invoke the provided callbacks, for the
case where invoking them is not desired. During the manual setup (or teardown)
the functions ``get_online_cpus()`` and ``put_online_cpus()`` should be used to
inhibit CPU hotplug operations.
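
As a minimal sketch, assuming the ``Y_online`` callback from the examples
above has been (or will be) registered with the ``_nocalls`` variant, the
manual setup part could look like this::

  #include <linux/cpu.h>
  #include <linux/cpumask.h>

  static void Y_setup_all_cpus(void)
  {
          unsigned int cpu;

          /*
           * Inhibit CPU hotplug so the set of online CPUs cannot change
           * while the per-CPU setup is performed by hand. Error handling
           * is omitted in this sketch.
           */
          get_online_cpus();
          for_each_online_cpu(cpu)
                  Y_online(cpu);
          put_online_cpus();
  }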
231
232
233The ordering of the events
234--------------------------
235
236The hotplug states are defined in ``include/linux/cpuhotplug.h``:
237
238* The states *CPUHP_OFFLINE* … *CPUHP_AP_OFFLINE* are invoked before the
239  CPU is up.
* The states *CPUHP_AP_OFFLINE* … *CPUHP_AP_ONLINE* are invoked
  just after the CPU has been brought up. Interrupts are off and
  the scheduler is not yet active on this CPU. Starting with *CPUHP_AP_OFFLINE*
  the callbacks are invoked on the target CPU.
244* The states between *CPUHP_AP_ONLINE_DYN* and *CPUHP_AP_ONLINE_DYN_END* are
245  reserved for the dynamic allocation.
* The states are invoked in the reverse order on CPU shutdown, starting with
  *CPUHP_ONLINE* and stopping at *CPUHP_OFFLINE*. Here the callbacks are
  invoked on the CPU that will be shut down until *CPUHP_AP_OFFLINE*.
249
A dynamically allocated state via *CPUHP_AP_ONLINE_DYN* is often enough.
However, if an earlier invocation during bring-up or shutdown is required,
then an explicit state should be acquired. An explicit state might also be
required if the hotplug event requires a specific ordering with respect to
another hotplug event.
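
For example, a driver whose startup callback must run while interrupts are
still disabled on the incoming CPU would add its own entry to ``enum
cpuhp_state`` in ``include/linux/cpuhotplug.h`` between *CPUHP_AP_OFFLINE* and
*CPUHP_AP_ONLINE*, and register at that explicit state. The state name and the
callbacks below are hypothetical::

  ret = cpuhp_setup_state(CPUHP_AP_Y_STARTING, "X/Y:starting",
                          Y_starting, Y_dying);
  /* for a predefined (non-dynamic) state, zero indicates success */
  if (ret)
          return ret;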
255
256Testing of hotplug states
257=========================
258
One way to verify whether a custom state is working as expected is to shut
down a CPU and then put it online again. It is also possible to put the CPU
into a certain state (for instance *CPUHP_AP_ONLINE*) and then go back to
*CPUHP_ONLINE*. This would simulate an error one state after *CPUHP_AP_ONLINE*,
which would lead to a rollback to the online state.
264
265All registered states are enumerated in ``/sys/devices/system/cpu/hotplug/states`` ::
266
267 $ tail /sys/devices/system/cpu/hotplug/states
268 138: mm/vmscan:online
269 139: mm/vmstat:online
270 140: lib/percpu_cnt:online
271 141: acpi/cpu-drv:online
272 142: base/cacheinfo:online
273 143: virtio/net:online
274 144: x86/mce:online
275 145: printk:online
276 168: sched:active
277 169: online
278
To roll back CPU4 to ``lib/percpu_cnt:online`` and then bring it back online, just issue::
280
281  $ cat /sys/devices/system/cpu/cpu4/hotplug/state
282  169
283  $ echo 140 > /sys/devices/system/cpu/cpu4/hotplug/target
284  $ cat /sys/devices/system/cpu/cpu4/hotplug/state
285  140
286
It is important to note that the teardown callbacks of all states above 140
have been invoked, while state 140 itself remains set up. Now bring the CPU
back online::
289
290  $ echo 169 > /sys/devices/system/cpu/cpu4/hotplug/target
291  $ cat /sys/devices/system/cpu/cpu4/hotplug/state
292  169
293
294With trace events enabled, the individual steps are visible, too::
295
296  #  TASK-PID   CPU#    TIMESTAMP  FUNCTION
297  #     | |       |        |         |
298      bash-394  [001]  22.976: cpuhp_enter: cpu: 0004 target: 140 step: 169 (cpuhp_kick_ap_work)
299   cpuhp/4-31   [004]  22.977: cpuhp_enter: cpu: 0004 target: 140 step: 168 (sched_cpu_deactivate)
300   cpuhp/4-31   [004]  22.990: cpuhp_exit:  cpu: 0004  state: 168 step: 168 ret: 0
301   cpuhp/4-31   [004]  22.991: cpuhp_enter: cpu: 0004 target: 140 step: 144 (mce_cpu_pre_down)
302   cpuhp/4-31   [004]  22.992: cpuhp_exit:  cpu: 0004  state: 144 step: 144 ret: 0
303   cpuhp/4-31   [004]  22.993: cpuhp_multi_enter: cpu: 0004 target: 140 step: 143 (virtnet_cpu_down_prep)
304   cpuhp/4-31   [004]  22.994: cpuhp_exit:  cpu: 0004  state: 143 step: 143 ret: 0
305   cpuhp/4-31   [004]  22.995: cpuhp_enter: cpu: 0004 target: 140 step: 142 (cacheinfo_cpu_pre_down)
306   cpuhp/4-31   [004]  22.996: cpuhp_exit:  cpu: 0004  state: 142 step: 142 ret: 0
307      bash-394  [001]  22.997: cpuhp_exit:  cpu: 0004  state: 140 step: 169 ret: 0
308      bash-394  [005]  95.540: cpuhp_enter: cpu: 0004 target: 169 step: 140 (cpuhp_kick_ap_work)
309   cpuhp/4-31   [004]  95.541: cpuhp_enter: cpu: 0004 target: 169 step: 141 (acpi_soft_cpu_online)
310   cpuhp/4-31   [004]  95.542: cpuhp_exit:  cpu: 0004  state: 141 step: 141 ret: 0
311   cpuhp/4-31   [004]  95.543: cpuhp_enter: cpu: 0004 target: 169 step: 142 (cacheinfo_cpu_online)
312   cpuhp/4-31   [004]  95.544: cpuhp_exit:  cpu: 0004  state: 142 step: 142 ret: 0
313   cpuhp/4-31   [004]  95.545: cpuhp_multi_enter: cpu: 0004 target: 169 step: 143 (virtnet_cpu_online)
314   cpuhp/4-31   [004]  95.546: cpuhp_exit:  cpu: 0004  state: 143 step: 143 ret: 0
315   cpuhp/4-31   [004]  95.547: cpuhp_enter: cpu: 0004 target: 169 step: 144 (mce_cpu_online)
316   cpuhp/4-31   [004]  95.548: cpuhp_exit:  cpu: 0004  state: 144 step: 144 ret: 0
317   cpuhp/4-31   [004]  95.549: cpuhp_enter: cpu: 0004 target: 169 step: 145 (console_cpu_notify)
318   cpuhp/4-31   [004]  95.550: cpuhp_exit:  cpu: 0004  state: 145 step: 145 ret: 0
319   cpuhp/4-31   [004]  95.551: cpuhp_enter: cpu: 0004 target: 169 step: 168 (sched_cpu_activate)
320   cpuhp/4-31   [004]  95.552: cpuhp_exit:  cpu: 0004  state: 168 step: 168 ret: 0
321      bash-394  [005]  95.553: cpuhp_exit:  cpu: 0004  state: 169 step: 140 ret: 0
322
As can be seen, CPU4 went down until timestamp 22.996 and then came back up
until 95.552. All invoked callbacks, including their return codes, are visible
in the trace.
326
327Architecture's requirements
328===========================
329
330The following functions and configurations are required:
331
332``CONFIG_HOTPLUG_CPU``
333  This entry needs to be enabled in Kconfig
334
335``__cpu_up()``
336  Arch interface to bring up a CPU
337
338``__cpu_disable()``
  Arch interface to shut down a CPU; no more interrupts can be handled by the
  kernel after the routine returns. This includes the shutdown of the timer.
341
342``__cpu_die()``
  This is supposed to ensure the death of the CPU. Look at some example code
  in other architectures that implement CPU hotplug. The processor is taken
  down from the ``idle()`` loop of that specific architecture. ``__cpu_die()``
  typically waits for some per_cpu state to be set, to make sure the
  processor's dead routine has actually been called.
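
As a rough sketch, the prototypes which the core hotplug code expects from an
architecture look like this; the exact details can differ between
architectures and kernel versions::

  /* bring up the CPU, passing the idle task it is supposed to run */
  int __cpu_up(unsigned int cpu, struct task_struct *tidle);

  /* shut down interrupt handling on the outgoing CPU */
  int __cpu_disable(void);

  /* wait until the dead CPU has really stopped executing */
  void __cpu_die(unsigned int cpu);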
348
349User Space Notification
350=======================
351
After a CPU has been successfully onlined or offlined, udev events are sent. A udev rule like::
353
354  SUBSYSTEM=="cpu", DRIVERS=="processor", DEVPATH=="/devices/system/cpu/*", RUN+="the_hotplug_receiver.sh"
355
356will receive all events. A script like::
357
358  #!/bin/sh
359
360  if [ "${ACTION}" = "offline" ]
361  then
362      echo "CPU ${DEVPATH##*/} offline"
363
364  elif [ "${ACTION}" = "online" ]
365  then
366      echo "CPU ${DEVPATH##*/} online"
367
368  fi
369
370can process the event further.
371
Kernel Inline Documentation Reference
=====================================
374
375.. kernel-doc:: include/linux/cpuhotplug.h
376