.. _NMI_rcu_doc:

Using RCU to Protect Dynamic NMI Handlers
=========================================


Although RCU is usually used to protect read-mostly data structures,
it is possible to use RCU to provide dynamic non-maskable interrupt
handlers, as well as dynamic irq handlers.  This document describes
how to do this, drawing loosely from Zwane Mwaikambo's NMI-timer
work in "arch/x86/oprofile/nmi_timer_int.c" and in
"arch/x86/kernel/traps.c".

The relevant pieces of code are listed below, each followed by a
brief explanation::

	static int dummy_nmi_callback(struct pt_regs *regs, int cpu)
	{
		return 0;
	}

The dummy_nmi_callback() function is a "dummy" NMI handler that does
nothing, but returns zero, thus saying that it did nothing, allowing
the NMI handler to take the default machine-specific action::

	static nmi_callback_t nmi_callback = dummy_nmi_callback;

This nmi_callback variable is a global function pointer to the current
NMI handler::

	void do_nmi(struct pt_regs * regs, long error_code)
	{
		int cpu;

		nmi_enter();

		cpu = smp_processor_id();
		++nmi_count(cpu);

		if (!rcu_dereference_sched(nmi_callback)(regs, cpu))
			default_do_nmi(regs);

		nmi_exit();
	}

The do_nmi() function processes each NMI.  It first disables preemption
in the same way that a hardware irq would, then increments the per-CPU
count of NMIs.  It then invokes the NMI handler stored in the nmi_callback
function pointer.  If this handler returns zero, do_nmi() invokes the
default_do_nmi() function to handle a machine-specific NMI.  Finally,
preemption is restored.
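
For concreteness, a handler installed in nmi_callback might look
something like the following sketch.  The my_nmi_callback() and
my_device_raised_nmi() names are invented for this example, and the
nmi_callback_t typedef is shown only to make the expected signature
explicit; the point being illustrated is the return-value convention
that do_nmi() relies on::

	/* Plausible definition, matching dummy_nmi_callback() above. */
	typedef int (*nmi_callback_t)(struct pt_regs *regs, int cpu);

	static int my_nmi_callback(struct pt_regs *regs, int cpu)
	{
		if (!my_device_raised_nmi())	/* hypothetical check */
			return 0;	/* not ours, take the default action */

		/* Handle the event, doing as little work as possible
		 * in NMI context. */
		return 1;	/* handled, so default_do_nmi() is skipped */
	}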

In theory, rcu_dereference_sched() is not needed, since this code runs
only on i386, which in theory does not need rcu_dereference_sched()
anyway.  However, in practice it is a good documentation aid, particularly
for anyone attempting to do something similar on Alpha or on systems
with aggressive optimizing compilers.

Quick Quiz:
		Why might the rcu_dereference_sched() be necessary on
		Alpha, given that the code referenced by the pointer is
		read-only?

:ref:`Answer to Quick Quiz <answer_quick_quiz_NMI>`

Back to the discussion of NMI and RCU::

	void set_nmi_callback(nmi_callback_t callback)
	{
		rcu_assign_pointer(nmi_callback, callback);
	}

The set_nmi_callback() function registers an NMI handler.  Note that any
data that is to be used by the callback must be initialized *before*
the call to set_nmi_callback().  On architectures that do not order
writes, the rcu_assign_pointer() ensures that the NMI handler sees the
initialized values::

	void unset_nmi_callback(void)
	{
		rcu_assign_pointer(nmi_callback, dummy_nmi_callback);
	}

This function unregisters an NMI handler, restoring the original
dummy_nmi_callback().  However, there may well be an NMI handler
currently executing on some other CPU.  We therefore cannot free
up any data structures used by the old NMI handler until execution
of it completes on all other CPUs.

One way to accomplish this is via synchronize_rcu(), perhaps as
follows::

	unset_nmi_callback();
	synchronize_rcu();
	kfree(my_nmi_data);

This works because (as of v4.20) synchronize_rcu() blocks until all
CPUs complete any preemption-disabled segments of code that they were
executing.  Since NMI handlers disable preemption, synchronize_rcu()
is guaranteed not to return until all ongoing NMI handlers exit.  It
is therefore safe to free up the handler's data as soon as
synchronize_rcu() returns.
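
Putting the pieces together, the lifetime of a dynamically registered
NMI handler and the my_nmi_data used above might look like the
following sketch.  The struct my_nmi_state type and the
my_nmi_handler(), my_nmi_register(), and my_nmi_unregister() functions
are invented for this example, and error handling is kept to a
minimum.  The points being illustrated are the ordering guarantees:
the data is fully initialized before set_nmi_callback() publishes the
handler, and synchronize_rcu() separates unset_nmi_callback() from the
kfree()::

	struct my_nmi_state {
		atomic_t events;
	};

	static struct my_nmi_state *my_nmi_data;

	static int my_nmi_handler(struct pt_regs *regs, int cpu)
	{
		/* rcu_dereference_sched() documents that this pointer
		 * is protected by the same mechanism as nmi_callback. */
		struct my_nmi_state *state = rcu_dereference_sched(my_nmi_data);

		atomic_inc(&state->events);
		return 1;	/* handled */
	}

	int my_nmi_register(void)
	{
		struct my_nmi_state *state = kmalloc(sizeof(*state), GFP_KERNEL);

		if (!state)
			return -ENOMEM;
		atomic_set(&state->events, 0);

		/* Publish the data, then the handler.  rcu_assign_pointer()
		 * orders the initialization above before the pointer becomes
		 * visible to NMI handlers running on other CPUs. */
		rcu_assign_pointer(my_nmi_data, state);
		set_nmi_callback(my_nmi_handler);
		return 0;
	}

	void my_nmi_unregister(void)
	{
		unset_nmi_callback();

		/* Wait for any NMI handler still executing on some other
		 * CPU to complete before freeing its data. */
		synchronize_rcu();
		kfree(my_nmi_data);
		my_nmi_data = NULL;
	}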

Important note: for this to work, the architecture in question must
invoke nmi_enter() and nmi_exit() on NMI entry and exit, respectively.

.. _answer_quick_quiz_NMI:

Answer to Quick Quiz:
	Why might the rcu_dereference_sched() be necessary on Alpha, given
	that the code referenced by the pointer is read-only?

	The caller of set_nmi_callback() might well have
	initialized some data that is to be used by the new NMI
	handler.  In this case, the rcu_dereference_sched() would
	be needed, because otherwise a CPU that received an NMI
	just after the new handler was set might see the pointer
	to the new NMI handler, but the handler's data as it was
	before initialization.

	This same sad story can happen on other CPUs when using
	a compiler with aggressive pointer-value speculation
	optimizations.

	More important, the rcu_dereference_sched() makes it
	clear to someone reading the code that the pointer is
	being protected by RCU-sched.