.. SPDX-License-Identifier: GPL-2.0

=================
KVM Lock Overview
=================

1. Acquisition Orders
---------------------

The acquisition orders for mutexes are as follows:

- cpus_read_lock() is taken outside kvm_lock and kvm_usage_lock

- kvm->lock is taken outside vcpu->mutex

- kvm->lock is taken outside kvm->slots_lock and kvm->irq_lock

- kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
  them together is quite rare.

- kvm->mn_active_invalidate_count ensures that pairs of
  invalidate_range_start() and invalidate_range_end() callbacks
  use the same memslots array.  kvm->slots_lock and kvm->slots_arch_lock
  are taken on the waiting side when modifying memslots, so MMU notifiers
  must not take either kvm->slots_lock or kvm->slots_arch_lock.
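
For illustration, a minimal sketch of the resulting mutex nesting (the
snippet is hypothetical; only the locks and their order come from the
rules above)::

    mutex_lock(&kvm->lock);        /* outer: VM-wide lock */
    mutex_lock(&vcpu->mutex);      /* inner: per-vCPU mutex */
    /* ... operate on the vCPU with both locks held ... */
    mutex_unlock(&vcpu->mutex);
    mutex_unlock(&kvm->lock);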

cpus_read_lock() vs kvm_lock:

- Taking cpus_read_lock() outside of kvm_lock is problematic, despite that
  being the official ordering, as it is quite easy to unknowingly trigger
  cpus_read_lock() while holding kvm_lock.  Use caution when walking vm_list,
  e.g. avoid complex operations when possible.

For SRCU:

- ``synchronize_srcu(&kvm->srcu)`` is called inside critical sections
  for kvm->lock, vcpu->mutex and kvm->slots_lock.  These locks _cannot_
  be taken inside a kvm->srcu read-side critical section; that is, the
  following is broken::

      srcu_read_lock(&kvm->srcu);
      mutex_lock(&kvm->slots_lock);

- kvm->slots_arch_lock instead is released before the call to
  ``synchronize_srcu()``.  It _can_ therefore be taken inside a
  kvm->srcu read-side critical section, for example while processing
  a vmexit.
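
  In contrast to the broken pattern above, a minimal sketch of the
  allowed nesting (variable names are illustrative)::

      int idx;

      idx = srcu_read_lock(&kvm->srcu);
      mutex_lock(&kvm->slots_arch_lock);
      /* ... modify arch-specific memslot fields ... */
      mutex_unlock(&kvm->slots_arch_lock);
      srcu_read_unlock(&kvm->srcu, idx);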

On x86:

- vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock and kvm->arch.xen.xen_lock

- kvm->arch.mmu_lock is an rwlock.  kvm->arch.tdp_mmu_pages_lock and
  kvm->arch.mmu_unsync_pages_lock are taken inside kvm->arch.mmu_lock, and
  cannot be taken without already holding kvm->arch.mmu_lock (typically with
  ``read_lock`` for the TDP MMU, thus the need for additional spinlocks).
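
  For example, a sketch of the nesting used by the TDP MMU (taken from the
  ordering above; the critical-section body is illustrative)::

      read_lock(&kvm->arch.mmu_lock);
      spin_lock(&kvm->arch.tdp_mmu_pages_lock);
      /* ... mutate state shared among mmu_lock readers ... */
      spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
      read_unlock(&kvm->arch.mmu_lock);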

Everything else is a leaf: no other lock is taken inside the critical
sections.

2. Exception
------------

Fast page fault:

Fast page fault is the fast path which fixes the guest page fault without
taking the mmu-lock on x86. Currently, the page fault can be fast in one of
the following two cases:

1. Access Tracking: The SPTE is not present, but it is marked for access
   tracking. That means we need to restore the saved R/X bits. This is
   described in more detail below.

2. Write-Protection: The SPTE is present and the fault is caused by
   write-protection. That means we just need to change the W bit of the spte.

What we use to avoid all the races is the Host-writable bit and MMU-writable bit
on the spte:

- Host-writable means the gfn is writable in the host kernel page tables and in
  its KVM memslot.
- MMU-writable means the gfn is writable in the guest's mmu and it is not
  write-protected by shadow page write-protection.

On the fast page fault path, we will use cmpxchg to atomically set the spte W
bit if spte.HOST_WRITEABLE = 1 and spte.WRITE_PROTECT = 1, to restore the saved
R/X bits for an access-tracked spte, or both. This is safe because any
change to these bits is detected by the cmpxchg.
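
A minimal sketch of the idea (simplified; is_access_tracked(),
restore_saved_rx_bits() and host_and_mmu_writable() are illustrative
stand-ins for the checks performed by the real fast_page_fault())::

	for (;;) {
		u64 old_spte = READ_ONCE(*sptep);
		u64 new_spte = old_spte;

		if (is_access_tracked(old_spte))      /* restore saved R/X bits */
			new_spte = restore_saved_rx_bits(new_spte);
		if (host_and_mmu_writable(old_spte))  /* write-protection fault */
			new_spte |= PT_WRITABLE_MASK;

		/* Any concurrent change to the spte makes the cmpxchg fail. */
		if (cmpxchg64(sptep, old_spte, new_spte) == old_spte)
			break;
	}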

But we need to carefully check these cases:

1) The mapping from gfn to pfn

The mapping from gfn to pfn may be changed, since we can only ensure that the
pfn is not changed during the cmpxchg. This is an ABA problem; for example,
the following can happen:

+------------------------------------------------------------------------+
| At the beginning::                                                     |
|                                                                        |
|	gpte = gfn1                                                      |
|	gfn1 is mapped to pfn1 on host                                   |
|	spte is the shadow page table entry corresponding with gpte and  |
|	spte = pfn1                                                      |
+------------------------------------------------------------------------+
| On fast page fault path:                                               |
+------------------------------------+-----------------------------------+
| CPU 0:                             | CPU 1:                            |
+------------------------------------+-----------------------------------+
| ::                                 |                                   |
|                                    |                                   |
|   old_spte = *spte;                |                                   |
+------------------------------------+-----------------------------------+
|                                    | pfn1 is swapped out::             |
|                                    |                                   |
|                                    |    spte = 0;                      |
|                                    |                                   |
|                                    | pfn1 is re-allocated for gfn2.    |
|                                    |                                   |
|                                    | gpte is changed to point to       |
|                                    | gfn2 by the guest::               |
|                                    |                                   |
|                                    |    spte = pfn1;                   |
+------------------------------------+-----------------------------------+
| ::                                                                     |
|                                                                        |
|   if (cmpxchg(spte, old_spte, old_spte+W))                             |
|	mark_page_dirty(vcpu->kvm, gfn1)                                 |
|            OOPS!!!                                                     |
+------------------------------------------------------------------------+

We dirty-log for gfn1; that means gfn2's dirty bit is lost from the
dirty bitmap.

For a direct sp, we can easily avoid it since the spte of a direct sp is fixed
to the gfn.  For an indirect sp, fast page fault is disabled for simplicity.

A solution for indirect sp could be to pin the gfn, for example via
kvm_vcpu_gfn_to_pfn_atomic, before the cmpxchg.  After the pinning:

- We have held the refcount of pfn; that means the pfn can not be freed and
  be reused for another gfn.
- The pfn is writable and therefore it cannot be shared between different gfns
  by KSM.

Then, we can ensure the dirty bitmap is correctly set for a gfn.
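
A minimal sketch of that hypothetical pinning scheme (KVM does not actually
do this; as noted above, it simply disables fast page fault for indirect
sps)::

	kvm_pfn_t pfn;

	/* Pin the pfn so it cannot be freed and reused for another gfn
	 * between reading the spte and the cmpxchg below. */
	pfn = kvm_vcpu_gfn_to_pfn_atomic(vcpu, gfn);
	if (is_error_pfn(pfn))
		return false;

	/* ... cmpxchg on the spte, mark_page_dirty(), etc. ... */

	kvm_release_pfn_clean(pfn);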

2) Dirty bit tracking

In the original code, the spte can be updated quickly and non-atomically if
the spte is read-only and the Accessed bit has already been set, since then
neither the Accessed bit nor the Dirty bit can be lost.

But this no longer holds with fast page fault, since the spte can be marked
writable between reading the spte and updating it, as in the following case:

+------------------------------------------------------------------------+
| At the beginning::                                                     |
|                                                                        |
|	spte.W = 0                                                       |
|	spte.Accessed = 1                                                |
+------------------------------------+-----------------------------------+
| CPU 0:                             | CPU 1:                            |
+------------------------------------+-----------------------------------+
| In mmu_spte_clear_track_bits()::   |                                   |
|                                    |                                   |
|  old_spte = *spte;                 |                                   |
|                                    |                                   |
|                                    |                                   |
|  /* 'if' condition is satisfied. */|                                   |
|  if (old_spte.Accessed == 1 &&     |                                   |
|       old_spte.W == 0)             |                                   |
|     spte = 0ull;                   |                                   |
+------------------------------------+-----------------------------------+
|                                    | on fast page fault path::         |
|                                    |                                   |
|                                    |    spte.W = 1                     |
|                                    |                                   |
|                                    | memory write on the spte::        |
|                                    |                                   |
|                                    |    spte.Dirty = 1                 |
+------------------------------------+-----------------------------------+
|  ::                                |                                   |
|                                    |                                   |
|   else                             |                                   |
|     old_spte = xchg(spte, 0ull)    |                                   |
|   if (old_spte.Accessed == 1)      |                                   |
|     kvm_set_pfn_accessed(spte.pfn);|                                   |
|   if (old_spte.Dirty == 1)         |                                   |
|     kvm_set_pfn_dirty(spte.pfn);   |                                   |
|     OOPS!!!                        |                                   |
+------------------------------------+-----------------------------------+

The Dirty bit is lost in this case.

In order to avoid this kind of issue, we always treat the spte as "volatile"
if it can be updated out of mmu-lock [see spte_has_volatile_bits()]; this
means the spte is always updated atomically in that case.

3) Flushing TLBs due to spte updates

If the spte is updated from writable to read-only, we should flush all TLBs,
otherwise rmap_write_protect will find a read-only spte, even though the
writable spte might still be cached in a CPU's TLB.

As mentioned before, the spte can be updated to writable out of mmu-lock on
the fast page fault path. To keep the path easy to audit, the check for
whether TLBs need to be flushed for this reason is centralized in
mmu_spte_update(), the common function for updating the spte
(present -> present).

Since the spte is "volatile" if it can be updated out of mmu-lock, we always
update the spte atomically, and the race with the fast page fault path is
avoided. See the comments in spte_has_volatile_bits() and mmu_spte_update().
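
A condensed sketch of that logic, loosely modeled on mmu_spte_update()
(simplified; the real function handles more bits)::

	u64 old_spte = *sptep;

	if (!spte_has_volatile_bits(old_spte))
		WRITE_ONCE(*sptep, new_spte);      /* non-atomic update is fine */
	else
		old_spte = xchg(sptep, new_spte);  /* must be atomic */

	/* Dropping the W bit demands a TLB flush: a stale writable
	 * translation may still be cached on some CPU. */
	flush = is_writable_pte(old_spte) && !is_writable_pte(new_spte);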

Lockless Access Tracking:

This is used for Intel CPUs that are using EPT but do not support the EPT A/D
bits. In this case, PTEs are tagged as A/D disabled (using ignored bits), and
when the KVM MMU notifier is called to track accesses to a page (via
kvm_mmu_notifier_clear_flush_young), it marks the PTE not-present in hardware
by clearing the RWX bits in the PTE and storing the original R & X bits in more
unused/ignored bits. When the VM tries to access the page later on, a fault is
generated and the fast page fault mechanism described above is used to
atomically restore the PTE to a Present state. The W bit is not saved when the
PTE is marked for access tracking, and during restoration to the Present state
the W bit is set depending on whether or not it was a write access. If it
wasn't, then the W bit will remain clear until a write access happens, at which
time it will be set using the Dirty tracking mechanism described above.
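
A minimal sketch of the marking step (loosely following KVM's
mark_spte_for_access_track(); the mask and shift names follow KVM's spte
code, the rest is simplified)::

	/* Stash the original R/X bits in ignored bits of the spte ... */
	spte |= (spte & SHADOW_ACC_TRACK_SAVED_BITS_MASK) <<
		SHADOW_ACC_TRACK_SAVED_BITS_SHIFT;
	/* ... then clear RWX so the next guest access faults. */
	spte &= ~shadow_acc_track_mask;

On the subsequent fault, the fast page fault path shifts the saved bits back
down to make the PTE present again, setting W only for a write access.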

3. Reference
------------

``kvm_lock``
^^^^^^^^^^^^

:Type:		mutex
:Arch:		any
:Protects:	- vm_list

``kvm_usage_lock``
^^^^^^^^^^^^^^^^^^

:Type:		mutex
:Arch:		any
:Protects:	- kvm_usage_count
		- hardware virtualization enable/disable
:Comment:	Exists because using kvm_lock leads to deadlock (see earlier comment
		on cpus_read_lock() vs kvm_lock).  Note, KVM also disables CPU hotplug via
		cpus_read_lock() when enabling/disabling virtualization.

``kvm->mn_invalidate_lock``
^^^^^^^^^^^^^^^^^^^^^^^^^^^

:Type:          spinlock_t
:Arch:          any
:Protects:      mn_active_invalidate_count, mn_memslots_update_rcuwait

``kvm_arch::tsc_write_lock``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

:Type:		raw_spinlock_t
:Arch:		x86
:Protects:	- kvm_arch::{last_tsc_write,last_tsc_nsec,last_tsc_offset}
		- tsc offset in vmcb
:Comment:	'raw' because updating the tsc offsets must not be preempted.

``kvm->mmu_lock``
^^^^^^^^^^^^^^^^^
:Type:		spinlock_t or rwlock_t
:Arch:		any
:Protects:	- shadow page/shadow tlb entry
:Comment:	It is a spinlock since it is used in the MMU notifier.

``kvm->srcu``
^^^^^^^^^^^^^
:Type:		srcu lock
:Arch:		any
:Protects:	- kvm->memslots
		- kvm->buses
:Comment:	The srcu read lock must be held while accessing memslots (e.g.
		when using gfn_to_* functions) and while accessing in-kernel
		MMIO/PIO address->device structure mapping (kvm->buses).
		The srcu index can be stored in kvm_vcpu->srcu_idx per vcpu
		if it is needed by multiple functions.
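
A minimal usage sketch (gfn_to_memslot() is a real helper; the surrounding
variables are illustrative)::

	int idx = srcu_read_lock(&kvm->srcu);
	struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);

	/* ... memslots and kvm->buses may be dereferenced here ... */
	srcu_read_unlock(&kvm->srcu, idx);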

``kvm->slots_arch_lock``
^^^^^^^^^^^^^^^^^^^^^^^^
:Type:          mutex
:Arch:          any (only needed on x86 though)
:Protects:      any arch-specific fields of memslots that have to be modified
                in a ``kvm->srcu`` read-side critical section.
:Comment:       must be held before reading the pointer to the current memslots,
                until after all changes to the memslots are complete

``wakeup_vcpus_on_cpu_lock``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
:Type:		spinlock_t
:Arch:		x86
:Protects:	wakeup_vcpus_on_cpu
:Comment:	This is a per-CPU lock and it is used for VT-d posted-interrupts.
		When VT-d posted-interrupts are supported and the VM has assigned
		devices, we put the blocked vCPU on the per-CPU list
		wakeup_vcpus_on_cpu protected by this lock.  When VT-d hardware
		issues a wakeup notification event because an external interrupt
		from an assigned device arrives, we find the vCPU on the list and
		wake it up.

``vendor_module_lock``
^^^^^^^^^^^^^^^^^^^^^^
:Type:		mutex
:Arch:		x86
:Protects:	loading a vendor module (kvm_amd or kvm_intel)
:Comment:	Exists because using kvm_lock leads to deadlock.  kvm_lock is taken
    in notifiers, e.g. __kvmclock_cpufreq_notifier(), that may be invoked while
    cpu_hotplug_lock is held, e.g. from cpufreq_boost_trigger_state(), and many
    operations need to take cpu_hotplug_lock when loading a vendor module, e.g.
    updating static calls.