.. SPDX-License-Identifier: GPL-2.0

=================
KVM Lock Overview
=================

1. Acquisition Orders
---------------------

The acquisition orders for mutexes are as follows:

- kvm->lock is taken outside vcpu->mutex

- kvm->lock is taken outside kvm->slots_lock and kvm->irq_lock

- kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
  them together is quite rare.

On x86, vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock.

Everything else is a leaf: no other lock is taken inside the critical
sections.
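
As an illustration, here is a minimal sketch of an operation that needs
both kvm->lock and a vCPU's mutex. The function is hypothetical; only the
nesting order is the point::

	/* Hypothetical helper; only the lock ordering matters here. */
	static void example_vm_vcpu_op(struct kvm *kvm, struct kvm_vcpu *vcpu)
	{
		mutex_lock(&kvm->lock);		/* outermost lock */
		mutex_lock(&vcpu->mutex);	/* nests inside kvm->lock */

		/* ... work that touches both VM-wide and vCPU state ... */

		mutex_unlock(&vcpu->mutex);	/* release in reverse order */
		mutex_unlock(&kvm->lock);
	}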

2. Exception
------------

Fast page fault:

Fast page fault is the fast path which fixes the guest page fault without
taking the mmu-lock on x86. Currently, the page fault can be fast in one of
the following two cases:

1. Access Tracking: The SPTE is not present, but it is marked for access
   tracking, i.e. the SPTE_SPECIAL_MASK is set. That means we need to
   restore the saved R/X bits. This is described in more detail later below.

2. Write-Protection: The SPTE is present and the fault is caused by
   write-protect. That means we just need to change the W bit of the spte.

What we use to avoid all the races is the SPTE_HOST_WRITEABLE bit and the
SPTE_MMU_WRITEABLE bit on the spte:

- SPTE_HOST_WRITEABLE means the gfn is writable on host.
- SPTE_MMU_WRITEABLE means the gfn is writable on the guest mmu. The bit is
  set when the gfn is writable on the guest mmu and it is not write-protected
  by shadow page write-protection.

On the fast page fault path, we will use cmpxchg to atomically set the spte W
bit if spte.SPTE_HOST_WRITEABLE = 1 and spte.SPTE_MMU_WRITEABLE = 1, or
restore the saved R/X bits if the VMX_EPT_TRACK_ACCESS mask is set, or both.
This is safe because any change to these bits can be detected by cmpxchg.
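
A minimal sketch of this lockless update; sptep, PT_WRITABLE_MASK and the
surrounding logic are illustrative rather than the exact kernel code::

	u64 old_spte = READ_ONCE(*sptep);

	if ((old_spte & SPTE_HOST_WRITEABLE) &&
	    (old_spte & SPTE_MMU_WRITEABLE)) {
		u64 new_spte = old_spte | PT_WRITABLE_MASK;

		/*
		 * cmpxchg fails if anything in the spte changed after
		 * old_spte was read, so a concurrent change to these
		 * bits is never missed.
		 */
		if (cmpxchg64(sptep, old_spte, new_spte) == old_spte)
			return true;	/* fixed without mmu-lock */
	}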

But we need to carefully check these cases:

1) The mapping from gfn to pfn

The mapping from gfn to pfn may be changed since we can only ensure the pfn
is not changed during cmpxchg. This is an ABA problem; for example, the
following case can happen:

+------------------------------------------------------------------------+
| At the beginning::                                                     |
|                                                                        |
|	gpte = gfn1                                                      |
|	gfn1 is mapped to pfn1 on host                                   |
|	spte is the shadow page table entry corresponding with gpte and  |
|	spte = pfn1                                                      |
+------------------------------------------------------------------------+
| On fast page fault path:                                               |
+------------------------------------+-----------------------------------+
| CPU 0:                             | CPU 1:                            |
+------------------------------------+-----------------------------------+
| ::                                 |                                   |
|                                    |                                   |
|   old_spte = *spte;                |                                   |
+------------------------------------+-----------------------------------+
|                                    | pfn1 is swapped out::             |
|                                    |                                   |
|                                    |    spte = 0;                      |
|                                    |                                   |
|                                    | pfn1 is re-alloced for gfn2.      |
|                                    |                                   |
|                                    | gpte is changed to point to       |
|                                    | gfn2 by the guest::               |
|                                    |                                   |
|                                    |    spte = pfn1;                   |
+------------------------------------+-----------------------------------+
| ::                                                                     |
|                                                                        |
|   if (cmpxchg(spte, old_spte, old_spte+W)                              |
|	mark_page_dirty(vcpu->kvm, gfn1)                                 |
|            OOPS!!!                                                     |
+------------------------------------------------------------------------+

We dirty-log for gfn1; that means gfn2 is lost in the dirty-bitmap.

For direct sp, we can easily avoid it since the spte of direct sp is fixed
to gfn.  For indirect sp, we disabled fast page fault for simplicity.

A solution for indirect sp could be to pin the gfn, for example via
kvm_vcpu_gfn_to_pfn_atomic, before the cmpxchg.  After the pinning:

- We have held the refcount of pfn; that means the pfn can not be freed and
  be reused for another gfn.
- The pfn is writable and therefore it cannot be shared between different gfns
  by KSM.

Then, we can ensure the dirty bitmap is correctly set for a gfn.
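
A sketch of that possible solution; kvm_vcpu_gfn_to_pfn_atomic() and
kvm_release_pfn_clean() are real helpers, while the surrounding shape is
hypothetical::

	kvm_pfn_t pfn = kvm_vcpu_gfn_to_pfn_atomic(vcpu, gfn);

	if (is_error_pfn(pfn))
		return false;

	/*
	 * The refcount we hold on pfn guarantees it cannot be freed
	 * and reused for another gfn while the cmpxchg is attempted.
	 */
	if (pfn == spte_to_pfn(old_spte) &&
	    cmpxchg64(sptep, old_spte, old_spte | PT_WRITABLE_MASK) == old_spte)
		mark_page_dirty(vcpu->kvm, gfn);

	kvm_release_pfn_clean(pfn);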

2) Dirty bit tracking

In the original code, the spte can be updated fast (non-atomically) if the
spte is read-only and the Accessed bit has already been set, since then the
Accessed bit and the Dirty bit can not be lost.

But this is not true after fast page fault, since the spte can be marked
writable between reading the spte and updating it, as in the following case:

+------------------------------------------------------------------------+
| At the beginning::                                                     |
|                                                                        |
|	spte.W = 0                                                       |
|	spte.Accessed = 1                                                |
+------------------------------------+-----------------------------------+
| CPU 0:                             | CPU 1:                            |
+------------------------------------+-----------------------------------+
| In mmu_spte_clear_track_bits()::   |                                   |
|                                    |                                   |
|  old_spte = *spte;                 |                                   |
|                                    |                                   |
|                                    |                                   |
|  /* 'if' condition is satisfied. */|                                   |
|  if (old_spte.Accessed == 1 &&     |                                   |
|       old_spte.W == 0)             |                                   |
|     spte = 0ull;                   |                                   |
+------------------------------------+-----------------------------------+
|                                    | on fast page fault path::         |
|                                    |                                   |
|                                    |    spte.W = 1                     |
|                                    |                                   |
|                                    | memory write on the spte::        |
|                                    |                                   |
|                                    |    spte.Dirty = 1                 |
+------------------------------------+-----------------------------------+
|  ::                                |                                   |
|                                    |                                   |
|   else                             |                                   |
|     old_spte = xchg(spte, 0ull)    |                                   |
|   if (old_spte.Accessed == 1)      |                                   |
|     kvm_set_pfn_accessed(spte.pfn);|                                   |
|   if (old_spte.Dirty == 1)         |                                   |
|     kvm_set_pfn_dirty(spte.pfn);   |                                   |
|     OOPS!!!                        |                                   |
+------------------------------------+-----------------------------------+

The Dirty bit is lost in this case.

In order to avoid this kind of issue, we always treat the spte as "volatile"
if it can be updated out of mmu-lock; see spte_has_volatile_bits(). This
means the spte is always atomically updated in this case.
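
The clearing side then looks roughly like this; a simplified sketch of
mmu_spte_clear_track_bits(), not the verbatim source::

	u64 old_spte = *sptep;

	if (!spte_has_volatile_bits(old_spte))
		/* Nothing can change under us: a plain write suffices. */
		__update_clear_spte_fast(sptep, 0ull);
	else
		/*
		 * The spte can change out of mmu-lock (e.g. via fast
		 * page fault), so exchange it atomically to pick up
		 * any Accessed/Dirty bit set concurrently.
		 */
		old_spte = __update_clear_spte_slow(sptep, 0ull);

	if (old_spte & shadow_accessed_mask)
		kvm_set_pfn_accessed(spte_to_pfn(old_spte));
	if (old_spte & shadow_dirty_mask)
		kvm_set_pfn_dirty(spte_to_pfn(old_spte));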

3) Flush TLBs due to spte updated

If the spte is updated from writable to read-only, we should flush all TLBs;
otherwise rmap_write_protect will find a read-only spte, even though the
writable spte might still be cached in a CPU's TLB.

As mentioned before, the spte can be updated to writable out of mmu-lock on
the fast page fault path. In order to easily audit the path, we check in
mmu_spte_update() whether TLBs need to be flushed for this reason, since it
is a common function to update the spte (present -> present).

Since the spte is "volatile" if it can be updated out of mmu-lock, we always
atomically update the spte, and the race caused by fast page fault can be
avoided. See the comments in spte_has_volatile_bits() and mmu_spte_update().
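
A simplified sketch of that check in mmu_spte_update(); illustrative, not
the verbatim source::

	/*
	 * A writable -> read-only transition must be reported to the
	 * caller so that stale, still-writable TLB entries are flushed.
	 */
	if (is_writable_pte(old_spte) && !is_writable_pte(new_spte))
		flush = true;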

Lockless Access Tracking:

This is used for Intel CPUs that use EPT but do not support the EPT A/D
bits. In this case, when the KVM MMU notifier is called to track accesses to a
page (via kvm_mmu_notifier_clear_flush_young), it marks the PTE as not-present
by clearing the RWX bits in the PTE and storing the original R & X bits in
some unused/ignored bits. In addition, the SPTE_SPECIAL_MASK is also set on the
PTE (using the ignored bit 62). When the VM tries to access the page later on,
a fault is generated and the fast page fault mechanism described above is used
to atomically restore the PTE to a Present state. The W bit is not saved when
the PTE is marked for access tracking, and during restoration to the Present
state the W bit is set depending on whether or not it was a write access. If
it wasn't, then the W bit will remain clear until a write access happens, at
which time it will be set using the Dirty tracking mechanism described above.
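
A simplified sketch of marking an spte for access tracking, modeled on the
kernel's mark_spte_for_access_track(); the mask names follow the
description above::

	/*
	 * Save the R/X bits into unused/ignored high bits, clear RWX so
	 * the next access faults, and tag the spte with
	 * shadow_acc_track_value (the SPTE_SPECIAL_MASK) so the fast
	 * page fault path can tell it apart from a truly non-present
	 * spte and restore the saved bits.
	 */
	spte |= (spte & shadow_acc_track_saved_bits_mask) <<
		shadow_acc_track_saved_bits_shift;
	spte &= ~shadow_acc_track_mask;
	spte |= shadow_acc_track_value;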

3. Reference
------------

:Name:		kvm_lock
:Type:		mutex
:Arch:		any
:Protects:	- vm_list

:Name:		kvm_count_lock
:Type:		raw_spinlock_t
:Arch:		any
:Protects:	- hardware virtualization enable/disable
:Comment:	'raw' because hardware enabling/disabling must be atomic /wrt
		migration.

:Name:		kvm_arch::tsc_write_lock
:Type:		raw_spinlock
:Arch:		x86
:Protects:	- kvm_arch::{last_tsc_write,last_tsc_nsec,last_tsc_offset}
		- tsc offset in vmcb
:Comment:	'raw' because updating the tsc offsets must not be preempted.

:Name:		kvm->mmu_lock
:Type:		spinlock_t
:Arch:		any
:Protects:	- shadow page/shadow tlb entry
:Comment:	It is a spinlock since it is used in the mmu notifier.

:Name:		kvm->srcu
:Type:		srcu lock
:Arch:		any
:Protects:	- kvm->memslots
		- kvm->buses
:Comment:	The srcu read lock must be held while accessing memslots (e.g.
		when using gfn_to_* functions) and while accessing in-kernel
		MMIO/PIO address->device structure mapping (kvm->buses).
		The srcu index can be stored in kvm_vcpu->srcu_idx per vcpu
		if it is needed by multiple functions.
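
		A typical read-side section looks like this (standard SRCU
		API; gfn_to_memslot() is just one example of a memslot
		accessor)::

		  int idx = srcu_read_lock(&kvm->srcu);

		  /* memslots and kvm->buses are safe to use in here. */
		  struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);

		  srcu_read_unlock(&kvm->srcu, idx);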

:Name:		blocked_vcpu_on_cpu_lock
:Type:		spinlock_t
:Arch:		x86
:Protects:	blocked_vcpu_on_cpu
:Comment:	This is a per-CPU lock and it is used for VT-d posted-interrupts.
		When VT-d posted-interrupts are supported and the VM has
		assigned devices, we put the blocked vCPU on the list
		blocked_vcpu_on_cpu protected by blocked_vcpu_on_cpu_lock.
		When VT-d hardware issues a wakeup notification event because
		external interrupts from the assigned devices arrive, we find
		the vCPU on the list and wake it up.