.. SPDX-License-Identifier: GPL-2.0

=================
KVM Lock Overview
=================

1. Acquisition Orders
---------------------

The acquisition orders for mutexes are as follows:

- cpus_read_lock() is taken outside kvm_lock

- kvm->lock is taken outside vcpu->mutex

- kvm->lock is taken outside kvm->slots_lock and kvm->irq_lock

- kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
  them together is quite rare.

- kvm->mn_active_invalidate_count ensures that pairs of
  invalidate_range_start() and invalidate_range_end() callbacks
  use the same memslots array.  kvm->slots_lock and kvm->slots_arch_lock
  are taken on the waiting side when modifying memslots, so MMU notifiers
  must not take either kvm->slots_lock or kvm->slots_arch_lock.

For SRCU:

- ``synchronize_srcu(&kvm->srcu)`` is called inside critical sections
  for kvm->lock, vcpu->mutex and kvm->slots_lock.  These locks _cannot_
  be taken inside a kvm->srcu read-side critical section; that is, the
  following is broken::

      srcu_read_lock(&kvm->srcu);
      mutex_lock(&kvm->slots_lock);

- kvm->slots_arch_lock instead is released before the call to
  ``synchronize_srcu()``.  It _can_ therefore be taken inside a
  kvm->srcu read-side critical section, for example while processing
  a vmexit.

On x86:

- vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock and kvm->arch.xen.xen_lock

- kvm->arch.mmu_lock is an rwlock; kvm->arch.tdp_mmu_pages_lock and
  kvm->arch.mmu_unsync_pages_lock are taken inside kvm->arch.mmu_lock, and
  cannot be taken without already holding kvm->arch.mmu_lock (typically with
  ``read_lock`` for the TDP MMU, thus the need for additional spinlocks).

Everything else is a leaf: no other lock is taken inside the critical
sections.

2. Exception
------------

Fast page fault:

Fast page fault is the fast path which fixes the guest page fault without
taking the mmu-lock on x86. Currently, the page fault can be fast in one of
the following two cases:

1. Access Tracking: The SPTE is not present, but it is marked for access
   tracking. That means we need to restore the saved R/X bits. This is
   described in more detail below.

2. Write-Protection: The SPTE is present and the fault is caused by
   write-protect. That means we just need to change the W bit of the spte.

What we use to avoid all the races is the Host-writable bit and MMU-writable
bit on the spte:

- Host-writable means the gfn is writable in the host kernel page tables and
  in its KVM memslot.
- MMU-writable means the gfn is writable in the guest's mmu and it is not
  write-protected by shadow page write-protection.

On the fast page fault path, we will use cmpxchg to atomically set the spte
W bit if spte.HOST_WRITEABLE = 1 and spte.WRITE_PROTECT = 1, to restore the
saved R/X bits for an access-tracked spte, or both. This is safe because any
change to these bits between the initial read and the update is detected by
cmpxchg, in which case the update fails and the fault is simply taken again.
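The snippet below is a condensed sketch of this pattern covering only the
write-protection case; it is not the kernel's actual fast_page_fault() code,
and the mask values are illustrative (the real definitions live in
arch/x86/kvm/mmu/spte.h and differ in detail)::

    #define SPTE_HOST_WRITABLE  BIT_ULL(57)  /* illustrative bit positions */
    #define SPTE_MMU_WRITABLE   BIT_ULL(58)
    #define PT_WRITABLE         BIT_ULL(1)

    static bool fast_pf_fix_write(u64 *sptep)
    {
            u64 old_spte = READ_ONCE(*sptep);

            /* Only sptes that are merely write-protected are eligible. */
            if (!(old_spte & SPTE_HOST_WRITABLE) ||
                !(old_spte & SPTE_MMU_WRITABLE))
                    return false;

            /*
             * cmpxchg64() fails if the spte was changed (e.g. zapped by
             * another CPU) after it was read above, so no concurrent
             * update can be lost; the fault is simply taken again.
             */
            return cmpxchg64(sptep, old_spte, old_spte | PT_WRITABLE) ==
                   old_spte;
    }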
But we still need to carefully check these cases:

1) The mapping from gfn to pfn

The mapping from gfn to pfn may change, since we can only ensure that the
pfn is not changed during the cmpxchg. This is an ABA problem; for example,
the following case can happen:

+------------------------------------------------------------------------+
| At the beginning::                                                      |
|                                                                         |
|     gpte = gfn1                                                         |
|     gfn1 is mapped to pfn1 on host                                      |
|     spte is the shadow page table entry corresponding with gpte and     |
|     spte = pfn1                                                         |
+------------------------------------------------------------------------+
| On fast page fault path:                                                |
+------------------------------------+-----------------------------------+
| CPU 0:                             | CPU 1:                            |
+------------------------------------+-----------------------------------+
| ::                                 |                                   |
|                                    |                                   |
|   old_spte = *spte;                |                                   |
+------------------------------------+-----------------------------------+
|                                    | pfn1 is swapped out::             |
|                                    |                                   |
|                                    |     spte = 0;                     |
|                                    |                                   |
|                                    | pfn1 is re-alloced for gfn2.      |
|                                    |                                   |
|                                    | gpte is changed to point to       |
|                                    | gfn2 by the guest::               |
|                                    |                                   |
|                                    |     spte = pfn1;                  |
+------------------------------------+-----------------------------------+
| ::                                                                      |
|                                                                         |
|   if (cmpxchg(spte, old_spte, old_spte+W))                              |
|       mark_page_dirty(vcpu->kvm, gfn1)                                  |
|            OOPS!!!                                                      |
+------------------------------------------------------------------------+

We dirty-log for gfn1; that means gfn2 is lost from the dirty bitmap.

For a direct sp, we can easily avoid this since the spte of a direct sp is
fixed to the gfn. For an indirect sp, we disabled fast page fault for
simplicity.

A solution for indirect sp could be to pin the gfn, for example via
kvm_vcpu_gfn_to_pfn_atomic, before the cmpxchg. After the pinning:

- We have held the refcount of the pfn; that means the pfn cannot be freed
  and reused for another gfn.
- The pfn is writable and therefore it cannot be shared between different
  gfns by KSM.

Then, we can ensure the dirty bitmap is correctly set for the gfn.

2) Dirty bit tracking

In the original code, the spte can be updated fast (non-atomically) if the
spte is read-only and the Accessed bit has already been set, since then
neither the Accessed bit nor the Dirty bit can be lost.

But this is no longer true with fast page fault, since the spte can be
marked writable between reading and updating it, as in the case below:

+------------------------------------------------------------------------+
| At the beginning::                                                      |
|                                                                         |
|     spte.W = 0                                                          |
|     spte.Accessed = 1                                                   |
+------------------------------------+-----------------------------------+
| CPU 0:                             | CPU 1:                            |
+------------------------------------+-----------------------------------+
| In mmu_spte_clear_track_bits()::   |                                   |
|                                    |                                   |
|  old_spte = *spte;                 |                                   |
|                                    |                                   |
|                                    |                                   |
|  /* 'if' condition is satisfied. */|                                   |
|  if (old_spte.Accessed == 1 &&     |                                   |
|      old_spte.W == 0)              |                                   |
|      spte = 0ull;                  |                                   |
+------------------------------------+-----------------------------------+
|                                    | on fast page fault path::         |
|                                    |                                   |
|                                    |    spte.W = 1                     |
|                                    |                                   |
|                                    | memory write on the spte::        |
|                                    |                                   |
|                                    |    spte.Dirty = 1                 |
+------------------------------------+-----------------------------------+
| ::                                 |                                   |
|                                    |                                   |
|   else                             |                                   |
|     old_spte = xchg(spte, 0ull)    |                                   |
|   if (old_spte.Accessed == 1)      |                                   |
|     kvm_set_pfn_accessed(spte.pfn);|                                   |
|   if (old_spte.Dirty == 1)         |                                   |
|     kvm_set_pfn_dirty(spte.pfn);   |                                   |
|     OOPS!!!                        |                                   |
+------------------------------------+-----------------------------------+

The Dirty bit is lost in this case.

In order to avoid this kind of issue, we always treat the spte as "volatile"
if it can be updated out of mmu-lock [see spte_has_volatile_bits()]; it
means the spte is always atomically updated in this case.
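A condensed sketch of the resulting rule, as applied on the clearing side
(spte_has_volatile_bits() is the real helper mentioned above, while the
function below is illustrative, not the exact mmu_spte_clear_track_bits())::

    static u64 clear_spte(u64 *sptep)
    {
            u64 old_spte = READ_ONCE(*sptep);

            if (!spte_has_volatile_bits(old_spte))
                    /* No lockless writer can race with us. */
                    WRITE_ONCE(*sptep, 0ull);
            else
                    /*
                     * The spte can change out of mmu-lock, so use an
                     * atomic xchg to capture any Accessed/Dirty bits
                     * set by a racing fast page fault or hardware walk.
                     */
                    old_spte = xchg(sptep, 0ull);

            /* The caller checks the A/D bits on the returned value. */
            return old_spte;
    }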
3) Flushing TLBs due to spte updates

If the spte is updated from writable to read-only, we should flush all TLBs,
otherwise rmap_write_protect will find a read-only spte, even though the
writable spte might still be cached in a CPU's TLB.

As mentioned before, the spte can be updated to writable out of mmu-lock on
the fast page fault path. In order to easily audit the path, we check in
mmu_spte_update() whether TLBs need to be flushed for this reason, since it
is the common function for updating the spte (present -> present).

Since the spte is "volatile" if it can be updated out of mmu-lock, we always
atomically update the spte, and the race caused by fast page fault can be
avoided. See the comments in spte_has_volatile_bits() and mmu_spte_update().

Lockless Access Tracking:

This is used for Intel CPUs that are using EPT but do not support the EPT A/D
bits. In this case, PTEs are tagged as A/D disabled (using ignored bits), and
when the KVM MMU notifier is called to track accesses to a page (via
kvm_mmu_notifier_clear_flush_young), it marks the PTE not-present in hardware
by clearing the RWX bits in the PTE and storing the original R & X bits in
more unused/ignored bits. When the VM tries to access the page later on, a
fault is generated and the fast page fault mechanism described above is used
to atomically restore the PTE to a Present state. The W bit is not saved when
the PTE is marked for access tracking; during restoration to the Present
state, the W bit is set depending on whether or not the faulting access was a
write. If it wasn't, then the W bit will remain clear until a write access
happens, at which time it will be set using the Dirty tracking mechanism
described above.

3. Reference
------------

``kvm_lock``
^^^^^^^^^^^^

:Type:     mutex
:Arch:     any
:Protects: - vm_list
           - kvm_usage_count
           - hardware virtualization enable/disable
:Comment:  KVM also disables CPU hotplug via cpus_read_lock() during
           enable/disable.

``kvm->mn_invalidate_lock``
^^^^^^^^^^^^^^^^^^^^^^^^^^^

:Type:     spinlock_t
:Arch:     any
:Protects: mn_active_invalidate_count, mn_memslots_update_rcuwait

``kvm_arch::tsc_write_lock``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

:Type:     raw_spinlock_t
:Arch:     x86
:Protects: - kvm_arch::{last_tsc_write,last_tsc_nsec,last_tsc_offset}
           - tsc offset in vmcb
:Comment:  'raw' because updating the tsc offsets must not be preempted.

``kvm->mmu_lock``
^^^^^^^^^^^^^^^^^
:Type:     spinlock_t or rwlock_t
:Arch:     any
:Protects: - shadow page/shadow tlb entry
:Comment:  it is a spinlock since it is used in the MMU notifier.

``kvm->srcu``
^^^^^^^^^^^^^
:Type:     srcu lock
:Arch:     any
:Protects: - kvm->memslots
           - kvm->buses
:Comment:  The srcu read lock must be held while accessing memslots (e.g.
           when using gfn_to_* functions) and while accessing in-kernel
           MMIO/PIO address->device structure mapping (kvm->buses).
           The srcu index can be stored in kvm_vcpu->srcu_idx per vcpu
           if it is needed by multiple functions.
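For example, a typical read-side section looks like the sketch below
(storing the index in vcpu->srcu_idx is only needed when the section spans
multiple functions)::

    int idx;

    idx = srcu_read_lock(&kvm->srcu);
    /*
     * kvm->memslots and kvm->buses may be dereferenced here, e.g. via
     * gfn_to_memslot(); the old arrays cannot be freed until the
     * matching srcu_read_unlock() has run.
     */
    slot = gfn_to_memslot(kvm, gfn);
    ...
    srcu_read_unlock(&kvm->srcu, idx);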
``kvm->slots_arch_lock``
^^^^^^^^^^^^^^^^^^^^^^^^
:Type:     mutex
:Arch:     any (only needed on x86 though)
:Protects: any arch-specific fields of memslots that have to be modified
           in a ``kvm->srcu`` read-side critical section.
:Comment:  must be held before reading the pointer to the current memslots,
           until after all changes to the memslots are complete; see the
           sketch at the end of this document.

``wakeup_vcpus_on_cpu_lock``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
:Type:     spinlock_t
:Arch:     x86
:Protects: wakeup_vcpus_on_cpu
:Comment:  This is a per-CPU lock and it is used for VT-d posted-interrupts.
           When VT-d posted-interrupts are supported and the VM has assigned
           devices, we put the blocked vCPU on the list blocked_vcpu_on_cpu
           protected by blocked_vcpu_on_cpu_lock.  When VT-d hardware issues
           a wakeup notification event because an external interrupt from an
           assigned device has arrived, we look the vCPU up on the list and
           wake it up.

``vendor_module_lock``
^^^^^^^^^^^^^^^^^^^^^^
:Type:     mutex
:Arch:     x86
:Protects: loading a vendor module (kvm_amd or kvm_intel)
:Comment:  Exists because using kvm_lock leads to deadlock.  cpu_hotplug_lock
           is taken outside of kvm_lock, e.g. in KVM's CPU online/offline
           callbacks, and many operations need to take cpu_hotplug_lock when
           loading a vendor module, e.g. updating static calls.
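Finally, to tie several of the entries above together, here is a condensed
sketch of the memslot-update side (not the exact code of kvm_set_memslot();
``as_id`` and ``new_slots`` are illustrative), showing why
kvm->slots_arch_lock is dropped before synchronize_srcu() while
kvm->slots_lock is held across it::

    mutex_lock(&kvm->slots_lock);
    mutex_lock(&kvm->slots_arch_lock);

    /* Publish the new memslots array to SRCU readers. */
    rcu_assign_pointer(kvm->memslots[as_id], new_slots);

    /*
     * slots_arch_lock may be taken inside a kvm->srcu read-side critical
     * section, so it must be dropped before waiting for readers.
     */
    mutex_unlock(&kvm->slots_arch_lock);

    /* Wait until no reader can still see the old memslots array. */
    synchronize_srcu(&kvm->srcu);
    mutex_unlock(&kvm->slots_lock);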