1 .. SPDX-License-Identifier: GPL-2.0
13 - correctness:
18 - security:
21 - performance:
23 - scaling:
25 - hardware:
27 - integration:
31 - dirty tracking:
33 and framebuffer-based displays
34 - footprint:
37 - reliability:
48 gpa guest physical address
62 The mmu supports first-generation mmu hardware, which allows an atomic switch
64 two-dimensional paging (AMD's NPT and Intel's EPT). The emulated hardware
66 pages, pae, pse, pse36, cr0.wp, and 1GB pages. Emulated hardware also
76 - when guest paging is disabled, we translate guest physical addresses to
77 host physical addresses (gpa->hpa)
78 - when guest paging is enabled, we translate guest virtual addresses, to
79 guest physical addresses, to host physical addresses (gva->gpa->hpa)
80 - when the guest launches a guest of its own, we translate nested guest
82 addresses, to host physical addresses (ngva->ngpa->gpa->hpa)
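
As a rough sketch (not KVM code; every name below is invented for the example), the chains listed above are simple compositions of two primitives, a guest page-table walk and the host gpa->hpa mapping::

  #include <stdint.h>

  typedef uint64_t gva_t, gpa_t, hpa_t;

  /* Identity stubs standing in for the real walks: a guest page-table
   * walk (gva->gpa) and the host mapping (gpa->hpa). */
  static gpa_t guest_walk(gva_t gva) { return (gpa_t)gva; }
  static hpa_t host_map(gpa_t gpa)   { return (hpa_t)gpa; }

  /* Guest paging disabled: gpa->hpa. */
  static hpa_t xlate_nonpaging(gpa_t gpa) { return host_map(gpa); }

  /* Guest paging enabled: gva->gpa->hpa. */
  static hpa_t xlate_paging(gva_t gva) { return host_map(guest_walk(gva)); }

  /* Nested guest: one more walk is stacked in front; with tdp the L1
   * hypervisor's tables provide ngpa->gpa before the host's gpa->hpa. */
  static hpa_t xlate_nested_tdp(gpa_t ngpa) { return host_map(guest_walk(ngpa)); }
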
84 The primary challenge is to encode between 1 and 3 translations into hardware
85 that supports only 1 (traditional) and 2 (tdp) translations. When the
92 Guest memory (gpa) is part of the user address space of the process that is
94 addresses (gpa->hva); note that two gpas may alias to the same hva, but not
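
A minimal sketch of that gpa->hva step, assuming a simplified memslot layout (the struct and function names are illustrative, loosely modelled on KVM's memslots)::

  #include <stdint.h>
  #include <stddef.h>

  #define PAGE_SHIFT 12

  /* Simplified memslot: a range of guest frames backed by a userspace range. */
  struct memslot {
      uint64_t base_gfn;        /* first guest frame number in the slot */
      uint64_t npages;          /* number of guest pages in the slot    */
      uint64_t userspace_addr;  /* hva of the first page                */
  };

  /* gpa->hva: find the slot covering the gfn and offset into it.  Two
   * slots may map distinct gfns to the same hva (aliasing), but a single
   * gpa resolves to at most one hva. */
  static uint64_t gpa_to_hva(const struct memslot *slots, size_t n, uint64_t gpa)
  {
      uint64_t gfn = gpa >> PAGE_SHIFT;

      for (size_t i = 0; i < n; i++) {
          const struct memslot *s = &slots[i];
          if (gfn >= s->base_gfn && gfn < s->base_gfn + s->npages)
              return s->userspace_addr +
                     ((gfn - s->base_gfn) << PAGE_SHIFT) +
                     (gpa & ((1ULL << PAGE_SHIFT) - 1));
      }
      return 0; /* no slot: the gpa is mmio or unbacked */
  }
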
108 - writes to control registers (especially cr3)
109 - invlpg/invlpga instruction execution
110 - access to missing or protected translations
114 - changes in the gpa->hpa translation (either through gpa->hva changes or
115 through hva->hpa changes)
116 - memory pressure (the shrinker)
133 The following table shows translations encoded by leaf ptes, with higher-level
136 Non-nested guests::
138 nonpaging: gpa->hpa
139 paging: gva->gpa->hpa
140 paging, tdp: (gva->)gpa->hpa
144 non-tdp: ngva->gpa->hpa (*)
145 tdp: (ngva->)ngpa->gpa->hpa
147 (*) the guest hypervisor will encode the ngva->gpa translation into its page
153 1=4k sptes, 2=2M sptes, 3=1G sptes, etc.
157 host pages, and gpa->hpa translations when NPT or EPT is active.
159 by role.level (2MB for first level, 1GB for second level, 0.5TB for third
164 When role.has_4_byte_gpte=1, the guest uses 32-bit gptes while the host uses 64-bit
167 For first-level shadow pages, role.quadrant can be 0 or 1 and denotes the
168 first or second 512-gpte block in the guest page table. For second-level
169 page tables, each 32-bit gpte is converted to two 64-bit sptes
170 (since each first-level guest page is shadowed by two first-level
172 quadrant maps 1GB of virtual address space.
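
The geometry described above can be written down concretely; a sketch, assuming the usual 512-entry, 9-bits-per-level spte layout (the helper names are purely illustrative)::

  #include <stdint.h>

  #define PAGE_SHIFT 12

  /* Bytes mapped by one leaf spte at a given role.level:
   * 1 -> 4K, 2 -> 2M, 3 -> 1G, ... */
  static uint64_t spte_size(int level)
  {
      return 1ULL << (PAGE_SHIFT + 9 * (level - 1));
  }

  /* Bytes mapped by a whole shadow page (512 sptes) at that level:
   * 1 -> 2MB, 2 -> 1GB, 3 -> 0.5TB, 4 -> 256TB. */
  static uint64_t shadow_page_size(int level)
  {
      return 512 * spte_size(level);
  }

  /* role.quadrant for a 4-byte-gpte guest: which part of the (wider)
   * guest table this shadow page covers. */
  static int quadrant(uint64_t gva, int level)
  {
      if (level == 1)
          return (gva >> 21) & 1;   /* 1024 gptes map 4MB; each half maps 2MB */
      return (gva >> 30) & 3;       /* level 2: each of 4 quadrants maps 1GB  */
  }
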
182 if direct map or 64-bit gptes are in use, '1' if 32-bit gptes are in use.
196 Is 1 if the page is valid in system management mode. This field
202 Is 1 if the MMU instance cannot use A/D bits. EPT did not have A/D
207 points to one. This is set if NPT uses 5-level page tables (host
208 CR4.LA57=1) and is shadowing L1's 4-level NPT (L1 CR4.LA57=0).
213 A pageful of 64-bit sptes containing the translations for this page.
215 The page pointed to by spt will have its page->private pointing back
217 sptes in spt point either at guest pages, or at lower-level shadow pages.
218 Specifically, if sp1 and sp2 are shadow pages, then sp1->spt[n] may point
219 at __pa(sp2->spt). sp2 will point back at sp1 through parent_pte.
251 Only present on 32-bit hosts, where a 64-bit spte cannot be written
253 to detect in-progress updates and retry them until the writer has
257 emulations if the page needs to be write-protected (see "Synchronized
260 possible for non-leafs. This field counts the number of emulations
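
Putting the fields above together, a trimmed illustration of a shadow page descriptor (this is not the real struct kvm_mmu_page, which has more members and different types)::

  #include <stdint.h>

  struct shadow_page {
      uint32_t  role;              /* packed role bits: level, direct, quadrant, ... */
      uint64_t *spt;               /* the pageful of 512 64-bit sptes; the backing
                                    * page's ->private points back at this struct   */
      uint64_t *parent_pte;        /* spte in a parent shadow page that points at
                                    * spt (a chain once several parents exist)      */
      int clear_spte_count;        /* 32-bit hosts only: retry counter for lockless
                                    * reads of 64-bit sptes                         */
      int write_flooding_count;    /* emulations since the page was last used       */
  };
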
297 - guest page fault (or npt page fault, or ept violation)
301 - a true guest fault (the guest translation won't allow the access) (*)
302 - access to a missing translation
303 - access to a protected translation
304 - when logging dirty pages, memory is write protected
305 - synchronized shadow pages are write protected (*)
306 - access to untranslatable memory (mmio)
312 - if the RSV bit of the error code is set, the page fault is caused by guest
315 - walk shadow page table
316 - check for valid generation number in the spte (see "Fast invalidation of
318 - cache the information to vcpu->arch.mmio_gva, vcpu->arch.mmio_access and
319 vcpu->arch.mmio_gfn, and call the emulator
321 - If both P bit and R/W bit of error code are set, this could possibly
325 - if needed, walk the guest page tables to determine the guest translation
326 (gva->gpa or ngpa->gpa)
328 - if permissions are insufficient, reflect the fault back to the guest
330 - determine the host page
332 - if this is an mmio request, there is no host page; cache the info to
333 vcpu->arch.mmio_gva, vcpu->arch.mmio_access and vcpu->arch.mmio_gfn
335 - walk the shadow page table to find the spte for the translation,
338 - If this is an mmio request, cache the mmio info to the spte and set some
341 - try to unsynchronize the page
343 - if successful, we can let the guest continue and modify the gpte
345 - emulate the instruction
347 - if failed, unshadow the page and let the guest continue
349 - update any translations that were modified by the instruction
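
A condensed sketch of this flow; the control structure follows the steps above, but every function, type, and constant is a stub invented for the example::

  #include <stdbool.h>
  #include <stdint.h>

  struct vcpu;
  enum action { RESUME_GUEST, REFLECT_FAULT, EMULATE_INSN };

  /* Stubbed predicates standing in for the real checks. */
  static bool is_rsvd_fault(uint32_t err)                  { return err & (1u << 3); }
  static bool mmio_gen_current(struct vcpu *v, uint64_t a) { (void)v; (void)a; return true; }
  static bool walk_guest_pt(struct vcpu *v, uint64_t a,
                            uint32_t e, uint64_t *gpa)     { (void)v; (void)e; *gpa = a; return true; }
  static bool gpa_is_mmio(uint64_t gpa)                    { (void)gpa; return false; }
  static bool hit_shadowed_gpt(uint64_t gpa)               { (void)gpa; return false; }
  static bool try_unsync(uint64_t gpa)                     { (void)gpa; return true; }
  static void build_shadow_and_install(struct vcpu *v,
                                       uint64_t gpa)       { (void)v; (void)gpa; }

  static enum action handle_page_fault(struct vcpu *vcpu, uint64_t addr, uint32_t err)
  {
      uint64_t gpa;

      /* Fast path: a reserved-bit fault comes from a cached mmio spte. */
      if (is_rsvd_fault(err) && mmio_gen_current(vcpu, addr))
          return EMULATE_INSN;          /* after caching vcpu->arch.mmio_* */

      /* Slow path: walk the guest page tables (gva->gpa or ngpa->gpa). */
      if (!walk_guest_pt(vcpu, addr, err, &gpa))
          return REFLECT_FAULT;         /* true guest fault */

      if (gpa_is_mmio(gpa))
          return EMULATE_INSN;          /* cache the mmio info in the spte */

      if (hit_shadowed_gpt(gpa) && !try_unsync(gpa))
          return EMULATE_INSN;          /* emulate; unshadow if that fails */

      /* Instantiate any missing intermediate shadow pages plus the leaf. */
      build_shadow_and_install(vcpu, gpa);
      return RESUME_GUEST;
  }
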
353 - walk the shadow page hierarchy and drop affected translations
354 - try to reinstantiate the indicated translation in the hope that the
359 - mov to cr3
361 - look up new shadow roots
362 - synchronize newly reachable shadow pages
364 - mov to cr0/cr4/efer
366 - set up mmu context for new paging mode
367 - look up new shadow roots
368 - synchronize newly reachable shadow pages
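
Both reactions share the same tail; a sketch with stand-in helpers (none of these are KVM's real function names)::

  #include <stdbool.h>

  struct vcpu;

  /* Stand-in steps for the lists above. */
  static void reset_mmu_context(struct vcpu *v)    { (void)v; } /* recompute paging mode / role  */
  static void load_shadow_root(struct vcpu *v)     { (void)v; } /* look up or create the root sp */
  static void sync_reachable_pages(struct vcpu *v) { (void)v; } /* resync newly reachable pages  */

  /* cr3 writes keep the paging mode; cr0/cr4/efer writes may change it. */
  static void on_control_register_write(struct vcpu *vcpu, bool mode_may_change)
  {
      if (mode_may_change)
          reset_mmu_context(vcpu);
      load_shadow_root(vcpu);
      sync_reachable_pages(vcpu);
  }
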
372 - mmu notifier called with updated hva
373 - look up affected sptes through reverse map
374 - drop (or update) translations
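
The reverse map is what makes the "look up affected sptes" step cheap; a toy version, using an explicit array where KVM uses a packed encoding::

  #include <stdint.h>
  #include <stddef.h>

  /* Minimal sketch of the reverse map: for each gfn, the sptes that
   * currently translate it.  Illustrative only. */
  struct rmap {
      uint64_t **sptes;   /* pointers to sptes mapping this gfn */
      size_t nr;
  };

  /* React to an hva change reported by the mmu notifier; the caller has
   * already turned the hva range back into gfns via the memslots. */
  static void zap_gfn(struct rmap *rmap)
  {
      for (size_t i = 0; i < rmap->nr; i++)
          *rmap->sptes[i] = 0;    /* drop the translation; a later fault
                                   * rebuilds it from the new hva->hpa  */
  }
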
379 If tdp is not enabled, the host must keep cr0.wp=1 so page write protection
381 cr0.wp=1, this does not present a problem. However when the guest cr0.wp=0,
382 we cannot map the permissions for gpte.u=1, gpte.w=0 to any spte (the
388 - kernel write fault: spte.u=0, spte.w=1 (allows full kernel access,
390 - read fault: spte.u=1, spte.w=0 (allows full read access, disallows kernel
397 - if CR4.SMEP is enabled: since we've turned the page into a kernel page,
399 If we get a user fetch or read fault, we'll change spte.u=1 and
400 spte.nx=gpte.nx back. For this to work, KVM forces EFER.NX to 1 when
402 - if CR4.SMAP is disabled: since the page has been changed to a kernel
409 from being written by the kernel after cr0.wp has changed to 1, we make
411 with one value of cr0.wp cannot be used when cr0.wp has a different value -
414 changing cr4.smep to 1. To avoid this, the value of !cr0.wp && cr4.smep
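
The resulting spte permissions for a gpte.u=1, gpte.w=0 mapping under guest cr0.wp=0 can be summarized as below; the bit positions follow the usual x86 pte layout and the helper is only illustrative (present, nx, and the other bits are omitted)::

  #include <stdbool.h>
  #include <stdint.h>

  #define SPTE_W (1ULL << 1)   /* writable */
  #define SPTE_U (1ULL << 2)   /* user     */

  /* Pick spte permissions based on the faulting access, as described above. */
  static uint64_t cr0_wp0_spte_perms(bool kernel_write_fault)
  {
      if (kernel_write_fault)
          return SPTE_W;        /* u=0, w=1: full kernel access, user faults   */
      return SPTE_U;            /* u=1, w=0: full read access, kernel w faults */
  }
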
421 Supported page sizes include 4k, 2M, 4M, and 1G. 4M pages are treated as
427 - the spte must point to a large host page
428 - the guest pte must be a large pte of at least equivalent size (if tdp is
430 - if the spte will be writeable, the large page frame may not overlap any
431 write-protected pages
432 - the guest page must be wholly contained by a single memory slot
434 To check the last two conditions, the mmu maintains a ->disallow_lpage set of
438 artificially inflated ->disallow_lpages so they can never be instantiated.
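
A sketch of that check; here disallow_lpage is one counter per large-page-sized region, incremented for write-protected pages and artificially inflated at the unaligned edges of a slot, so the last two conditions collapse into a single comparison (names are illustrative)::

  #include <stdbool.h>
  #include <stdint.h>

  static bool can_use_large_spte(const int *disallow_lpage, uint64_t region_idx,
                                 bool host_page_is_large, bool guest_pte_is_large)
  {
      if (!host_page_is_large)         /* spte must point to a large host page  */
          return false;
      if (!guest_pte_is_large)         /* guest mapping must be at least as big */
          return false;
      return disallow_lpage[region_idx] == 0;
  }
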
451 kvm_memslots(kvm)->generation, and increased whenever guest memory info
459 Since only 18 bits are used to store the generation number in an mmio spte, all
465 out-of-date information, but with an up-to-date generation number.
468 returns; thus, bit 63 of kvm_memslots(kvm)->generation is set to 1 only during a
471 this without losing a bit in the MMIO spte. The "update in-progress" bit of the
474 spte while an update is in-progress, the next access to the spte will always be
476 miss due to the in-progress flag diverging, while an access after the update
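
One way to realize the behaviour described above; the bit positions and widths are taken from the text, but the helper names are not KVM's::

  #include <stdbool.h>
  #include <stdint.h>

  #define GEN_IN_PROGRESS  (1ULL << 63)          /* memslot update in flight  */
  #define SPTE_GEN_MASK    ((1ULL << 18) - 1)    /* bits that fit in the spte */

  /* Generation value stored when an mmio spte is created; the in-progress
   * bit is simply never stored. */
  static uint64_t mmio_spte_gen(uint64_t memslots_gen)
  {
      return memslots_gen & SPTE_GEN_MASK;
  }

  /* Is a cached mmio spte still usable?  During an update window the check
   * always fails, and after the window the generation has moved on, so an
   * spte created mid-update can never be reused. */
  static bool mmio_gen_is_current(uint64_t spte_gen, uint64_t memslots_gen)
  {
      if (memslots_gen & GEN_IN_PROGRESS)
          return false;
      return spte_gen == mmio_spte_gen(memslots_gen);
  }
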
483 - NPT presentation from KVM Forum 2008
484 https://www.linux-kvm.org/images/c/c8/KvmForum2008%24kdf2008_21.pdf