/openbmc/linux/Documentation/mm/

  arch_pgtable_helpers.rst
    15: PTE Page Table Helpers
    19: | pte_same     | Tests whether both PTE entries are the same |
    21: | pte_bad      | Tests a non-table mapped PTE |
    23: | pte_present  | Tests a valid mapped PTE |
    25: | pte_young    | Tests a young PTE |
    27: | pte_dirty    | Tests a dirty PTE |
    29: | pte_write    | Tests a writable PTE |
    31: | pte_special  | Tests a special PTE |
    33: | pte_protnone | Tests a PROT_NONE PTE |
    35: | pte_devmap   | Tests a ZONE_DEVICE mapped PTE |
    [all …]

  split_page_table_lock.rst
    11: access to the table. At the moment we use split lock for PTE and PMD
    17: maps PTE and takes PTE table lock, returns pointer to PTE with
    18: pointer to its PTE table lock, or returns NULL if no PTE table;
    20: maps PTE, returns pointer to PTE with pointer to its PTE table
    21: lock (not taken), or returns NULL if no PTE table;
    23: maps PTE, returns pointer to PTE, or returns NULL if no PTE table;
    25: unmaps PTE table;
    27: unlocks and unmaps PTE table;
    29: allocates PTE table if needed and takes its lock, returns pointer to
    30: PTE with pointer to its lock, or returns NULL if allocation failed;
    [all …]

  remap_file_pages.rst
    16: PTE for this purpose. PTE flags are scarce resource especially on some CPU

  multigen_lru.rst
    31: profit from discovering a young PTE. A page table walk can sweep all
    123: the latter, when the eviction walks the rmap and finds a young PTE,
    124: the aging scans the adjacent PTEs. For both, on finding a young PTE,
    126: page mapped by this PTE to ``(max_seq%MAX_NR_GENS)+1``.
    190: trips into the rmap. It scans the adjacent PTEs of a young PTE and
    192: adds the PMD entry pointing to the PTE table to the Bloom filter. This
    203: filter. In the aging path, set membership means that the PTE range

  page_tables.rst
    42: this single table were referred to as *PTE*:s - page table entries.
    80: +-->| PTE |
    130: --> +-----+ PTE
    136: | ptr |  \ PTE

/openbmc/u-boot/arch/riscv/include/asm/

  encoding.h
    136: #define PTE_TABLE(PTE) ((0x0000000AU >> ((PTE) & 0x1F)) & 1)
    137: #define PTE_UR(PTE)    ((0x0000AAA0U >> ((PTE) & 0x1F)) & 1)
    138: #define PTE_UW(PTE)    ((0x00008880U >> ((PTE) & 0x1F)) & 1)
    139: #define PTE_UX(PTE)    ((0x0000A0A0U >> ((PTE) & 0x1F)) & 1)
    140: #define PTE_SR(PTE)    ((0xAAAAAAA0U >> ((PTE) & 0x1F)) & 1)
    141: #define PTE_SW(PTE)    ((0x88888880U >> ((PTE) & 0x1F)) & 1)
    142: #define PTE_SX(PTE)    ((0xA0A0A000U >> ((PTE) & 0x1F)) & 1)
    145:     typeof(_PTE) (PTE) = (_PTE); \
    147:     ((STORE) ? ((SUPERVISOR) ? PTE_SW(PTE) : PTE_UW(PTE)) : \
    148:      (FETCH) ? ((SUPERVISOR) ? PTE_SX(PTE) : PTE_UX(PTE)) : \
    [all …]

/openbmc/linux/Documentation/translations/zh_CN/mm/

  split_page_table_lock.rst (zh_CN; hits translated)
    18: With split page table locks we have a separate per-table lock to serialize access to the table. At the moment we use split locks for PTE and
    24: maps the pte and takes the PTE table lock, returns a pointer to the taken lock;
    26: unlocks and unmaps the PTE table;
    28: allocates the PTE table if needed and takes its lock; if allocation failed, returns the pointer to the taken lock
    31: returns a pointer to the PTE table lock;
    38: enables split page table locks for PTE tables at compile time. If split locks are disabled, all tables are guarded by mm->page_table_lock
    59: there is no need to specially enable PTE split page table locks: everything needed is done by pagetable_pte_ctor()
    60: and pagetable_pte_dtor(), which must be called on PTE table allocation/freeing.
    93: the spinlock_t for PTE tables is allocated in pagetable_pte_ctor(); the spinlock_t for PMD tables

  remap_file_pages.rst (zh_CN; hits translated)
    20: entries holding file offsets (pte_file). The kernel reserved a flag in the PTE for this purpose. PTE flags are a scarce resource

  hmm.rst (zh_CN; hits translated)
    280: remove it instead of copying a zero page. Valid PTE entries pointing to system memory or device-private struct pages will be
    282: unmapped during the process, and a special migration PTE is inserted in place of the original PTE. migrate_vma_setup()
    333: Some devices have features such as atomic PTE bits, which can be used to implement atomic access to system memory. To support

  highmem.rst (zh_CN; hits translated)
    139: is that PAE has more PTE bits and can provide advanced features such as NX and PAT.

/openbmc/linux/Documentation/arch/arm64/

  ptdump.rst
    38: level PTE or block level PGD, PMD and PUD, and access status of a page
    49: | 0xfff0000000000000-0xfff0000000210000  2112K PTE RW NX SHD AF UXN MEM/NORMAL-TAGGED |
    50: | 0xfff0000000210000-0xfff0000001c00000 26560K PTE ro NX SHD AF UXN MEM/NORMAL |
    56: | 0xffff800000000000-0xffff800008000000   128M PTE |
    62: | 0xffff800008010000-0xffff800008200000  1984K PTE ro x SHD AF UXN MEM/NORMAL |
    63: | 0xffff800008200000-0xffff800008e00000    12M PTE ro x SHD AF CON UXN MEM/NORMAL |
    69: | 0xfffffbfffdb80000-0xfffffbfffdb90000    64K PTE ro x SHD AF UXN MEM/NORMAL |
    70: | 0xfffffbfffdb90000-0xfffffbfffdba0000    64K PTE ro NX SHD AF UXN MEM/NORMAL |
    76: | 0xfffffbfffe800000-0xfffffbffff800000    16M PTE |
    82: | 0xfffffc0002000000-0xfffffc0002200000     2M PTE RW NX SHD AF UXN MEM/NORMAL |
    [all …]

  hugetlbpage.rst
    38: - CONT PTE    PMD    CONT PMD    PUD

/openbmc/linux/arch/sparc/include/asm/

  pgalloc_64.h
    72: #define pmd_populate_kernel(MM, PMD, PTE) pmd_set(MM, PMD, PTE)
    73: #define pmd_populate(MM, PMD, PTE)        pmd_set(MM, PMD, PTE)

/openbmc/linux/Documentation/admin-guide/mm/

  soft-dirty.rst
    5: The soft-dirty is a bit on a PTE which helps to track which pages a task
    18: 64-bit qword is the soft-dirty one. If set, the respective PTE was
    25: the soft-dirty bit on the respective PTE.
    31: bits on the PTE.
    36: the same place. When unmap is called, the kernel internally clears PTE values

/openbmc/linux/Documentation/translations/zh_CN/mm/damon/

  design.rst (zh_CN; hits translated)
    65: PTE Accessed-bit based access checking
    68: Both the physical and virtual address space implementations use the PTE Accessed-bit for basic access checks. The only difference is how
    69: the relevant PTE Accessed bit is found from an address: the virtual-address implementation walks the page tables of the target task for the address, while the physical-address implementation

  faq.rst (zh_CN; hits translated)
    40: Nonetheless, by default DAMON provides address-space implementations for virtual and physical memory based on vma/rmap tracking and PTE Accessed-bit checking

/openbmc/linux/tools/testing/selftests/mm/

  mremap_test.c
    53:  #define PTE page_size
    475:     test_cases[3] = MAKE_TEST(PTE, PTE, PTE * 2,
    480:     test_cases[4] = MAKE_TEST(_1MB, PTE, _2MB, NON_OVERLAPPING, EXPECT_SUCCESS,
    486:     test_cases[6] = MAKE_TEST(PMD, PTE, _4MB, NON_OVERLAPPING, EXPECT_SUCCESS,
    494:     test_cases[9] = MAKE_TEST(PUD, PTE, _2GB, NON_OVERLAPPING, EXPECT_SUCCESS,

/openbmc/linux/Documentation/translations/zh_CN/arch/arm64/

  hugetlbpage.rst (zh_CN)
    40: - CONT PTE    PMD    CONT PMD    PUD

/openbmc/linux/arch/microblaze/include/asm/

  mmu.h
    33: } PTE;

/openbmc/linux/Documentation/translations/zh_TW/arch/arm64/

  hugetlbpage.rst (zh_TW)
    43: - CONT PTE    PMD    CONT PMD    PUD

/openbmc/qemu/tests/tcg/aarch64/system/

  kernel.ld
    25:  * Symbol 'mte_page' is used in boot.S to setup the PTE and in the mte.S

/openbmc/linux/Documentation/admin-guide/hw-vuln/

  l1tf.rst
    47: table entry (PTE) has the Present bit cleared or other reserved bits set,
    48: then speculative execution ignores the invalid PTE and loads the referenced
    50: by the address bits in the PTE was still present and accessible.
    72: PTE which is marked non present. This allows a malicious user space
    75: encoded in the address bits of the PTE, thus making attacks more
    78: The Linux kernel contains a mitigation for this attack vector, PTE
    92: PTE inversion mitigation for L1TF, to attack physical host memory.
    132: 'Mitigation: PTE Inversion'  The host protection is active
    136: information is appended to the 'Mitigation: PTE Inversion' part:
    582: - PTE inversion to protect against malicious user space. This is done

/openbmc/linux/Documentation/mm/damon/

  faq.rst
    16: Nonetheless, DAMON provides vma/rmap tracking and PTE Accessed bit check based

/openbmc/linux/Documentation/virt/kvm/

  locking.rst
    219: kvm_mmu_notifier_clear_flush_young), it marks the PTE not-present in hardware
    220: by clearing the RWX bits in the PTE and storing the original R & X bits in more
    223: atomically restore the PTE to a Present state. The W bit is not saved when the
    224: PTE is marked for access tracking and during restoration to the Present state,

/openbmc/linux/Documentation/arch/x86/

  iommu.rst
    131: DMAR:[fault reason 05] PTE Write access is not set
    133: DMAR:[fault reason 05] PTE Write access is not set