/openbmc/linux/Documentation/admin-guide/mm/

hugetlbpage.rst
  19: Users can use the huge page support in Linux kernel by either using the mmap
  28: persistent hugetlb pages in the kernel's huge page pool. It also displays
  29: default huge page size and information about the number of free, reserved
  30: and surplus huge pages in the pool of huge pages of default size.
  31: The huge page size is needed for generating the proper alignment and
  32: size of the arguments to system calls that map huge page regions.
  46: is the size of the pool of huge pages.
  48: is the number of huge pages in the pool that are not yet
  51: is short for "reserved," and is the number of huge pages for
  53: but no allocation has yet been made. Reserved huge pages
  [all …]

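The hugetlbpage.rst hits above cover the hugetlb pool counters and the mmap(2) route to huge pages. As a minimal sketch (not taken from the document), assuming the pool has free pages of the default 2 MB size, an anonymous huge-page mapping looks like this::

  /* Sketch only: one anonymous mapping backed by a default-size huge page. */
  #define _GNU_SOURCE
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  #define LENGTH (2UL * 1024 * 1024)   /* assumed 2 MB default huge page size */

  int main(void)
  {
      void *addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

      if (addr == MAP_FAILED) {
          perror("mmap(MAP_HUGETLB)");   /* e.g. empty huge page pool */
          return 1;
      }
      memset(addr, 0, LENGTH);           /* touch it so the huge page is faulted in */
      munmap(addr, LENGTH);
      return 0;
  }
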
transhuge.rst
  11: using huge pages for the backing of virtual memory with huge pages
  20: the huge page size is 2M, although the actual numbers may vary
  51: collapses sequences of basic pages into huge pages.
  149: By default kernel tries to use huge zero page on read page fault to
  150: anonymous mapping. It's possible to disable huge zero page by writing 0
  219: swap when collapsing a group of pages into a transparent huge page::
  247: ``huge=``. It can have following values:
  250: Attempt to allocate huge pages every time we need a new page;
  253: Do not allocate huge pages;
  256: Only allocate huge page if it will be fully within i_size.
  [all …]

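The admin-guide transhuge.rst hits above mention the 2M huge page size and the madvise-driven THP modes. A rough userspace illustration (not from the document) of asking for THP backing on an aligned anonymous buffer::

  /* Sketch only: hint that a 2 MB-aligned buffer should use transparent huge pages. */
  #define _GNU_SOURCE
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/mman.h>

  int main(void)
  {
      size_t len = 2UL * 1024 * 1024;    /* assumed THP size */
      void *buf;
      int rc = posix_memalign(&buf, len, len);

      if (rc) {
          fprintf(stderr, "posix_memalign failed: %d\n", rc);
          return 1;
      }
      /* A hint only; it takes effect when THP is enabled as "madvise" or "always". */
      if (madvise(buf, len, MADV_HUGEPAGE))
          perror("madvise(MADV_HUGEPAGE)");
      free(buf);
      return 0;
  }
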
concepts.rst
  79: `huge`. Usage of huge pages significantly reduces pressure on TLB,
  83: memory with the huge pages. The first one is `HugeTLB filesystem`, or
  86: the memory and mapped using huge pages. The hugetlbfs is described at
  89: Another, more recent, mechanism that enables use of the huge pages is
  92: the system memory should and can be mapped by the huge pages, THP
  201: buffer for DMA, or when THP allocates a huge page. Memory `compaction`

/openbmc/linux/tools/testing/selftests/mm/

charge_reserved_hugetlb.sh
  52: if [[ -e /mnt/huge ]]; then
  53: rm -rf /mnt/huge/*
  54: umount /mnt/huge || echo error
  55: rmdir /mnt/huge
  260: if [[ -e /mnt/huge ]]; then
  261: rm -rf /mnt/huge/*
  262: umount /mnt/huge
  263: rmdir /mnt/huge
  290: mkdir -p /mnt/huge
  291: mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
  [all …]

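The script lines above mount and tear down a hugetlbfs instance at /mnt/huge. The same setup can be done from C with mount(2)/umount(2); a sketch, reusing the script's mount point and illustrative pagesize/size options::

  /* Sketch only: programmatic equivalent of the script's mount/umount steps. */
  #include <stdio.h>
  #include <sys/mount.h>
  #include <sys/stat.h>

  int main(void)
  {
      mkdir("/mnt/huge", 0755);          /* ignore EEXIST for this sketch */
      if (mount("none", "/mnt/huge", "hugetlbfs", 0,
                "pagesize=2M,size=256M") != 0) {
          perror("mount hugetlbfs");
          return 1;
      }
      /* ... create and map files under /mnt/huge here ... */
      if (umount("/mnt/huge") != 0)
          perror("umount /mnt/huge");
      return 0;
  }
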
run_vmtests.sh
  64: test transparent huge pages
  112: for huge in -t -T "-H -m $hugetlb_mb"; do
  124: $huge $test_cmd $write $share $num

/openbmc/linux/Documentation/mm/

hugetlbfs_reserv.rst
  9: typically preallocated for application use. These huge pages are instantiated
  10: in a task's address space at page fault time if the VMA indicates huge pages
  11: are to be used. If no huge page exists at page fault time, the task is sent
  12: a SIGBUS and often dies an unhappy death. Shortly after huge page support
  14: of huge pages at mmap() time. The idea is that if there were not enough
  15: huge pages to cover the mapping, the mmap() would fail. This was first
  17: were enough free huge pages to cover the mapping. Like most things in the
  19: 'reserve' huge pages at mmap() time to ensure that huge pages would be
  21: describe how huge page reserve processing is done in the v4.10 kernel.
  34: This is a global (per-hstate) count of reserved huge pages. Reserved
  [all …]

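The hugetlbfs_reserv.rst hits above explain why reservations exist: without them a mapping that cannot get a huge page at fault time takes SIGBUS. A small sketch of the behaviour being described (not from the document; /mnt/huge is an assumed hugetlbfs mount)::

  /* Sketch only: reservation happens at mmap() time for this shared mapping. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define LENGTH (4UL * 2 * 1024 * 1024)   /* four assumed 2 MB huge pages */

  int main(void)
  {
      int fd = open("/mnt/huge/reserv-demo", O_CREAT | O_RDWR, 0600);
      void *addr;

      if (fd < 0) {
          perror("open");
          return 1;
      }
      addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      if (addr == MAP_FAILED) {
          perror("mmap");                  /* pool too small: fails here, no SIGBUS later */
          close(fd);
          return 1;
      }
      memset(addr, 0, LENGTH);             /* faults are covered by the reservation */
      munmap(addr, LENGTH);
      close(fd);
      return 0;
  }
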
transhuge.rst
  13: knowledge fall back to breaking huge pmd mapping into table of ptes and,
  41: is complete, so they won't ever notice the fact the page is huge. But
  57: Code walking pagetables but unaware about huge pmds can simply call
  92: To make pagetable walks huge pmd aware, all you need to do is to call
  94: mmap_lock in read (or write) mode to be sure a huge pmd cannot be
  100: page table lock will prevent the huge pmd being converted into a
  104: before. Otherwise, you can proceed to process the huge pmd and the
  107: Refcounts and transparent huge pages
  133: requests to split pinned huge pages: it expects page count to be equal to

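The mm/transhuge.rst hits above describe making a page-table walk huge-pmd aware with pmd_trans_huge_lock() under mmap_lock. The kernel-side sketch below only restates that pattern; it is not from the document, is not buildable on its own, and process_huge_pmd()/process_pte_range() are hypothetical helpers::

  /* In-kernel sketch only; caller holds mmap_lock in read (or write) mode. */
  static void walk_one_pmd(struct vm_area_struct *vma, pmd_t *pmd,
                           unsigned long addr, unsigned long end)
  {
      spinlock_t *ptl = pmd_trans_huge_lock(pmd, vma);

      if (ptl) {
          /* Huge pmd: handle the whole area it maps as a single unit. */
          process_huge_pmd(vma, pmd, addr);        /* hypothetical helper */
          spin_unlock(ptl);
          return;
      }
      /* Not (or no longer) a huge pmd: fall back to walking the ptes. */
      process_pte_range(vma, pmd, addr, end);      /* hypothetical helper */
  }
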
zsmalloc.rst
  157: per zspage. Any object larger than 3264 bytes is considered huge and belongs
  159: in huge classes do not share pages).
  162: for the huge size class and fewer huge classes overall. This allows for more
  165: For zspage chain size of 8, huge class watermark becomes 3632 bytes:::
  178: For zspage chain size of 16, huge class watermark becomes 3840 bytes:::
  207: pages per zspage number of size classes (clusters) huge size class watermark

arch_pgtable_helpers.rst
  148: | pmd_set_huge | Creates a PMD huge mapping |
  150: | pmd_clear_huge | Clears a PMD huge mapping |
  205: | pud_set_huge | Creates a PUD huge mapping |
  207: | pud_clear_huge | Clears a PUD huge mapping |

/openbmc/linux/arch/powerpc/include/asm/nohash/32/

pgtable.h
  236: static int number_of_cells_per_pte(pmd_t *pmd, pte_basic_t val, int huge) in number_of_cells_per_pte() argument
  238: if (!huge) in number_of_cells_per_pte()
  249: unsigned long clr, unsigned long set, int huge) in pte_update() argument
  257: num = number_of_cells_per_pte(pmd, new, huge); in pte_update()
  284: unsigned long clr, unsigned long set, int huge) in pte_update() argument
  334: int huge = psize > mmu_virtual_psize ? 1 : 0; in __ptep_set_access_flags() local
  336: pte_update(vma->vm_mm, address, ptep, 0, set, huge); in __ptep_set_access_flags()

pte-8xx.h
  147: unsigned long clr, unsigned long set, int huge);
  160: int huge = psize > mmu_virtual_psize ? 1 : 0; in __ptep_set_access_flags() local
  162: pte_update(vma->vm_mm, address, ptep, clr, set, huge); in __ptep_set_access_flags()

/openbmc/linux/arch/powerpc/include/asm/book3s/64/

hash.h
  161: pte_t *ptep, unsigned long pte, int huge);
  168: int huge) in hash__pte_update() argument
  186: if (!huge) in hash__pte_update()
  191: hpte_need_flush(mm, addr, ptep, old, huge); in hash__pte_update()

/openbmc/linux/Documentation/filesystems/

tmpfs.rst
  112: configured with CONFIG_TRANSPARENT_HUGEPAGE and with huge supported for
  117: huge=never Do not allocate huge pages. This is the default.
  118: huge=always Attempt to allocate huge page every time a new page is needed.
  119: huge=within_size Only allocate huge page if it will be fully within i_size.
  121: huge=advise Only allocate huge page if requested with madvise(2).
  126: be used to deny huge pages on all tmpfs mounts in an emergency, or to
  127: force huge pages on all tmpfs mounts for testing.

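The tmpfs.rst hits above list the huge= mount options. For illustration only (the mount point and size are arbitrary, and the target directory must already exist), the same option can be passed through mount(2)::

  /* Sketch only: mount a tmpfs instance with huge=within_size. */
  #include <stdio.h>
  #include <sys/mount.h>

  int main(void)
  {
      if (mount("tmpfs", "/mnt/tmp-huge", "tmpfs", 0,
                "size=256M,huge=within_size") != 0) {
          perror("mount tmpfs");
          return 1;
      }
      return 0;
  }
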
/openbmc/linux/arch/loongarch/mm/

init.c
  143: int huge = pmd_val(*pmd) & _PAGE_HUGE; in vmemmap_check_pmd() local
  145: if (huge) in vmemmap_check_pmd()
  148: return huge; in vmemmap_check_pmd()

/openbmc/linux/Documentation/admin-guide/hw-vuln/

multihit.rst
  81: * - KVM: Mitigation: Split huge pages
  111: In order to mitigate the vulnerability, KVM initially marks all huge pages
  125: The KVM hypervisor mitigation mechanism for marking huge pages as
  134: non-executable huge pages in Linux kernel KVM module. All huge

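The multihit.rst hits above concern the KVM "Split huge pages" mitigation. As a small illustrative helper (not from the document; the second path only exists once the kvm module is loaded), the current status strings can be read from sysfs::

  /* Sketch only: print the iTLB multihit mitigation status. */
  #include <stdio.h>

  static void show(const char *path)
  {
      char buf[128];
      FILE *f = fopen(path, "r");

      if (!f) {
          perror(path);
          return;
      }
      if (fgets(buf, sizeof(buf), f))
          printf("%s: %s", path, buf);
      fclose(f);
  }

  int main(void)
  {
      show("/sys/devices/system/cpu/vulnerabilities/itlb_multihit");
      show("/sys/module/kvm/parameters/nx_huge_pages");
      return 0;
  }
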
/openbmc/linux/Documentation/core-api/

pin_user_pages.rst
  64: severely by huge pages, because each tail page adds a refcount to the
  66: field, refcount overflows were seen in some huge page stress tests.
  68: This also means that huge pages and large folios do not suffer
  246: acquired since the system was powered on. For huge pages, the head page is
  247: pinned once for each page (head page and each tail page) within the huge page.
  248: This follows the same sort of behavior that get_user_pages() uses for huge
  249: pages: the head page is refcounted once for each tail or head page in the huge
  250: page, when get_user_pages() is applied to a huge page.
  254: PAGE_SIZE granularity, even if the original pin was applied to a huge page.

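The pin_user_pages.rst hits above explain that every page of a huge page gets its own pin and that unpinning is done at PAGE_SIZE granularity. A kernel-side sketch of the pin/unpin pairing (not from the document and not buildable standalone)::

  /* In-kernel sketch only: pin a user buffer for long-term access, then unpin. */
  static int pin_user_buffer(unsigned long uaddr, int nr_pages,
                             struct page **pages)
  {
      int pinned = pin_user_pages_fast(uaddr, nr_pages,
                                       FOLL_WRITE | FOLL_LONGTERM, pages);

      if (pinned < 0)
          return pinned;                 /* error, nothing pinned */

      /* ... long-term access (e.g. DMA) to pages[0..pinned-1] ... */

      /* One unpin per PAGE_SIZE page, matching the pins taken above. */
      unpin_user_pages(pages, pinned);
      return 0;
  }
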
/openbmc/u-boot/fs/ubifs/

misc.h
  151: dev->huge = cpu_to_le64(huge_encode_dev(rdev)); in ubifs_encode_dev()
  152: return sizeof(dev->huge); in ubifs_encode_dev()

/openbmc/linux/arch/alpha/lib/

ev6-clear_user.S
  86: subq $1, 16, $4 # .. .. .. E : If < 16, we can not use the huge loop
  87: and $16, 0x3f, $2 # .. .. E .. : Forward work for huge loop
  88: subq $2, 0x40, $3 # .. E .. .. : bias counter (huge loop)

/openbmc/linux/arch/powerpc/mm/book3s64/

hash_tlb.c
  41: pte_t *ptep, unsigned long pte, int huge) in hpte_need_flush() argument
  61: if (huge) { in hpte_need_flush()

/openbmc/linux/mm/

memory-failure.c
  2521: bool huge = false; in unpoison_memory() local
  2582: huge = true; in unpoison_memory()
  2598: huge = true; in unpoison_memory()
  2616: if (!huge) in unpoison_memory()
  2670: bool huge = PageHuge(page); in soft_offline_in_use_page() local
  2677: if (!huge && PageTransHuge(hpage)) { in soft_offline_in_use_page()
  2686: if (!huge) in soft_offline_in_use_page()
  2695: if (!huge && PageLRU(page) && !PageSwapCache(page)) in soft_offline_in_use_page()
  2713: bool release = !huge; in soft_offline_in_use_page()
  2715: if (!page_handle_poison(page, huge, release)) in soft_offline_in_use_page()
  [all …]

shmem.c
  120: int huge; member
  553: switch (SHMEM_SB(inode->i_sb)->huge) { in __shmem_is_huge()
  601: static const char *shmem_format_huge(int huge) in shmem_format_huge() argument
  603: switch (huge) { in shmem_format_huge()
  1687: pgoff_t index, bool huge) in shmem_alloc_and_acct_folio() argument
  1695: huge = false; in shmem_alloc_and_acct_folio()
  1696: nr = huge ? HPAGE_PMD_NR : 1; in shmem_alloc_and_acct_folio()
  1702: if (huge) in shmem_alloc_and_acct_folio()
  2315: if (SHMEM_SB(sb)->huge == SHMEM_HUGE_NEVER) in shmem_get_unmapped_area()
  3973: ctx->huge = result.uint_32; in shmem_parse_one()
  [all …]

/openbmc/linux/drivers/misc/lkdtm/

bugs.c
  276: static volatile unsigned int huge = INT_MAX - 2; variable
  283: value = huge; in lkdtm_OVERFLOW_SIGNED()
  298: value = huge; in lkdtm_OVERFLOW_UNSIGNED()

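The bugs.c hits above show LKDTM seeding its overflow tests with huge = INT_MAX - 2. A userspace illustration of the arithmetic those tests provoke (the increment of 4 is just an example)::

  /* Sketch only: detect the signed wrap the LKDTM test deliberately triggers. */
  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
      int value = INT_MAX - 2;
      int result;

      if (__builtin_add_overflow(value, 4, &result))
          printf("signed addition overflowed (wrapped to %d)\n", result);
      else
          printf("no overflow: %d\n", result);
      return 0;
  }
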
/openbmc/linux/Documentation/features/vm/huge-vmap/

arch-support.txt
  2: # Feature name: huge-vmap

/openbmc/linux/Documentation/riscv/

vm-layout.rst
  42: …0000004000000000 | +256 GB | ffffffbfffffffff | ~16M TB | ... huge, almost 64 bits wide hole of…
  78: …0000800000000000 | +128 TB | ffff7fffffffffff | ~16M TB | ... huge, almost 64 bits wide hole of…
  114: …0100000000000000 | +64 PB | feffffffffffffff | ~16K PB | ... huge, almost 64 bits wide hole of…

/openbmc/qemu/docs/system/s390x/

protvirt.rst
  43: Host huge page backings are not supported. However guests can use huge