.. _hugetlbpage:

=============
HugeTLB Pages
=============

Overview
========

The intent of this file is to give a brief summary of hugetlbpage support in
the Linux kernel. This support is built on top of the multiple page size
support provided by most modern architectures. For example, x86 CPUs normally
support 4K and 2M (1G if architecturally supported) page sizes, the ia64
architecture supports multiple page sizes (4K, 8K, 64K, 256K, 1M, 4M, 16M,
256M) and ppc64 supports 4K and 16M. A TLB is a cache of virtual-to-physical
translations. Typically this is a very scarce resource on a processor.
Operating systems try to make the best use of the limited number of TLB
entries. This optimization is more critical now as larger physical memories
(several GBs) are more readily available.

Users can use the huge page support in the Linux kernel by either using the
mmap system call or the standard SYSV shared memory system calls (shmget,
shmat).

First the Linux kernel needs to be built with the CONFIG_HUGETLBFS
(present under "File systems") and CONFIG_HUGETLB_PAGE (selected
automatically when CONFIG_HUGETLBFS is selected) configuration
options.

The ``/proc/meminfo`` file provides information about the total number of
persistent hugetlb pages in the kernel's huge page pool. It also displays
the default huge page size and information about the number of free, reserved
and surplus huge pages in the pool of huge pages of default size.
The huge page size is needed for generating the proper alignment and
size of the arguments to system calls that map huge page regions.

The output of ``cat /proc/meminfo`` will include lines like::

    HugePages_Total: uuu
    HugePages_Free:  vvv
    HugePages_Rsvd:  www
    HugePages_Surp:  xxx
    Hugepagesize:    yyy kB
    Hugetlb:         zzz kB

where:

HugePages_Total
    is the size of the pool of huge pages.
HugePages_Free
    is the number of huge pages in the pool that are not yet
    allocated.
HugePages_Rsvd
    is short for "reserved," and is the number of huge pages for
    which a commitment to allocate from the pool has been made,
    but no allocation has yet been made. Reserved huge pages
    guarantee that an application will be able to allocate a
    huge page from the pool of huge pages at fault time.
HugePages_Surp
    is short for "surplus," and is the number of huge pages in
    the pool above the value in ``/proc/sys/vm/nr_hugepages``. The
    maximum number of surplus huge pages is controlled by
    ``/proc/sys/vm/nr_overcommit_hugepages``.
Hugepagesize
    is the default huge page size (in kB).
Hugetlb
    is the total amount of memory (in kB) consumed by huge
    pages of all sizes.
    If huge pages of different sizes are in use, this number
    will exceed HugePages_Total \* Hugepagesize. To get more
    detailed information, please refer to
    ``/sys/kernel/mm/hugepages`` (described below).


``/proc/filesystems`` should also show a filesystem of type "hugetlbfs"
configured in the kernel.

``/proc/sys/vm/nr_hugepages`` indicates the current number of "persistent"
huge pages in the kernel's huge page pool. "Persistent" huge pages will be
returned to the huge page pool when freed by a task. A user with root
privileges can dynamically allocate more or free some persistent huge pages
by increasing or decreasing the value of ``nr_hugepages``.

Pages that are used as huge pages are reserved inside the kernel and cannot
be used for other purposes. Huge pages cannot be swapped out under
memory pressure.
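As noted above, the default huge page size reported in ``/proc/meminfo`` is
what an application needs in order to size and align the arguments to system
calls that map huge page regions. A minimal C sketch of reading it from
userspace follows; the helper name ``default_hugepagesize`` is illustrative
and not part of any kernel or library interface::

    #include <stdio.h>

    /*
     * Illustrative helper: parse the "Hugepagesize: yyy kB" line from
     * /proc/meminfo and return the default huge page size in bytes,
     * or 0 on failure.
     */
    static unsigned long default_hugepagesize(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        unsigned long kb = 0;
        char line[256];

        if (!f)
            return 0;
        while (fgets(line, sizeof(line), f))
            if (sscanf(line, "Hugepagesize: %lu kB", &kb) == 1)
                break;
        fclose(f);
        return kb * 1024;
    }

A mapping length can then be rounded up to a multiple of the returned value
before being passed to a system call such as mmap.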
Once a number of huge pages have been pre-allocated to the kernel huge page
pool, a user with appropriate privilege can use either the mmap system call
or shared memory system calls to use the huge pages. See the discussion of
:ref:`Using Huge Pages <using_huge_pages>`, below.

The administrator can allocate persistent huge pages on the kernel boot
command line by specifying the "hugepages=N" parameter, where 'N' = the
number of huge pages requested. This is the most reliable method of
allocating huge pages as memory has not yet become fragmented.

Some platforms support multiple huge page sizes. To allocate huge pages
of a specific size, one must precede the huge pages boot command parameters
with a huge page size selection parameter "hugepagesz=<size>". <size> must
be specified in bytes with an optional scale suffix [kKmMgG]. The default
huge page size may be selected with the "default_hugepagesz=<size>" boot
parameter.

Hugetlb boot command line parameter semantics:

hugepagesz
    Specify a huge page size. Used in conjunction with the hugepages
    parameter to preallocate a number of huge pages of the specified
    size. Hence, hugepagesz and hugepages are typically specified in
    pairs such as::

        hugepagesz=2M hugepages=512

    hugepagesz can only be specified once on the command line for a
    specific huge page size. Valid huge page sizes are architecture
    dependent.
hugepages
    Specify the number of huge pages to preallocate. This typically
    follows a valid hugepagesz or default_hugepagesz parameter. However,
    if hugepages is the first or only hugetlb command line parameter it
    implicitly specifies the number of huge pages of default size to
    allocate. If the number of huge pages of default size is implicitly
    specified, it can not be overwritten by a hugepagesz,hugepages
    parameter pair for the default size. For example, on an architecture
    with 2M default huge page size::

        hugepages=256 hugepagesz=2M hugepages=512

    will result in 256 2M huge pages being allocated and a warning
    message indicating that the hugepages=512 parameter is ignored. If a
    hugepages parameter is preceded by an invalid hugepagesz parameter,
    it will be ignored.
default_hugepagesz
    Specify the default huge page size. This parameter can only be
    specified once on the command line. default_hugepagesz can
    optionally be followed by the hugepages parameter to preallocate a
    specific number of huge pages of default size. The number of default
    sized huge pages to preallocate can also be implicitly specified as
    mentioned in the hugepages section above. Therefore, on an
    architecture with 2M default huge page size::

        hugepages=256
        default_hugepagesz=2M hugepages=256
        hugepages=256 default_hugepagesz=2M

    will all result in 256 2M huge pages being allocated. The valid
    default huge page sizes are architecture dependent.
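Putting these parameters together, a boot command line that preallocates
pools of two sizes and selects the larger one as the default might look like
the following. The sizes and counts are purely illustrative, and 1G pages
require architectural support as noted in the overview::

    default_hugepagesz=1G hugepages=2 hugepagesz=2M hugepages=512

Here the hugepages=2 immediately following default_hugepagesz requests two
default (1G) pages, and the hugepagesz=2M/hugepages=512 pair requests a
separate pool of 512 2M pages.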
When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
indicates the current number of pre-allocated huge pages of the default size.
Thus, one can use the following command to dynamically allocate/deallocate
default sized persistent huge pages::

    echo 20 > /proc/sys/vm/nr_hugepages

This command will try to adjust the number of default sized huge pages in the
huge page pool to 20, allocating or freeing huge pages, as required.

On a NUMA platform, the kernel will attempt to distribute the huge page pool
over the set of allowed nodes specified by the NUMA memory policy of the
task that modifies ``nr_hugepages``. The default for the allowed nodes--when
the task has default memory policy--is all on-line nodes with memory. Allowed
nodes with insufficient available, contiguous memory for a huge page will be
silently skipped when allocating persistent huge pages. See the
:ref:`discussion below <mem_policy_and_hp_alloc>`
of the interaction of task memory policy, cpusets and per node attributes
with the allocation and freeing of persistent huge pages.

The success or failure of huge page allocation depends on the amount of
physically contiguous memory that is present in the system at the time of the
allocation attempt. If the kernel is unable to allocate huge pages from
some nodes in a NUMA system, it will attempt to make up the difference by
allocating extra pages on other nodes with sufficient available contiguous
memory, if any.

System administrators may want to put this command in one of the local rc
init files. This will enable the kernel to allocate huge pages early in
the boot process when the possibility of getting physically contiguous pages
is still very high. Administrators can verify the number of huge pages
actually allocated by checking the sysctl or meminfo. To check the per node
distribution of huge pages in a NUMA system, use::

    cat /sys/devices/system/node/node*/meminfo | fgrep Huge

``/proc/sys/vm/nr_overcommit_hugepages`` specifies how large the pool of
huge pages can grow, if more huge pages than ``/proc/sys/vm/nr_hugepages``
are requested by applications. Writing any non-zero value into this file
indicates that the hugetlb subsystem is allowed to try to obtain that
number of "surplus" huge pages from the kernel's normal page pool, when the
persistent huge page pool is exhausted. As these surplus huge pages become
unused, they are freed back to the kernel's normal page pool.

When increasing the huge page pool size via ``nr_hugepages``, any existing
surplus pages will first be promoted to persistent huge pages. Then, additional
huge pages will be allocated, if necessary and if possible, to fulfill
the new persistent huge page pool size.

The administrator may shrink the pool of persistent huge pages for
the default huge page size by setting the ``nr_hugepages`` sysctl to a
smaller value. The kernel will attempt to balance the freeing of huge pages
across all nodes in the memory policy of the task modifying ``nr_hugepages``.
Any free huge pages on the selected nodes will be freed back to the kernel's
normal page pool.

Caveat: Shrinking the persistent huge page pool via ``nr_hugepages`` such that
it becomes less than the number of huge pages in use will convert the balance
of the in-use huge pages to surplus huge pages. This will occur even if
the number of surplus pages would exceed the overcommit value. As long as
this condition holds--that is, until ``nr_hugepages+nr_overcommit_hugepages``
is increased sufficiently, or the surplus huge pages go out of use and are
freed--no more surplus huge pages will be allowed to be allocated.
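Since a request to grow the pool may be only partially satisfied when
contiguous memory is scarce, it is worth reading the sysctl back after
writing it, as the verification advice above suggests. A minimal C sketch of
this write-then-verify pattern follows; the function name
``set_hugepage_pool`` is illustrative, and the caller needs root
privileges::

    #include <stdio.h>

    /*
     * Illustrative helper: request 'count' default sized persistent huge
     * pages and return the number the kernel actually allocated, which
     * may be smaller if contiguous memory was unavailable. Returns -1 on
     * error.
     */
    static long set_hugepage_pool(long count)
    {
        const char *path = "/proc/sys/vm/nr_hugepages";
        long actual = -1;
        FILE *f = fopen(path, "w");

        if (!f)
            return -1;
        fprintf(f, "%ld\n", count);
        fclose(f);

        f = fopen(path, "r");
        if (!f)
            return -1;
        if (fscanf(f, "%ld", &actual) != 1)
            actual = -1;
        fclose(f);
        return actual;
    }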
With support for multiple huge page pools at run-time available, much of
the huge page userspace interface in ``/proc/sys/vm`` has been duplicated in
sysfs.
The ``/proc`` interfaces discussed above have been retained for backwards
compatibility. The root huge page control directory in sysfs is::

    /sys/kernel/mm/hugepages

For each huge page size supported by the running kernel, a subdirectory
will exist, of the form::

    hugepages-${size}kB

Inside each of these directories, the same set of files will exist::

    nr_hugepages
    nr_hugepages_mempolicy
    nr_overcommit_hugepages
    free_hugepages
    resv_hugepages
    surplus_hugepages

which function as described above for the default huge page-sized case.

.. _mem_policy_and_hp_alloc:

Interaction of Task Memory Policy with Huge Page Allocation/Freeing
===================================================================

Whether huge pages are allocated and freed via the ``/proc`` interface or
the sysfs interface using the ``nr_hugepages_mempolicy`` attribute, the
NUMA nodes from which huge pages are allocated or freed are controlled by the
NUMA memory policy of the task that modifies the ``nr_hugepages_mempolicy``
sysctl or attribute. When the ``nr_hugepages`` attribute is used, mempolicy
is ignored.

The recommended method to allocate or free huge pages to/from the kernel
huge page pool, using the ``nr_hugepages`` example above, is::

    numactl --interleave <node-list> echo 20 \
        >/proc/sys/vm/nr_hugepages_mempolicy

or, more succinctly::

    numactl -m <node-list> echo 20 >/proc/sys/vm/nr_hugepages_mempolicy

This will allocate or free ``abs(20 - nr_hugepages)`` huge pages to or from
the nodes specified in <node-list>, depending on whether the number of
persistent huge pages is initially less than or greater than 20,
respectively. No huge pages will be allocated or freed on any node not
included in the specified <node-list>.

When adjusting the persistent hugepage count via ``nr_hugepages_mempolicy``,
any memory policy mode--bind, preferred, local or interleave--may be used.
The resulting effect on persistent huge page allocation is as follows:

#. Regardless of mempolicy mode [see
   :ref:`Documentation/admin-guide/mm/numa_memory_policy.rst <numa_memory_policy>`],
   persistent huge pages will be distributed across the node or nodes
   specified in the mempolicy as if "interleave" had been specified.
   However, if a node in the policy does not contain sufficient contiguous
   memory for a huge page, the allocation will not "fall back" to the nearest
   neighbor node with sufficient contiguous memory. To do this would cause
   undesirable imbalance in the distribution of the huge page pool, or
   possibly, allocation of persistent huge pages on nodes not allowed by
   the task's memory policy.

#. One or more nodes may be specified with the bind or interleave policy.
   If more than one node is specified with the preferred policy, only the
   lowest numeric id will be used. Local policy will select the node where
   the task is running at the time the nodes_allowed mask is constructed.
   For local policy to be deterministic, the task must be bound to a cpu or
   cpus in a single node. Otherwise, the task could be migrated to some
   other node at any time after launch and the resulting node will be
   indeterminate. Thus, local policy is not very useful for this purpose.
   Any of the other mempolicy modes may be used to specify a single node.
#. The nodes allowed mask will be derived from any non-default task mempolicy,
   whether this policy was set explicitly by the task itself or one of its
   ancestors, such as numactl. This means that if the task is invoked from a
   shell with non-default policy, that policy will be used. One can specify a
   node list of "all" with numactl --interleave or --membind [-m] to achieve
   interleaving over all nodes in the system or cpuset.

#. Any task mempolicy specified--e.g., using numactl--will be constrained by
   the resource limits of any cpuset in which the task runs. Thus, there will
   be no way for a task with non-default policy running in a cpuset with a
   subset of the system nodes to allocate huge pages outside the cpuset
   without first moving to a cpuset that contains all of the desired nodes.

#. Boot-time huge page allocation attempts to distribute the requested number
   of huge pages over all on-line nodes with memory.

Per Node Hugepages Attributes
=============================

A subset of the contents of the root huge page control directory in sysfs,
described above, will be replicated under the system device directory of
each NUMA node with memory in::

    /sys/devices/system/node/node[0-9]*/hugepages/

Under this directory, the subdirectory for each supported huge page size
contains the following attribute files::

    nr_hugepages
    free_hugepages
    surplus_hugepages

The ``free_hugepages`` and ``surplus_hugepages`` attribute files are
read-only. They return the number of free and surplus [overcommitted] huge
pages, respectively, on the parent node.

The ``nr_hugepages`` attribute returns the total number of huge pages on the
specified node. When this attribute is written, the number of persistent huge
pages on the parent node will be adjusted to the specified value, if sufficient
resources exist, regardless of the task's mempolicy or cpuset constraints.

Note that the number of overcommit and reserve pages remain global quantities,
as we don't know until fault time, when the faulting task's mempolicy is
applied, from which node the huge page allocation will be attempted.

.. _using_huge_pages:

Using Huge Pages
================

If user applications are going to request huge pages using the mmap system
call, then the system administrator is required to mount a file system of
type hugetlbfs::

    mount -t hugetlbfs \
        -o uid=<value>,gid=<value>,mode=<value>,pagesize=<value>,size=<value>,\
        min_size=<value>,nr_inodes=<value> none /mnt/huge

This command mounts a (pseudo) filesystem of type hugetlbfs on the directory
``/mnt/huge``. Any file created on ``/mnt/huge`` uses huge pages.

The ``uid`` and ``gid`` options set the owner and group of the root of the
file system. By default the ``uid`` and ``gid`` of the current process
are taken.

The ``mode`` option sets the mode of the root of the file system to
value & 01777. This value is given in octal. By default the value 0755 is
picked.

If the platform supports multiple huge page sizes, the ``pagesize`` option can
be used to specify the huge page size and associated pool. ``pagesize``
is specified in bytes. If ``pagesize`` is not specified the platform's
default huge page size and associated pool will be used.

The ``size`` option sets the maximum amount of memory (huge pages) allowed
for that filesystem (``/mnt/huge``). The ``size`` option can be specified
in bytes, or as a percentage of the specified huge page pool
(``nr_hugepages``). The size is rounded down to the nearest HPAGE_SIZE
boundary.

The ``min_size`` option sets the minimum amount of memory (huge pages) allowed
for the filesystem. ``min_size`` can be specified in the same way as ``size``,
either bytes or a percentage of the huge page pool.
At mount time, the number of huge pages specified by ``min_size`` are reserved
for use by the filesystem.
If there are not enough free huge pages available, the mount will fail.
As huge pages are allocated to the filesystem and freed, the reserve count
is adjusted so that the sum of allocated and reserved huge pages is always
at least ``min_size``.

The option ``nr_inodes`` sets the maximum number of inodes that ``/mnt/huge``
can use.

If the ``size``, ``min_size`` or ``nr_inodes`` option is not provided on
the command line then no limits are set.

For the ``pagesize``, ``size``, ``min_size`` and ``nr_inodes`` options, you
can use [G|g]/[M|m]/[K|k] to represent giga/mega/kilo.
For example, size=2K has the same meaning as size=2048.
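With the filesystem mounted as above, using huge pages via mmap comes down to
creating a file on the mount point and mapping it. Below is a minimal sketch
along the lines of the hugepage-mmap selftest listed under Examples below;
the mount point ``/mnt/huge`` and the file name are assumptions carried over
from the mount example::

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define LENGTH (4UL * 1024 * 1024)  /* must be a huge page multiple */

    int main(void)
    {
        /* Assumes hugetlbfs is mounted on /mnt/huge as shown above. */
        int fd = open("/mnt/huge/example", O_CREAT | O_RDWR, 0755);
        char *addr;

        if (fd < 0) {
            perror("open");
            return 1;
        }
        addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE, MAP_SHARED,
                    fd, 0);
        if (addr == MAP_FAILED) {
            perror("mmap");
            close(fd);
            return 1;
        }
        addr[0] = 1;  /* first touch faults in a huge page */
        munmap(addr, LENGTH);
        close(fd);
        unlink("/mnt/huge/example");
        return 0;
    }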
While read system calls are supported on files that reside on hugetlb
file systems, write system calls are not.

Regular chown, chgrp, and chmod commands (with the right permissions) can be
used to change the file attributes on hugetlbfs.

Also, it is important to note that no such mount command is required if
applications are going to use only shmat/shmget system calls or mmap with
MAP_HUGETLB. For an example of how to use mmap with MAP_HUGETLB see
:ref:`map_hugetlb <map_hugetlb>` below.

Users who wish to use hugetlb memory via a shared memory segment should be
members of a supplementary group, and the system administrator needs to
configure that gid into ``/proc/sys/vm/hugetlb_shm_group``. It is possible
for the same or different applications to use any combination of mmaps and
shm* calls, though a mount of the filesystem will be required for using mmap
calls without MAP_HUGETLB.

Syscalls that operate on memory backed by hugetlb pages only have their
lengths aligned to the native page size of the processor; they will normally
fail with errno set to EINVAL or exclude hugetlb pages that extend beyond
the length if not hugepage aligned. For example, munmap(2) will fail if
memory is backed by a hugetlb page and the length is smaller than the
hugepage size.


Examples
========

.. _map_hugetlb:

``map_hugetlb``
    see tools/testing/selftests/vm/map_hugetlb.c

``hugepage-shm``
    see tools/testing/selftests/vm/hugepage-shm.c

``hugepage-mmap``
    see tools/testing/selftests/vm/hugepage-mmap.c

The `libhugetlbfs`_ library provides a wide range of userspace tools
to help with huge page usability, environment setup, and control.

.. _libhugetlbfs: https://github.com/libhugetlbfs/libhugetlbfs
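For quick reference, a minimal sketch of the MAP_HUGETLB case referenced
above, modeled on the map_hugetlb selftest: no hugetlbfs mount is needed,
but the default huge page pool must contain free pages, and the 2M length
assumes the common x86 default huge page size::

    #include <stdio.h>
    #include <sys/mman.h>

    #define LENGTH (2UL * 1024 * 1024)  /* assumes a 2M default size */

    int main(void)
    {
        /*
         * MAP_HUGETLB allocates from the default huge page pool;
         * mmap() fails if not enough free huge pages are available.
         */
        char *addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
                          -1, 0);

        if (addr == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        addr[0] = 1;  /* touch the mapping to fault in a huge page */
        munmap(addr, LENGTH);
        return 0;
    }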