Lines Matching +full:- +full:- +full:disable +full:- +full:malloc +full:- +full:trim
1 # SPDX-License-Identifier: GPL-2.0-only
33 compress them into a dynamically allocated RAM-based memory pool.
62 swapped out again, it will be re-compressed.
190 linux-mm@kvack.org and the zswap maintainers.
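
The zswap lines above describe the compressed, RAM-based swap pool. As a minimal illustrative sketch only (assuming a kernel built with CONFIG_ZSWAP and the usual module parameter path /sys/module/zswap/parameters/enabled; root is required), the pool can be switched on at run time from C:

    /* Hedged sketch: enable zswap at run time by writing "Y" to its
     * boolean module parameter. Minimal error handling on purpose. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *param = "/sys/module/zswap/parameters/enabled";
        int fd = open(param, O_WRONLY);

        if (fd < 0) {
            perror("open zswap parameter");
            return 1;
        }
        if (write(fd, "Y", 1) != 1)
            perror("write");
        close(fd);
        return 0;
    }
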
208 zsmalloc is a slab-based memory allocator designed to store
223 int "Maximum number of physical pages per-zspage"
256 If you cannot migrate to SLUB, please contact linux-mm@kvack.org
305 can usually only damage objects in the same cache. To disable
325 sanity-checking than others. This option is most effective with
339 Try running: slabinfo -DA
376 utilization of a direct-mapped memory-side-cache. See section
379 the presence of a memory-side-cache. There are also incidental
389 after runtime detection of a direct-mapped memory-side-cache.
396 bool "Disable heap randomization"
405 On non-ancient distros (post-2000 ones) N is usually a safe choice.
419 This is taken advantage of by uClibc's malloc(), and also by
420 ELF-FDPIC binfmt's brk and stack allocator.
424 userspace. Since that isn't generally a problem on no-MMU systems,
427 See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
448 This option is best suited for non-NUMA systems with
464 memory hot-plug systems. This is normal.
468 hot-plug and hot-remove.
538 # Keep arch NUMA mapping infrastructure post-init.
584 See Documentation/admin-guide/mm/memory-hotplug.rst for more information.
586 Say Y here if you want all hot-plugged memory blocks to appear in
588 Say N here if you want the default policy to keep all hot-plugged
607 # Heavily threaded applications may benefit from splitting the mm-wide
611 # ARM's adjust_pte (unused if VIPT) depends on mm-wide page_table_lock.
612 # PA-RISC 7xxx's spinlock_t would enlarge struct page from 32 to 44 bytes.
614 # a per-page lock leads to problems when multiple tables need to be locked
662 disable this option unless there really is a strong reason for
664 linux-mm@kvack.org.
721 int "Maximum scale factor of PCP (Per-CPU pageset) batch allocate/free"
725 In page allocator, PCP (Per-CPU pageset) is refilled and drained in
775 this low address space will need CAP_SYS_RAWIO or disable this
808 allocator for chunks in 2^N*PAGE_SIZE amounts - which is frequently
809 more than it requires. To deal with this, mmap() is able to trim off
817 long-term mappings means that the space is wasted.
827 See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
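
As a purely arithmetic sketch of the 2^N*PAGE_SIZE rounding and trimming described in the no-MMU lines above (a 4 KiB page size is assumed only for the example; this is not kernel code):

    /* Illustrative arithmetic only: on !MMU kernels the page allocator
     * hands out 2^N * PAGE_SIZE chunks, and mmap() may trim the excess. */
    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    int main(void)
    {
        unsigned long request = 5 * PAGE_SIZE + 123;   /* arbitrary size */
        unsigned long pages = (request + PAGE_SIZE - 1) / PAGE_SIZE;
        unsigned long chunk = 1;

        while (chunk < pages)                          /* round up to 2^N pages */
            chunk <<= 1;

        printf("request: %lu bytes -> %lu pages, allocation: %lu pages, "
               "trimmable excess: %lu pages\n",
               request, pages, chunk, chunk - pages);
        return 0;
    }
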
887 bool "Read-only THP for filesystems (EXPERIMENTAL)"
891 Allow khugepaged to put read-only file-backed pages in THP.
926 subsystems to allocate big physically-contiguous blocks of memory.
974 soft-dirty bit on PTEs. This bit is set when someone writes
978 See Documentation/admin-guide/mm/soft-dirty.rst for more details.
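
The soft-dirty lines above correspond to a small userspace interface. As an illustrative sketch (not authoritative usage; the paths and bit layout follow Documentation/admin-guide/mm/soft-dirty.rst and pagemap.rst), the bit is cleared through /proc/self/clear_refs and read back as bit 55 of the page's /proc/self/pagemap entry:

    /* Sketch: clear this process's soft-dirty bits, dirty one page,
     * then read bit 55 of that page's pagemap entry. Minimal error
     * handling; requires a kernel with CONFIG_MEM_SOFT_DIRTY=y. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        long page = sysconf(_SC_PAGESIZE);
        char *buf = aligned_alloc(page, page);

        int clear = open("/proc/self/clear_refs", O_WRONLY);
        if (clear < 0 || write(clear, "4", 1) != 1)   /* "4" clears soft-dirty */
            perror("clear_refs");
        close(clear);

        buf[0] = 1;                                   /* re-dirty the page */

        uint64_t entry = 0;
        int pagemap = open("/proc/self/pagemap", O_RDONLY);
        off_t off = (off_t)((uintptr_t)buf / page) * sizeof(entry);
        if (pagemap < 0 ||
            pread(pagemap, &entry, sizeof(entry), off) != sizeof(entry))
            perror("pagemap");
        close(pagemap);

        printf("soft-dirty for %p: %d\n", (void *)buf, (int)((entry >> 55) & 1));
        free(buf);
        return 0;
    }
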
984 int "Default maximum user stack size for 32-bit processes (MB)"
989 This is the maximum stack size in Megabytes in the VM layout of 32-bit
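
Tangentially to the 32-bit default above: the stack limit that actually applies to a running process can be inspected with getrlimit(). A minimal sketch:

    /* Sketch: print the calling process's current stack size limit,
     * which this Kconfig value caps by default for 32-bit tasks. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_STACK, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("stack limit: soft %llu bytes, hard %llu bytes\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
        return 0;
    }
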
1028 See Documentation/admin-guide/mm/idle_page_tracking.rst for
1038 checking, an architecture-agnostic way to find the stack pointer
1070 "device-physical" addresses which is needed for using a DAX
1109 suitable for 64-bit architectures with CONFIG_FLATMEM or
1111 enough room for additional bits in page->flags.
1130 bool "Enable infrastructure for get_user_pages()-related unit tests"
1134 to make ioctl calls that can launch kernel-based unit tests for
1139 the non-_fast variants.
1141 There is also a sub-test that allows running dump_page() on any
1143 range of user-space addresses. These pages are either pinned via
1241 file-backed memory types like shmem and hugetlbfs.
1243 # multi-gen LRU {
1245 bool "Multi-Gen LRU"
1247 # make sure folio->flags has enough spare bits
1251 Documentation/admin-guide/mm/multigen_lru.rst for details.
1257 This option enables the multi-gen LRU by default.
1266 This option has a per-memcg and per-node memory overhead.
1276 Allow per-vma locking during page fault handling.