Documentation for /proc/sys/vm/
===============================

------------------------------------------------------------------------------

Currently, these files are in /proc/sys/vm:
- admin_reserve_kbytes
- compact_memory
- compaction_proactiveness
- compact_unevictable_allowed
- dirty_background_bytes
- dirty_background_ratio
- dirty_bytes
- dirty_expire_centisecs
- dirty_ratio
- dirtytime_expire_seconds
- dirty_writeback_centisecs
- drop_caches
- extfrag_threshold
- highmem_is_dirtyable
- hugetlb_shm_group
- laptop_mode
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
- min_slab_ratio
- min_unmapped_ratio
- mmap_min_addr
- mmap_rnd_bits
- mmap_rnd_compat_bits
- nr_hugepages
- nr_hugepages_mempolicy
- nr_overcommit_hugepages
- nr_trim_pages (only if CONFIG_MMU=n)
- numa_zonelist_order
- oom_dump_tasks
- oom_kill_allocating_task
- overcommit_kbytes
- overcommit_memory
- overcommit_ratio
- page-cluster
- page_lock_unfairness
- panic_on_oom
- percpu_pagelist_high_fraction
- stat_interval
- stat_refresh
- numa_stat
- swappiness
- unprivileged_userfaultfd
- user_reserve_kbytes
- vfs_cache_pressure
- watermark_boost_factor
- watermark_scale_factor
- zone_reclaim_mode
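Each file above maps to a `vm.*` sysctl key and can be set persistently via
sysctl configuration. A hypothetical fragment (the values are illustrative,
not recommendations):

```
# /etc/sysctl.d/90-vm-example.conf (illustrative values only)
vm.swappiness = 60
vm.dirty_background_ratio = 10
vm.dirty_ratio = 20
vm.overcommit_memory = 0
```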
compaction_proactiveness
========================

This tunable takes a value in the range [0, 100], with a default of 20.
It determines how aggressively compaction is done in the background.
Writing a non-zero value to this tunable immediately triggers proactive
compaction.

Note that compaction has a non-trivial system-wide impact, as pages
belonging to different processes are moved around, which can cause
latency spikes in unsuspecting applications.
dirty_expire_centisecs
======================

This tunable is used to define when dirty data is old enough to be eligible
for writeout by the kernel flusher threads. It is expressed in 100'ths
of a second. Data which has been dirty in-memory for longer than this
interval will be written out the next time a flusher thread wakes up.
dirtytime_expire_seconds
========================

A dirtied lazytime inode eventually gets pushed out to disk. This tunable is
used to define when a dirty inode is old enough to be eligible for writeback,
and is expressed in seconds.
dirty_writeback_centisecs
=========================

The kernel flusher threads periodically wake up and write old dirty data
out to disk. This tunable expresses the interval between those wakeups, in
100'ths of a second.
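Because these writeback knobs are expressed in hundredths of a second,
converting to seconds is a division by 100. A quick sketch using the
common default values (illustrative, and subject to distribution tuning):

```shell
# Common defaults for the writeback tunables, in centisecs (illustrative).
dirty_writeback_centisecs=500    # flusher wakeup interval
dirty_expire_centisecs=3000      # age at which dirty data becomes eligible

# Convert to seconds: one centisec is 1/100 s.
echo "flusher wakes every $((dirty_writeback_centisecs / 100)) s"
echo "data expires after $((dirty_expire_centisecs / 100)) s"
```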
drop_caches
===========

This is a non-destructive operation and will not free any dirty objects.
To increase the number of objects freed by this operation, the user may run
`sync` prior to writing to /proc/sys/vm/drop_caches.
extfrag_threshold

This parameter affects whether the kernel will compact memory or direct
reclaim to satisfy a high-order allocation. The extfrag/extfrag_index file in
debugfs shows the fragmentation index for each order in each zone in the
system. Values tending towards 0 imply allocations would fail due to lack
of memory, values towards 1000 imply failures are due to fragmentation and -1
implies that the allocation will succeed as long as watermarks are met.
highmem_is_dirtyable
====================

Changing the value to non-zero would allow more memory to be dirtied
and thus allow writers to use their
storage more effectively. Note this also comes with a risk of premature
OOM killer invocation.
laptop_mode
===========

laptop_mode is a knob that controls "laptop mode". All the things that are
controlled by this knob are discussed in
Documentation/admin-guide/laptops/laptop-mode.rst.
legacy_va_layout
================

If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
will use the legacy (2.4) layout for all processes.
lowmem_reserve_ratio
====================

The `lowmem_reserve_ratio` tunable determines how aggressive the kernel is
in defending these lower zones. The current protection amounts are shown
in /proc/zoneinfo like the following (this is an example from an x86-64 box).
The protection of zone[i] against allocations that could also fall back to
zone[j] is calculated as::

	zone[i]->protection[j]
	  = (total managed_pages from zone[i+1] to zone[j] on the node)
	    / lowmem_reserve_ratio[i]

The minimum ratio value is 1 (1/1 -> 100%). A value less than 1 completely
disables protection of the pages.
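The protection calculation can be sketched numerically. The zone sizes and
ratio below are hypothetical (loosely modeled on a small x86-64 box), purely
to show the arithmetic:

```shell
# Hypothetical managed page counts per zone (illustrative only).
dma_pages=3977
dma32_pages=748272
normal_pages=4926174
ratio_dma=256        # assumed lowmem_reserve_ratio entry for ZONE_DMA

# Protection of ZONE_DMA against allocations that could also use
# DMA32 or Normal: sum of the higher zones' pages divided by the ratio.
protection=$(( (dma32_pages + normal_pages) / ratio_dma ))
echo "ZONE_DMA protection: $protection pages"
```
A larger ratio therefore means a smaller protection value, i.e. a less
aggressive defence of the low zone.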
max_map_count
=============

This file contains the maximum number of memory map areas a process
may have. Memory map areas are used as a side-effect of calling
malloc, directly by mmap, mprotect, and madvise, and also when loading
shared libraries.
memory_failure_early_kill
=========================

When a hardware memory error is detected on a page for which the kernel has
no other up-to-date copy of the data, it will kill early to prevent any data
corruption.
min_unmapped_ratio
==================

This threshold is compared
against all file-backed unmapped pages including swapcache pages and tmpfs
files.
mmap_rnd_bits
=============

This value can be changed after boot using the
/proc/sys/vm/mmap_rnd_bits tunable.
mmap_rnd_compat_bits
====================

This value can be changed after boot using the
/proc/sys/vm/mmap_rnd_compat_bits tunable.
See Documentation/admin-guide/mm/hugetlbpage.rst
nr_hugepages_mempolicy
======================

Change the size of the hugepage pool at run-time on a specific
set of NUMA nodes.

See Documentation/admin-guide/mm/hugetlbpage.rst
See Documentation/admin-guide/mm/hugetlbpage.rst
nr_trim_pages
=============

This value adjusts the excess page trimming behaviour of power-of-2 aligned
NOMMU mmap allocations.

See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
numa_zonelist_order
===================

In the non-NUMA case, a zonelist for GFP_KERNEL is ordered as follows::

	ZONE_NORMAL -> ZONE_DMA

In the NUMA case, assuming a 2-node system, two orderings are possible for
Node(0)'s GFP_KERNEL zonelist::

	(A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
	(B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA

Type (A) offers the best locality for Node(0), but increases the possibility
of out-of-memory (OOM) in ZONE_DMA, because ZONE_DMA tends to be small.

On 32-bit, the Normal zone needs to be preserved for allocations accessible
by the kernel, so "zone" order will be selected.

On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
order will be selected.
oom_dump_tasks
==============

Enables a system-wide task dump (excluding kernel threads) to be produced
when the kernel performs an OOM-killing, and includes such information as
pid, uid, tgid, vm size, rss and oom_score_adj for each task.

If this is set to non-zero, this information is shown whenever the
OOM killer actually kills a memory-hogging task.
oom_kill_allocating_task
========================

This enables or disables killing the OOM-triggering task in
out-of-memory situations.

If this is set to zero, the OOM killer will scan through the entire
tasklist and select a rogue memory-hogging task that frees up a large amount of
memory when killed.

If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition. This avoids the expensive
tasklist scan.
overcommit_memory
=================

Heuristic overcommit allows overcommit to reduce swap usage for
programs that malloc() huge amounts of memory "just-in-case"
but don't use much of it.

See Documentation/mm/overcommit-accounting.rst and
mm/util.c::__vm_enough_memory() for more information.
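When strict accounting is in effect (overcommit_memory=2), the commit limit
is derived from RAM scaled by overcommit_ratio plus swap. A sketch of that
arithmetic with hypothetical sizes (the numbers are assumptions, not defaults
from any real machine):

```shell
# Hypothetical sizes in kB (illustrative).
mem_total_kb=16384000    # ~16 GB RAM
swap_total_kb=2097152    # 2 GB swap
overcommit_ratio=50      # default vm.overcommit_ratio

# CommitLimit = MemTotal * overcommit_ratio / 100 + SwapTotal
commit_limit_kb=$(( mem_total_kb * overcommit_ratio / 100 + swap_total_kb ))
echo "CommitLimit: $commit_limit_kb kB"
```
The kernel reports the resulting limit as `CommitLimit` in /proc/meminfo.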
page-cluster
============

page-cluster controls the number of pages up to which consecutive pages
are read in from swap in a single attempt. The consecutivity is not in terms
of virtual or physical addresses,
but consecutive on swap space - that means they were swapped out together.

It is a logarithmic value - setting it to zero means "1 page", setting
it to 1 means "2 pages", setting it to 2 means "4 pages", etc.

There may be some small benefit in tuning this if your workload is
swap-intensive.
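Because the value is logarithmic, the number of pages read per swap-in is a
power of two. A small sketch of the mapping:

```shell
# pages per swap-in attempt = 2^page_cluster
for pc in 0 1 2 3; do
  echo "page-cluster=$pc -> $((1 << pc)) page(s)"
done
```
With the usual default of 3, up to eight consecutive swap pages are read in
at once.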
panic_on_oom
============

This enables or disables the panic on out-of-memory feature.

If this is set to 1, the kernel panics when out-of-memory happens. However,
if a process limits its allocations to certain nodes using mempolicy or
cpusets, and those nodes become memory-exhausted, one process
may be killed by the OOM killer. No panic occurs in this case, because
other nodes' memory may still be free.

If this is set to 2, the kernel panics compulsorily even in the cases
mentioned above. Even if the OOM happens under a memory cgroup, the whole
system panics.
percpu_pagelist_high_fraction
=============================

This is the fraction of pages in each zone that can be stored on
per-cpu page lists. It is an upper boundary that is divided depending
on the number of online CPUs. This entry only changes the value of hot per-cpu
page lists. A user can specify a number like 100 to allocate 1/100th of
each zone between per-cpu lists.

The batch value of each per-cpu page list remains the same regardless of
the value of the high fraction, so allocation latencies are unaffected.

The initial value is zero. The kernel uses this value to set the pcp->high
mark based on the low watermark for the zone and the number of local
online CPUs.
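A simplified model of how the fraction bounds the per-CPU high mark: divide
the zone's pages by the fraction, then split the result across the online
CPUs. The zone size, fraction and CPU count below are hypothetical, and the
real kernel calculation has additional details:

```shell
# Illustrative inputs (assumptions, not real system values).
zone_pages=4926174    # managed pages in the zone
fraction=8            # percpu_pagelist_high_fraction (minimum allowed value)
online_cpus=16

# Upper bound for pages on one CPU's list under this simplified model.
per_cpu_high=$(( zone_pages / fraction / online_cpus ))
echo "approximate pcp high mark: $per_cpu_high pages per CPU"
```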
stat_refresh
============

Any read or write (by root only) flushes all the per-cpu vm statistics
into their global totals, for more accurate reports.

As a side-effect, it also checks for negative totals (elsewhere reported
as 0) and "fails" with EINVAL if any are found, with a warning in dmesg.
swappiness
==========

At 100, the kernel reclaims page
cache and swap-backed pages equally; lower values signify more
expensive swap IO. The best value requires
experimentation and will also be workload-dependent.

For in-memory swap, like zram or zswap, as well as hybrid setups that
have swap on faster devices than the filesystem, values beyond 100 can be
considered.

At 0, the kernel will not initiate swap until the amount of free and
file-backed pages is less than the high watermark in a zone.
unprivileged_userfaultfd
========================

See Documentation/admin-guide/mm/userfaultfd.rst.
vfs_cache_pressure
==================

At vfs_cache_pressure=0, the kernel never reclaims dentries and inodes due
to memory pressure, and this can easily
lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
may have a negative performance impact.
watermark_boost_factor
======================

This factor controls the level of reclaim when memory is being fragmented.
The intent is to reduce future compaction work and to
increase the success rate of future high-order allocations such as SLUB
allocations, THP and hugetlbfs pages.

If the boost is smaller than a pageblock, then a pageblock's
worth of pages will be reclaimed (e.g. 2MB on 64-bit x86). A boost factor
of 0 will disable the feature.