
Searched full:reclaim (Results 1 – 25 of 481) sorted by relevance


/openbmc/linux/tools/testing/selftests/cgroup/
memcg_protection.m
5 % This script simulates reclaim protection behavior on a single level of memcg
10 % reclaim) and then the reclaim starts, all memory is reclaimable, i.e. treated
11 % same. It simulates only non-low reclaim and assumes all memory.min = 0.
24 % Reclaim parameters
27 % Minimal reclaim amount (GB)
30 % Reclaim coefficient (think as 0.5^sc->priority)
72 % nothing to reclaim, reached equilibrium
79 % XXX here I do parallel reclaim of all siblings
80 % in reality reclaim is serialized and each sibling recalculates own residual
test_memcontrol.c
282 * Then we try to reclaim from A/B/C using memory.reclaim until its
285 * (a) We ignore the protection of the reclaim target memcg.
672 * Reclaim from @memcg until usage reaches @goal by writing to
673 * memory.reclaim.
678 * This function assumes that writing to memory.reclaim is the only
680 * reclaim).
682 * This function makes sure memory.reclaim is sane. It will return
683 * false if memory.reclaim's error codes do not make sense, even if
698 /* Did memory.reclaim return 0 incorrectly? */ in reclaim_until()
704 err = cg_write(memcg, "memory.reclaim", buf); in reclaim_until()
[all …]
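For context, the reclaim_until() helper above drives the cgroup v2 memory.reclaim file. A minimal userspace sketch of the same interface follows; the cgroup path is a placeholder for your own hierarchy, and the error-code behavior noted in the comment is what the test calls "sane":

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Placeholder path; point it at an existing cgroup v2 group. */
        const char *path = "/sys/fs/cgroup/mygroup/memory.reclaim";
        const char *amount = "67108864";   /* request 64 MiB of reclaim */
        int fd = open(path, O_WRONLY);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* The write fails (e.g. EAGAIN) if the kernel could not reclaim
         * the full amount; success means the goal was reached. */
        if (write(fd, amount, strlen(amount)) < 0)
            perror("write");
        close(fd);
        return 0;
    }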
/openbmc/linux/drivers/md/
dm-zoned-reclaim.c
12 #define DM_MSG_PREFIX "zoned reclaim"
33 * Reclaim state flags.
45 * Percentage of unmapped (free) random zones below which reclaim starts
51 * Percentage of unmapped (free) random zones above which reclaim will
338 * Reclaim an empty zone.
362 * Find a candidate zone for reclaim and process it.
376 DMDEBUG("(%s/%u): No zone found to reclaim", in dmz_do_reclaim()
390 * Reclaim the random data zone by moving its in dmz_do_reclaim()
412 * Reclaim the data zone by merging it into the in dmz_do_reclaim()
422 DMDEBUG("(%s/%u): reclaim zone %u interrupted", in dmz_do_reclaim()
[all …]
/openbmc/linux/Documentation/core-api/
memory-allocation.rst
43 direct reclaim may be triggered under memory pressure; the calling
46 handler, use ``GFP_NOWAIT``. This flag prevents direct reclaim and
74 prevent recursion deadlocks caused by direct memory reclaim calling
87 GFP flags and reclaim behavior
89 Memory allocations may trigger direct or background reclaim and it is
95 doesn't kick the background reclaim. Should be used carefully because it
97 reclaim.
101 context but can wake kswapd to reclaim memory if the zone is below
111 * ``GFP_KERNEL`` - both background and direct reclaim are allowed and the
119 reclaim (one round of reclaim in this implementation). The OOM killer
[all …]
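The flags contrasted above differ mainly in whether an allocation may enter direct reclaim and sleep. A hedged kernel-style sketch (illustrative only; fetch_buf() is a hypothetical function, not kernel API):

    #include <linux/slab.h>
    #include <linux/types.h>

    /*
     * GFP_NOWAIT never enters direct reclaim (it may only wake kswapd),
     * so it cannot sleep and is usable in atomic context, but it fails
     * easily under pressure. GFP_KERNEL allows both kswapd and direct
     * reclaim and may block, so it is for process context only.
     */
    static void *fetch_buf(size_t len, bool atomic_ctx)
    {
        if (atomic_ctx)
            return kmalloc(len, GFP_NOWAIT);  /* no direct reclaim, no sleep */

        return kmalloc(len, GFP_KERNEL);      /* may reclaim and sleep */
    }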
gfp_mask-from-fs-io.rst
15 memory reclaim calling back into the FS or IO paths and blocking on
25 of GFP_NOFS/GFP_NOIO can lead to memory over-reclaim or other memory
26 reclaim issues.
44 any critical section with respect to the reclaim is started - e.g.
45 lock shared with the reclaim context or when a transaction context
46 nesting would be possible via reclaim. The restore function should be
48 explanation what is the reclaim context for easier maintenance.
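The document describes the memalloc_nofs_save()/memalloc_nofs_restore() scope API. A minimal sketch of the pattern it prescribes (fs_enter_critical() is a hypothetical caller):

    #include <linux/sched/mm.h>

    /*
     * Every allocation between save and restore implicitly behaves as
     * GFP_NOFS, so direct reclaim cannot re-enter the filesystem while
     * locks shared with the reclaim context are held.
     */
    static void fs_enter_critical(void)
    {
        unsigned int nofs_flags;

        nofs_flags = memalloc_nofs_save();  /* reclaim context starts here */
        /* ... take locks shared with reclaim, perform allocations ... */
        memalloc_nofs_restore(nofs_flags);
    }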
/openbmc/linux/include/linux/
gfp_types.h
140 * DOC: Reclaim modifiers
142 * Reclaim modifiers
153 * %__GFP_DIRECT_RECLAIM indicates that the caller may enter direct reclaim.
158 * the low watermark is reached and have it reclaim pages until the high
160 * options are available and the reclaim is likely to disrupt the system. The
162 * reclaim/compaction may cause indirect stalls.
164 * %__GFP_RECLAIM is shorthand to allow/forbid both direct and kswapd reclaim.
176 * memory direct reclaim to get some memory under memory pressure (thus
182 * %__GFP_RETRY_MAYFAIL: The VM implementation will retry memory reclaim
212 #define __GFP_DIRECT_RECLAIM ((__force gfp_t)___GFP_DIRECT_RECLAIM) /* Caller can reclaim */
[all …]
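As a sketch of composing these reclaim modifiers: retry reclaim harder for a contiguous buffer while tolerating failure quietly, then fall back to vmalloc(). This is similar in spirit to what kvmalloc() automates; alloc_big_table() is a hypothetical function:

    #include <linux/slab.h>
    #include <linux/vmalloc.h>

    static void *alloc_big_table(size_t len)
    {
        /* __GFP_RETRY_MAYFAIL: retry reclaim, but the allocation may
         * still fail; __GFP_NOWARN: skip the failure warning since we
         * have a fallback. */
        void *p = kmalloc(len, GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);

        return p ? p : vmalloc(len);
    }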
shrinker.h
9 * This struct is used to pass information from page reclaim to the shrinkers.
22 * How many objects scan_objects should scan and try to reclaim.
58 * attempts to call the @scan_objects will be made from the current reclaim
69 long batch; /* reclaim batch size, 0 = default */
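A hedged sketch of a shrinker for a private cache, using the fields shown above. The demo_cache_*() helpers are hypothetical stand-ins for real bookkeeping, and registration is shown for the register_shrinker() API of this kernel vintage (newer trees use shrinker_alloc()/shrinker_register()):

    #include <linux/shrinker.h>

    /* Hypothetical cache bookkeeping helpers, not a real kernel API. */
    unsigned long demo_cache_count(void);
    unsigned long demo_cache_trim(unsigned long nr);

    static unsigned long demo_count(struct shrinker *s,
                                    struct shrink_control *sc)
    {
        return demo_cache_count();  /* freeable objects; 0 = nothing to do */
    }

    static unsigned long demo_scan(struct shrinker *s,
                                   struct shrink_control *sc)
    {
        /* Free up to sc->nr_to_scan objects; return the number freed,
         * or SHRINK_STOP to abort this reclaim pass. */
        return demo_cache_trim(sc->nr_to_scan);
    }

    static struct shrinker demo_shrinker = {
        .count_objects = demo_count,
        .scan_objects  = demo_scan,
        .seeks         = DEFAULT_SEEKS,
        .batch         = 0,           /* 0 = default reclaim batch size */
    };

    /* register_shrinker(&demo_shrinker, "demo-cache"); */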
mmzone.h
42 * coalesce naturally under reasonable reclaim pressure and those which
137 NR_ZONE_LRU_BASE, /* Used only for compaction and reclaim retry */
192 NR_VMSCAN_IMMEDIATE, /* Prioritise for reclaim when writeback ends */
195 NR_THROTTLED_WRITTEN, /* NR_WRITTEN while reclaim throttled */
303 * 1. LRUVEC_CGROUP_CONGESTED is set by cgroup-level reclaim.
304 * It can be cleared by cgroup reclaim or kswapd.
305 * 2. LRUVEC_NODE_CONGESTED is set by kswapd node-level reclaim.
309 * reclaim, but not vice versa. This only applies to the root cgroup.
310 * The goal is to prevent cgroup reclaim on the root cgroup (e.g.
311 * memory.reclaim) to unthrottle an unbalanced node (that was throttled
[all …]
/openbmc/linux/Documentation/mm/
multigen_lru.rst
7 page reclaim and improves performance under memory pressure. Page
8 reclaim decides the kernel's caching policy and ability to overcommit
110 eviction. They form a closed-loop system, i.e., the page reclaim.
174 ignored when the current memcg is under reclaim. Similarly, page table
175 walkers will ignore pages from nodes other than the one under reclaim.
187 can incur the highest CPU cost in the reclaim path.
228 global reclaim, which is critical to system-wide memory overcommit in
229 data centers. Note that memcg LRU only applies to global reclaim.
241 In terms of global reclaim, it has two distinct features:
245 2. Eventual fairness, which allows direct reclaim to bail out at will
[all …]
physical_memory.rst
271 Reclaim control
280 Workqueues used to synchronize memory reclaim tasks
286 Number of pages written while reclaim is throttled waiting for writeback.
289 Controls the order kswapd tries to reclaim
295 Number of runs kswapd was unable to reclaim any pages
307 Flags controlling reclaim behavior.
/openbmc/linux/Documentation/admin-guide/device-mapper/
dm-zoned.rst
27 internally for storing metadata and performing reclaim operations.
108 situation, a reclaim process regularly scans used conventional zones and
109 tries to reclaim the least recently used zones by copying the valid
128 (for both incoming BIO processing and reclaim process) and all dirty
184 Normally the reclaim process will be started once there are less than 50
185 percent free random zones. In order to start the reclaim process manually
191 dmsetup message /dev/dm-X 0 reclaim
193 will start the reclaim process and random zones will be moved to sequential
/openbmc/linux/mm/
vmscan.c
75 /* How many pages shrink_list() should reclaim */
86 * primary target of this reclaim invocation.
96 /* Can active folios be deactivated as part of reclaim? */
109 /* Can folios be swapped as part of reclaim? */
112 /* Proactive reclaim invoked by userspace through memory.reclaim */
146 /* The highest zone to isolate folios for reclaim from */
432 /* Returns true for reclaim through cgroup limits or cgroup interfaces. */
439 * Returns true for reclaim on the root cgroup. This is true for direct
440 * allocator reclaim and reclaim through cgroup interfaces on the root cgroup.
521 * flush_reclaim_state(): add pages reclaimed outside of LRU-based reclaim to
[all …]
swap.c
183 * safe side, underestimate, let page reclaim fix it, rather in lru_add_fn()
241 * immediate reclaim. If it still appears to be reclaimable, move it
280 * 1) The pinned lruvec in reclaim, or in lru_note_cost()
534 * inactive list to speed up its reclaim. It is moved to the
537 * effective than the single-page writeout from reclaim.
540 * could be reclaimed asap using the reclaim flag.
543 * 2. active, dirty/writeback folio -> inactive, head, reclaim
545 * 4. inactive, dirty/writeback folio -> inactive, head, reclaim
551 * than the single-page writeout from reclaim.
571 * Setting the reclaim flag could race with in lru_deactivate_file_fn()
[all …]
workingset.c
25 * the head of the inactive list and page reclaim scans pages from the
27 * are promoted to the active list, to protect them from reclaim,
34 * reclaim <- | inactive | <-+-- demotion | active | <--+
164 * actively used cache from reclaim. The cache is NOT transitioning to
352 * to the in-memory dimensions. This function allows reclaim and LRU
375 * @target_memcg: the cgroup that is causing the reclaim
469 * unconditionally with *every* reclaim invocation for the in workingset_test_recent()
530 * during folio reclaim is being determined. in workingset_refault()
595 * track shadow nodes and reclaim them when they grow way past the
657 * each, this will reclaim shadow entries when they consume in count_shadow_nodes()
[all …]
vmpressure.c
226 * This function should be called from the vmscan reclaim path to account
232 * notified of the entire subtree's reclaim efficiency.
234 * If @tree is not set, reclaim efficiency is recorded for @memcg, and
248 * The in-kernel users only care about the reclaim efficiency in vmpressure()
265 * Indirect reclaim (kswapd) sets sc->gfp_mask to GFP_KERNEL, so in vmpressure()
330 * This function should be called from the reclaim path every time when
/openbmc/linux/drivers/gpu/drm/amd/amdgpu/
amdgpu_mes.h
401 * A bit more detail about why to set no-FS reclaim with MES lock:
418 * notifiers can be called in reclaim-FS context. That's where the
420 * memory pressure. While we are running in reclaim-FS context, we must
421 * not trigger another memory reclaim operation because that would
422 * recursively reenter the reclaim code and cause a deadlock. The
428 * Thread A: takes and holds reservation lock | triggers reclaim-FS |
433 * triggering a reclaim-FS operation itself.
441 * As a result, make sure no reclaim-FS happens while holding this lock anywhere
442 * to prevent deadlocks when an MMU notifier runs in reclaim-FS context.
/openbmc/linux/Documentation/ABI/testing/
sysfs-kernel-mm-numa
9 Description: Enable/disable demoting pages during reclaim
11 Page migration during reclaim is intended for systems
16 Allowing page migration during reclaim enables these
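For reference, the knob documented by this ABI file lives at /sys/kernel/mm/numa/demotion_enabled. A minimal userspace sketch that enables demotion (requires root):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/sys/kernel/mm/numa/demotion_enabled", O_WRONLY);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, "1", 1) != 1)   /* "1" enables, "0" disables */
            perror("write");
        close(fd);
        return 0;
    }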
/openbmc/linux/Documentation/trace/postprocess/
trace-vmscan-postprocess.pl
3 # page reclaim. It makes an attempt to extract some high-level information on
325 # Record how long direct reclaim took this time
482 printf("Reclaim latencies expressed as order-latency_in_ms\n") if !$opt_ignorepid;
638 print "Direct reclaim pages scanned: $total_direct_nr_scanned\n";
639 print "Direct reclaim file pages scanned: $total_direct_nr_file_scanned\n";
640 print "Direct reclaim anon pages scanned: $total_direct_nr_anon_scanned\n";
641 print "Direct reclaim pages reclaimed: $total_direct_nr_reclaimed\n";
642 print "Direct reclaim file pages reclaimed: $total_direct_nr_file_reclaimed\n";
643 print "Direct reclaim anon pages reclaimed: $total_direct_nr_anon_reclaimed\n";
644 print "Direct reclaim write file sync I/O: $total_direct_writepage_file_sync\n";
[all …]
/openbmc/linux/fs/xfs/
xfs_icache.c
185 * Queue background inode reclaim work if there are reclaimable inodes and there
186 * isn't reclaim work already scheduled or in progress.
273 * Reclaim can signal (with a null agino) that it cleared its own tag in xfs_perag_clear_inode_tag()
350 * the actual reclaim workers from stomping over us while we recycle in xfs_iget_recycle()
365 * trouble. Try to re-add it to the reclaim list. in xfs_iget_recycle()
806 * Grab the inode for reclaim exclusively.
813 * avoid inodes that are no longer reclaim candidates.
817 * ensured that we are able to reclaim this inode and the world can see that we
818 * are going to reclaim it.
832 /* not a reclaim candidate. */ in xfs_reclaim_igrab()
[all …]
/openbmc/linux/Documentation/admin-guide/mm/
multigen_lru.rst
7 page reclaim and improves performance under memory pressure. Page
8 reclaim decides the kernel's caching policy and ability to overcommit
138 Proactive reclaim
140 Proactive reclaim induces page reclaim when there is no memory
142 comes in, the job scheduler wants to proactively reclaim cold pages on
concepts.rst
154 Reclaim chapter
179 repurposing them is called (surprise!) `reclaim`. Linux can reclaim
190 will trigger `direct reclaim`. In this case allocation is stalled
208 Like reclaim, the compaction may happen asynchronously in the ``kcompactd``
215 kernel will be unable to reclaim enough memory to continue to operate. In
/openbmc/linux/Documentation/admin-guide/sysctl/
vm.rst
274 reclaim to satisfy a high-order allocation. The extfrag/extfrag_index file in
487 A percentage of the total pages in each zone. On Zone reclaim
491 systems that rarely perform global reclaim.
495 Note that slab reclaim is triggered in a per zone / node fashion.
505 This is a percentage of the total pages in each zone. Zone reclaim will
954 This percentage value controls the tendency of the kernel to reclaim
958 reclaim dentries and inodes at a "fair" rate with respect to pagecache and
959 swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer
961 never reclaim dentries and inodes due to memory pressure and this can easily
963 causes the kernel to prefer to reclaim dentries and inodes.
[all …]
/openbmc/linux/Documentation/admin-guide/cgroup-v1/
memory.rst
85 memory.force_empty trigger forced page reclaim
190 charged is over its limit. If it is, then reclaim is invoked on the cgroup.
191 More details can be found in the reclaim section of this document.
274 2.5 Reclaim
279 to reclaim memory from the cgroup so as to make space for the new
280 pages that the cgroup has touched. If the reclaim is unsuccessful,
284 The reclaim algorithm has not been modified for cgroups, except that
289 Reclaim does not work for the root cgroup, since we cannot set any
336 to trigger slab reclaim when those limits are reached.
384 In the current implementation, memory reclaim will NOT be triggered for
[all …]
/openbmc/linux/Documentation/accounting/
delay-accounting.rst
15 d) memory reclaim
52 delay seen for cpu, sync block I/O, swapin, memory reclaim, thrash page
116 RECLAIM count delay total delay average
/openbmc/linux/include/linux/sched/
sd_flags.h
56 * SHARED_CHILD: Set from the base domain up to the NUMA reclaim level.
64 * SHARED_CHILD: Set from the base domain up to the NUMA reclaim level.
80 * SHARED_CHILD: Set from the base domain up to the NUMA reclaim level.
