1==========================
2Memory Resource Controller
3==========================
4
5NOTE:
      This document is hopelessly outdated and needs a complete rewrite.
      It still contains useful information, so we are keeping it here, but
      make sure to check the current code if you need a deeper
      understanding.
10
11NOTE:
12      The Memory Resource Controller has generically been referred to as the
13      memory controller in this document. Do not confuse memory controller
14      used here with the memory controller that is used in hardware.
15
(For editors) In this document:
      When we mention a cgroup (cgroupfs's directory) with the memory
      controller, we call it a "memory cgroup". In git logs and source
      code, patch titles and function names tend to use the abbreviation
      "memcg"; in this document, we avoid it.
21
22Benefits and Purpose of the memory controller
23=============================================
24
25The memory controller isolates the memory behaviour of a group of tasks
26from the rest of the system. The article on LWN [12] mentions some probable
27uses of the memory controller. The memory controller can be used to
28
29a. Isolate an application or a group of applications
30   Memory-hungry applications can be isolated and limited to a smaller
31   amount of memory.
32b. Create a cgroup with a limited amount of memory; this can be used
33   as a good alternative to booting with mem=XXXX.
34c. Virtualization solutions can control the amount of memory they want
35   to assign to a virtual machine instance.
36d. A CD/DVD burner could control the amount of memory used by the
37   rest of the system to ensure that burning does not fail due to lack
38   of available memory.
39e. There are several other use cases; find one or use the controller just
40   for fun (to learn and hack on the VM subsystem).
41
Current Status: linux-2.6.34-mmotm (development version of April 2010)
43
Features:

 - accounting of anonymous pages, file caches, and swap caches, and limiting
   their usage.
 - pages are linked to per-memcg LRU lists exclusively; there is no global LRU.
 - optionally, memory+swap usage can be accounted and limited.
 - hierarchical accounting
 - soft limits
 - moving (recharging) accounted charges when a task migrates is selectable.
 - usage threshold notifier
 - memory pressure notifier
 - oom-killer disable knob and oom-notifier
 - the root cgroup has no limit controls.

 Kernel memory support is a work in progress, and the current version provides
 basic functionality. (See Section 2.7.)
59
60Brief summary of control files.
61
62==================================== ==========================================
 tasks				     attach a task (thread) and show the
				     list of threads
65 cgroup.procs			     show list of processes
66 cgroup.event_control		     an interface for event_fd()
67 memory.usage_in_bytes		     show current usage for memory
68				     (See 5.5 for details)
69 memory.memsw.usage_in_bytes	     show current usage for memory+Swap
70				     (See 5.5 for details)
71 memory.limit_in_bytes		     set/show limit of memory usage
72 memory.memsw.limit_in_bytes	     set/show limit of memory+Swap usage
 memory.failcnt			     show the number of times memory usage
				     hit the limit
 memory.memsw.failcnt		     show the number of times memory+swap
				     usage hit the limit
75 memory.max_usage_in_bytes	     show max memory usage recorded
76 memory.memsw.max_usage_in_bytes     show max memory+Swap usage recorded
77 memory.soft_limit_in_bytes	     set/show soft limit of memory usage
78 memory.stat			     show various statistics
79 memory.use_hierarchy		     set/show hierarchical account enabled
80                                     This knob is deprecated and shouldn't be
81                                     used.
82 memory.force_empty		     trigger forced page reclaim
83 memory.pressure_level		     set memory pressure notifications
84 memory.swappiness		     set/show swappiness parameter of vmscan
85				     (See sysctl's vm.swappiness)
86 memory.move_charge_at_immigrate     set/show controls of moving charges
87 memory.oom_control		     set/show oom controls.
 memory.numa_stat		     show memory usage per NUMA node
90 memory.kmem.limit_in_bytes          set/show hard limit for kernel memory
91                                     This knob is deprecated and shouldn't be
92                                     used. It is planned that this be removed in
93                                     the foreseeable future.
94 memory.kmem.usage_in_bytes          show current kernel memory allocation
 memory.kmem.failcnt                 show the number of times kernel memory
				     usage hit the limit
97 memory.kmem.max_usage_in_bytes      show max kernel memory usage recorded
98
99 memory.kmem.tcp.limit_in_bytes      set/show hard limit for tcp buf memory
100 memory.kmem.tcp.usage_in_bytes      show current tcp buf memory allocation
 memory.kmem.tcp.failcnt             show the number of times tcp buf memory
				     usage hit the limit
103 memory.kmem.tcp.max_usage_in_bytes  show max tcp buf memory usage recorded
104==================================== ==========================================
105
1061. History
107==========
108
109The memory controller has a long history. A request for comments for the memory
110controller was posted by Balbir Singh [1]. At the time the RFC was posted
111there were several implementations for memory control. The goal of the
112RFC was to build consensus and agreement for the minimal features required
for memory control. The first RSS controller was posted by Balbir Singh [2]
114in Feb 2007. Pavel Emelianov [3][4][5] has since posted three versions of the
115RSS controller. At OLS, at the resource management BoF, everyone suggested
116that we handle both page cache and RSS together. Another request was raised
117to allow user space handling of OOM. The current memory controller is
118at version 6; it combines both mapped (RSS) and unmapped Page
119Cache Control [11].
120
1212. Memory Control
122=================
123
124Memory is a unique resource in the sense that it is present in a limited
125amount. If a task requires a lot of CPU processing, the task can spread
126its processing over a period of hours, days, months or years, but with
127memory, the same physical memory needs to be reused to accomplish the task.
128
129The memory controller implementation has been divided into phases. These
130are:
131
1321. Memory controller
1332. mlock(2) controller
1343. Kernel user memory accounting and slab control
1354. user mappings length controller
136
137The memory controller is the first controller developed.
138
1392.1. Design
140-----------
141
142The core of the design is a counter called the page_counter. The
143page_counter tracks the current memory usage and limit of the group of
144processes associated with the controller. Each cgroup has a memory controller
145specific data structure (mem_cgroup) associated with it.
146
1472.2. Accounting
148---------------
149
150::
151
152		+--------------------+
153		|  mem_cgroup        |
154		|  (page_counter)    |
155		+--------------------+
156		 /            ^      \
157		/             |       \
158           +---------------+  |        +---------------+
159           | mm_struct     |  |....    | mm_struct     |
160           |               |  |        |               |
161           +---------------+  |        +---------------+
162                              |
163                              + --------------+
164                                              |
165           +---------------+           +------+--------+
166           | page          +---------->  page_cgroup|
167           |               |           |               |
168           +---------------+           +---------------+
169
170             (Figure 1: Hierarchy of Accounting)
171
172
Figure 1 shows the important aspects of the controller:
174
1751. Accounting happens per cgroup
1762. Each mm_struct knows about which cgroup it belongs to
1773. Each page has a pointer to the page_cgroup, which in turn knows the
178   cgroup it belongs to
179
The accounting is done as follows: mem_cgroup_charge_common() is invoked to
set up the necessary data structures and check if the cgroup that is being
charged is over its limit. If it is, then reclaim is invoked on the cgroup.
More details can be found in the reclaim section of this document.
If everything goes well, a per-page metadata structure called page_cgroup is
updated. page_cgroup has its own LRU on the cgroup.
(*) The page_cgroup structure is allocated at boot/memory-hotplug time.
187
1882.2.1 Accounting details
189------------------------
190
191All mapped anon pages (RSS) and cache pages (Page Cache) are accounted.
192Some pages which are never reclaimable and will not be on the LRU
193are not accounted. We just account pages under usual VM management.
194
RSS pages are accounted at page fault time unless they've already been
accounted for earlier. A file page will be accounted for as Page Cache when
it's inserted into the inode (radix-tree). While it's mapped into the page
tables of processes, duplicate accounting is carefully avoided.
199
An RSS page is unaccounted when it's fully unmapped. A PageCache page is
unaccounted when it's removed from the radix-tree. Even if RSS pages are fully
unmapped (by kswapd), they may exist as SwapCache in the system until they
are really freed. Such SwapCaches are also accounted.
A swapped-in page is accounted for after it is added to the swapcache.
205
Note: The kernel does swap-in readahead and may read multiple swap entries
at once. Since a page's memcg is recorded into swap regardless of whether
memsw is enabled, the page will be accounted for after swap-in.
209
210At page migration, accounting information is kept.
211
Note: we only account pages on the LRU because our purpose is to control the
amount of used pages; pages not on the LRU tend to be out of the VM's control.
214
2152.3 Shared Page Accounting
216--------------------------
217
218Shared pages are accounted on the basis of the first touch approach. The
219cgroup that first touches a page is accounted for the page. The principle
220behind this approach is that a cgroup that aggressively uses a shared
221page will eventually get charged for it (once it is uncharged from
222the cgroup that brought it in -- this will happen on memory pressure).
223
224But see section 8.2: when moving a task to another cgroup, its pages may
225be recharged to the new cgroup, if move_charge_at_immigrate has been chosen.
226
2272.4 Swap Extension
228--------------------------------------
229
Swap usage is always recorded for each cgroup. The Swap Extension allows you
to read and limit it.

When CONFIG_SWAP is enabled, the following files are added:
234
235 - memory.memsw.usage_in_bytes.
236 - memory.memsw.limit_in_bytes.
237
238memsw means memory+swap. Usage of memory+swap is limited by
239memsw.limit_in_bytes.
240
Example: Assume a system with 4G of swap. A task which (by mistake) allocates
6G of memory under a 2G memory limit will use up all of the swap.
In this case, setting memsw.limit_in_bytes=3G will prevent the excessive use
of swap. By using the memsw limit, you can avoid a system-wide OOM which can
be caused by swap shortage.
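
The setup for this example might look as follows (a sketch, reusing the
cgroup created in section 3.2)::

	# echo 2G > /sys/fs/cgroup/memory/0/memory.limit_in_bytes
	# echo 3G > /sys/fs/cgroup/memory/0/memory.memsw.limit_in_bytes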
246
247**why 'memory+swap' rather than swap**
248
The global LRU (kswapd) can swap out arbitrary pages. Swapping a page out
moves its charge from memory to swap; there is no change in the usage of
memory+swap. In other words, when we want to limit the usage of swap without
affecting the global LRU, a memory+swap limit is better than just limiting
swap from an OS point of view.
254
255**What happens when a cgroup hits memory.memsw.limit_in_bytes**
256
When a cgroup hits memory.memsw.limit_in_bytes, it is useless to do swap-out
from this cgroup, since swapping does not reduce memory+swap usage. So the
cgroup's reclaim routine does not swap out; instead, file caches are dropped.
But, as mentioned above, the global LRU can still swap out memory from the
cgroup to keep the system's memory management state sane. This cannot be
forbidden by the cgroup.
262
2632.5 Reclaim
264-----------
265
Each cgroup maintains a per-cgroup LRU which has the same structure as the
global VM's. When a cgroup goes over its limit, we first try
to reclaim memory from the cgroup so as to make space for the new
pages that the cgroup has touched. If the reclaim is unsuccessful,
an OOM routine is invoked to select and kill the bulkiest task in the
cgroup. (See 10. OOM Control below.)
272
273The reclaim algorithm has not been modified for cgroups, except that
274pages that are selected for reclaiming come from the per-cgroup LRU
275list.
276
277NOTE:
278  Reclaim does not work for the root cgroup, since we cannot set any
279  limits on the root cgroup.
280
281Note2:
282  When panic_on_oom is set to "2", the whole system will panic.
283
When an OOM event notifier is registered, the event will be delivered.
(See the oom_control section.)
286
2872.6 Locking
288-----------
289
290Lock order is as follows:
291
292  Page lock (PG_locked bit of page->flags)
293    mm->page_table_lock or split pte_lock
294      lock_page_memcg (memcg->move_lock)
295        mapping->i_pages lock
296          lruvec->lru_lock.
297
298Per-node-per-memcgroup LRU (cgroup's private LRU) is guarded by
299lruvec->lru_lock; PG_lru bit of page->flags is cleared before
300isolating a page from its LRU under lruvec->lru_lock.
301
3022.7 Kernel Memory Extension (CONFIG_MEMCG_KMEM)
303-----------------------------------------------
304
With the Kernel memory extension, the Memory Controller is able to limit
the amount of kernel memory used by the system. Kernel memory is fundamentally
different from user memory, since it can't be swapped out, which makes it
possible to DoS the system by consuming too much of this precious resource.
309
310Kernel memory accounting is enabled for all memory cgroups by default. But
311it can be disabled system-wide by passing cgroup.memory=nokmem to the kernel
312at boot time. In this case, kernel memory will not be accounted at all.
313
Kernel memory limits are not imposed for the root cgroup. Usage for the root
cgroup may or may not be accounted. The memory used is accumulated into
memory.kmem.usage_in_bytes, or in a separate counter when it makes sense
(currently only for tcp).
318
319The main "kmem" counter is fed into the main counter, so kmem charges will
320also be visible from the user counter.
321
322Currently no soft limit is implemented for kernel memory. It is future work
323to trigger slab reclaim when those limits are reached.
324
3252.7.1 Current Kernel Memory resources accounted
326-----------------------------------------------
327
328stack pages:
329  every process consumes some stack pages. By accounting into
330  kernel memory, we prevent new processes from being created when the kernel
331  memory usage is too high.
332
slab pages:
  pages allocated by the SLAB or SLUB allocator are tracked. A copy
  of each kmem_cache is created the first time the cache is touched
  from inside the memcg. The creation is done lazily, so some objects can still
  be skipped while the cache is being created. All objects in a slab page should
  belong to the same memcg. This only fails to hold when a task is migrated to a
  different memcg during the page allocation by the cache.
340
sockets memory pressure:
  some socket protocols have memory pressure
  thresholds. The Memory Controller allows them to be controlled individually
  per cgroup, instead of globally.
345
346tcp memory pressure:
347  sockets memory pressure for the tcp protocol.
348
3492.7.2 Common use cases
350----------------------
351
352Because the "kmem" counter is fed to the main user counter, kernel memory can
353never be limited completely independently of user memory. Say "U" is the user
354limit, and "K" the kernel limit. There are three possible ways limits can be
355set:
356
357U != 0, K = unlimited:
358    This is the standard memcg limitation mechanism already present before kmem
359    accounting. Kernel memory is completely ignored.
360
361U != 0, K < U:
362    Kernel memory is a subset of the user memory. This setup is useful in
363    deployments where the total amount of memory per-cgroup is overcommitted.
364    Overcommitting kernel memory limits is definitely not recommended, since the
365    box can still run out of non-reclaimable memory.
    In this case, the admin could set up K so that the sum of all groups is
    never greater than the total memory, and freely set U at the cost of their
    QoS.
369
370WARNING:
371    In the current implementation, memory reclaim will NOT be
372    triggered for a cgroup when it hits K while staying below U, which makes
373    this setup impractical.
374
U != 0, K >= U:
    Kmem charges will also be fed to the user counter, and reclaim will be
    triggered for the cgroup for both kinds of memory. This setup gives the
    admin a unified view of memory, and it is also useful for people who just
    want to track kernel memory usage.
380
3813. User Interface
382=================
383
3843.0. Configuration
385------------------
386
387a. Enable CONFIG_CGROUPS
388b. Enable CONFIG_MEMCG
389c. Enable CONFIG_MEMCG_SWAP (to use swap extension)
390d. Enable CONFIG_MEMCG_KMEM (to use kmem extension)
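
For reference, the resulting kernel configuration fragment might look like
this (a sketch; option names and dependencies can vary between kernel
versions)::

	CONFIG_CGROUPS=y
	CONFIG_MEMCG=y
	CONFIG_MEMCG_SWAP=y
	CONFIG_MEMCG_KMEM=y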
391
3923.1. Prepare the cgroups (see cgroups.txt, Why are cgroups needed?)
393-------------------------------------------------------------------
394
395::
396
397	# mount -t tmpfs none /sys/fs/cgroup
398	# mkdir /sys/fs/cgroup/memory
399	# mount -t cgroup none /sys/fs/cgroup/memory -o memory
400
4013.2. Make the new group and move bash into it::
402
403	# mkdir /sys/fs/cgroup/memory/0
404	# echo $$ > /sys/fs/cgroup/memory/0/tasks
405
406Since now we're in the 0 cgroup, we can alter the memory limit::
407
408	# echo 4M > /sys/fs/cgroup/memory/0/memory.limit_in_bytes
409
NOTE:
  We can use a suffix (k, K, m, M, g or G) to indicate values in kilo,
  mega or gigabytes. (Here, kilo, mega and giga actually mean kibibytes,
  mebibytes and gibibytes.)
414
NOTE:
  We can write "-1" to reset ``*.limit_in_bytes`` (unlimited).
417
418NOTE:
419  We cannot set limits on the root cgroup any more.
420
421::
422
423  # cat /sys/fs/cgroup/memory/0/memory.limit_in_bytes
424  4194304
425
426We can check the usage::
427
428  # cat /sys/fs/cgroup/memory/0/memory.usage_in_bytes
429  1216512
430
A successful write to this file does not guarantee that the limit was set
to the value written into the file. This can be due to a
number of factors, such as rounding up to page boundaries or the total
availability of memory on the system. The user is required to re-read
this file after a write to see the value committed by the kernel::
436
437  # echo 1 > memory.limit_in_bytes
438  # cat memory.limit_in_bytes
439  4096
440
441The memory.failcnt field gives the number of times that the cgroup limit was
442exceeded.
443
The memory.stat file gives accounting information. Currently, the numbers of
cache, RSS, and active/inactive pages are shown.
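
For example (illustrative values; see 5.2 for the full list of fields)::

  # cat /sys/fs/cgroup/memory/0/memory.stat
  cache 892928
  rss 323584
  ...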
446
4474. Testing
448==========
449
450For testing features and implementation, see memcg_test.txt.
451
Performance testing is also important. To see the memory controller's pure
overhead, testing on tmpfs will give you good numbers for its small overheads.
Example: do a kernel make on tmpfs.

Page-fault scalability is also important. When measuring a parallel
page fault test, a multi-process test may be better than a multi-threaded
test because the latter has noise from shared objects/status.

But the above two test extreme situations.
Running your usual tests under the memory controller is always helpful.
462
4634.1 Troubleshooting
464-------------------
465
Sometimes a user might find that an application under a cgroup is
terminated by the OOM killer. There are several causes for this:

1. The cgroup limit is too low (just too low to do anything useful)
2. The user is using anonymous memory and swap is turned off or too low

A sync followed by writing to /proc/sys/vm/drop_caches will help get rid of
some of the pages cached in the cgroup (page cache pages).
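
For example::

  # sync
  # echo 1 > /proc/sys/vm/drop_caches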
474
To understand what is happening, disabling the OOM killer as described in
"10. OOM Control" (below) and observing the behaviour will be helpful.
477
4784.2 Task migration
479------------------
480
When a task migrates from one cgroup to another, its charge is not
carried forward by default. The pages allocated from the original cgroup still
remain charged to it; the charge is dropped when the page is freed or
reclaimed.
485
486You can move charges of a task along with task migration.
487See 8. "Move charges at task migration"
488
4894.3 Removing a cgroup
490---------------------
491
492A cgroup can be removed by rmdir, but as discussed in sections 4.1 and 4.2, a
493cgroup might have some charge associated with it, even though all
494tasks have migrated away from it. (because we charge against pages, not
495against tasks.)
496
We move the stats to the parent; the charges are not moved, they are simply
uncharged from the child.

Charges recorded in swap information are not updated at removal of a cgroup.
The recorded information is discarded, and a cgroup which uses the swap
(swapcache) will be charged as its new owner.
503
5045. Misc. interfaces
505===================
506
5075.1 force_empty
508---------------
  The memory.force_empty interface is provided to make a cgroup's memory usage
  empty. When anything is written to this file::

    # echo 0 > memory.force_empty

  the cgroup will be reclaimed and as many pages as possible will be reclaimed.

  The typical use case for this interface is before calling rmdir().
  Though rmdir() offlines the memcg, the memcg may still be present due to
  charged file caches. Some out-of-use page caches may remain charged until
  memory pressure occurs. If you want to avoid that, force_empty is useful.
520
  Also, note that when memory.kmem.limit_in_bytes is set, the charges due to
522  kernel pages will still be seen. This is not considered a failure and the
523  write will still return success. In this case, it is expected that
524  memory.kmem.usage_in_bytes == memory.usage_in_bytes.
525
5265.2 stat file
527-------------
528
The memory.stat file includes the following statistics:
530
531per-memory cgroup local status
532^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
533
534=============== ===============================================================
535cache		# of bytes of page cache memory.
536rss		# of bytes of anonymous and swap cache memory (includes
537		transparent hugepages).
538rss_huge	# of bytes of anonymous transparent hugepages.
539mapped_file	# of bytes of mapped file (includes tmpfs/shmem)
540pgpgin		# of charging events to the memory cgroup. The charging
541		event happens each time a page is accounted as either mapped
542		anon page(RSS) or cache page(Page Cache) to the cgroup.
543pgpgout		# of uncharging events to the memory cgroup. The uncharging
544		event happens each time a page is unaccounted from the cgroup.
545swap		# of bytes of swap usage
546dirty		# of bytes that are waiting to get written back to the disk.
547writeback	# of bytes of file/anon cache that are queued for syncing to
548		disk.
549inactive_anon	# of bytes of anonymous and swap cache memory on inactive
550		LRU list.
551active_anon	# of bytes of anonymous and swap cache memory on active
552		LRU list.
553inactive_file	# of bytes of file-backed memory on inactive LRU list.
554active_file	# of bytes of file-backed memory on active LRU list.
555unevictable	# of bytes of memory that cannot be reclaimed (mlocked etc).
556=============== ===============================================================
557
558status considering hierarchy (see memory.use_hierarchy settings)
559^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
560
561========================= ===================================================
hierarchical_memory_limit # of bytes of memory limit with regard to the
			  hierarchy under which the memory cgroup is.
hierarchical_memsw_limit  # of bytes of memory+swap limit with regard to
			  the hierarchy under which the memory cgroup is.
566
total_<counter>		  # hierarchical version of <counter>, which in
			  addition to the cgroup's own value includes the
			  sum of all hierarchical children's values of
			  <counter>, e.g. total_cache
571========================= ===================================================
572
573The following additional stats are dependent on CONFIG_DEBUG_VM
574^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
575
576========================= ========================================
577recent_rotated_anon	  VM internal parameter. (see mm/vmscan.c)
578recent_rotated_file	  VM internal parameter. (see mm/vmscan.c)
579recent_scanned_anon	  VM internal parameter. (see mm/vmscan.c)
580recent_scanned_file	  VM internal parameter. (see mm/vmscan.c)
581========================= ========================================
582
Memo:
	recent_rotated means the recent frequency of LRU rotation.
	recent_scanned means the recent # of scans of the LRU.
	These are shown for easier debugging; please see the code for the
	precise meanings.
587
588Note:
589	Only anonymous and swap cache memory is listed as part of 'rss' stat.
590	This should not be confused with the true 'resident set size' or the
591	amount of physical memory used by the cgroup.
592
	'rss + mapped_file' will give you the resident set size of the cgroup.
594
	(Note: file and shmem may be shared among other cgroups. In that case,
	mapped_file is accounted only when the memory cgroup is the owner of
	the page cache.)
598
5995.3 swappiness
600--------------
601
602Overrides /proc/sys/vm/swappiness for the particular group. The tunable
603in the root cgroup corresponds to the global swappiness setting.
604
Please note that, unlike during global reclaim, limit reclaim
enforces that a swappiness of 0 really prevents any swapping, even if
swap storage is available. This might invoke the memcg OOM killer
if there are no file pages to reclaim.
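
For example, to prevent limit reclaim in the group created in section 3.2
from swapping at all (at the risk of a memcg OOM when no file pages are left
to reclaim)::

	# echo 0 > /sys/fs/cgroup/memory/0/memory.swappiness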
609
6105.4 failcnt
611-----------
612
613A memory cgroup provides memory.failcnt and memory.memsw.failcnt files.
614This failcnt(== failure count) shows the number of times that a usage counter
615hit its limit. When a memory cgroup hits a limit, failcnt increases and
616memory under it will be reclaimed.
617
You can reset failcnt by writing 0 to the failcnt file::
619
620	# echo 0 > .../memory.failcnt
621
6225.5 usage_in_bytes
623------------------
624
For efficiency, like other kernel components, the memory cgroup uses some
optimizations to avoid unnecessary cacheline false sharing. usage_in_bytes is
affected by this method and doesn't show the 'exact' value of memory (and
swap) usage; it is a fuzzy value for efficient access. (Of course, it is
synchronized when necessary.) If you want to know the more exact memory
usage, you should use the RSS+CACHE(+SWAP) value in memory.stat (see 5.2).
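
As a sketch, that value can be computed from memory.stat with standard shell
tools (field names as listed in 5.2)::

  # awk '$1=="rss" || $1=="cache" || $1=="swap" {sum+=$2} END {print sum}' memory.stat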
631
6325.6 numa_stat
633-------------
634
This is similar to numa_maps but operates on a per-memcg basis.  This is
useful for providing visibility into the numa locality information within
a memcg, since the pages are allowed to be allocated from any physical
node.  One of the use cases is evaluating application performance by
combining this information with the application's CPU allocation.
640
641Each memcg's numa_stat file includes "total", "file", "anon" and "unevictable"
642per-node page counts including "hierarchical_<counter>" which sums up all
643hierarchical children's values in addition to the memcg's own value.
644
645The output format of memory.numa_stat is::
646
647  total=<total pages> N0=<node 0 pages> N1=<node 1 pages> ...
648  file=<total file pages> N0=<node 0 pages> N1=<node 1 pages> ...
649  anon=<total anon pages> N0=<node 0 pages> N1=<node 1 pages> ...
  unevictable=<total unevictable pages> N0=<node 0 pages> N1=<node 1 pages> ...
651  hierarchical_<counter>=<counter pages> N0=<node 0 pages> N1=<node 1 pages> ...
652
653The "total" count is sum of file + anon + unevictable.
654
6556. Hierarchy support
656====================
657
658The memory controller supports a deep hierarchy and hierarchical accounting.
659The hierarchy is created by creating the appropriate cgroups in the
660cgroup filesystem. Consider for example, the following cgroup filesystem
661hierarchy::
662
663	       root
664	     /  |   \
665            /	|    \
666	   a	b     c
667		      | \
668		      |  \
669		      d   e
670
In the diagram above, with hierarchical accounting enabled, all memory
usage of e is accounted to its ancestors up to the root (i.e., c and root).
If one of the ancestors goes over its limit, the reclaim algorithm reclaims
from the tasks in the ancestor and the children of the ancestor.
675
6766.1 Hierarchical accounting and reclaim
677---------------------------------------
678
679Hierarchical accounting is enabled by default. Disabling the hierarchical
680accounting is deprecated. An attempt to do it will result in a failure
681and a warning printed to dmesg.
682
For compatibility reasons, writing 1 to memory.use_hierarchy will always succeed::
684
685	# echo 1 > memory.use_hierarchy
686
6877. Soft limits
688==============
689
690Soft limits allow for greater sharing of memory. The idea behind soft limits
691is to allow control groups to use as much of the memory as needed, provided
692
693a. There is no memory contention
694b. They do not exceed their hard limit
695
696When the system detects memory contention or low memory, control groups
697are pushed back to their soft limits. If the soft limit of each control
698group is very high, they are pushed back as much as possible to make
699sure that one control group does not starve the others of memory.
700
Please note that soft limits are a best-effort feature; they come with
no guarantees, but they do their best to make sure that when memory is
heavily contended for, memory is allocated based on the soft limit
hints/setup. Currently, soft-limit-based reclaim is set up such that
it gets invoked from balance_pgdat (kswapd).
706
7077.1 Interface
708-------------
709
Soft limits can be set up by using the following commands (in this example we
assume a soft limit of 256 MiB)::
712
713	# echo 256M > memory.soft_limit_in_bytes
714
715If we want to change this to 1G, we can at any time use::
716
717	# echo 1G > memory.soft_limit_in_bytes
718
NOTE1:
       Soft limits take effect over a long period of time, since they involve
       reclaiming memory for balancing between memory cgroups.
NOTE2:
       It is recommended always to set the soft limit below the hard limit,
       otherwise the hard limit will take precedence.
725
7268. Move charges at task migration
727=================================
728
Users can move charges associated with a task along with task migration, that
is, uncharge the task's pages from the old cgroup and charge them to the new
cgroup. This feature is not supported in !CONFIG_MMU environments because of
the lack of page tables.
733
7348.1 Interface
735-------------
736
737This feature is disabled by default. It can be enabled (and disabled again) by
738writing to memory.move_charge_at_immigrate of the destination cgroup.
739
740If you want to enable it::
741
742	# echo (some positive value) > memory.move_charge_at_immigrate
743
Note:
      Each bit of move_charge_at_immigrate has its own meaning about what type
      of charges should be moved. See 8.2 for details.
Note:
      Charges are moved only when you move mm->owner, in other words,
      the leader of a thread group.
Note:
      If we cannot find enough space for the task in the destination cgroup, we
      try to make space by reclaiming memory. Task migration may fail if we
      cannot make enough space.
Note:
      Moving charges can take several seconds if there are many charges to move.
756
And if you want to disable it again::
758
759	# echo 0 > memory.move_charge_at_immigrate
760
7618.2 Type of charges which can be moved
762--------------------------------------
763
764Each bit in move_charge_at_immigrate has its own meaning about what type of
765charges should be moved. But in any case, it must be noted that an account of
766a page or a swap can be moved only when it is charged to the task's current
767(old) memory cgroup.
768
769+---+--------------------------------------------------------------------------+
770|bit| what type of charges would be moved ?                                    |
771+===+==========================================================================+
772| 0 | A charge of an anonymous page (or swap of it) used by the target task.   |
773|   | You must enable Swap Extension (see 2.4) to enable move of swap charges. |
774+---+--------------------------------------------------------------------------+
775| 1 | A charge of file pages (normal file, tmpfs file (e.g. ipc shared memory) |
776|   | and swaps of tmpfs file) mmapped by the target task. Unlike the case of  |
777|   | anonymous pages, file pages (and swaps) in the range mmapped by the task |
778|   | will be moved even if the task hasn't done page fault, i.e. they might   |
779|   | not be the task's "RSS", but other task's "RSS" that maps the same file. |
780|   | And mapcount of the page is ignored (the page can be moved even if       |
781|   | page_mapcount(page) > 1). You must enable Swap Extension (see 2.4) to    |
782|   | enable move of swap charges.                                             |
783+---+--------------------------------------------------------------------------+
784
7858.3 TODO
786--------
787
- All moving charge operations are done under cgroup_mutex. It's not good
  behavior to hold the mutex too long, so we may need some trick.
790
7919. Memory thresholds
792====================
793
The memory cgroup implements memory thresholds using the cgroups notification
API (see cgroups.txt). It allows you to register multiple memory and memsw
thresholds and to get notifications when a threshold is crossed.
797
798To register a threshold, an application must:
799
800- create an eventfd using eventfd(2);
801- open memory.usage_in_bytes or memory.memsw.usage_in_bytes;
802- write string like "<event_fd> <fd of memory.usage_in_bytes> <threshold>" to
803  cgroup.event_control.
804
The application will be notified through eventfd when memory usage crosses
the threshold in either direction.

This is applicable to both root and non-root cgroups.
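
As a sketch, the cgroup_event_listener helper from the kernel tools (also
used in the test script in section 11) performs this registration for you;
here it is assumed to take the usage file and a threshold as arguments::

	# cgroup_event_listener memory.usage_in_bytes 5M &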
809
81010. OOM Control
811===============
812
The memory.oom_control file is for OOM notification and other controls.

The memory cgroup implements an OOM notifier using the cgroup notification
API (see cgroups.txt). It allows you to register multiple OOM notification
deliveries and to get a notification when an OOM happens.
818
819To register a notifier, an application must:
820
821 - create an eventfd using eventfd(2)
822 - open memory.oom_control file
823 - write string like "<event_fd> <fd of memory.oom_control>" to
824   cgroup.event_control
825
826The application will be notified through eventfd when OOM happens.
827OOM notification doesn't work for the root cgroup.
828
You can disable the OOM-killer by writing "1" to the memory.oom_control file,
as::

	# echo 1 > memory.oom_control
832
If the OOM-killer is disabled, tasks under the cgroup will hang/sleep
on the memory cgroup's OOM-waitqueue when they request accountable memory.

To let them run again, you have to relax the memory cgroup's OOM status by

	* enlarging the limit or reducing usage.

To reduce usage,

	* kill some tasks.
	* move some tasks to another group with account migration.
	* remove some files (on tmpfs?)

Then, the stopped tasks will work again.
847
Reading the file shows the current OOM status:

	- oom_kill_disable 0 or 1
	  (if 1, the oom-killer is disabled)
	- under_oom	   0 or 1
	  (if 1, the memory cgroup is under OOM, and tasks may be stopped.)
	- oom_kill	   integer counter
	  The number of processes belonging to this cgroup killed by any
	  kind of OOM killer.
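
For example, after disabling the OOM-killer as above, reading the file might
show (illustrative values)::

	# cat memory.oom_control
	oom_kill_disable 1
	under_oom 0
	oom_kill 0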
857
85811. Memory Pressure
859===================
860
The pressure level notifications can be used to monitor the memory
allocation cost; based on the pressure, applications can implement
different strategies for managing their memory resources. The pressure
levels are defined as follows:
865
866The "low" level means that the system is reclaiming memory for new
867allocations. Monitoring this reclaiming activity might be useful for
868maintaining cache level. Upon notification, the program (typically
869"Activity Manager") might analyze vmstat and act in advance (i.e.
870prematurely shutdown unimportant services).
871
872The "medium" level means that the system is experiencing medium memory
873pressure, the system might be making swap, paging out active file caches,
874etc. Upon this event applications may decide to further analyze
875vmstat/zoneinfo/memcg or internal memory usage statistics and free any
876resources that can be easily reconstructed or re-read from a disk.
877
878The "critical" level means that the system is actively thrashing, it is
879about to out of memory (OOM) or even the in-kernel OOM killer is on its
880way to trigger. Applications should do whatever they can to help the
881system. It might be too late to consult with vmstat or any other
882statistics, so it's advisable to take an immediate action.
883
By default, events are propagated upward until the event is handled, i.e. the
events are not pass-through. For example, suppose you have three cgroups:
A->B->C. Now you set up an event listener on cgroups A, B and C, and group C
experiences some pressure. In this situation, only group C will receive the
notification, i.e. groups A and B will not receive it. This is done to avoid
excessive "broadcasting" of messages, which disturbs the system and which is
especially bad if we are low on memory or thrashing. Group B will receive a
notification only if there are no event listeners for group C.
892
893There are three optional modes that specify different propagation behavior:
894
895 - "default": this is the default behavior specified above. This mode is the
896   same as omitting the optional mode parameter, preserved by backwards
897   compatibility.
898
899 - "hierarchy": events always propagate up to the root, similar to the default
900   behavior, except that propagation continues regardless of whether there are
901   event listeners at each level, with the "hierarchy" mode. In the above
902   example, groups A, B, and C will receive notification of memory pressure.
903
904 - "local": events are pass-through, i.e. they only receive notifications when
905   memory pressure is experienced in the memcg for which the notification is
906   registered. In the above example, group C will receive notification if
907   registered for "local" notification and the group experiences memory
908   pressure. However, group B will never receive notification, regardless if
909   there is an event listener for group C or not, if group B is registered for
910   local notification.
911
The level and event notification mode ("hierarchy" or "local", if necessary)
are specified by a comma-delimited string, e.g. "low,hierarchy" specifies
hierarchical, pass-through notification for all ancestor memcgs. Notification
that is the default, non-pass-through behavior does not specify a mode.
"medium,local" specifies pass-through notification for the medium level.
917
The file memory.pressure_level is only used to set up an eventfd. To
register a notification, an application must:
920
921- create an eventfd using eventfd(2);
922- open memory.pressure_level;
923- write string as "<event_fd> <fd of memory.pressure_level> <level[,mode]>"
924  to cgroup.event_control.
925
The application will be notified through eventfd when memory pressure is at
the specified level (or higher). Read/write operations on
memory.pressure_level are not implemented.
929
930Test:
931
   Here is a small script example that makes a new cgroup, sets up a
   memory limit, sets up a notification in the cgroup and then makes the
   cgroup experience critical pressure::
935
936	# cd /sys/fs/cgroup/memory/
937	# mkdir foo
938	# cd foo
939	# cgroup_event_listener memory.pressure_level low,hierarchy &
940	# echo 8000000 > memory.limit_in_bytes
941	# echo 8000000 > memory.memsw.limit_in_bytes
942	# echo $$ > tasks
943	# dd if=/dev/zero | read x
944
945   (Expect a bunch of notifications, and eventually, the oom-killer will
946   trigger.)
947
94812. TODO
949========
950
9511. Make per-cgroup scanner reclaim not-shared pages first
9522. Teach controller to account for shared-pages
9533. Start reclamation in the background when the limit is
954   not yet hit but the usage is getting closer
955
956Summary
957=======
958
Overall, the memory controller has been a stable controller and has been
commented on and discussed quite extensively in the community.
961
962References
963==========
964
9651. Singh, Balbir. RFC: Memory Controller, http://lwn.net/Articles/206697/
9662. Singh, Balbir. Memory Controller (RSS Control),
967   http://lwn.net/Articles/222762/
9683. Emelianov, Pavel. Resource controllers based on process cgroups
969   https://lore.kernel.org/r/45ED7DEC.7010403@sw.ru
9704. Emelianov, Pavel. RSS controller based on process cgroups (v2)
971   https://lore.kernel.org/r/461A3010.90403@sw.ru
9725. Emelianov, Pavel. RSS controller based on process cgroups (v3)
973   https://lore.kernel.org/r/465D9739.8070209@openvz.org
9746. Menage, Paul. Control Groups v10, http://lwn.net/Articles/236032/
9757. Vaidyanathan, Srinivasan, Control Groups: Pagecache accounting and control
976   subsystem (v3), http://lwn.net/Articles/235534/
9778. Singh, Balbir. RSS controller v2 test results (lmbench),
978   https://lore.kernel.org/r/464C95D4.7070806@linux.vnet.ibm.com
9799. Singh, Balbir. RSS controller v2 AIM9 results
980   https://lore.kernel.org/r/464D267A.50107@linux.vnet.ibm.com
98110. Singh, Balbir. Memory controller v6 test results,
982    https://lore.kernel.org/r/20070819094658.654.84837.sendpatchset@balbir-laptop
98311. Singh, Balbir. Memory controller introduction (v6),
984    https://lore.kernel.org/r/20070817084228.26003.12568.sendpatchset@balbir-laptop
98512. Corbet, Jonathan, Controlling memory use in cgroups,
986    http://lwn.net/Articles/243795/
987