.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2.  It describes all userland-visible aspects
of cgroup including core and specific controller behaviors.  All
future changes must be reflected in this document.  Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. HugeTLB
       5-8-1. HugeTLB Interface Files
     5-9. Misc
       5-9-1. Miscellaneous cgroup Interface Files
       5-9-2. Migration and Ownership
     5-10. Others
       5-10-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized.  The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers".  When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes.  A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup.  All threads of a process belong to the
same cgroup.  On creation, all processes are put in the cgroup that
the parent process belongs to at the time.  A process can be migrated
to another cgroup.  Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup.  All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups constituting the inclusive
sub-hierarchy of the cgroup.  When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further.  The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy.  The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

cgroup2 filesystem has the magic number 0x63677270 ("cgrp").  All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies.  This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy.  Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use.  It is recommended to decide
the hierarchies and controller associations before starting to use the
controllers after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.
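
For example, to keep every controller out of v1 and thus available to
the v2 hierarchy, boot with the following kernel command-line
parameter (individual controller names may be listed instead of
"all")::

  cgroup_no_v1=all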

cgroup v2 currently supports the following mount options.

  nsdelegate
	Consider cgroup namespaces as delegation boundaries.  This
	option is system wide and can only be set on mount or modified
	through remount from the init namespace.  The mount option is
	ignored on non-init namespace mounts.  Please refer to the
	Delegation section for details.

  memory_localevents
        Only populate memory.events with data for the current cgroup,
        and not any subtrees.  This is legacy behaviour; the default
        behaviour without this option is to include subtree counts.
        This option is system wide and can only be set on mount or
        modified through remount from the init namespace.  The mount
        option is ignored on non-init namespace mounts.

  memory_recursiveprot
        Recursively apply memory.min and memory.low protection to
        entire subtrees, without requiring explicit downward
        propagation into leaf cgroups.  This allows protecting entire
        subtrees from one another, while retaining free competition
        within those subtrees.  This should have been the default
        behavior but is a mount option to avoid regressing setups
        relying on the original semantics (e.g. specifying bogusly
        high 'bypass' protection values at higher tree levels).
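
For example, a minimal sketch of mounting the hierarchy with two of
the options above, combined in the usual comma-separated form::

  # mount -t cgroup2 -o nsdelegate,memory_recursiveprot none $MOUNT_POINT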


Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure.  Each cgroup has a read-writable interface file
"cgroup.procs".  When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line.  The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file.  Only one process can be migrated
on a single write(2) call.  If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.
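
For example, assuming the shell is in the hierarchy's mount point and
a process with PID 842 exists, the following migrates it (and thereby
all of its threads) into the child cgroup created above::

  # echo 842 > $CGROUP_NAME/cgroup.procs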

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation.  After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory.  Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership.  If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy.  The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes.  By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread.  The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup.  The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy.  The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded.  Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file.  The
operation is single direction::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again.  To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children.  The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state.  Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains.  C can't be used until it is turned into a
threaded cgroup.  "cgroup.type" file will report "domain (invalid)" in
these cases.  Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup.  Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs".  While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree.  When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants.  All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups.  Each
threaded controller defines how such competitions are handled.
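
Putting the above together, a minimal sketch of setting up a threaded
subtree; the cgroup names "pool" and "pool/worker" and the $PID/$TID
values are illustrative::

  # mkdir pool pool/worker
  # echo threaded > pool/worker/cgroup.type    # "pool" becomes the threaded domain
  # echo $PID > pool/cgroup.procs              # move the whole process into the domain
  # echo $TID > pool/worker/cgroup.threads     # then spread individual threads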


[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it.  Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1.  poll and [id]notify
events are triggered when the value changes.  This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited.  The populated state updates and
notifications are recursive.  Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's would be 0.
After the one process in C exits, B and C's "populated" fields would
flip to "0" and file modified events will be generated on the
"cgroup.events" files of both cgroups.
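
For example, reading "cgroup.events" in B above would show (the
"frozen" field is described with the core interface files below)::

  # cat B/cgroup.events
  populated 1
  frozen 0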


Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default.  Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled.  When multiple operations are specified as above, either they
all succeed or all fail.  If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy.  The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B.  As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups.  In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D.  Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D.  This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent.  This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file.  A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own.  In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves.  This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction.  Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers.  How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control".  This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup.  To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
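
A minimal sketch of that transfer for a populated cgroup; note that
processes forked while the loop runs would be missed, so a real agent
should stop or re-check the source cgroup::

  # mkdir leaf
  # for pid in $(cat cgroup.procs); do echo $pid > leaf/cgroup.procs; done
  # echo "+memory" > cgroup.subtree_control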


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways.  First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them.  For the first method, this is
achieved by not granting access to these files.  For the second, the
kernel rejects writes to all files other than "cgroup.procs" and
"cgroup.subtree_control" on a namespace root from inside the
namespace.

The end results are equivalent for both delegation types.  Once
delegated, the user can build a sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent.  The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.
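
For the first method, a sketch of the ownership changes a privileged
agent might perform; $DELEGATE and the user name "u0" are
placeholders::

  # chown u0 $DELEGATE
  # chown u0 $DELEGATE/cgroup.procs
  # chown u0 $DELEGATE/cgroup.threads
  # chown u0 $DELEGATE/cgroup.subtree_control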

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy it can't pull
in from or push out to outside the sub-hierarchy.

For an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs".  U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" file and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration.  If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process.  This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged.  A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up.  Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its child cgroups occupy the same
directory and it is possible to create child cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot.  A controller's name is composed of lowercase letters and
'_'s but never begins with an '_' so it can be used as the prefix
character for collision avoidance.  Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases.  This section
describes major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum.  As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving.  Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100.  This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.
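
A worked example of the arithmetic: if children A and B are both
active with weights 100 and 200, A receives 100/300 = 1/3 of the
resource and B receives 200/300 = 2/3; if B then goes idle, A's share
becomes 100/100, i.e. the whole resource, which is what makes the
model work-conserving.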


Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
no-op.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.


Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels.  Protections can be hard guarantees or best effort
soft boundaries.  Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
no-op.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource.  Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected.  Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

	VAL0\n
	VAL1\n
	...

  Space separated values
  (when read-only or multiple values can be written at once)

	VAL0 VAL1 ...\n

  Flat keyed

	KEY0 VAL0\n
	KEY1 VAL1\n
	...

  Nested keyed

	KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
	KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
	...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for the most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time.  For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds.  If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  a two-digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default.  The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively.  If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override.  Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key-value pairs.
  Whenever a notifiable event happens, a file modified event should be
  generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
	A read-write single value file which exists on non-root
	cgroups.

	When read, it indicates the current type of the cgroup, which
	can be one of the following values.

	- "domain" : A normal valid domain cgroup.

	- "domain threaded" : A threaded domain cgroup which is
	  serving as the root of a threaded subtree.

	- "domain invalid" : A cgroup which is in an invalid state.
	  It can't be populated or have controllers enabled.  It may
	  be allowed to become a threaded cgroup.

	- "threaded" : A threaded cgroup which is a member of a
	  threaded subtree.

	A cgroup can be turned into a threaded cgroup by writing
	"threaded" to this file.

  cgroup.procs
	A read-write new-line separated values file which exists on
	all cgroups.

	When read, it lists the PIDs of all processes which belong to
	the cgroup one-per-line.  The PIDs are not ordered and the
	same PID may show up more than once if the process got moved
	to another cgroup and then back or the PID got recycled while
	reading.

	A PID can be written to migrate the process associated with
	the PID to the cgroup.  The writer should match all of the
	following conditions.

	- It must have write access to the "cgroup.procs" file.

	- It must have write access to the "cgroup.procs" file of the
	  common ancestor of the source and destination cgroups.

	When delegating a sub-hierarchy, write access to this file
	should be granted along with the containing directory.

	In a threaded cgroup, reading this file fails with EOPNOTSUPP
	as all the processes belong to the thread root.  Writing is
	supported and moves every thread of the process to the cgroup.

  cgroup.threads
	A read-write new-line separated values file which exists on
	all cgroups.

	When read, it lists the TIDs of all threads which belong to
	the cgroup one-per-line.  The TIDs are not ordered and the
	same TID may show up more than once if the thread got moved to
	another cgroup and then back or the TID got recycled while
	reading.

	A TID can be written to migrate the thread associated with the
	TID to the cgroup.  The writer should match all of the
	following conditions.

	- It must have write access to the "cgroup.threads" file.

	- The cgroup that the thread is currently in must be in the
	  same resource domain as the destination cgroup.

	- It must have write access to the "cgroup.procs" file of the
	  common ancestor of the source and destination cgroups.

	When delegating a sub-hierarchy, write access to this file
	should be granted along with the containing directory.

  cgroup.controllers
	A read-only space separated values file which exists on all
	cgroups.

	It shows space separated list of all controllers available to
	the cgroup.  The controllers are not ordered.

  cgroup.subtree_control
	A read-write space separated values file which exists on all
	cgroups.  Starts out empty.

	When read, it shows space separated list of the controllers
	which are enabled to control resource distribution from the
	cgroup to its children.

	Space separated list of controllers prefixed with '+' or '-'
	can be written to enable or disable controllers.  A controller
	name prefixed with '+' enables the controller and '-'
	disables.  If a controller appears more than once on the list,
	the last one is effective.  When multiple enable and disable
	operations are specified, either all succeed or all fail.

  cgroup.events
	A read-only flat-keyed file which exists on non-root cgroups.
	The following entries are defined.  Unless specified
	otherwise, a value change in this file generates a file
	modified event.

	  populated
		1 if the cgroup or its descendants contains any live
		processes; otherwise, 0.
	  frozen
		1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
	A read-write single value file.  The default is "max".

	Maximum allowed number of descendant cgroups.
	If the actual number of descendants is equal to or larger,
	an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
	A read-write single value file.  The default is "max".

	Maximum allowed descent depth below the current cgroup.
	If the actual descent depth is equal to or larger,
	an attempt to create a new child cgroup will fail.

  cgroup.stat
	A read-only flat-keyed file with the following entries:

	  nr_descendants
		Total number of visible descendant cgroups.

	  nr_dying_descendants
		Total number of dying descendant cgroups.  A cgroup
		becomes dying after being deleted by a user.  The
		cgroup will remain in the dying state for some
		undefined time (which can depend on system load)
		before being completely destroyed.

		A process can't enter a dying cgroup under any
		circumstances, and a dying cgroup can't be revived.

		A dying cgroup can consume system resources not
		exceeding limits, which were active at the moment of
		cgroup deletion.

  cgroup.freeze
	A read-write single value file which exists on non-root cgroups.
	Allowed values are "0" and "1".  The default is "0".

	Writing "1" to the file causes freezing of the cgroup and all
	descendant cgroups.  This means that all belonging processes will
	be stopped and will not run until the cgroup is explicitly
	unfrozen.  Freezing of the cgroup may take some time; when this
	action is completed, the "frozen" value in the cgroup.events
	control file will be updated to "1" and the corresponding
	notification will be issued.

	A cgroup can be frozen either by its own settings, or by settings
	of any ancestor cgroups.  If any of the ancestor cgroups is
	frozen, the cgroup will remain frozen.

	Processes in the frozen cgroup can be killed by a fatal signal.
	They also can enter and leave a frozen cgroup: either by an
	explicit move by a user, or if freezing of the cgroup races with
	fork().  If a process is moved to a frozen cgroup, it stops.  If a
	process is moved out of a frozen cgroup, it resumes running.

	The frozen status of a cgroup doesn't affect any cgroup tree
	operations: it's possible to delete a frozen (and empty) cgroup,
	as well as create new sub-cgroups.
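
	For example, a sketch of freezing a cgroup and confirming the
	transition by re-reading "cgroup.events" (a management agent
	would normally wait for the notification instead)::

	  # echo 1 > cgroup.freeze
	  # cat cgroup.events
	  populated 1
	  frozen 1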

Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates distribution of CPU cycles.  This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and absolute bandwidth allocation model for
realtime scheduling policy.

In all the above models, cycles distribution is defined only on a temporal
base and it does not account for the frequency at which tasks are executed.
The (optional) utilization clamping support allows hinting the schedutil
cpufreq governor about the minimum desired frequency which should always be
provided by a CPU, as well as the maximum desired frequency, which should not
be exceeded by a CPU.

WARNING: cgroup2 doesn't yet support control of realtime processes and
the cpu controller can only be enabled when all RT processes are in
the root cgroup.  Be aware that system management software may already
have placed RT processes into nonroot cgroups during the system boot
process, and these processes may need to be moved to the root cgroup
before the cpu controller can be enabled.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
	A read-only flat-keyed file.
	This file exists whether the controller is enabled or not.

	It always reports the following three stats:

	- usage_usec
	- user_usec
	- system_usec

	and the following three when the controller is enabled:

	- nr_periods
	- nr_throttled
	- throttled_usec

  cpu.weight
	A read-write single value file which exists on non-root
	cgroups.  The default is "100".

	The weight in the range [1, 10000].

  cpu.weight.nice
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	The nice value is in the range [-20, 19].

	This interface file is an alternative interface for
	"cpu.weight" and allows reading and setting weight using the
	same values used by nice(2).  Because the range is smaller and
	granularity is coarser for the nice values, the read value is
	the closest approximation of the current weight.

  cpu.max
	A read-write two value file which exists on non-root cgroups.
	The default is "max 100000".

	The maximum bandwidth limit.  It's in the following format::

	  $MAX $PERIOD

	which indicates that the group may consume up to $MAX in each
	$PERIOD duration.  "max" for $MAX indicates no limit.  If only
	one number is written, $MAX is updated.
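
	For example, the following limits the cgroup to half of one
	CPU's worth of time by allowing 50ms of runtime in every
	100ms period::

	  # echo "50000 100000" > cpu.max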

  cpu.pressure
	A read-write nested-keyed file.

	Shows pressure stall information for CPU.  See
	:ref:`Documentation/accounting/psi.rst <psi>` for details.

  cpu.uclamp.min
        A read-write single value file which exists on non-root cgroups.
        The default is "0", i.e. no utilization boosting.

        The requested minimum utilization (protection) as a percentage
        rational number, e.g. 12.34 for 12.34%.

        This interface allows reading and setting minimum utilization clamp
        values similar to sched_setattr(2).  This minimum utilization
        value is used to clamp the task specific minimum utilization clamp.

        The requested minimum utilization (protection) is always capped by
        the current value for the maximum utilization (limit), i.e.
        `cpu.uclamp.max`.

  cpu.uclamp.max
        A read-write single value file which exists on non-root cgroups.
        The default is "max", i.e. no utilization capping.

        The requested maximum utilization (limit) as a percentage rational
        number, e.g. 98.76 for 98.76%.

        This interface allows reading and setting maximum utilization clamp
        values similar to sched_setattr(2).  This maximum utilization
        value is used to clamp the task specific maximum utilization clamp.


Memory
------

The "memory" controller regulates distribution of memory.  Memory is
stateful and implements both limit and protection models.  Due to the
intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent.  Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes.  If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
	A read-only single value file which exists on non-root
	cgroups.

	The total amount of memory currently being used by the cgroup
	and its descendants.

  memory.min
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	Hard memory protection.  If the memory usage of a cgroup
	is within its effective min boundary, the cgroup's memory
	won't be reclaimed under any conditions.  If there is no
	unprotected reclaimable memory available, the OOM killer
	is invoked.  Above the effective min boundary (or
	effective low boundary if it is higher), pages are reclaimed
	proportionally to the overage, reducing reclaim pressure for
	smaller overages.

	The effective min boundary is limited by memory.min values of
	all ancestor cgroups.  If there is memory.min overcommitment
	(child cgroup or cgroups are requiring more protected memory
	than the parent will allow), then each child cgroup will get
	the part of the parent's protection proportional to its
	actual memory usage below memory.min.

	Putting more memory than generally available under this
	protection is discouraged and may lead to constant OOMs.

	If a memory cgroup is not populated with processes,
	its memory.min is ignored.

  memory.low
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	Best-effort memory protection.  If the memory usage of a
	cgroup is within its effective low boundary, the cgroup's
	memory won't be reclaimed unless there is no reclaimable
	memory available in unprotected cgroups.
	Above the effective low boundary (or
	effective min boundary if it is higher), pages are reclaimed
	proportionally to the overage, reducing reclaim pressure for
	smaller overages.

	The effective low boundary is limited by memory.low values of
	all ancestor cgroups.  If there is memory.low overcommitment
	(child cgroup or cgroups are requiring more protected memory
	than the parent will allow), then each child cgroup will get
	the part of the parent's protection proportional to its
	actual memory usage below memory.low.

	Putting more memory than generally available under this
	protection is discouraged.

  memory.high
	A read-write single value file which exists on non-root
	cgroups.  The default is "max".

	Memory usage throttle limit.  This is the main mechanism to
	control memory usage of a cgroup.  If a cgroup's usage goes
	over the high boundary, the processes of the cgroup are
	throttled and put under heavy reclaim pressure.

	Going over the high limit never invokes the OOM killer and
	under extreme conditions the limit may be breached.

  memory.max
	A read-write single value file which exists on non-root
	cgroups.  The default is "max".

	Memory usage hard limit.  This is the final protection
	mechanism.  If a cgroup's memory usage reaches this limit and
	can't be reduced, the OOM killer is invoked in the cgroup.
	Under certain circumstances, the usage may go over the limit
	temporarily.

	In the default configuration, regular 0-order allocations
	always succeed unless the OOM killer chooses the current task
	as a victim.

	Some kinds of allocations don't invoke the OOM killer.  The
	caller could retry them differently, return to userspace as
	-ENOMEM or silently ignore the failure in cases like disk
	readahead.

	This is the ultimate protection mechanism.  As long as the
	high limit is used and monitored properly, this limit's
	utility is limited to providing the final safety net.

  memory.oom.group
	A read-write single value file which exists on non-root
	cgroups.  The default value is "0".

	Determines whether the cgroup should be treated as
	an indivisible workload by the OOM killer.  If set,
	all tasks belonging to the cgroup or to its descendants
	(if the memory cgroup is not a leaf cgroup) are killed
	together or not at all.  This can be used to avoid
	partial kills to guarantee workload integrity.

	Tasks with OOM protection (oom_score_adj set to -1000)
	are treated as an exception and are never killed.

	If the OOM killer is invoked in a cgroup, it's not going
	to kill any tasks outside of this cgroup, regardless of
	the memory.oom.group values of ancestor cgroups.

  memory.events
	A read-only flat-keyed file which exists on non-root cgroups.
	The following entries are defined.  Unless specified
	otherwise, a value change in this file generates a file
	modified event.

	Note that all fields in this file are hierarchical and the
	file modified event can be generated due to an event down the
	hierarchy.  For the local events at the cgroup level see
	memory.events.local.

	  low
		The number of times the cgroup is reclaimed due to
		high memory pressure even though its usage is under
		the low boundary.  This usually indicates that the low
		boundary is over-committed.

	  high
		The number of times processes of the cgroup are
		throttled and routed to perform direct memory reclaim
		because the high memory boundary was exceeded.  For a
		cgroup whose memory usage is capped by the high limit
		rather than global memory pressure, this event's
		occurrences are expected.

	  max
		The number of times the cgroup's memory usage was
		about to go over the max boundary.  If direct reclaim
		fails to bring it down, the cgroup goes to OOM state.

	  oom
		The number of times the cgroup's memory usage reached
		the limit and allocation was about to fail.

		This event is not raised if the OOM killer is not
		considered as an option, e.g. for failed high-order
		allocations or if the caller asked not to retry.

	  oom_kill
		The number of processes belonging to this cgroup
		killed by any kind of OOM killer.

  memory.events.local
	Similar to memory.events but the fields in the file are local
	to the cgroup i.e. not hierarchical.  The file modified event
	generated on this file reflects only the local events.

  memory.stat
	A read-only flat-keyed file which exists on non-root cgroups.

	This breaks down the cgroup's memory footprint into different
	types of memory, type-specific details, and other information
	on the state and past events of the memory management system.

	All memory amounts are in bytes.

	The entries are ordered to be human readable, and new entries
	can show up in the middle.  Don't rely on items remaining in a
	fixed position; use the keys to look up specific values!

	Entries which have no per-node counter (and thus do not show
	up in memory.numa_stat) are tagged with 'npn' (non-per-node).

	  anon
		Amount of memory used in anonymous mappings such as
		brk(), sbrk(), and mmap(MAP_ANONYMOUS)

	  file
		Amount of memory used to cache filesystem data,
		including tmpfs and shared memory.

	  kernel_stack
		Amount of memory allocated to kernel stacks.

	  pagetables
		Amount of memory allocated for page tables.

	  percpu (npn)
		Amount of memory used for storing per-cpu kernel
		data structures.

	  sock (npn)
		Amount of memory used in network transmission buffers

	  shmem
		Amount of cached filesystem data that is swap-backed,
		such as tmpfs, shm segments, shared anonymous mmap()s

	  file_mapped
		Amount of cached filesystem data mapped with mmap()

	  file_dirty
		Amount of cached filesystem data that was modified but
		not yet written back to disk

	  file_writeback
		Amount of cached filesystem data that was modified and
		is currently being written back to disk

	  swapcached
		Amount of swap cached in memory.  The swapcache is
		accounted against both memory and swap usage.

	  anon_thp
		Amount of memory used in anonymous mappings backed by
		transparent hugepages

	  file_thp
		Amount of cached filesystem data backed by transparent
		hugepages

	  shmem_thp
		Amount of shm, tmpfs, shared anonymous mmap()s backed by
		transparent hugepages

	  inactive_anon, active_anon, inactive_file, active_file, unevictable
		Amount of memory, swap-backed and filesystem-backed,
		on the internal memory management lists used by the
		page reclaim algorithm.

		As these represent internal list state (eg. shmem pages
		are on anon memory management lists), inactive_foo +
		active_foo may not be equal to the value for the foo
		counter, since the foo counter is type-based, not
		list-based.

	  slab_reclaimable
		Part of "slab" that might be reclaimed, such as
		dentries and inodes.

	  slab_unreclaimable
		Part of "slab" that cannot be reclaimed on memory
		pressure.

	  slab (npn)
		Amount of memory used for storing in-kernel data
		structures.

	  workingset_refault_anon
		Number of refaults of previously evicted anonymous pages.

	  workingset_refault_file
		Number of refaults of previously evicted file pages.

	  workingset_activate_anon
		Number of refaulted anonymous pages that were immediately
		activated.

	  workingset_activate_file
		Number of refaulted file pages that were immediately
		activated.

	  workingset_restore_anon
		Number of restored anonymous pages which have been detected
		as an active workingset before they got reclaimed.

	  workingset_restore_file
		Number of restored file pages which have been detected as
		an active workingset before they got reclaimed.

	  workingset_nodereclaim
		Number of times a shadow node has been reclaimed

	  pgfault (npn)
		Total number of page faults incurred

	  pgmajfault (npn)
		Number of major page faults incurred

	  pgrefill (npn)
		Amount of scanned pages (in an active LRU list)

	  pgscan (npn)
		Amount of scanned pages (in an inactive LRU list)

	  pgsteal (npn)
		Amount of reclaimed pages

	  pgactivate (npn)
		Amount of pages moved to the active LRU list

	  pgdeactivate (npn)
		Amount of pages moved to the inactive LRU list

	  pglazyfree (npn)
		Amount of pages postponed to be freed under memory pressure

	  pglazyfreed (npn)
		Amount of reclaimed lazyfree pages

	  thp_fault_alloc (npn)
		Number of transparent hugepages which were allocated to
		satisfy a page fault.  This counter is not present when
		CONFIG_TRANSPARENT_HUGEPAGE is not set.

	  thp_collapse_alloc (npn)
		Number of transparent hugepages which were allocated to
		allow collapsing an existing range of pages.  This counter
		is not present when CONFIG_TRANSPARENT_HUGEPAGE is not set.

  memory.numa_stat
	A read-only nested-keyed file which exists on non-root cgroups.

	This breaks down the cgroup's memory footprint into different
	types of memory, type-specific details, and other information
	per node on the state of the memory management system.

	This is useful for providing visibility into the NUMA locality
	information within a memcg since the pages are allowed to be
	allocated from any physical node.  One use case is evaluating
	application performance by combining this information with the
	application's CPU allocation.

	All memory amounts are in bytes.

	The output format of memory.numa_stat is::

	  type N0=<bytes in node 0> N1=<bytes in node 1> ...

	The entries are ordered to be human readable, and new entries
	can show up in the middle.  Don't rely on items remaining in a
	fixed position; use the keys to look up specific values!

	The entries are defined as in memory.stat.
1429
1430  memory.swap.current
1431	A read-only single value file which exists on non-root
1432	cgroups.
1433
1434	The total amount of swap currently being used by the cgroup
1435	and its descendants.
1436
1437  memory.swap.high
1438	A read-write single value file which exists on non-root
1439	cgroups.  The default is "max".
1440
1441	Swap usage throttle limit.  If a cgroup's swap usage exceeds
1442	this limit, all its further allocations will be throttled to
1443	allow userspace to implement custom out-of-memory procedures.
1444
1445	This limit marks a point of no return for the cgroup. It is NOT
1446	designed to manage the amount of swapping a workload does
1447	during regular operation. Compare to memory.swap.max, which
1448	prohibits swapping past a set amount, but lets the cgroup
1449	continue unimpeded as long as other memory can be reclaimed.
1450
1451	Healthy workloads are not expected to reach this limit.
1452
1453  memory.swap.max
1454	A read-write single value file which exists on non-root
1455	cgroups.  The default is "max".
1456
1457	Swap usage hard limit.  If a cgroup's swap usage reaches this
1458	limit, anonymous memory of the cgroup will not be swapped out.
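
	As a sketch, capping a cgroup's swap usage at 512 megabytes and
	reading the configured value back (shown in bytes)::

	  # echo 512M > memory.swap.max
	  # cat memory.swap.max
	  536870912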
1459
1460  memory.swap.events
1461	A read-only flat-keyed file which exists on non-root cgroups.
1462	The following entries are defined.  Unless specified
1463	otherwise, a value change in this file generates a file
1464	modified event.
1465
1466	  high
1467		The number of times the cgroup's swap usage was over
1468		the high threshold.
1469
1470	  max
1471		The number of times the cgroup's swap usage was about
1472		to go over the max boundary and swap allocation
1473		failed.
1474
	  fail
		The number of times swap allocation failed either
		because the system ran out of swap or because the
		cgroup's swap usage hit the max limit.
1479
	When the limit is reduced below the current usage, the existing
	swap entries are reclaimed gradually and the swap usage may stay
	higher than the limit for an extended period of time.  This
	reduces the impact on the workload and memory management.
1484
1485  memory.pressure
1486	A read-only nested-keyed file.
1487
1488	Shows pressure stall information for memory. See
1489	:ref:`Documentation/accounting/psi.rst <psi>` for details.
1490
1491
1492Usage Guidelines
1493~~~~~~~~~~~~~~~~
1494
"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
usage is a viable strategy.
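
For example, a minimal sketch over-committing two workloads on a
hypothetical 16G machine (cgroup paths and values are illustrative)::

  # echo 12G > /sys/fs/cgroup/workload-a/memory.high
  # echo 12G > /sys/fs/cgroup/workload-b/memory.high

The sum of the high limits (24G) exceeds the available memory, so
global memory pressure arbitrates between the two cgroups according
to their actual usage.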
1499
1500Because breach of the high limit doesn't trigger the OOM killer but
1501throttles the offending cgroup, a management agent has ample
1502opportunities to monitor and take appropriate actions such as granting
1503more memory or terminating the workload.
1504
Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory.  For example, a workload which writes data received from
network to a file can use all available memory but can also operate
just as performantly with a small amount of memory.  A measure of
memory pressure - how much the workload is being impacted due to lack
of memory - is necessary to determine whether a workload needs more
memory; the "memory.pressure" interface file described above provides
such a measure (see :ref:`Documentation/accounting/psi.rst <psi>`).
1514
1515
1516Memory Ownership
1517~~~~~~~~~~~~~~~~
1518
1519A memory area is charged to the cgroup which instantiated it and stays
1520charged to the cgroup until the area is released.  Migrating a process
1521to a different cgroup doesn't move the memory usages that it
1522instantiated while in the previous cgroup to the new cgroup.
1523
A memory area may be used by processes belonging to different cgroups.
To which cgroup the area will be charged is not deterministic; however,
over time, the memory area is likely to end up in a cgroup which has
enough memory allowance to avoid high reclaim pressure.
1528
1529If a cgroup sweeps a considerable amount of memory which is expected
1530to be accessed repeatedly by other cgroups, it may make sense to use
1531POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
1532belonging to the affected files to ensure correct memory ownership.
1533
1534
1535IO
1536--
1537
1538The "io" controller regulates the distribution of IO resources.  This
1539controller implements both weight based and absolute bandwidth or IOPS
1540limit distribution; however, weight based distribution is available
1541only if cfq-iosched is in use and neither scheme is available for
1542blk-mq devices.
1543
1544
1545IO Interface Files
1546~~~~~~~~~~~~~~~~~~
1547
1548  io.stat
1549	A read-only nested-keyed file.
1550
1551	Lines are keyed by $MAJ:$MIN device numbers and not ordered.
1552	The following nested keys are defined.
1553
1554	  ======	=====================
1555	  rbytes	Bytes read
1556	  wbytes	Bytes written
1557	  rios		Number of read IOs
1558	  wios		Number of write IOs
1559	  dbytes	Bytes discarded
1560	  dios		Number of discard IOs
1561	  ======	=====================
1562
1563	An example read output follows::
1564
1565	  8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
1566	  8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
1567
1568  io.cost.qos
1569	A read-write nested-keyed file which exists only on the root
1570	cgroup.
1571
1572	This file configures the Quality of Service of the IO cost
1573	model based controller (CONFIG_BLK_CGROUP_IOCOST) which
1574	currently implements "io.weight" proportional control.  Lines
1575	are keyed by $MAJ:$MIN device numbers and not ordered.  The
1576	line for a given device is populated on the first write for
1577	the device on "io.cost.qos" or "io.cost.model".  The following
1578	nested keys are defined.
1579
1580	  ======	=====================================
1581	  enable	Weight-based control enable
1582	  ctrl		"auto" or "user"
1583	  rpct		Read latency percentile    [0, 100]
1584	  rlat		Read latency threshold
1585	  wpct		Write latency percentile   [0, 100]
1586	  wlat		Write latency threshold
1587	  min		Minimum scaling percentage [1, 10000]
1588	  max		Maximum scaling percentage [1, 10000]
1589	  ======	=====================================
1590
1591	The controller is disabled by default and can be enabled by
1592	setting "enable" to 1.  "rpct" and "wpct" parameters default
1593	to zero and the controller uses internal device saturation
1594	state to adjust the overall IO rate between "min" and "max".
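
	For example, a minimal sketch which only turns the controller
	on for a hypothetical device 8:16 and leaves all tuning to the
	kernel::

	  echo "8:16 enable=1" > io.cost.qos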
1595
1596	When a better control quality is needed, latency QoS
1597	parameters can be configured.  For example::
1598
	  8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00
1600
	shows that on sdb, the controller is enabled, will consider the
	device saturated if the 95th percentile of read completion
	latencies is above 75ms or that of write completion latencies is
	above 150ms, and will adjust the overall IO issue rate between
	50% and 150% accordingly.
1605
1606	The lower the saturation point, the better the latency QoS at
1607	the cost of aggregate bandwidth.  The narrower the allowed
1608	adjustment range between "min" and "max", the more conformant
1609	to the cost model the IO behavior.  Note that the IO issue
1610	base rate may be far off from 100% and setting "min" and "max"
1611	blindly can lead to a significant loss of device capacity or
	control quality.  "min" and "max" are useful for regulating
	devices which show wide temporary behavior changes - e.g. an SSD
	which accepts writes at line speed for a while and then
	completely stalls for multiple seconds.
1616
1617	When "ctrl" is "auto", the parameters are controlled by the
1618	kernel and may change automatically.  Setting "ctrl" to "user"
1619	or setting any of the percentile and latency parameters puts
1620	it into "user" mode and disables the automatic changes.  The
1621	automatic mode can be restored by setting "ctrl" to "auto".
1622
1623  io.cost.model
1624	A read-write nested-keyed file which exists only on the root
1625	cgroup.
1626
1627	This file configures the cost model of the IO cost model based
1628	controller (CONFIG_BLK_CGROUP_IOCOST) which currently
1629	implements "io.weight" proportional control.  Lines are keyed
1630	by $MAJ:$MIN device numbers and not ordered.  The line for a
1631	given device is populated on the first write for the device on
1632	"io.cost.qos" or "io.cost.model".  The following nested keys
1633	are defined.
1634
1635	  =====		================================
1636	  ctrl		"auto" or "user"
1637	  model		The cost model in use - "linear"
1638	  =====		================================
1639
	When "ctrl" is "auto", the kernel may change all parameters
	dynamically.  When "ctrl" is set to "user" or any other
	parameter is written to, "ctrl" becomes "user" and the
	automatic changes are disabled.
1644
1645	When "model" is "linear", the following model parameters are
1646	defined.
1647
1648	  =============	========================================
1649	  [r|w]bps	The maximum sequential IO throughput
1650	  [r|w]seqiops	The maximum 4k sequential IOs per second
1651	  [r|w]randiops	The maximum 4k random IOs per second
1652	  =============	========================================
1653
1654	From the above, the builtin linear model determines the base
1655	costs of a sequential and random IO and the cost coefficient
1656	for the IO size.  While simple, this model can cover most
1657	common device classes acceptably.
1658
	The IO cost model isn't expected to be accurate in an absolute
	sense and is scaled to the device behavior dynamically.
1661
1662	If needed, tools/cgroup/iocost_coef_gen.py can be used to
1663	generate device-specific coefficients.
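
	As a sketch, coefficients produced that way could be applied as
	follows (the device number and all values are illustrative)::

	  echo "8:16 rbps=2706339840 rseqiops=89698 rrandiops=110036 wbps=1063126016 wseqiops=135560 wrandiops=130734" > io.cost.model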
1664
1665  io.weight
1666	A read-write flat-keyed file which exists on non-root cgroups.
1667	The default is "default 100".
1668
1669	The first line is the default weight applied to devices
1670	without specific override.  The rest are overrides keyed by
	$MAJ:$MIN device numbers and not ordered.  The weights are in
	the range [1, 10000] and specify the relative amount of IO time
	the cgroup can use in relation to its siblings.
1674
1675	The default weight can be updated by writing either "default
1676	$WEIGHT" or simply "$WEIGHT".  Overrides can be set by writing
1677	"$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
1678
1679	An example read output follows::
1680
1681	  default 100
1682	  8:16 200
1683	  8:0 50
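
	The above configuration could have been produced by writes like
	the following::

	  echo "default 100" > io.weight
	  echo "8:16 200" > io.weight
	  echo "8:0 50" > io.weight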
1684
1685  io.max
1686	A read-write nested-keyed file which exists on non-root
1687	cgroups.
1688
1689	BPS and IOPS based IO limit.  Lines are keyed by $MAJ:$MIN
1690	device numbers and not ordered.  The following nested keys are
1691	defined.
1692
1693	  =====		==================================
1694	  rbps		Max read bytes per second
1695	  wbps		Max write bytes per second
1696	  riops		Max read IO operations per second
1697	  wiops		Max write IO operations per second
1698	  =====		==================================
1699
1700	When writing, any number of nested key-value pairs can be
1701	specified in any order.  "max" can be specified as the value
1702	to remove a specific limit.  If the same key is specified
1703	multiple times, the outcome is undefined.
1704
	BPS and IOPS are measured in each IO direction and IOs are
	delayed if the limit is reached.  Temporary bursts are allowed.
1707
1708	Setting read limit at 2M BPS and write at 120 IOPS for 8:16::
1709
1710	  echo "8:16 rbps=2097152 wiops=120" > io.max
1711
1712	Reading returns the following::
1713
1714	  8:16 rbps=2097152 wbps=max riops=max wiops=120
1715
1716	Write IOPS limit can be removed by writing the following::
1717
1718	  echo "8:16 wiops=max" > io.max
1719
1720	Reading now returns the following::
1721
1722	  8:16 rbps=2097152 wbps=max riops=max wiops=max
1723
1724  io.pressure
1725	A read-only nested-keyed file.
1726
1727	Shows pressure stall information for IO. See
1728	:ref:`Documentation/accounting/psi.rst <psi>` for details.
1729
1730
1731Writeback
1732~~~~~~~~~
1733
1734Page cache is dirtied through buffered writes and shared mmaps and
1735written asynchronously to the backing filesystem by the writeback
1736mechanism.  Writeback sits between the memory and IO domains and
1737regulates the proportion of dirty memory by balancing dirtying and
1738write IOs.
1739
1740The io controller, in conjunction with the memory controller,
1741implements control of page cache writeback IOs.  The memory controller
1742defines the memory domain that dirty memory ratio is calculated and
1743maintained for and the io controller defines the io domain which
1744writes out dirty pages for the memory domain.  Both system-wide and
1745per-cgroup dirty memory states are examined and the more restrictive
1746of the two is enforced.
1747
1748cgroup writeback requires explicit support from the underlying
1749filesystem.  Currently, cgroup writeback is implemented on ext2, ext4,
1750btrfs, f2fs, and xfs.  On other filesystems, all writeback IOs are
1751attributed to the root cgroup.
1752
There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked.  Memory is tracked per
page while writeback is per inode.  For the purpose of writeback, an
1756inode is assigned to a cgroup and all IO requests to write dirty pages
1757from the inode are attributed to that cgroup.
1758
1759As cgroup ownership for memory is tracked per page, there can be pages
1760which are associated with different cgroups than the one the inode is
associated with.  These are called foreign pages.  The writeback
mechanism constantly keeps track of foreign pages and, if a particular
foreign cgroup becomes the majority over a certain period of time,
switches the ownership of the inode to that cgroup.
1765
1766While this model is enough for most use cases where a given inode is
1767mostly dirtied by a single cgroup even when the main writing cgroup
1768changes over time, use cases where multiple cgroups write to a single
1769inode simultaneously are not supported well.  In such circumstances, a
1770significant portion of IOs are likely to be attributed incorrectly.
1771As memory controller assigns page ownership on the first use and
1772doesn't update it until the page is released, even if writeback
1773strictly follows page ownership, multiple cgroups dirtying overlapping
1774areas wouldn't work as expected.  It's recommended to avoid such usage
1775patterns.
1776
1777The sysctl knobs which affect writeback behavior are applied to cgroup
1778writeback as follows.
1779
1780  vm.dirty_background_ratio, vm.dirty_ratio
1781	These ratios apply the same to cgroup writeback with the
1782	amount of available memory capped by limits imposed by the
1783	memory controller and system-wide clean memory.
1784
  vm.dirty_background_bytes, vm.dirty_bytes
	For cgroup writeback, this is calculated as a ratio against the
	total available memory and applied the same way as
	vm.dirty[_background]_ratio.
1789
1790
1791IO Latency
1792~~~~~~~~~~
1793
1794This is a cgroup v2 controller for IO workload protection.  You provide a group
1795with a latency target, and if the average latency exceeds that target the
1796controller will throttle any peers that have a lower latency target than the
1797protected workload.
1798
1799The limits are only applied at the peer level in the hierarchy.  This means that
1800in the diagram below, only groups A, B, and C will influence each other, and
1801groups D and F will influence each other.  Group G will influence nobody::
1802
1803			[root]
1804		/	   |		\
1805		A	   B		C
1806	       /  \        |
1807	      D    F	   G
1808
1809
1810So the ideal way to configure this is to set io.latency in groups A, B, and C.
1811Generally you do not want to set a value lower than the latency your device
1812supports.  Experiment to find the value that works best for your workload.
1813Start at higher than the expected latency for your device and watch the
1814avg_lat value in io.stat for your workload group to get an idea of the
1815latency you see during normal operation.  Use the avg_lat value as a basis for
1816your real setting, setting at 10-15% higher than the value in io.stat.
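
As a sketch, assuming the protected workload lives in group A and sits
on device 8:16, a 2ms target (io.latency takes the target in
microseconds; all values here are illustrative) would be set with::

  echo "8:16 target=2000" > A/io.latency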
1817
1818How IO Latency Throttling Works
1819~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1820
io.latency is work conserving, so as long as everybody is meeting their
latency target the controller doesn't do anything.  Once a group starts
missing its target it begins throttling any peer group that has a
higher target than itself.  This throttling takes 2 forms:
1825
- Queue depth throttling.  This is the number of outstanding IOs a group is
  allowed to have.  We will clamp down relatively quickly, starting at no limit
  and going all the way down to 1 IO at a time.
1829
1830- Artificial delay induction.  There are certain types of IO that cannot be
1831  throttled without possibly adversely affecting higher priority groups.  This
1832  includes swapping and metadata IO.  These types of IO are allowed to occur
1833  normally, however they are "charged" to the originating group.  If the
  originating group is being throttled you will see the use_delay and delay
  fields in io.stat increase.  The delay value is the number of microseconds
  being added to any process that runs in this group.  Because this number
  can grow quite large if there is a lot of swapping or metadata IO
  occurring, we limit the individual delay events to 1 second at a time.
1839
1840Once the victimized group starts meeting its latency target again it will start
1841unthrottling any peer groups that were throttled previously.  If the victimized
1842group simply stops doing IO the global counter will unthrottle appropriately.
1843
1844IO Latency Interface Files
1845~~~~~~~~~~~~~~~~~~~~~~~~~~
1846
1847  io.latency
	This takes a format similar to the other controllers.

		"MAJOR:MINOR target=<target time in microseconds>"
1851
1852  io.stat
1853	If the controller is enabled you will see extra stats in io.stat in
1854	addition to the normal ones.
1855
1856	  depth
1857		This is the current queue depth for the group.
1858
1859	  avg_lat
1860		This is an exponential moving average with a decay rate of 1/exp
1861		bound by the sampling interval.  The decay rate interval can be
1862		calculated by multiplying the win value in io.stat by the
1863		corresponding number of samples based on the win value.
1864
1865	  win
1866		The sampling window size in milliseconds.  This is the minimum
1867		duration of time between evaluation events.  Windows only elapse
1868		with IO activity.  Idle periods extend the most recent window.
1869
1870IO Priority
1871~~~~~~~~~~~
1872
A single attribute controls the behavior of the I/O priority cgroup policy,
namely the io.prio.class attribute.  The following values are accepted for
that attribute:
1876
1877  no-change
1878	Do not modify the I/O priority class.
1879
1880  none-to-rt
1881	For requests that do not have an I/O priority class (NONE),
1882	change the I/O priority class into RT. Do not modify
1883	the I/O priority class of other requests.
1884
1885  restrict-to-be
1886	For requests that do not have an I/O priority class or that have I/O
1887	priority class RT, change it into BE. Do not modify the I/O priority
1888	class of requests that have priority class IDLE.
1889
1890  idle
1891	Change the I/O priority class of all requests into IDLE, the lowest
1892	I/O priority class.
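
As a sketch, restricting all requests issued from a cgroup to at most
the best-effort priority class::

  echo restrict-to-be > io.prio.class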
1893
1894The following numerical values are associated with the I/O priority policies:
1895
+----------------+---+
| no-change      | 0 |
+----------------+---+
| none-to-rt     | 1 |
+----------------+---+
| restrict-to-be | 2 |
+----------------+---+
| idle           | 3 |
+----------------+---+
1905
1906The numerical value that corresponds to each I/O priority class is as follows:
1907
1908+-------------------------------+---+
1909| IOPRIO_CLASS_NONE             | 0 |
1910+-------------------------------+---+
1911| IOPRIO_CLASS_RT (real-time)   | 1 |
1912+-------------------------------+---+
1913| IOPRIO_CLASS_BE (best effort) | 2 |
1914+-------------------------------+---+
1915| IOPRIO_CLASS_IDLE             | 3 |
1916+-------------------------------+---+
1917
The algorithm to set the I/O priority class for a request is as follows:

- Translate the I/O priority class policy into a number.
- Change the request I/O priority class into the maximum of the I/O priority
  class policy number and the numerical I/O priority class.

For example, with the restrict-to-be policy (2), a request with priority
class RT (1) is changed to max(2, 1) = 2, i.e. BE, while a request with
priority class IDLE (3) stays at max(2, 3) = 3, i.e. IDLE.
1923
1924PID
1925---
1926
1927The process number controller is used to allow a cgroup to stop any
1928new tasks from being fork()'d or clone()'d after a specified limit is
1929reached.
1930
1931The number of tasks in a cgroup can be exhausted in ways which other
1932controllers cannot prevent, thus warranting its own controller.  For
1933example, a fork bomb is likely to exhaust the number of tasks before
1934hitting memory restrictions.
1935
1936Note that PIDs used in this controller refer to TIDs, process IDs as
1937used by the kernel.
1938
1939
1940PID Interface Files
1941~~~~~~~~~~~~~~~~~~~
1942
1943  pids.max
1944	A read-write single value file which exists on non-root
1945	cgroups.  The default is "max".
1946
1947	Hard limit of number of processes.
1948
1949  pids.current
1950	A read-only single value file which exists on all cgroups.
1951
1952	The number of processes currently in the cgroup and its
1953	descendants.
1954
1955Organisational operations are not blocked by cgroup policies, so it is
1956possible to have pids.current > pids.max.  This can be done by either
1957setting the limit to be smaller than pids.current, or attaching enough
1958processes to the cgroup such that pids.current is larger than
1959pids.max.  However, it is not possible to violate a cgroup PID policy
1960through fork() or clone(). These will return -EAGAIN if the creation
1961of a new process would cause a cgroup policy to be violated.
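
As a sketch, limiting a hypothetical cgroup to 16 processes (assuming
the pids controller is enabled in the parent's "cgroup.subtree_control")::

  # mkdir /sys/fs/cgroup/workers
  # echo 16 > /sys/fs/cgroup/workers/pids.max
  # cat /sys/fs/cgroup/workers/pids.current
  0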
1962
1963
1964Cpuset
1965------
1966
1967The "cpuset" controller provides a mechanism for constraining
1968the CPU and memory node placement of tasks to only the resources
1969specified in the cpuset interface files in a task's current cgroup.
This is especially valuable on large NUMA systems where placing jobs
on properly sized subsets of the system with careful processor and
memory placement to reduce cross-node memory access and contention
can improve overall system performance.
1974
1975The "cpuset" controller is hierarchical.  That means the controller
1976cannot use CPUs or memory nodes not allowed in its parent.
1977
1978
1979Cpuset Interface Files
1980~~~~~~~~~~~~~~~~~~~~~~
1981
1982  cpuset.cpus
1983	A read-write multiple values file which exists on non-root
1984	cpuset-enabled cgroups.
1985
	It lists the requested CPUs to be used by tasks within this
	cgroup.  The actual list of CPUs to be granted, however, is
	subject to constraints imposed by its parent and can differ
	from the requested CPUs.
1990
1991	The CPU numbers are comma-separated numbers or ranges.
1992	For example::
1993
1994	  # cat cpuset.cpus
1995	  0-4,6,8-10
1996
1997	An empty value indicates that the cgroup is using the same
1998	setting as the nearest cgroup ancestor with a non-empty
1999	"cpuset.cpus" or all the available CPUs if none is found.
2000
2001	The value of "cpuset.cpus" stays constant until the next update
2002	and won't be affected by any CPU hotplug events.
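
	The requested CPUs can be changed by writing a new list, as a
	hypothetical sketch::

	  # echo "0-3,7" > cpuset.cpus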
2003
2004  cpuset.cpus.effective
2005	A read-only multiple values file which exists on all
2006	cpuset-enabled cgroups.
2007
2008	It lists the onlined CPUs that are actually granted to this
2009	cgroup by its parent.  These CPUs are allowed to be used by
2010	tasks within the current cgroup.
2011
	If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
	all the CPUs from the parent cgroup that are available to be
	used by this cgroup.  Otherwise, it should be a subset of
2015	"cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
2016	can be granted.  In this case, it will be treated just like an
2017	empty "cpuset.cpus".
2018
2019	Its value will be affected by CPU hotplug events.
2020
2021  cpuset.mems
2022	A read-write multiple values file which exists on non-root
2023	cpuset-enabled cgroups.
2024
	It lists the requested memory nodes to be used by tasks within
	this cgroup.  The actual list of memory nodes granted, however,
	is subject to constraints imposed by its parent and can differ
	from the requested memory nodes.
2029
2030	The memory node numbers are comma-separated numbers or ranges.
2031	For example::
2032
2033	  # cat cpuset.mems
2034	  0-1,3
2035
2036	An empty value indicates that the cgroup is using the same
2037	setting as the nearest cgroup ancestor with a non-empty
2038	"cpuset.mems" or all the available memory nodes if none
2039	is found.
2040
2041	The value of "cpuset.mems" stays constant until the next update
2042	and won't be affected by any memory nodes hotplug events.
2043
2044  cpuset.mems.effective
2045	A read-only multiple values file which exists on all
2046	cpuset-enabled cgroups.
2047
2048	It lists the onlined memory nodes that are actually granted to
2049	this cgroup by its parent. These memory nodes are allowed to
2050	be used by tasks within the current cgroup.
2051
2052	If "cpuset.mems" is empty, it shows all the memory nodes from the
2053	parent cgroup that will be available to be used by this cgroup.
2054	Otherwise, it should be a subset of "cpuset.mems" unless none of
2055	the memory nodes listed in "cpuset.mems" can be granted.  In this
2056	case, it will be treated just like an empty "cpuset.mems".
2057
2058	Its value will be affected by memory nodes hotplug events.
2059
2060  cpuset.cpus.partition
2061	A read-write single value file which exists on non-root
2062	cpuset-enabled cgroups.  This flag is owned by the parent cgroup
2063	and is not delegatable.
2064
2065	It accepts only the following input values when written to.
2066
2067	  ========	================================
2068	  "root"	a partition root
2069	  "member"	a non-root member of a partition
2070	  ========	================================
2071
2072	When set to be a partition root, the current cgroup is the
2073	root of a new partition or scheduling domain that comprises
2074	itself and all its descendants except those that are separate
2075	partition roots themselves and their descendants.  The root
2076	cgroup is always a partition root.
2077
2078	There are constraints on where a partition root can be set.
2079	It can only be set in a cgroup if all the following conditions
2080	are true.
2081
	1) The "cpuset.cpus" is not empty and the list of CPUs is
	   exclusive, i.e. they are not shared by any of its siblings.
2084	2) The parent cgroup is a partition root.
2085	3) The "cpuset.cpus" is also a proper subset of the parent's
2086	   "cpuset.cpus.effective".
	4) There are no child cgroups with cpuset enabled.  This is
	   for eliminating corner cases that have to be handled if such
	   a condition is allowed.
2090
2091	Setting it to partition root will take the CPUs away from the
2092	effective CPUs of the parent cgroup.  Once it is set, this
2093	file cannot be reverted back to "member" if there are any child
2094	cgroups with cpuset enabled.
2095
2096	A parent partition cannot distribute all its CPUs to its
2097	child partitions.  There must be at least one cpu left in the
2098	parent partition.
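
	As a sketch, carving CPUs 2-3 out into a child partition,
	assuming a child cgroup "A" whose "cpuset.cpus" is not shared
	with any sibling::

	  # echo "2-3" > A/cpuset.cpus
	  # echo root > A/cpuset.cpus.partition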
2099
	Once it becomes a partition root, changes to "cpuset.cpus" are
	generally allowed as long as the first condition above remains
	true, the change does not take away all the CPUs from the parent
	partition, and the new "cpuset.cpus" value is a superset of its
	children's "cpuset.cpus" values.
2105
	Sometimes, external factors like changes to ancestors'
	"cpuset.cpus" or cpu hotplug can cause the state of the partition
	root to change.  On read, the "cpuset.cpus.partition" file
	can show the following values.
2110
2111	  ==============	==============================
2112	  "member"		Non-root member of a partition
2113	  "root"		Partition root
2114	  "root invalid"	Invalid partition root
2115	  ==============	==============================
2116
2117	It is a partition root if the first 2 partition root conditions
2118	above are true and at least one CPU from "cpuset.cpus" is
2119	granted by the parent cgroup.
2120
	A partition root can become invalid if none of the CPUs
	requested in "cpuset.cpus" can be granted by the parent cgroup
	or the parent cgroup is no longer a partition root itself.  In
	this case, it is not a real partition even though the
	restriction of the first partition root condition above will
	still apply.  The cpu affinity of all the tasks in the cgroup
	will then be associated with CPUs in the nearest ancestor
	partition.
2128
2129	An invalid partition root can be transitioned back to a
2130	real partition root if at least one of the requested CPUs
2131	can now be granted by its parent.  In this case, the cpu
2132	affinity of all the tasks in the formerly invalid partition
	will be associated with the CPUs of the newly formed partition.
2134	Changing the partition state of an invalid partition root to
2135	"member" is always allowed even if child cpusets are present.
2136
2137
2138Device controller
2139-----------------
2140
The device controller manages access to device files.  It covers both
the creation of new device files (using mknod) and access to existing
device files.
2144
The cgroup v2 device controller has no interface files and is
implemented on top of cgroup BPF.  To control access to device files,
a user may create BPF programs of type BPF_CGROUP_DEVICE and attach
them to cgroups.  On an attempt to access a device file, the
corresponding BPF programs will be executed, and depending on the
return value the attempt will succeed or fail with -EPERM.
2151
2152A BPF_CGROUP_DEVICE program takes a pointer to the bpf_cgroup_dev_ctx
2153structure, which describes the device access attempt: access type
2154(mknod/read/write) and device (type, major and minor numbers).
2155If the program returns 0, the attempt fails with -EPERM, otherwise
2156it succeeds.
2157
An example of a BPF_CGROUP_DEVICE program may be found in the kernel
source tree in the tools/testing/selftests/bpf/progs/dev_cgroup.c file.
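
As a sketch, a pre-compiled program of that type could be loaded and
attached with bpftool (the object and pin paths are hypothetical)::

  # bpftool prog load dev_cgroup.o /sys/fs/bpf/dev_cgroup
  # bpftool cgroup attach /sys/fs/cgroup/mygrp device pinned /sys/fs/bpf/dev_cgroup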
2160
2161
2162RDMA
2163----
2164
2165The "rdma" controller regulates the distribution and accounting of
2166RDMA resources.
2167
2168RDMA Interface Files
2169~~~~~~~~~~~~~~~~~~~~
2170
  rdma.max
	A read-write nested-keyed file that exists for all the cgroups
	except root.  It describes the currently configured resource
	limits for RDMA/IB devices.

	Lines are keyed by device name and are not ordered.
	Each line contains a space-separated resource name and its
	configured limit that can be distributed.
2179
2180	The following nested keys are defined.
2181
2182	  ==========	=============================
2183	  hca_handle	Maximum number of HCA Handles
2184	  hca_object 	Maximum number of HCA Objects
2185	  ==========	=============================
2186
2187	An example for mlx4 and ocrdma device follows::
2188
2189	  mlx4_0 hca_handle=2 hca_object=2000
2190	  ocrdma1 hca_handle=3 hca_object=max
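
	Such limits could be configured with writes like the following
	(device names and values are illustrative)::

	  # echo "mlx4_0 hca_handle=2 hca_object=2000" > rdma.max
	  # echo "ocrdma1 hca_handle=3" > rdma.max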
2191
  rdma.current
	A read-only file that describes current resource usage.
	It exists for all the cgroups except root.
2195
2196	An example for mlx4 and ocrdma device follows::
2197
2198	  mlx4_0 hca_handle=1 hca_object=20
2199	  ocrdma1 hca_handle=1 hca_object=23
2200
2201HugeTLB
2202-------
2203
The HugeTLB controller allows limiting the HugeTLB usage per control
group and enforces the limit during page fault.
2206
2207HugeTLB Interface Files
2208~~~~~~~~~~~~~~~~~~~~~~~
2209
  hugetlb.<hugepagesize>.current
	Shows current usage for "hugepagesize" hugetlb.  It exists for
	all the cgroups except root.

  hugetlb.<hugepagesize>.max
	Set/show the hard limit of "hugepagesize" hugetlb usage.
	The default value is "max".  It exists for all the cgroups
	except root.
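
	As a sketch, limiting a cgroup to 1G worth of 2MB huge pages
	(the "2MB" component depends on the huge page sizes supported
	by the system)::

	  # echo 1G > hugetlb.2MB.max
	  # cat hugetlb.2MB.current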
2217
2218  hugetlb.<hugepagesize>.events
2219	A read-only flat-keyed file which exists on non-root cgroups.
2220
	  max
		The number of allocation failures due to the HugeTLB limit
2223
2224  hugetlb.<hugepagesize>.events.local
2225	Similar to hugetlb.<hugepagesize>.events but the fields in the file
2226	are local to the cgroup i.e. not hierarchical. The file modified event
2227	generated on this file reflects only the local events.
2228
2229Misc
2230----
2231
The Miscellaneous cgroup provides the resource limiting and tracking
mechanism for the scalar resources which cannot be abstracted like the
other cgroup resources.  The controller is enabled by the
CONFIG_CGROUP_MISC config option.
2236
A resource can be added to the controller via enum misc_res_type{} in the
include/linux/misc_cgroup.h file and the corresponding name via misc_res_name[]
in the kernel/cgroup/misc.c file.  The provider of the resource must set its
capacity prior to using the resource by calling misc_cg_set_capacity().
2241
2242Once a capacity is set then the resource usage can be updated using charge and
2243uncharge APIs. All of the APIs to interact with misc controller are in
2244include/linux/misc_cgroup.h.
2245
2246Misc Interface Files
2247~~~~~~~~~~~~~~~~~~~~
2248
The miscellaneous controller provides 3 interface files.  If two misc
resources (res_a and res_b) are registered then:
2250
2251  misc.capacity
2252        A read-only flat-keyed file shown only in the root cgroup.  It shows
2253        miscellaneous scalar resources available on the platform along with
2254        their quantities::
2255
2256	  $ cat misc.capacity
2257	  res_a 50
2258	  res_b 10
2259
2260  misc.current
        A read-only flat-keyed file shown in the non-root cgroups.  It shows
        the current usage of the resources in the cgroup and its children::
2263
2264	  $ cat misc.current
2265	  res_a 3
2266	  res_b 0
2267
  misc.max
        A read-write flat-keyed file shown in the non-root cgroups.  Allowed
        maximum usage of the resources in the cgroup and its children::
2271
2272	  $ cat misc.max
2273	  res_a max
2274	  res_b 4
2275
2276	Limit can be set by::
2277
2278	  # echo res_a 1 > misc.max
2279
2280	Limit can be set to max by::
2281
2282	  # echo res_a max > misc.max
2283
2284        Limits can be set higher than the capacity value in the misc.capacity
2285        file.
2286
2287Migration and Ownership
2288~~~~~~~~~~~~~~~~~~~~~~~
2289
2290A miscellaneous scalar resource is charged to the cgroup in which it is used
2291first, and stays charged to that cgroup until that resource is freed. Migrating
2292a process to a different cgroup does not move the charge to the destination
2293cgroup where the process has moved.
2294
2295Others
2296------
2297
2298perf_event
2299~~~~~~~~~~
2300
The perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path.  The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.
2305
2306
2307Non-normative information
2308-------------------------
2309
2310This section contains information that isn't considered to be a part of
2311the stable kernel API and so is subject to change.
2312
2313
2314CPU controller root cgroup process behaviour
2315~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2316
When distributing CPU cycles in the root cgroup each thread in this
cgroup is treated as if it were hosted in a separate child cgroup of
the root cgroup.  This child cgroup's weight is dependent on its
thread's nice level.
2321
For details of this mapping see the sched_prio_to_weight array in the
kernel/sched/core.c file (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of 1024).
2325
2326
2327IO controller root cgroup process behaviour
2328~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2329
2330Root cgroup processes are hosted in an implicit leaf child node.
2331When distributing IO resources this implicit child node is taken into
2332account as if it was a normal child cgroup of the root cgroup with a
2333weight value of 200.
2334
2335
2336Namespace
2337=========
2338
2339Basics
2340------
2341
2342cgroup namespace provides a mechanism to virtualize the view of the
2343"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP clone
2344flag can be used with clone(2) and unshare(2) to create a new cgroup
2345namespace.  The process running inside the cgroup namespace will have
2346its "/proc/$PID/cgroup" output restricted to cgroupns root.  The
2347cgroupns root is the cgroup of the process at the time of creation of
2348the cgroup namespace.
2349
2350Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process.  In a container setup where
a set of cgroups and namespaces are intended to isolate processes, the
"/proc/$PID/cgroup" file may leak potential system-level information
to the isolated processes.  For example::
2355
2356  # cat /proc/self/cgroup
2357  0::/batchjobs/container_id1
2358
The path '/batchjobs/container_id1' can be considered system data that
is undesirable to expose to the isolated processes.  cgroup namespace
2361can be used to restrict visibility of this path.  For example, before
2362creating a cgroup namespace, one would see::
2363
2364  # ls -l /proc/self/ns/cgroup
2365  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
2366  # cat /proc/self/cgroup
2367  0::/batchjobs/container_id1
2368
2369After unsharing a new namespace, the view changes::
2370
2371  # ls -l /proc/self/ns/cgroup
2372  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
2373  # cat /proc/self/cgroup
2374  0::/
2375
2376When some thread from a multi-threaded process unshares its cgroup
2377namespace, the new cgroupns gets applied to the entire process (all
2378the threads).  This is natural for the v2 hierarchy; however, for the
2379legacy hierarchies, this may be unexpected.
2380
2381A cgroup namespace is alive as long as there are processes inside or
2382mounts pinning it.  When the last usage goes away, the cgroup
2383namespace is destroyed.  The cgroupns root and the actual cgroups
2384remain.
2385
2386
2387The Root and Views
2388------------------
2389
2390The 'cgroupns root' for a cgroup namespace is the cgroup in which the
2391process calling unshare(2) is running.  For example, if a process in
2392/batchjobs/container_id1 cgroup calls unshare, cgroup
2393/batchjobs/container_id1 becomes the cgroupns root.  For the
2394init_cgroup_ns, this is the real root ('/') cgroup.
2395
2396The cgroupns root cgroup does not change even if the namespace creator
2397process later moves to a different cgroup::
2398
2399  # ~/unshare -c # unshare cgroupns in some cgroup
2400  # cat /proc/self/cgroup
2401  0::/
2402  # mkdir sub_cgrp_1
2403  # echo 0 > sub_cgrp_1/cgroup.procs
2404  # cat /proc/self/cgroup
2405  0::/sub_cgrp_1
2406
Each process gets its namespace-specific view of "/proc/$PID/cgroup".
2408
2409Processes running inside the cgroup namespace will be able to see
2410cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
2411From within an unshared cgroupns::
2412
2413  # sleep 100000 &
2414  [1] 7353
2415  # echo 7353 > sub_cgrp_1/cgroup.procs
2416  # cat /proc/7353/cgroup
2417  0::/sub_cgrp_1
2418
2419From the initial cgroup namespace, the real cgroup path will be
2420visible::
2421
2422  $ cat /proc/7353/cgroup
2423  0::/batchjobs/container_id1/sub_cgrp_1
2424
2425From a sibling cgroup namespace (that is, a namespace rooted at a
2426different cgroup), the cgroup path relative to its own cgroup
2427namespace root will be shown.  For instance, if PID 7353's cgroup
2428namespace root is at '/batchjobs/container_id2', then it will see::
2429
2430  # cat /proc/7353/cgroup
2431  0::/../container_id2/sub_cgrp_1
2432
Note that the relative path always starts with '/' to indicate that
it's relative to the cgroup namespace root of the caller.
2435
2436
2437Migration and setns(2)
2438----------------------
2439
2440Processes inside a cgroup namespace can move into and out of the
2441namespace root if they have proper access to external cgroups.  For
2442example, from inside a namespace with cgroupns root at
2443/batchjobs/container_id1, and assuming that the global hierarchy is
2444still accessible inside cgroupns::
2445
2446  # cat /proc/7353/cgroup
2447  0::/sub_cgrp_1
2448  # echo 7353 > batchjobs/container_id2/cgroup.procs
2449  # cat /proc/7353/cgroup
2450  0::/../container_id2
2451
2452Note that this kind of setup is not encouraged.  A task inside cgroup
2453namespace should only be exposed to its own cgroupns hierarchy.
2454
2455setns(2) to another cgroup namespace is allowed when:
2456
2457(a) the process has CAP_SYS_ADMIN against its current user namespace
2458(b) the process has CAP_SYS_ADMIN against the target cgroup
2459    namespace's userns
2460
No implicit cgroup changes happen with attaching to another cgroup
namespace.  It is expected that someone moves the attaching process
under the target cgroup namespace root.
2464
2465
2466Interaction with Other Namespaces
2467---------------------------------
2468
A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::
2471
2472  # mount -t cgroup2 none $MOUNT_POINT
2473
2474This will mount the unified cgroup hierarchy with cgroupns root as the
2475filesystem root.  The process needs CAP_SYS_ADMIN against its user and
2476mount namespaces.
2477
The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by a namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.
2481
2482
2483Information on Kernel Programming
2484=================================
2485
2486This section contains kernel programming information in the areas
2487where interacting with cgroup is necessary.  cgroup core and
2488controllers are not covered.
2489
2490
2491Filesystem Support for Writeback
2492--------------------------------
2493
A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bios using the
following two functions.
2497
2498  wbc_init_bio(@wbc, @bio)
2499	Should be called for each bio carrying writeback data and
2500	associates the bio with the inode's owner cgroup and the
2501	corresponding request queue.  This must be called after
2502	a queue (device) has been associated with the bio and
2503	before submission.
2504
2505  wbc_account_cgroup_owner(@wbc, @page, @bytes)
2506	Should be called for each data segment being written out.
2507	While this function doesn't care exactly when it's called
2508	during the writeback session, it's the easiest and most
2509	natural to call it as data segments are added to a bio.
2510
With writeback bios annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows for
selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.
2516
2517wbc_init_bio() binds the specified bio to its cgroup.  Depending on
2518the configuration, the bio may be executed at a lower priority and if
2519the writeback session is holding shared resources, e.g. a journal
2520entry, may lead to priority inversion.  There is no one easy solution
2521for the problem.  Filesystems can try to work around specific problem
2522cases by skipping wbc_init_bio() and using bio_associate_blkg()
2523directly.
2524
2525
2526Deprecated v1 Core Features
2527===========================
2528
2529- Multiple hierarchies including named ones are not supported.
2530
- None of the v1 mount options are supported.
2532
2533- The "tasks" file is removed and "cgroup.procs" is not sorted.
2534
2535- "cgroup.clone_children" is removed.
2536
2537- /proc/cgroups is meaningless for v2.  Use "cgroup.controllers" file
2538  at the root instead.
2539
2540
2541Issues with v1 and Rationales for v2
2542====================================
2543
2544Multiple Hierarchies
2545--------------------
2546
2547cgroup v1 allowed an arbitrary number of hierarchies and each
2548hierarchy could host any number of controllers.  While this seemed to
2549provide a high level of flexibility, it wasn't useful in practice.
2550
2551For example, as there is only one instance of each controller, utility
2552type controllers such as freezer which can be useful in all
2553hierarchies could only be used in one.  The issue is exacerbated by
2554the fact that controllers couldn't be moved to another hierarchy once
2555hierarchies were populated.  Another issue was that all controllers
2556bound to a hierarchy were forced to have exactly the same view of the
2557hierarchy.  It wasn't possible to vary the granularity depending on
2558the specific controller.
2559
2560In practice, these issues heavily limited which controllers could be
2561put on the same hierarchy and most configurations resorted to putting
2562each controller on its own hierarchy.  Only closely related ones, such
2563as the cpu and cpuacct controllers, made sense to be put on the same
2564hierarchy.  This often meant that userland ended up managing multiple
2565similar hierarchies repeating the same steps on each hierarchy
2566whenever a hierarchy management operation was necessary.
2567
2568Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated the cgroup core implementation but, more
importantly, the support for multiple hierarchies restricted how
cgroup could be used in general and what controllers were able to do.
2572
2573There was no limit on how many hierarchies there might be, which meant
2574that a thread's cgroup membership couldn't be described in finite
2575length.  The key might contain any number of entries and was unlimited
2576in length, which made it highly awkward to manipulate and led to
2577addition of controllers which existed only to identify membership,
2578which in turn exacerbated the original problem of proliferating number
2579of hierarchies.
2580
2581Also, as a controller couldn't have any expectation regarding the
2582topologies of hierarchies other controllers might be on, each
2583controller had to assume that all other controllers were attached to
2584completely orthogonal hierarchies.  This made it impossible, or at
2585least very cumbersome, for controllers to cooperate with each other.
2586
2587In most use cases, putting controllers on hierarchies which are
2588completely orthogonal to each other isn't necessary.  What usually is
2589called for is the ability to have differing levels of granularity
2590depending on the specific controller.  In other words, hierarchy may
2591be collapsed from leaf towards root when viewed from specific
2592controllers.  For example, a given configuration might not care about
2593how memory is distributed beyond a certain level while still wanting
2594to control how CPU cycles are distributed.
2595
2596
2597Thread Granularity
2598------------------
2599
cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers and those controllers
ended up implementing different ways to ignore such situations; but,
much more importantly, it blurred the line between the API exposed to
individual applications and the system management interface.
2605
2606Generally, in-process knowledge is available only to the process
2607itself; thus, unlike service-level organization of processes,
2608categorizing threads of a process requires active participation from
2609the application which owns the target process.
2610
cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity.  cgroups were delegated to
individual applications so that they could create and manage their own
sub-hierarchies and control resource distributions along them.  This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.
2617
2618First of all, cgroup has a fundamentally inadequate interface to be
2619exposed this way.  For a process to access its own knobs, it has to
2620extract the path on the target hierarchy from /proc/self/cgroup,
2621construct the path by appending the name of the knob to the path, open
2622and then read and/or write to it.  This is not only extremely clunky
and unusual but also inherently racy.  There is no conventional way to
define a transaction across the required steps and nothing can
guarantee that the process would actually be operating on its own
sub-hierarchy.
2626
2627cgroup controllers implemented a number of knobs which would never be
2628accepted as public APIs because they were just adding control knobs to
2629system-management pseudo filesystem.  cgroup ended up with interface
2630knobs which were not properly abstracted or refined and directly
2631revealed kernel internal details.  These knobs got exposed to
2632individual applications through the ill-defined delegation mechanism
2633effectively abusing cgroup as a shortcut to implementing public APIs
2634without going through the required scrutiny.
2635
This was painful for both userland and kernel.  Userland ended up with
misbehaving and poorly abstracted interfaces and the kernel
inadvertently exposed and got locked into constructs.
2639
2640
2641Competition Between Inner Nodes and Threads
2642-------------------------------------------
2643
cgroup v1 allowed threads to be in any cgroups, which created an
interesting problem where threads belonging to a parent cgroup and its
children cgroups competed for resources.  This was nasty as two
different types of entities competed and there was no obvious way to
settle it.  Different controllers did different things.
2649
2650The cpu controller considered threads and cgroups as equivalents and
2651mapped nice levels to cgroup weights.  This worked for some cases but
2652fell flat when children wanted to be allocated specific ratios of CPU
2653cycles and the number of internal threads fluctuated - the ratios
2654constantly changed as the number of competing entities fluctuated.
2655There also were other issues.  The mapping from nice level to weight
2656wasn't obvious or universal, and there were various other knobs which
2657simply weren't available for threads.
2658
2659The io controller implicitly created a hidden leaf node for each
2660cgroup to host the threads.  The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed.  While this allowed equivalent
control over internal threads, it came with serious drawbacks.  It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.
2666
2667The memory controller didn't have a way to control what happened
2668between internal tasks and child cgroups and the behavior was not
2669clearly defined.  There were attempts to add ad-hoc behaviors and
2670knobs to tailor the behavior to specific workloads which would have
2671led to problems extremely difficult to resolve in the long term.
2672
2673Multiple controllers struggled with internal tasks and came up with
2674different ways to deal with it; unfortunately, all the approaches were
2675severely flawed and, furthermore, the widely different behaviors
2676made cgroup as a whole highly inconsistent.
2677
2678This clearly is a problem which needs to be addressed from cgroup core
2679in a uniform way.
2680
2681
2682Other Interface Issues
2683----------------------
2684
2685cgroup v1 grew without oversight and developed a large number of
2686idiosyncrasies and inconsistencies.  One issue on the cgroup core side
2687was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event.  The event delivery wasn't
recursive or delegatable.  The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.
2692
2693Controller interfaces were problematic too.  An extreme example is
2694controllers completely ignoring hierarchical organization and treating
2695all cgroups as if they were all located directly under the root
2696cgroup.  Some controllers exposed a large amount of inconsistent
2697implementation details to userland.
2698
2699There also was no consistency across controllers.  When a new cgroup
2700was created, some controllers defaulted to not imposing extra
2701restrictions while others disallowed any resource usage until
2702explicitly configured.  Configuration knobs for the same type of
2703control used widely differing naming schemes and formats.  Statistics
2704and information knobs were named arbitrarily and used different
2705formats and units even in the same controller.
2706
2707cgroup v2 establishes common conventions where appropriate and updates
2708controllers so that they expose minimal and consistent interfaces.
2709
2710
2711Controller Issues and Remedies
2712------------------------------
2713
2714Memory
2715~~~~~~
2716
2717The original lower boundary, the soft limit, is defined as a limit
2718that is per default unset.  As a result, the set of cgroups that
2719global reclaim prefers is opt-in, rather than opt-out.  The costs for
2720optimizing these mostly negative lookups are so high that the
2721implementation, despite its enormous size, does not even provide the
2722basic desirable behavior.  First off, the soft limit has no
hierarchical meaning.  All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located in the hierarchy.  This makes subtree delegation impossible.
Second,
2726the soft limit reclaim pass is so aggressive that it not just
2727introduces high allocation latencies into the system, but also impacts
2728system performance due to overreclaim, to the point where the feature
2729becomes self-defeating.
2730
2731The memory.low boundary on the other hand is a top-down allocated
2732reserve.  A cgroup enjoys reclaim protection when it's within its
2733effective low, which makes delegation of subtrees possible. It also
2734enjoys having reclaim pressure proportional to its overage when
2735above its effective low.
2736
2737The original high boundary, the hard limit, is defined as a strict
limit that cannot budge, even if the OOM killer has to be called.
2739But this generally goes against the goal of making the most out of the
2740available memory.  The memory consumption of workloads varies during
2741runtime, and that requires users to overcommit.  But doing that with a
2742strict upper limit requires either a fairly accurate prediction of the
2743working set size or adding slack to the limit.  Since working set size
2744estimation is hard and error prone, and getting it wrong results in
2745OOM kills, most users tend to err on the side of a looser limit and
2746end up wasting precious resources.
2747
2748The memory.high boundary on the other hand can be set much more
2749conservatively.  When hit, it throttles allocations by forcing them
2750into direct reclaim to work off the excess, but it never invokes the
2751OOM killer.  As a result, a high boundary that is chosen too
2752aggressively will not terminate the processes, but instead it will
2753lead to gradual performance degradation.  The user can monitor this
2754and make corrections until the minimal memory footprint that still
2755gives acceptable performance is found.
2756
2757In extreme cases, with many concurrent allocations and a complete
2758breakdown of reclaim progress within the group, the high boundary can
2759be exceeded.  But even then it's mostly better to satisfy the
2760allocation from the slack available in other groups or the rest of the
2761system than killing the group.  Otherwise, memory.max is there to
2762limit this type of spillover and ultimately contain buggy or even
2763malicious applications.
2764
2765Setting the original memory.limit_in_bytes below the current usage was
2766subject to a race condition, where concurrent charges could cause the
2767limit setting to fail. memory.max on the other hand will first set the
2768limit to prevent new charges, and then reclaim and OOM kill until the
2769new limit is met - or the task writing to memory.max is killed.
2770
2771The combined memory+swap accounting and limiting is replaced by real
2772control over swap space.
2773
2774The main argument for a combined memory+swap facility in the original
2775cgroup design was that global or parental pressure would always be
2776able to swap all anonymous memory of a child group, regardless of the
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin cannot assume
full swappability when overcommitting untrusted jobs.
2780swappability when overcommitting untrusted jobs.
2781
2782For trusted jobs, on the other hand, a combined counter is not an
2783intuitive userspace interface, and it flies in the face of the idea
2784that cgroup controllers should account and limit specific physical
2785resources.  Swap space is a resource like all others in the system,
2786and that's why unified hierarchy allows distributing it separately.
2787