1.. _cgroup-v2:
2
3================
4Control Group v2
5================
6
7:Date: October, 2015
8:Author: Tejun Heo <tj@kernel.org>
9
10This is the authoritative documentation on the design, interface and
11conventions of cgroup v2.  It describes all userland-visible aspects
12of cgroup including core and specific controller behaviors.  All
13future changes must be reflected in this document.  Documentation for
14v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.
15
16.. CONTENTS
17
18   1. Introduction
19     1-1. Terminology
20     1-2. What is cgroup?
21   2. Basic Operations
22     2-1. Mounting
23     2-2. Organizing Processes and Threads
24       2-2-1. Processes
25       2-2-2. Threads
26     2-3. [Un]populated Notification
27     2-4. Controlling Controllers
28       2-4-1. Enabling and Disabling
29       2-4-2. Top-down Constraint
30       2-4-3. No Internal Process Constraint
31     2-5. Delegation
32       2-5-1. Model of Delegation
33       2-5-2. Delegation Containment
34     2-6. Guidelines
35       2-6-1. Organize Once and Control
36       2-6-2. Avoid Name Collisions
37   3. Resource Distribution Models
38     3-1. Weights
39     3-2. Limits
40     3-3. Protections
41     3-4. Allocations
42   4. Interface Files
43     4-1. Format
44     4-2. Conventions
45     4-3. Core Interface Files
46   5. Controllers
47     5-1. CPU
48       5-1-1. CPU Interface Files
49     5-2. Memory
50       5-2-1. Memory Interface Files
51       5-2-2. Usage Guidelines
52       5-2-3. Memory Ownership
53     5-3. IO
54       5-3-1. IO Interface Files
55       5-3-2. Writeback
56       5-3-3. IO Latency
57         5-3-3-1. How IO Latency Throttling Works
58         5-3-3-2. IO Latency Interface Files
59       5-3-4. IO Priority
60     5-4. PID
61       5-4-1. PID Interface Files
62     5-5. Cpuset
63       5.5-1. Cpuset Interface Files
64     5-6. Device
65     5-7. RDMA
66       5-7-1. RDMA Interface Files
67     5-8. HugeTLB
68       5.8-1. HugeTLB Interface Files
69     5-9. Misc
70       5.9-1 Miscellaneous cgroup Interface Files
71       5.9-2 Migration and Ownership
72     5-10. Others
73       5-10-1. perf_event
74     5-N. Non-normative information
75       5-N-1. CPU controller root cgroup process behaviour
76       5-N-2. IO controller root cgroup process behaviour
77   6. Namespace
78     6-1. Basics
79     6-2. The Root and Views
80     6-3. Migration and setns(2)
81     6-4. Interaction with Other Namespaces
82   P. Information on Kernel Programming
83     P-1. Filesystem Support for Writeback
84   D. Deprecated v1 Core Features
85   R. Issues with v1 and Rationales for v2
86     R-1. Multiple Hierarchies
87     R-2. Thread Granularity
88     R-3. Competition Between Inner Nodes and Threads
89     R-4. Other Interface Issues
90     R-5. Controller Issues and Remedies
91       R-5-1. Memory
92
93
94Introduction
95============
96
97Terminology
98-----------
99
100"cgroup" stands for "control group" and is never capitalized.  The
101singular form is used to designate the whole feature and also as a
102qualifier as in "cgroup controllers".  When explicitly referring to
103multiple individual control groups, the plural form "cgroups" is used.
104
105
106What is cgroup?
107---------------
108
109cgroup is a mechanism to organize processes hierarchically and
110distribute system resources along the hierarchy in a controlled and
111configurable manner.
112
113cgroup is largely composed of two parts - the core and controllers.
114cgroup core is primarily responsible for hierarchically organizing
115processes.  A cgroup controller is usually responsible for
116distributing a specific type of system resource along the hierarchy
117although there are utility controllers which serve purposes other than
118resource distribution.
119
120cgroups form a tree structure and every process in the system belongs
121to one and only one cgroup.  All threads of a process belong to the
122same cgroup.  On creation, all processes are put in the cgroup that
123the parent process belongs to at the time.  A process can be migrated
124to another cgroup.  Migration of a process doesn't affect already
125existing descendant processes.
126
127Following certain structural constraints, controllers may be enabled or
128disabled selectively on a cgroup.  All controller behaviors are
129hierarchical - if a controller is enabled on a cgroup, it affects all
130processes which belong to the cgroups consisting the inclusive
131sub-hierarchy of the cgroup.  When a controller is enabled on a nested
132cgroup, it always restricts the resource distribution further.  The
133restrictions set closer to the root in the hierarchy can not be
134overridden from further away.
135
136
137Basic Operations
138================
139
140Mounting
141--------
142
143Unlike v1, cgroup v2 has only single hierarchy.  The cgroup v2
144hierarchy can be mounted with the following mount command::
145
146  # mount -t cgroup2 none $MOUNT_POINT
147
148cgroup2 filesystem has the magic number 0x63677270 ("cgrp").  All
149controllers which support v2 and are not bound to a v1 hierarchy are
150automatically bound to the v2 hierarchy and show up at the root.
151Controllers which are not in active use in the v2 hierarchy can be
152bound to other hierarchies.  This allows mixing v2 hierarchy with the
153legacy v1 multiple hierarchies in a fully backward compatible way.
154
155A controller can be moved across hierarchies only after the controller
156is no longer referenced in its current hierarchy.  Because per-cgroup
157controller states are destroyed asynchronously and controllers may
158have lingering references, a controller may not show up immediately on
159the v2 hierarchy after the final umount of the previous hierarchy.
160Similarly, a controller should be fully disabled to be moved out of
161the unified hierarchy and it may take some time for the disabled
162controller to become available for other hierarchies; furthermore, due
163to inter-controller dependencies, other controllers may need to be
164disabled too.
165
166While useful for development and manual configurations, moving
167controllers dynamically between the v2 and other hierarchies is
168strongly discouraged for production use.  It is recommended to decide
169the hierarchies and controller associations before starting using the
170controllers after system boot.
171
172During transition to v2, system management software might still
173automount the v1 cgroup filesystem and so hijack all controllers
174during boot, before manual intervention is possible. To make testing
175and experimenting easier, the kernel parameter cgroup_no_v1= allows
176disabling controllers in v1 and make them always available in v2.
177
178cgroup v2 currently supports the following mount options.
179
180  nsdelegate
181	Consider cgroup namespaces as delegation boundaries.  This
182	option is system wide and can only be set on mount or modified
183	through remount from the init namespace.  The mount option is
184	ignored on non-init namespace mounts.  Please refer to the
185	Delegation section for details.
186
187  favordynmods
188        Reduce the latencies of dynamic cgroup modifications such as
189        task migrations and controller on/offs at the cost of making
190        hot path operations such as forks and exits more expensive.
191        The static usage pattern of creating a cgroup, enabling
192        controllers, and then seeding it with CLONE_INTO_CGROUP is
193        not affected by this option.
194
195  memory_localevents
196        Only populate memory.events with data for the current cgroup,
197        and not any subtrees. This is legacy behaviour, the default
198        behaviour without this option is to include subtree counts.
199        This option is system wide and can only be set on mount or
200        modified through remount from the init namespace. The mount
201        option is ignored on non-init namespace mounts.
202
203  memory_recursiveprot
204        Recursively apply memory.min and memory.low protection to
205        entire subtrees, without requiring explicit downward
206        propagation into leaf cgroups.  This allows protecting entire
207        subtrees from one another, while retaining free competition
208        within those subtrees.  This should have been the default
209        behavior but is a mount-option to avoid regressing setups
210        relying on the original semantics (e.g. specifying bogusly
211        high 'bypass' protection values at higher tree levels).
212
213
214Organizing Processes and Threads
215--------------------------------
216
217Processes
218~~~~~~~~~
219
220Initially, only the root cgroup exists to which all processes belong.
221A child cgroup can be created by creating a sub-directory::
222
223  # mkdir $CGROUP_NAME
224
225A given cgroup may have multiple child cgroups forming a tree
226structure.  Each cgroup has a read-writable interface file
227"cgroup.procs".  When read, it lists the PIDs of all processes which
228belong to the cgroup one-per-line.  The PIDs are not ordered and the
229same PID may show up more than once if the process got moved to
230another cgroup and then back or the PID got recycled while reading.
231
232A process can be migrated into a cgroup by writing its PID to the
233target cgroup's "cgroup.procs" file.  Only one process can be migrated
234on a single write(2) call.  If a process is composed of multiple
235threads, writing the PID of any thread migrates all threads of the
236process.
237
238When a process forks a child process, the new process is born into the
239cgroup that the forking process belongs to at the time of the
240operation.  After exit, a process stays associated with the cgroup
241that it belonged to at the time of exit until it's reaped; however, a
242zombie process does not appear in "cgroup.procs" and thus can't be
243moved to another cgroup.
244
245A cgroup which doesn't have any children or live processes can be
246destroyed by removing the directory.  Note that a cgroup which doesn't
247have any children and is associated only with zombie processes is
248considered empty and can be removed::
249
250  # rmdir $CGROUP_NAME
251
252"/proc/$PID/cgroup" lists a process's cgroup membership.  If legacy
253cgroup is in use in the system, this file may contain multiple lines,
254one for each hierarchy.  The entry for cgroup v2 is always in the
255format "0::$PATH"::
256
257  # cat /proc/842/cgroup
258  ...
259  0::/test-cgroup/test-cgroup-nested
260
261If the process becomes a zombie and the cgroup it was associated with
262is removed subsequently, " (deleted)" is appended to the path::
263
264  # cat /proc/842/cgroup
265  ...
266  0::/test-cgroup/test-cgroup-nested (deleted)
267
268
269Threads
270~~~~~~~
271
272cgroup v2 supports thread granularity for a subset of controllers to
273support use cases requiring hierarchical resource distribution across
274the threads of a group of processes.  By default, all threads of a
275process belong to the same cgroup, which also serves as the resource
276domain to host resource consumptions which are not specific to a
277process or thread.  The thread mode allows threads to be spread across
278a subtree while still maintaining the common resource domain for them.
279
280Controllers which support thread mode are called threaded controllers.
281The ones which don't are called domain controllers.
282
283Marking a cgroup threaded makes it join the resource domain of its
284parent as a threaded cgroup.  The parent may be another threaded
285cgroup whose resource domain is further up in the hierarchy.  The root
286of a threaded subtree, that is, the nearest ancestor which is not
287threaded, is called threaded domain or thread root interchangeably and
288serves as the resource domain for the entire subtree.
289
290Inside a threaded subtree, threads of a process can be put in
291different cgroups and are not subject to the no internal process
292constraint - threaded controllers can be enabled on non-leaf cgroups
293whether they have threads in them or not.
294
295As the threaded domain cgroup hosts all the domain resource
296consumptions of the subtree, it is considered to have internal
297resource consumptions whether there are processes in it or not and
298can't have populated child cgroups which aren't threaded.  Because the
299root cgroup is not subject to no internal process constraint, it can
300serve both as a threaded domain and a parent to domain cgroups.
301
302The current operation mode or type of the cgroup is shown in the
303"cgroup.type" file which indicates whether the cgroup is a normal
304domain, a domain which is serving as the domain of a threaded subtree,
305or a threaded cgroup.
306
307On creation, a cgroup is always a domain cgroup and can be made
308threaded by writing "threaded" to the "cgroup.type" file.  The
309operation is single direction::
310
311  # echo threaded > cgroup.type
312
313Once threaded, the cgroup can't be made a domain again.  To enable the
314thread mode, the following conditions must be met.
315
316- As the cgroup will join the parent's resource domain.  The parent
317  must either be a valid (threaded) domain or a threaded cgroup.
318
319- When the parent is an unthreaded domain, it must not have any domain
320  controllers enabled or populated domain children.  The root is
321  exempt from this requirement.
322
323Topology-wise, a cgroup can be in an invalid state.  Please consider
324the following topology::
325
326  A (threaded domain) - B (threaded) - C (domain, just created)
327
328C is created as a domain but isn't connected to a parent which can
329host child domains.  C can't be used until it is turned into a
330threaded cgroup.  "cgroup.type" file will report "domain (invalid)" in
331these cases.  Operations which fail due to invalid topology use
332EOPNOTSUPP as the errno.
333
334A domain cgroup is turned into a threaded domain when one of its child
335cgroup becomes threaded or threaded controllers are enabled in the
336"cgroup.subtree_control" file while there are processes in the cgroup.
337A threaded domain reverts to a normal domain when the conditions
338clear.
339
340When read, "cgroup.threads" contains the list of the thread IDs of all
341threads in the cgroup.  Except that the operations are per-thread
342instead of per-process, "cgroup.threads" has the same format and
343behaves the same way as "cgroup.procs".  While "cgroup.threads" can be
344written to in any cgroup, as it can only move threads inside the same
345threaded domain, its operations are confined inside each threaded
346subtree.
347
348The threaded domain cgroup serves as the resource domain for the whole
349subtree, and, while the threads can be scattered across the subtree,
350all the processes are considered to be in the threaded domain cgroup.
351"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
352processes in the subtree and is not readable in the subtree proper.
353However, "cgroup.procs" can be written to from anywhere in the subtree
354to migrate all threads of the matching process to the cgroup.
355
356Only threaded controllers can be enabled in a threaded subtree.  When
357a threaded controller is enabled inside a threaded subtree, it only
358accounts for and controls resource consumptions associated with the
359threads in the cgroup and its descendants.  All consumptions which
360aren't tied to a specific thread belong to the threaded domain cgroup.
361
362Because a threaded subtree is exempt from no internal process
363constraint, a threaded controller must be able to handle competition
364between threads in a non-leaf cgroup and its child cgroups.  Each
365threaded controller defines how such competitions are handled.
366
367
368[Un]populated Notification
369--------------------------
370
371Each non-root cgroup has a "cgroup.events" file which contains
372"populated" field indicating whether the cgroup's sub-hierarchy has
373live processes in it.  Its value is 0 if there is no live process in
374the cgroup and its descendants; otherwise, 1.  poll and [id]notify
375events are triggered when the value changes.  This can be used, for
376example, to start a clean-up operation after all processes of a given
377sub-hierarchy have exited.  The populated state updates and
378notifications are recursive.  Consider the following sub-hierarchy
379where the numbers in the parentheses represent the numbers of processes
380in each cgroup::
381
382  A(4) - B(0) - C(1)
383              \ D(0)
384
385A, B and C's "populated" fields would be 1 while D's 0.  After the one
386process in C exits, B and C's "populated" fields would flip to "0" and
387file modified events will be generated on the "cgroup.events" files of
388both cgroups.
389
390
391Controlling Controllers
392-----------------------
393
394Enabling and Disabling
395~~~~~~~~~~~~~~~~~~~~~~
396
397Each cgroup has a "cgroup.controllers" file which lists all
398controllers available for the cgroup to enable::
399
400  # cat cgroup.controllers
401  cpu io memory
402
403No controller is enabled by default.  Controllers can be enabled and
404disabled by writing to the "cgroup.subtree_control" file::
405
406  # echo "+cpu +memory -io" > cgroup.subtree_control
407
408Only controllers which are listed in "cgroup.controllers" can be
409enabled.  When multiple operations are specified as above, either they
410all succeed or fail.  If multiple operations on the same controller
411are specified, the last one is effective.
412
413Enabling a controller in a cgroup indicates that the distribution of
414the target resource across its immediate children will be controlled.
415Consider the following sub-hierarchy.  The enabled controllers are
416listed in parentheses::
417
418  A(cpu,memory) - B(memory) - C()
419                            \ D()
420
421As A has "cpu" and "memory" enabled, A will control the distribution
422of CPU cycles and memory to its children, in this case, B.  As B has
423"memory" enabled but not "CPU", C and D will compete freely on CPU
424cycles but their division of memory available to B will be controlled.
425
426As a controller regulates the distribution of the target resource to
427the cgroup's children, enabling it creates the controller's interface
428files in the child cgroups.  In the above example, enabling "cpu" on B
429would create the "cpu." prefixed controller interface files in C and
430D.  Likewise, disabling "memory" from B would remove the "memory."
431prefixed controller interface files from C and D.  This means that the
432controller interface files - anything which doesn't start with
433"cgroup." are owned by the parent rather than the cgroup itself.
434
435
436Top-down Constraint
437~~~~~~~~~~~~~~~~~~~
438
439Resources are distributed top-down and a cgroup can further distribute
440a resource only if the resource has been distributed to it from the
441parent.  This means that all non-root "cgroup.subtree_control" files
442can only contain controllers which are enabled in the parent's
443"cgroup.subtree_control" file.  A controller can be enabled only if
444the parent has the controller enabled and a controller can't be
445disabled if one or more children have it enabled.
446
447
448No Internal Process Constraint
449~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
450
451Non-root cgroups can distribute domain resources to their children
452only when they don't have any processes of their own.  In other words,
453only domain cgroups which don't contain any processes can have domain
454controllers enabled in their "cgroup.subtree_control" files.
455
456This guarantees that, when a domain controller is looking at the part
457of the hierarchy which has it enabled, processes are always only on
458the leaves.  This rules out situations where child cgroups compete
459against internal processes of the parent.
460
461The root cgroup is exempt from this restriction.  Root contains
462processes and anonymous resource consumption which can't be associated
463with any other cgroups and requires special treatment from most
464controllers.  How resource consumption in the root cgroup is governed
465is up to each controller (for more information on this topic please
466refer to the Non-normative information section in the Controllers
467chapter).
468
469Note that the restriction doesn't get in the way if there is no
470enabled controller in the cgroup's "cgroup.subtree_control".  This is
471important as otherwise it wouldn't be possible to create children of a
472populated cgroup.  To control resource distribution of a cgroup, the
473cgroup must create children and transfer all its processes to the
474children before enabling controllers in its "cgroup.subtree_control"
475file.
476
477
478Delegation
479----------
480
481Model of Delegation
482~~~~~~~~~~~~~~~~~~~
483
484A cgroup can be delegated in two ways.  First, to a less privileged
485user by granting write access of the directory and its "cgroup.procs",
486"cgroup.threads" and "cgroup.subtree_control" files to the user.
487Second, if the "nsdelegate" mount option is set, automatically to a
488cgroup namespace on namespace creation.
489
490Because the resource control interface files in a given directory
491control the distribution of the parent's resources, the delegatee
492shouldn't be allowed to write to them.  For the first method, this is
493achieved by not granting access to these files.  For the second, the
494kernel rejects writes to all files other than "cgroup.procs" and
495"cgroup.subtree_control" on a namespace root from inside the
496namespace.
497
498The end results are equivalent for both delegation types.  Once
499delegated, the user can build sub-hierarchy under the directory,
500organize processes inside it as it sees fit and further distribute the
501resources it received from the parent.  The limits and other settings
502of all resource controllers are hierarchical and regardless of what
503happens in the delegated sub-hierarchy, nothing can escape the
504resource restrictions imposed by the parent.
505
506Currently, cgroup doesn't impose any restrictions on the number of
507cgroups in or nesting depth of a delegated sub-hierarchy; however,
508this may be limited explicitly in the future.
509
510
511Delegation Containment
512~~~~~~~~~~~~~~~~~~~~~~
513
514A delegated sub-hierarchy is contained in the sense that processes
515can't be moved into or out of the sub-hierarchy by the delegatee.
516
517For delegations to a less privileged user, this is achieved by
518requiring the following conditions for a process with a non-root euid
519to migrate a target process into a cgroup by writing its PID to the
520"cgroup.procs" file.
521
522- The writer must have write access to the "cgroup.procs" file.
523
524- The writer must have write access to the "cgroup.procs" file of the
525  common ancestor of the source and destination cgroups.
526
527The above two constraints ensure that while a delegatee may migrate
528processes around freely in the delegated sub-hierarchy it can't pull
529in from or push out to outside the sub-hierarchy.
530
531For an example, let's assume cgroups C0 and C1 have been delegated to
532user U0 who created C00, C01 under C0 and C10 under C1 as follows and
533all processes under C0 and C1 belong to U0::
534
535  ~~~~~~~~~~~~~ - C0 - C00
536  ~ cgroup    ~      \ C01
537  ~ hierarchy ~
538  ~~~~~~~~~~~~~ - C1 - C10
539
540Let's also say U0 wants to write the PID of a process which is
541currently in C10 into "C00/cgroup.procs".  U0 has write access to the
542file; however, the common ancestor of the source cgroup C10 and the
543destination cgroup C00 is above the points of delegation and U0 would
544not have write access to its "cgroup.procs" files and thus the write
545will be denied with -EACCES.
546
547For delegations to namespaces, containment is achieved by requiring
548that both the source and destination cgroups are reachable from the
549namespace of the process which is attempting the migration.  If either
550is not reachable, the migration is rejected with -ENOENT.
551
552
553Guidelines
554----------
555
556Organize Once and Control
557~~~~~~~~~~~~~~~~~~~~~~~~~
558
559Migrating a process across cgroups is a relatively expensive operation
560and stateful resources such as memory are not moved together with the
561process.  This is an explicit design decision as there often exist
562inherent trade-offs between migration and various hot paths in terms
563of synchronization cost.
564
565As such, migrating processes across cgroups frequently as a means to
566apply different resource restrictions is discouraged.  A workload
567should be assigned to a cgroup according to the system's logical and
568resource structure once on start-up.  Dynamic adjustments to resource
569distribution can be made by changing controller configuration through
570the interface files.
571
572
573Avoid Name Collisions
574~~~~~~~~~~~~~~~~~~~~~
575
576Interface files for a cgroup and its children cgroups occupy the same
577directory and it is possible to create children cgroups which collide
578with interface files.
579
580All cgroup core interface files are prefixed with "cgroup." and each
581controller's interface files are prefixed with the controller name and
582a dot.  A controller's name is composed of lower case alphabets and
583'_'s but never begins with an '_' so it can be used as the prefix
584character for collision avoidance.  Also, interface file names won't
585start or end with terms which are often used in categorizing workloads
586such as job, service, slice, unit or workload.
587
588cgroup doesn't do anything to prevent name collisions and it's the
589user's responsibility to avoid them.
590
591
592Resource Distribution Models
593============================
594
595cgroup controllers implement several resource distribution schemes
596depending on the resource type and expected use cases.  This section
597describes major schemes in use along with their expected behaviors.
598
599
600Weights
601-------
602
603A parent's resource is distributed by adding up the weights of all
604active children and giving each the fraction matching the ratio of its
605weight against the sum.  As only children which can make use of the
606resource at the moment participate in the distribution, this is
607work-conserving.  Due to the dynamic nature, this model is usually
608used for stateless resources.
609
610All weights are in the range [1, 10000] with the default at 100.  This
611allows symmetric multiplicative biases in both directions at fine
612enough granularity while staying in the intuitive range.
613
614As long as the weight is in range, all configuration combinations are
615valid and there is no reason to reject configuration changes or
616process migrations.
617
618"cpu.weight" proportionally distributes CPU cycles to active children
619and is an example of this type.
620
621
622Limits
623------
624
625A child can only consume upto the configured amount of the resource.
626Limits can be over-committed - the sum of the limits of children can
627exceed the amount of resource available to the parent.
628
629Limits are in the range [0, max] and defaults to "max", which is noop.
630
631As limits can be over-committed, all configuration combinations are
632valid and there is no reason to reject configuration changes or
633process migrations.
634
635"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
636on an IO device and is an example of this type.
637
638
639Protections
640-----------
641
642A cgroup is protected upto the configured amount of the resource
643as long as the usages of all its ancestors are under their
644protected levels.  Protections can be hard guarantees or best effort
645soft boundaries.  Protections can also be over-committed in which case
646only upto the amount available to the parent is protected among
647children.
648
649Protections are in the range [0, max] and defaults to 0, which is
650noop.
651
652As protections can be over-committed, all configuration combinations
653are valid and there is no reason to reject configuration changes or
654process migrations.
655
656"memory.low" implements best-effort memory protection and is an
657example of this type.
658
659
660Allocations
661-----------
662
663A cgroup is exclusively allocated a certain amount of a finite
664resource.  Allocations can't be over-committed - the sum of the
665allocations of children can not exceed the amount of resource
666available to the parent.
667
668Allocations are in the range [0, max] and defaults to 0, which is no
669resource.
670
671As allocations can't be over-committed, some configuration
672combinations are invalid and should be rejected.  Also, if the
673resource is mandatory for execution of processes, process migrations
674may be rejected.
675
676"cpu.rt.max" hard-allocates realtime slices and is an example of this
677type.
678
679
680Interface Files
681===============
682
683Format
684------
685
686All interface files should be in one of the following formats whenever
687possible::
688
689  New-line separated values
690  (when only one value can be written at once)
691
692	VAL0\n
693	VAL1\n
694	...
695
696  Space separated values
697  (when read-only or multiple values can be written at once)
698
699	VAL0 VAL1 ...\n
700
701  Flat keyed
702
703	KEY0 VAL0\n
704	KEY1 VAL1\n
705	...
706
707  Nested keyed
708
709	KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
710	KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
711	...
712
713For a writable file, the format for writing should generally match
714reading; however, controllers may allow omitting later fields or
715implement restricted shortcuts for most common use cases.
716
717For both flat and nested keyed files, only the values for a single key
718can be written at a time.  For nested keyed files, the sub key pairs
719may be specified in any order and not all pairs have to be specified.
720
721
722Conventions
723-----------
724
725- Settings for a single feature should be contained in a single file.
726
727- The root cgroup should be exempt from resource control and thus
728  shouldn't have resource control interface files.
729
730- The default time unit is microseconds.  If a different unit is ever
731  used, an explicit unit suffix must be present.
732
733- A parts-per quantity should use a percentage decimal with at least
734  two digit fractional part - e.g. 13.40.
735
736- If a controller implements weight based resource distribution, its
737  interface file should be named "weight" and have the range [1,
738  10000] with 100 as the default.  The values are chosen to allow
739  enough and symmetric bias in both directions while keeping it
740  intuitive (the default is 100%).
741
742- If a controller implements an absolute resource guarantee and/or
743  limit, the interface files should be named "min" and "max"
744  respectively.  If a controller implements best effort resource
745  guarantee and/or limit, the interface files should be named "low"
746  and "high" respectively.
747
748  In the above four control files, the special token "max" should be
749  used to represent upward infinity for both reading and writing.
750
751- If a setting has a configurable default value and keyed specific
752  overrides, the default entry should be keyed with "default" and
753  appear as the first entry in the file.
754
755  The default value can be updated by writing either "default $VAL" or
756  "$VAL".
757
758  When writing to update a specific override, "default" can be used as
759  the value to indicate removal of the override.  Override entries
760  with "default" as the value must not appear when read.
761
762  For example, a setting which is keyed by major:minor device numbers
763  with integer values may look like the following::
764
765    # cat cgroup-example-interface-file
766    default 150
767    8:0 300
768
769  The default value can be updated by::
770
771    # echo 125 > cgroup-example-interface-file
772
773  or::
774
775    # echo "default 125" > cgroup-example-interface-file
776
777  An override can be set by::
778
779    # echo "8:16 170" > cgroup-example-interface-file
780
781  and cleared by::
782
783    # echo "8:0 default" > cgroup-example-interface-file
784    # cat cgroup-example-interface-file
785    default 125
786    8:16 170
787
788- For events which are not very high frequency, an interface file
789  "events" should be created which lists event key value pairs.
790  Whenever a notifiable event happens, file modified event should be
791  generated on the file.
792
793
794Core Interface Files
795--------------------
796
797All cgroup core files are prefixed with "cgroup."
798
799  cgroup.type
800	A read-write single value file which exists on non-root
801	cgroups.
802
803	When read, it indicates the current type of the cgroup, which
804	can be one of the following values.
805
806	- "domain" : A normal valid domain cgroup.
807
808	- "domain threaded" : A threaded domain cgroup which is
809          serving as the root of a threaded subtree.
810
811	- "domain invalid" : A cgroup which is in an invalid state.
812	  It can't be populated or have controllers enabled.  It may
813	  be allowed to become a threaded cgroup.
814
815	- "threaded" : A threaded cgroup which is a member of a
816          threaded subtree.
817
818	A cgroup can be turned into a threaded cgroup by writing
819	"threaded" to this file.
820
821  cgroup.procs
822	A read-write new-line separated values file which exists on
823	all cgroups.
824
825	When read, it lists the PIDs of all processes which belong to
826	the cgroup one-per-line.  The PIDs are not ordered and the
827	same PID may show up more than once if the process got moved
828	to another cgroup and then back or the PID got recycled while
829	reading.
830
831	A PID can be written to migrate the process associated with
832	the PID to the cgroup.  The writer should match all of the
833	following conditions.
834
835	- It must have write access to the "cgroup.procs" file.
836
837	- It must have write access to the "cgroup.procs" file of the
838	  common ancestor of the source and destination cgroups.
839
840	When delegating a sub-hierarchy, write access to this file
841	should be granted along with the containing directory.
842
843	In a threaded cgroup, reading this file fails with EOPNOTSUPP
844	as all the processes belong to the thread root.  Writing is
845	supported and moves every thread of the process to the cgroup.
846
847  cgroup.threads
848	A read-write new-line separated values file which exists on
849	all cgroups.
850
851	When read, it lists the TIDs of all threads which belong to
852	the cgroup one-per-line.  The TIDs are not ordered and the
853	same TID may show up more than once if the thread got moved to
854	another cgroup and then back or the TID got recycled while
855	reading.
856
857	A TID can be written to migrate the thread associated with the
858	TID to the cgroup.  The writer should match all of the
859	following conditions.
860
861	- It must have write access to the "cgroup.threads" file.
862
863	- The cgroup that the thread is currently in must be in the
864          same resource domain as the destination cgroup.
865
866	- It must have write access to the "cgroup.procs" file of the
867	  common ancestor of the source and destination cgroups.
868
869	When delegating a sub-hierarchy, write access to this file
870	should be granted along with the containing directory.
871
872  cgroup.controllers
873	A read-only space separated values file which exists on all
874	cgroups.
875
876	It shows space separated list of all controllers available to
877	the cgroup.  The controllers are not ordered.
878
879  cgroup.subtree_control
880	A read-write space separated values file which exists on all
881	cgroups.  Starts out empty.
882
883	When read, it shows space separated list of the controllers
884	which are enabled to control resource distribution from the
885	cgroup to its children.
886
887	Space separated list of controllers prefixed with '+' or '-'
888	can be written to enable or disable controllers.  A controller
889	name prefixed with '+' enables the controller and '-'
890	disables.  If a controller appears more than once on the list,
891	the last one is effective.  When multiple enable and disable
892	operations are specified, either all succeed or all fail.
893
894  cgroup.events
895	A read-only flat-keyed file which exists on non-root cgroups.
896	The following entries are defined.  Unless specified
897	otherwise, a value change in this file generates a file
898	modified event.
899
900	  populated
901		1 if the cgroup or its descendants contains any live
902		processes; otherwise, 0.
903	  frozen
904		1 if the cgroup is frozen; otherwise, 0.
905
906  cgroup.max.descendants
907	A read-write single value files.  The default is "max".
908
909	Maximum allowed number of descent cgroups.
910	If the actual number of descendants is equal or larger,
911	an attempt to create a new cgroup in the hierarchy will fail.
912
913  cgroup.max.depth
914	A read-write single value files.  The default is "max".
915
916	Maximum allowed descent depth below the current cgroup.
917	If the actual descent depth is equal or larger,
918	an attempt to create a new child cgroup will fail.
919
920  cgroup.stat
921	A read-only flat-keyed file with the following entries:
922
923	  nr_descendants
924		Total number of visible descendant cgroups.
925
926	  nr_dying_descendants
927		Total number of dying descendant cgroups. A cgroup becomes
928		dying after being deleted by a user. The cgroup will remain
929		in dying state for some time undefined time (which can depend
930		on system load) before being completely destroyed.
931
932		A process can't enter a dying cgroup under any circumstances,
933		a dying cgroup can't revive.
934
935		A dying cgroup can consume system resources not exceeding
936		limits, which were active at the moment of cgroup deletion.
937
938  cgroup.freeze
939	A read-write single value file which exists on non-root cgroups.
940	Allowed values are "0" and "1". The default is "0".
941
942	Writing "1" to the file causes freezing of the cgroup and all
943	descendant cgroups. This means that all belonging processes will
944	be stopped and will not run until the cgroup will be explicitly
945	unfrozen. Freezing of the cgroup may take some time; when this action
946	is completed, the "frozen" value in the cgroup.events control file
947	will be updated to "1" and the corresponding notification will be
948	issued.
949
950	A cgroup can be frozen either by its own settings, or by settings
951	of any ancestor cgroups. If any of ancestor cgroups is frozen, the
952	cgroup will remain frozen.
953
954	Processes in the frozen cgroup can be killed by a fatal signal.
955	They also can enter and leave a frozen cgroup: either by an explicit
956	move by a user, or if freezing of the cgroup races with fork().
957	If a process is moved to a frozen cgroup, it stops. If a process is
958	moved out of a frozen cgroup, it becomes running.
959
960	Frozen status of a cgroup doesn't affect any cgroup tree operations:
961	it's possible to delete a frozen (and empty) cgroup, as well as
962	create new sub-cgroups.
963
964  cgroup.kill
965	A write-only single value file which exists in non-root cgroups.
966	The only allowed value is "1".
967
968	Writing "1" to the file causes the cgroup and all descendant cgroups to
969	be killed. This means that all processes located in the affected cgroup
970	tree will be killed via SIGKILL.
971
972	Killing a cgroup tree will deal with concurrent forks appropriately and
973	is protected against migrations.
974
975	In a threaded cgroup, writing this file fails with EOPNOTSUPP as
976	killing cgroups is a process directed operation, i.e. it affects
977	the whole thread-group.
978
979Controllers
980===========
981
982.. _cgroup-v2-cpu:
983
984CPU
985---
986
987The "cpu" controllers regulates distribution of CPU cycles.  This
988controller implements weight and absolute bandwidth limit models for
989normal scheduling policy and absolute bandwidth allocation model for
990realtime scheduling policy.
991
992In all the above models, cycles distribution is defined only on a temporal
993base and it does not account for the frequency at which tasks are executed.
994The (optional) utilization clamping support allows to hint the schedutil
995cpufreq governor about the minimum desired frequency which should always be
996provided by a CPU, as well as the maximum desired frequency, which should not
997be exceeded by a CPU.
998
999WARNING: cgroup2 doesn't yet support control of realtime processes and
1000the cpu controller can only be enabled when all RT processes are in
1001the root cgroup.  Be aware that system management software may already
1002have placed RT processes into nonroot cgroups during the system boot
1003process, and these processes may need to be moved to the root cgroup
1004before the cpu controller can be enabled.
1005
1006
1007CPU Interface Files
1008~~~~~~~~~~~~~~~~~~~
1009
1010All time durations are in microseconds.
1011
1012  cpu.stat
1013	A read-only flat-keyed file.
1014	This file exists whether the controller is enabled or not.
1015
1016	It always reports the following three stats:
1017
1018	- usage_usec
1019	- user_usec
1020	- system_usec
1021
1022	and the following three when the controller is enabled:
1023
1024	- nr_periods
1025	- nr_throttled
1026	- throttled_usec
1027	- nr_bursts
1028	- burst_usec
1029
1030  cpu.weight
1031	A read-write single value file which exists on non-root
1032	cgroups.  The default is "100".
1033
1034	The weight in the range [1, 10000].
1035
1036  cpu.weight.nice
1037	A read-write single value file which exists on non-root
1038	cgroups.  The default is "0".
1039
1040	The nice value is in the range [-20, 19].
1041
1042	This interface file is an alternative interface for
1043	"cpu.weight" and allows reading and setting weight using the
1044	same values used by nice(2).  Because the range is smaller and
1045	granularity is coarser for the nice values, the read value is
1046	the closest approximation of the current weight.
1047
1048  cpu.max
1049	A read-write two value file which exists on non-root cgroups.
1050	The default is "max 100000".
1051
1052	The maximum bandwidth limit.  It's in the following format::
1053
1054	  $MAX $PERIOD
1055
1056	which indicates that the group may consume upto $MAX in each
1057	$PERIOD duration.  "max" for $MAX indicates no limit.  If only
1058	one number is written, $MAX is updated.
1059
1060  cpu.max.burst
1061	A read-write single value file which exists on non-root
1062	cgroups.  The default is "0".
1063
1064	The burst in the range [0, $MAX].
1065
1066  cpu.pressure
1067	A read-write nested-keyed file.
1068
1069	Shows pressure stall information for CPU. See
1070	:ref:`Documentation/accounting/psi.rst <psi>` for details.
1071
1072  cpu.uclamp.min
1073        A read-write single value file which exists on non-root cgroups.
1074        The default is "0", i.e. no utilization boosting.
1075
1076        The requested minimum utilization (protection) as a percentage
1077        rational number, e.g. 12.34 for 12.34%.
1078
1079        This interface allows reading and setting minimum utilization clamp
1080        values similar to the sched_setattr(2). This minimum utilization
1081        value is used to clamp the task specific minimum utilization clamp.
1082
1083        The requested minimum utilization (protection) is always capped by
1084        the current value for the maximum utilization (limit), i.e.
1085        `cpu.uclamp.max`.
1086
1087  cpu.uclamp.max
1088        A read-write single value file which exists on non-root cgroups.
1089        The default is "max". i.e. no utilization capping
1090
1091        The requested maximum utilization (limit) as a percentage rational
1092        number, e.g. 98.76 for 98.76%.
1093
1094        This interface allows reading and setting maximum utilization clamp
1095        values similar to the sched_setattr(2). This maximum utilization
1096        value is used to clamp the task specific maximum utilization clamp.
1097
1098
1099
1100Memory
1101------
1102
1103The "memory" controller regulates distribution of memory.  Memory is
1104stateful and implements both limit and protection models.  Due to the
1105intertwining between memory usage and reclaim pressure and the
1106stateful nature of memory, the distribution model is relatively
1107complex.
1108
1109While not completely water-tight, all major memory usages by a given
1110cgroup are tracked so that the total memory consumption can be
1111accounted and controlled to a reasonable extent.  Currently, the
1112following types of memory usages are tracked.
1113
1114- Userland memory - page cache and anonymous memory.
1115
1116- Kernel data structures such as dentries and inodes.
1117
1118- TCP socket buffers.
1119
1120The above list may expand in the future for better coverage.
1121
1122
1123Memory Interface Files
1124~~~~~~~~~~~~~~~~~~~~~~
1125
1126All memory amounts are in bytes.  If a value which is not aligned to
1127PAGE_SIZE is written, the value may be rounded up to the closest
1128PAGE_SIZE multiple when read back.
1129
1130  memory.current
1131	A read-only single value file which exists on non-root
1132	cgroups.
1133
1134	The total amount of memory currently being used by the cgroup
1135	and its descendants.
1136
1137  memory.min
1138	A read-write single value file which exists on non-root
1139	cgroups.  The default is "0".
1140
1141	Hard memory protection.  If the memory usage of a cgroup
1142	is within its effective min boundary, the cgroup's memory
1143	won't be reclaimed under any conditions. If there is no
1144	unprotected reclaimable memory available, OOM killer
1145	is invoked. Above the effective min boundary (or
1146	effective low boundary if it is higher), pages are reclaimed
1147	proportionally to the overage, reducing reclaim pressure for
1148	smaller overages.
1149
1150	Effective min boundary is limited by memory.min values of
1151	all ancestor cgroups. If there is memory.min overcommitment
1152	(child cgroup or cgroups are requiring more protected memory
1153	than parent will allow), then each child cgroup will get
1154	the part of parent's protection proportional to its
1155	actual memory usage below memory.min.
1156
1157	Putting more memory than generally available under this
1158	protection is discouraged and may lead to constant OOMs.
1159
1160	If a memory cgroup is not populated with processes,
1161	its memory.min is ignored.
1162
1163  memory.low
1164	A read-write single value file which exists on non-root
1165	cgroups.  The default is "0".
1166
1167	Best-effort memory protection.  If the memory usage of a
1168	cgroup is within its effective low boundary, the cgroup's
1169	memory won't be reclaimed unless there is no reclaimable
1170	memory available in unprotected cgroups.
1171	Above the effective low	boundary (or
1172	effective min boundary if it is higher), pages are reclaimed
1173	proportionally to the overage, reducing reclaim pressure for
1174	smaller overages.
1175
1176	Effective low boundary is limited by memory.low values of
1177	all ancestor cgroups. If there is memory.low overcommitment
1178	(child cgroup or cgroups are requiring more protected memory
1179	than parent will allow), then each child cgroup will get
1180	the part of parent's protection proportional to its
1181	actual memory usage below memory.low.
1182
1183	Putting more memory than generally available under this
1184	protection is discouraged.
1185
1186  memory.high
1187	A read-write single value file which exists on non-root
1188	cgroups.  The default is "max".
1189
1190	Memory usage throttle limit.  This is the main mechanism to
1191	control memory usage of a cgroup.  If a cgroup's usage goes
1192	over the high boundary, the processes of the cgroup are
1193	throttled and put under heavy reclaim pressure.
1194
1195	Going over the high limit never invokes the OOM killer and
1196	under extreme conditions the limit may be breached.
1197
1198  memory.max
1199	A read-write single value file which exists on non-root
1200	cgroups.  The default is "max".
1201
1202	Memory usage hard limit.  This is the final protection
1203	mechanism.  If a cgroup's memory usage reaches this limit and
1204	can't be reduced, the OOM killer is invoked in the cgroup.
1205	Under certain circumstances, the usage may go over the limit
1206	temporarily.
1207
1208	In default configuration regular 0-order allocations always
1209	succeed unless OOM killer chooses current task as a victim.
1210
1211	Some kinds of allocations don't invoke the OOM killer.
1212	Caller could retry them differently, return into userspace
1213	as -ENOMEM or silently ignore in cases like disk readahead.
1214
1215	This is the ultimate protection mechanism.  As long as the
1216	high limit is used and monitored properly, this limit's
1217	utility is limited to providing the final safety net.
1218
1219  memory.reclaim
1220	A write-only nested-keyed file which exists for all cgroups.
1221
1222	This is a simple interface to trigger memory reclaim in the
1223	target cgroup.
1224
1225	This file accepts a single key, the number of bytes to reclaim.
1226	No nested keys are currently supported.
1227
1228	Example::
1229
1230	  echo "1G" > memory.reclaim
1231
1232	The interface can be later extended with nested keys to
1233	configure the reclaim behavior. For example, specify the
1234	type of memory to reclaim from (anon, file, ..).
1235
1236	Please note that the kernel can over or under reclaim from
1237	the target cgroup. If less bytes are reclaimed than the
1238	specified amount, -EAGAIN is returned.
1239
1240	Please note that the proactive reclaim (triggered by this
1241	interface) is not meant to indicate memory pressure on the
1242	memory cgroup. Therefore socket memory balancing triggered by
1243	the memory reclaim normally is not exercised in this case.
1244	This means that the networking layer will not adapt based on
1245	reclaim induced by memory.reclaim.
1246
1247  memory.peak
1248	A read-only single value file which exists on non-root
1249	cgroups.
1250
1251	The max memory usage recorded for the cgroup and its
1252	descendants since the creation of the cgroup.
1253
1254  memory.oom.group
1255	A read-write single value file which exists on non-root
1256	cgroups.  The default value is "0".
1257
1258	Determines whether the cgroup should be treated as
1259	an indivisible workload by the OOM killer. If set,
1260	all tasks belonging to the cgroup or to its descendants
1261	(if the memory cgroup is not a leaf cgroup) are killed
1262	together or not at all. This can be used to avoid
1263	partial kills to guarantee workload integrity.
1264
1265	Tasks with the OOM protection (oom_score_adj set to -1000)
1266	are treated as an exception and are never killed.
1267
1268	If the OOM killer is invoked in a cgroup, it's not going
1269	to kill any tasks outside of this cgroup, regardless
1270	memory.oom.group values of ancestor cgroups.
1271
1272  memory.events
1273	A read-only flat-keyed file which exists on non-root cgroups.
1274	The following entries are defined.  Unless specified
1275	otherwise, a value change in this file generates a file
1276	modified event.
1277
1278	Note that all fields in this file are hierarchical and the
1279	file modified event can be generated due to an event down the
1280	hierarchy. For the local events at the cgroup level see
1281	memory.events.local.
1282
1283	  low
1284		The number of times the cgroup is reclaimed due to
1285		high memory pressure even though its usage is under
1286		the low boundary.  This usually indicates that the low
1287		boundary is over-committed.
1288
1289	  high
1290		The number of times processes of the cgroup are
1291		throttled and routed to perform direct memory reclaim
1292		because the high memory boundary was exceeded.  For a
1293		cgroup whose memory usage is capped by the high limit
1294		rather than global memory pressure, this event's
1295		occurrences are expected.
1296
1297	  max
1298		The number of times the cgroup's memory usage was
1299		about to go over the max boundary.  If direct reclaim
1300		fails to bring it down, the cgroup goes to OOM state.
1301
1302	  oom
1303		The number of time the cgroup's memory usage was
1304		reached the limit and allocation was about to fail.
1305
1306		This event is not raised if the OOM killer is not
1307		considered as an option, e.g. for failed high-order
1308		allocations or if caller asked to not retry attempts.
1309
1310	  oom_kill
1311		The number of processes belonging to this cgroup
1312		killed by any kind of OOM killer.
1313
1314          oom_group_kill
1315                The number of times a group OOM has occurred.
1316
1317  memory.events.local
1318	Similar to memory.events but the fields in the file are local
1319	to the cgroup i.e. not hierarchical. The file modified event
1320	generated on this file reflects only the local events.
1321
1322  memory.stat
1323	A read-only flat-keyed file which exists on non-root cgroups.
1324
1325	This breaks down the cgroup's memory footprint into different
1326	types of memory, type-specific details, and other information
1327	on the state and past events of the memory management system.
1328
1329	All memory amounts are in bytes.
1330
1331	The entries are ordered to be human readable, and new entries
1332	can show up in the middle. Don't rely on items remaining in a
1333	fixed position; use the keys to look up specific values!
1334
1335	If the entry has no per-node counter (or not show in the
1336	memory.numa_stat). We use 'npn' (non-per-node) as the tag
1337	to indicate that it will not show in the memory.numa_stat.
1338
1339	  anon
1340		Amount of memory used in anonymous mappings such as
1341		brk(), sbrk(), and mmap(MAP_ANONYMOUS)
1342
1343	  file
1344		Amount of memory used to cache filesystem data,
1345		including tmpfs and shared memory.
1346
1347	  kernel (npn)
1348		Amount of total kernel memory, including
1349		(kernel_stack, pagetables, percpu, vmalloc, slab) in
1350		addition to other kernel memory use cases.
1351
1352	  kernel_stack
1353		Amount of memory allocated to kernel stacks.
1354
1355	  pagetables
1356                Amount of memory allocated for page tables.
1357
1358	  sec_pagetables
1359		Amount of memory allocated for secondary page tables,
1360		this currently includes KVM mmu allocations on x86
1361		and arm64.
1362
1363	  percpu (npn)
1364		Amount of memory used for storing per-cpu kernel
1365		data structures.
1366
1367	  sock (npn)
1368		Amount of memory used in network transmission buffers
1369
1370	  vmalloc (npn)
1371		Amount of memory used for vmap backed memory.
1372
1373	  shmem
1374		Amount of cached filesystem data that is swap-backed,
1375		such as tmpfs, shm segments, shared anonymous mmap()s
1376
1377	  zswap
1378		Amount of memory consumed by the zswap compression backend.
1379
1380	  zswapped
1381		Amount of application memory swapped out to zswap.
1382
1383	  file_mapped
1384		Amount of cached filesystem data mapped with mmap()
1385
1386	  file_dirty
1387		Amount of cached filesystem data that was modified but
1388		not yet written back to disk
1389
1390	  file_writeback
1391		Amount of cached filesystem data that was modified and
1392		is currently being written back to disk
1393
1394	  swapcached
1395		Amount of swap cached in memory. The swapcache is accounted
1396		against both memory and swap usage.
1397
1398	  anon_thp
1399		Amount of memory used in anonymous mappings backed by
1400		transparent hugepages
1401
1402	  file_thp
1403		Amount of cached filesystem data backed by transparent
1404		hugepages
1405
1406	  shmem_thp
1407		Amount of shm, tmpfs, shared anonymous mmap()s backed by
1408		transparent hugepages
1409
1410	  inactive_anon, active_anon, inactive_file, active_file, unevictable
1411		Amount of memory, swap-backed and filesystem-backed,
1412		on the internal memory management lists used by the
1413		page reclaim algorithm.
1414
1415		As these represent internal list state (eg. shmem pages are on anon
1416		memory management lists), inactive_foo + active_foo may not be equal to
1417		the value for the foo counter, since the foo counter is type-based, not
1418		list-based.
1419
1420	  slab_reclaimable
1421		Part of "slab" that might be reclaimed, such as
1422		dentries and inodes.
1423
1424	  slab_unreclaimable
1425		Part of "slab" that cannot be reclaimed on memory
1426		pressure.
1427
1428	  slab (npn)
1429		Amount of memory used for storing in-kernel data
1430		structures.
1431
1432	  workingset_refault_anon
1433		Number of refaults of previously evicted anonymous pages.
1434
1435	  workingset_refault_file
1436		Number of refaults of previously evicted file pages.
1437
1438	  workingset_activate_anon
1439		Number of refaulted anonymous pages that were immediately
1440		activated.
1441
1442	  workingset_activate_file
1443		Number of refaulted file pages that were immediately activated.
1444
1445	  workingset_restore_anon
1446		Number of restored anonymous pages which have been detected as
1447		an active workingset before they got reclaimed.
1448
1449	  workingset_restore_file
1450		Number of restored file pages which have been detected as an
1451		active workingset before they got reclaimed.
1452
1453	  workingset_nodereclaim
1454		Number of times a shadow node has been reclaimed
1455
1456	  pgscan (npn)
1457		Amount of scanned pages (in an inactive LRU list)
1458
1459	  pgsteal (npn)
1460		Amount of reclaimed pages
1461
1462	  pgscan_kswapd (npn)
1463		Amount of scanned pages by kswapd (in an inactive LRU list)
1464
1465	  pgscan_direct (npn)
1466		Amount of scanned pages directly  (in an inactive LRU list)
1467
1468	  pgsteal_kswapd (npn)
1469		Amount of reclaimed pages by kswapd
1470
1471	  pgsteal_direct (npn)
1472		Amount of reclaimed pages directly
1473
1474	  pgfault (npn)
1475		Total number of page faults incurred
1476
1477	  pgmajfault (npn)
1478		Number of major page faults incurred
1479
1480	  pgrefill (npn)
1481		Amount of scanned pages (in an active LRU list)
1482
1483	  pgactivate (npn)
1484		Amount of pages moved to the active LRU list
1485
1486	  pgdeactivate (npn)
1487		Amount of pages moved to the inactive LRU list
1488
1489	  pglazyfree (npn)
1490		Amount of pages postponed to be freed under memory pressure
1491
1492	  pglazyfreed (npn)
1493		Amount of reclaimed lazyfree pages
1494
1495	  thp_fault_alloc (npn)
1496		Number of transparent hugepages which were allocated to satisfy
1497		a page fault. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE
1498                is not set.
1499
1500	  thp_collapse_alloc (npn)
1501		Number of transparent hugepages which were allocated to allow
1502		collapsing an existing range of pages. This counter is not
1503		present when CONFIG_TRANSPARENT_HUGEPAGE is not set.
1504
1505  memory.numa_stat
1506	A read-only nested-keyed file which exists on non-root cgroups.
1507
1508	This breaks down the cgroup's memory footprint into different
1509	types of memory, type-specific details, and other information
1510	per node on the state of the memory management system.
1511
1512	This is useful for providing visibility into the NUMA locality
1513	information within an memcg since the pages are allowed to be
1514	allocated from any physical node. One of the use case is evaluating
1515	application performance by combining this information with the
1516	application's CPU allocation.
1517
1518	All memory amounts are in bytes.
1519
1520	The output format of memory.numa_stat is::
1521
1522	  type N0=<bytes in node 0> N1=<bytes in node 1> ...
1523
1524	The entries are ordered to be human readable, and new entries
1525	can show up in the middle. Don't rely on items remaining in a
1526	fixed position; use the keys to look up specific values!
1527
1528	The entries can refer to the memory.stat.
1529
1530  memory.swap.current
1531	A read-only single value file which exists on non-root
1532	cgroups.
1533
1534	The total amount of swap currently being used by the cgroup
1535	and its descendants.
1536
1537  memory.swap.high
1538	A read-write single value file which exists on non-root
1539	cgroups.  The default is "max".
1540
1541	Swap usage throttle limit.  If a cgroup's swap usage exceeds
1542	this limit, all its further allocations will be throttled to
1543	allow userspace to implement custom out-of-memory procedures.
1544
1545	This limit marks a point of no return for the cgroup. It is NOT
1546	designed to manage the amount of swapping a workload does
1547	during regular operation. Compare to memory.swap.max, which
1548	prohibits swapping past a set amount, but lets the cgroup
1549	continue unimpeded as long as other memory can be reclaimed.
1550
1551	Healthy workloads are not expected to reach this limit.
1552
1553  memory.swap.max
1554	A read-write single value file which exists on non-root
1555	cgroups.  The default is "max".
1556
1557	Swap usage hard limit.  If a cgroup's swap usage reaches this
1558	limit, anonymous memory of the cgroup will not be swapped out.
1559
1560  memory.swap.events
1561	A read-only flat-keyed file which exists on non-root cgroups.
1562	The following entries are defined.  Unless specified
1563	otherwise, a value change in this file generates a file
1564	modified event.
1565
1566	  high
1567		The number of times the cgroup's swap usage was over
1568		the high threshold.
1569
1570	  max
1571		The number of times the cgroup's swap usage was about
1572		to go over the max boundary and swap allocation
1573		failed.
1574
1575	  fail
1576		The number of times swap allocation failed either
1577		because of running out of swap system-wide or max
1578		limit.
1579
1580	When reduced under the current usage, the existing swap
1581	entries are reclaimed gradually and the swap usage may stay
1582	higher than the limit for an extended period of time.  This
1583	reduces the impact on the workload and memory management.
1584
1585  memory.zswap.current
1586	A read-only single value file which exists on non-root
1587	cgroups.
1588
1589	The total amount of memory consumed by the zswap compression
1590	backend.
1591
1592  memory.zswap.max
1593	A read-write single value file which exists on non-root
1594	cgroups.  The default is "max".
1595
1596	Zswap usage hard limit. If a cgroup's zswap pool reaches this
1597	limit, it will refuse to take any more stores before existing
1598	entries fault back in or are written out to disk.
1599
1600  memory.pressure
1601	A read-only nested-keyed file.
1602
1603	Shows pressure stall information for memory. See
1604	:ref:`Documentation/accounting/psi.rst <psi>` for details.
1605
1606
1607Usage Guidelines
1608~~~~~~~~~~~~~~~~
1609
1610"memory.high" is the main mechanism to control memory usage.
1611Over-committing on high limit (sum of high limits > available memory)
1612and letting global memory pressure to distribute memory according to
1613usage is a viable strategy.
1614
1615Because breach of the high limit doesn't trigger the OOM killer but
1616throttles the offending cgroup, a management agent has ample
1617opportunities to monitor and take appropriate actions such as granting
1618more memory or terminating the workload.
1619
1620Determining whether a cgroup has enough memory is not trivial as
1621memory usage doesn't indicate whether the workload can benefit from
1622more memory.  For example, a workload which writes data received from
1623network to a file can use all available memory but can also operate as
1624performant with a small amount of memory.  A measure of memory
1625pressure - how much the workload is being impacted due to lack of
1626memory - is necessary to determine whether a workload needs more
1627memory; unfortunately, memory pressure monitoring mechanism isn't
1628implemented yet.
1629
1630
1631Memory Ownership
1632~~~~~~~~~~~~~~~~
1633
1634A memory area is charged to the cgroup which instantiated it and stays
1635charged to the cgroup until the area is released.  Migrating a process
1636to a different cgroup doesn't move the memory usages that it
1637instantiated while in the previous cgroup to the new cgroup.
1638
1639A memory area may be used by processes belonging to different cgroups.
1640To which cgroup the area will be charged is in-deterministic; however,
1641over time, the memory area is likely to end up in a cgroup which has
1642enough memory allowance to avoid high reclaim pressure.
1643
1644If a cgroup sweeps a considerable amount of memory which is expected
1645to be accessed repeatedly by other cgroups, it may make sense to use
1646POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
1647belonging to the affected files to ensure correct memory ownership.
1648
1649
1650IO
1651--
1652
1653The "io" controller regulates the distribution of IO resources.  This
1654controller implements both weight based and absolute bandwidth or IOPS
1655limit distribution; however, weight based distribution is available
1656only if cfq-iosched is in use and neither scheme is available for
1657blk-mq devices.
1658
1659
1660IO Interface Files
1661~~~~~~~~~~~~~~~~~~
1662
1663  io.stat
1664	A read-only nested-keyed file.
1665
1666	Lines are keyed by $MAJ:$MIN device numbers and not ordered.
1667	The following nested keys are defined.
1668
1669	  ======	=====================
1670	  rbytes	Bytes read
1671	  wbytes	Bytes written
1672	  rios		Number of read IOs
1673	  wios		Number of write IOs
1674	  dbytes	Bytes discarded
1675	  dios		Number of discard IOs
1676	  ======	=====================
1677
1678	An example read output follows::
1679
1680	  8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
1681	  8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
1682
1683  io.cost.qos
1684	A read-write nested-keyed file which exists only on the root
1685	cgroup.
1686
1687	This file configures the Quality of Service of the IO cost
1688	model based controller (CONFIG_BLK_CGROUP_IOCOST) which
1689	currently implements "io.weight" proportional control.  Lines
1690	are keyed by $MAJ:$MIN device numbers and not ordered.  The
1691	line for a given device is populated on the first write for
1692	the device on "io.cost.qos" or "io.cost.model".  The following
1693	nested keys are defined.
1694
1695	  ======	=====================================
1696	  enable	Weight-based control enable
1697	  ctrl		"auto" or "user"
1698	  rpct		Read latency percentile    [0, 100]
1699	  rlat		Read latency threshold
1700	  wpct		Write latency percentile   [0, 100]
1701	  wlat		Write latency threshold
1702	  min		Minimum scaling percentage [1, 10000]
1703	  max		Maximum scaling percentage [1, 10000]
1704	  ======	=====================================
1705
1706	The controller is disabled by default and can be enabled by
1707	setting "enable" to 1.  "rpct" and "wpct" parameters default
1708	to zero and the controller uses internal device saturation
1709	state to adjust the overall IO rate between "min" and "max".
1710
1711	When a better control quality is needed, latency QoS
1712	parameters can be configured.  For example::
1713
1714	  8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.0
1715
1716	shows that on sdb, the controller is enabled, will consider
1717	the device saturated if the 95th percentile of read completion
1718	latencies is above 75ms or write 150ms, and adjust the overall
1719	IO issue rate between 50% and 150% accordingly.
1720
1721	The lower the saturation point, the better the latency QoS at
1722	the cost of aggregate bandwidth.  The narrower the allowed
1723	adjustment range between "min" and "max", the more conformant
1724	to the cost model the IO behavior.  Note that the IO issue
1725	base rate may be far off from 100% and setting "min" and "max"
1726	blindly can lead to a significant loss of device capacity or
1727	control quality.  "min" and "max" are useful for regulating
1728	devices which show wide temporary behavior changes - e.g. a
1729	ssd which accepts writes at the line speed for a while and
1730	then completely stalls for multiple seconds.
1731
1732	When "ctrl" is "auto", the parameters are controlled by the
1733	kernel and may change automatically.  Setting "ctrl" to "user"
1734	or setting any of the percentile and latency parameters puts
1735	it into "user" mode and disables the automatic changes.  The
1736	automatic mode can be restored by setting "ctrl" to "auto".
1737
1738  io.cost.model
1739	A read-write nested-keyed file which exists only on the root
1740	cgroup.
1741
1742	This file configures the cost model of the IO cost model based
1743	controller (CONFIG_BLK_CGROUP_IOCOST) which currently
1744	implements "io.weight" proportional control.  Lines are keyed
1745	by $MAJ:$MIN device numbers and not ordered.  The line for a
1746	given device is populated on the first write for the device on
1747	"io.cost.qos" or "io.cost.model".  The following nested keys
1748	are defined.
1749
1750	  =====		================================
1751	  ctrl		"auto" or "user"
1752	  model		The cost model in use - "linear"
1753	  =====		================================
1754
1755	When "ctrl" is "auto", the kernel may change all parameters
1756	dynamically.  When "ctrl" is set to "user" or any other
1757	parameters are written to, "ctrl" become "user" and the
1758	automatic changes are disabled.
1759
1760	When "model" is "linear", the following model parameters are
1761	defined.
1762
1763	  =============	========================================
1764	  [r|w]bps	The maximum sequential IO throughput
1765	  [r|w]seqiops	The maximum 4k sequential IOs per second
1766	  [r|w]randiops	The maximum 4k random IOs per second
1767	  =============	========================================
1768
1769	From the above, the builtin linear model determines the base
1770	costs of a sequential and random IO and the cost coefficient
1771	for the IO size.  While simple, this model can cover most
1772	common device classes acceptably.
1773
1774	The IO cost model isn't expected to be accurate in absolute
1775	sense and is scaled to the device behavior dynamically.
1776
1777	If needed, tools/cgroup/iocost_coef_gen.py can be used to
1778	generate device-specific coefficients.
1779
1780  io.weight
1781	A read-write flat-keyed file which exists on non-root cgroups.
1782	The default is "default 100".
1783
1784	The first line is the default weight applied to devices
1785	without specific override.  The rest are overrides keyed by
1786	$MAJ:$MIN device numbers and not ordered.  The weights are in
1787	the range [1, 10000] and specifies the relative amount IO time
1788	the cgroup can use in relation to its siblings.
1789
1790	The default weight can be updated by writing either "default
1791	$WEIGHT" or simply "$WEIGHT".  Overrides can be set by writing
1792	"$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
1793
1794	An example read output follows::
1795
1796	  default 100
1797	  8:16 200
1798	  8:0 50
1799
1800  io.max
1801	A read-write nested-keyed file which exists on non-root
1802	cgroups.
1803
1804	BPS and IOPS based IO limit.  Lines are keyed by $MAJ:$MIN
1805	device numbers and not ordered.  The following nested keys are
1806	defined.
1807
1808	  =====		==================================
1809	  rbps		Max read bytes per second
1810	  wbps		Max write bytes per second
1811	  riops		Max read IO operations per second
1812	  wiops		Max write IO operations per second
1813	  =====		==================================
1814
1815	When writing, any number of nested key-value pairs can be
1816	specified in any order.  "max" can be specified as the value
1817	to remove a specific limit.  If the same key is specified
1818	multiple times, the outcome is undefined.
1819
1820	BPS and IOPS are measured in each IO direction and IOs are
1821	delayed if limit is reached.  Temporary bursts are allowed.
1822
1823	Setting read limit at 2M BPS and write at 120 IOPS for 8:16::
1824
1825	  echo "8:16 rbps=2097152 wiops=120" > io.max
1826
1827	Reading returns the following::
1828
1829	  8:16 rbps=2097152 wbps=max riops=max wiops=120
1830
1831	Write IOPS limit can be removed by writing the following::
1832
1833	  echo "8:16 wiops=max" > io.max
1834
1835	Reading now returns the following::
1836
1837	  8:16 rbps=2097152 wbps=max riops=max wiops=max
1838
1839  io.pressure
1840	A read-only nested-keyed file.
1841
1842	Shows pressure stall information for IO. See
1843	:ref:`Documentation/accounting/psi.rst <psi>` for details.
1844
1845
1846Writeback
1847~~~~~~~~~
1848
1849Page cache is dirtied through buffered writes and shared mmaps and
1850written asynchronously to the backing filesystem by the writeback
1851mechanism.  Writeback sits between the memory and IO domains and
1852regulates the proportion of dirty memory by balancing dirtying and
1853write IOs.
1854
1855The io controller, in conjunction with the memory controller,
1856implements control of page cache writeback IOs.  The memory controller
1857defines the memory domain that dirty memory ratio is calculated and
1858maintained for and the io controller defines the io domain which
1859writes out dirty pages for the memory domain.  Both system-wide and
1860per-cgroup dirty memory states are examined and the more restrictive
1861of the two is enforced.
1862
1863cgroup writeback requires explicit support from the underlying
1864filesystem.  Currently, cgroup writeback is implemented on ext2, ext4,
1865btrfs, f2fs, and xfs.  On other filesystems, all writeback IOs are
1866attributed to the root cgroup.
1867
1868There are inherent differences in memory and writeback management
1869which affects how cgroup ownership is tracked.  Memory is tracked per
1870page while writeback per inode.  For the purpose of writeback, an
1871inode is assigned to a cgroup and all IO requests to write dirty pages
1872from the inode are attributed to that cgroup.
1873
1874As cgroup ownership for memory is tracked per page, there can be pages
1875which are associated with different cgroups than the one the inode is
1876associated with.  These are called foreign pages.  The writeback
1877constantly keeps track of foreign pages and, if a particular foreign
1878cgroup becomes the majority over a certain period of time, switches
1879the ownership of the inode to that cgroup.
1880
1881While this model is enough for most use cases where a given inode is
1882mostly dirtied by a single cgroup even when the main writing cgroup
1883changes over time, use cases where multiple cgroups write to a single
1884inode simultaneously are not supported well.  In such circumstances, a
1885significant portion of IOs are likely to be attributed incorrectly.
1886As memory controller assigns page ownership on the first use and
1887doesn't update it until the page is released, even if writeback
1888strictly follows page ownership, multiple cgroups dirtying overlapping
1889areas wouldn't work as expected.  It's recommended to avoid such usage
1890patterns.
1891
1892The sysctl knobs which affect writeback behavior are applied to cgroup
1893writeback as follows.
1894
1895  vm.dirty_background_ratio, vm.dirty_ratio
1896	These ratios apply the same to cgroup writeback with the
1897	amount of available memory capped by limits imposed by the
1898	memory controller and system-wide clean memory.
1899
1900  vm.dirty_background_bytes, vm.dirty_bytes
1901	For cgroup writeback, this is calculated into ratio against
1902	total available memory and applied the same way as
1903	vm.dirty[_background]_ratio.
1904
1905
1906IO Latency
1907~~~~~~~~~~
1908
1909This is a cgroup v2 controller for IO workload protection.  You provide a group
1910with a latency target, and if the average latency exceeds that target the
1911controller will throttle any peers that have a lower latency target than the
1912protected workload.
1913
1914The limits are only applied at the peer level in the hierarchy.  This means that
1915in the diagram below, only groups A, B, and C will influence each other, and
1916groups D and F will influence each other.  Group G will influence nobody::
1917
1918			[root]
1919		/	   |		\
1920		A	   B		C
1921	       /  \        |
1922	      D    F	   G
1923
1924
1925So the ideal way to configure this is to set io.latency in groups A, B, and C.
1926Generally you do not want to set a value lower than the latency your device
1927supports.  Experiment to find the value that works best for your workload.
1928Start at higher than the expected latency for your device and watch the
1929avg_lat value in io.stat for your workload group to get an idea of the
1930latency you see during normal operation.  Use the avg_lat value as a basis for
1931your real setting, setting at 10-15% higher than the value in io.stat.
1932
1933How IO Latency Throttling Works
1934~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1935
1936io.latency is work conserving; so as long as everybody is meeting their latency
1937target the controller doesn't do anything.  Once a group starts missing its
1938target it begins throttling any peer group that has a higher target than itself.
1939This throttling takes 2 forms:
1940
1941- Queue depth throttling.  This is the number of outstanding IO's a group is
1942  allowed to have.  We will clamp down relatively quickly, starting at no limit
1943  and going all the way down to 1 IO at a time.
1944
1945- Artificial delay induction.  There are certain types of IO that cannot be
1946  throttled without possibly adversely affecting higher priority groups.  This
1947  includes swapping and metadata IO.  These types of IO are allowed to occur
1948  normally, however they are "charged" to the originating group.  If the
1949  originating group is being throttled you will see the use_delay and delay
1950  fields in io.stat increase.  The delay value is how many microseconds that are
1951  being added to any process that runs in this group.  Because this number can
1952  grow quite large if there is a lot of swapping or metadata IO occurring we
1953  limit the individual delay events to 1 second at a time.
1954
1955Once the victimized group starts meeting its latency target again it will start
1956unthrottling any peer groups that were throttled previously.  If the victimized
1957group simply stops doing IO the global counter will unthrottle appropriately.
1958
1959IO Latency Interface Files
1960~~~~~~~~~~~~~~~~~~~~~~~~~~
1961
1962  io.latency
1963	This takes a similar format as the other controllers.
1964
1965		"MAJOR:MINOR target=<target time in microseconds>"
1966
1967  io.stat
1968	If the controller is enabled you will see extra stats in io.stat in
1969	addition to the normal ones.
1970
1971	  depth
1972		This is the current queue depth for the group.
1973
1974	  avg_lat
1975		This is an exponential moving average with a decay rate of 1/exp
1976		bound by the sampling interval.  The decay rate interval can be
1977		calculated by multiplying the win value in io.stat by the
1978		corresponding number of samples based on the win value.
1979
1980	  win
1981		The sampling window size in milliseconds.  This is the minimum
1982		duration of time between evaluation events.  Windows only elapse
1983		with IO activity.  Idle periods extend the most recent window.
1984
1985IO Priority
1986~~~~~~~~~~~
1987
1988A single attribute controls the behavior of the I/O priority cgroup policy,
1989namely the blkio.prio.class attribute. The following values are accepted for
1990that attribute:
1991
1992  no-change
1993	Do not modify the I/O priority class.
1994
1995  none-to-rt
1996	For requests that do not have an I/O priority class (NONE),
1997	change the I/O priority class into RT. Do not modify
1998	the I/O priority class of other requests.
1999
2000  restrict-to-be
2001	For requests that do not have an I/O priority class or that have I/O
2002	priority class RT, change it into BE. Do not modify the I/O priority
2003	class of requests that have priority class IDLE.
2004
2005  idle
2006	Change the I/O priority class of all requests into IDLE, the lowest
2007	I/O priority class.
2008
2009The following numerical values are associated with the I/O priority policies:
2010
2011+-------------+---+
2012| no-change   | 0 |
2013+-------------+---+
2014| none-to-rt  | 1 |
2015+-------------+---+
2016| rt-to-be    | 2 |
2017+-------------+---+
2018| all-to-idle | 3 |
2019+-------------+---+
2020
2021The numerical value that corresponds to each I/O priority class is as follows:
2022
2023+-------------------------------+---+
2024| IOPRIO_CLASS_NONE             | 0 |
2025+-------------------------------+---+
2026| IOPRIO_CLASS_RT (real-time)   | 1 |
2027+-------------------------------+---+
2028| IOPRIO_CLASS_BE (best effort) | 2 |
2029+-------------------------------+---+
2030| IOPRIO_CLASS_IDLE             | 3 |
2031+-------------------------------+---+
2032
2033The algorithm to set the I/O priority class for a request is as follows:
2034
2035- Translate the I/O priority class policy into a number.
2036- Change the request I/O priority class into the maximum of the I/O priority
2037  class policy number and the numerical I/O priority class.
2038
2039PID
2040---
2041
2042The process number controller is used to allow a cgroup to stop any
2043new tasks from being fork()'d or clone()'d after a specified limit is
2044reached.
2045
2046The number of tasks in a cgroup can be exhausted in ways which other
2047controllers cannot prevent, thus warranting its own controller.  For
2048example, a fork bomb is likely to exhaust the number of tasks before
2049hitting memory restrictions.
2050
2051Note that PIDs used in this controller refer to TIDs, process IDs as
2052used by the kernel.
2053
2054
2055PID Interface Files
2056~~~~~~~~~~~~~~~~~~~
2057
2058  pids.max
2059	A read-write single value file which exists on non-root
2060	cgroups.  The default is "max".
2061
2062	Hard limit of number of processes.
2063
2064  pids.current
2065	A read-only single value file which exists on all cgroups.
2066
2067	The number of processes currently in the cgroup and its
2068	descendants.
2069
2070Organisational operations are not blocked by cgroup policies, so it is
2071possible to have pids.current > pids.max.  This can be done by either
2072setting the limit to be smaller than pids.current, or attaching enough
2073processes to the cgroup such that pids.current is larger than
2074pids.max.  However, it is not possible to violate a cgroup PID policy
2075through fork() or clone(). These will return -EAGAIN if the creation
2076of a new process would cause a cgroup policy to be violated.
2077
2078
2079Cpuset
2080------
2081
2082The "cpuset" controller provides a mechanism for constraining
2083the CPU and memory node placement of tasks to only the resources
2084specified in the cpuset interface files in a task's current cgroup.
2085This is especially valuable on large NUMA systems where placing jobs
2086on properly sized subsets of the systems with careful processor and
2087memory placement to reduce cross-node memory access and contention
2088can improve overall system performance.
2089
2090The "cpuset" controller is hierarchical.  That means the controller
2091cannot use CPUs or memory nodes not allowed in its parent.
2092
2093
2094Cpuset Interface Files
2095~~~~~~~~~~~~~~~~~~~~~~
2096
2097  cpuset.cpus
2098	A read-write multiple values file which exists on non-root
2099	cpuset-enabled cgroups.
2100
2101	It lists the requested CPUs to be used by tasks within this
2102	cgroup.  The actual list of CPUs to be granted, however, is
2103	subjected to constraints imposed by its parent and can differ
2104	from the requested CPUs.
2105
2106	The CPU numbers are comma-separated numbers or ranges.
2107	For example::
2108
2109	  # cat cpuset.cpus
2110	  0-4,6,8-10
2111
2112	An empty value indicates that the cgroup is using the same
2113	setting as the nearest cgroup ancestor with a non-empty
2114	"cpuset.cpus" or all the available CPUs if none is found.
2115
2116	The value of "cpuset.cpus" stays constant until the next update
2117	and won't be affected by any CPU hotplug events.
2118
2119  cpuset.cpus.effective
2120	A read-only multiple values file which exists on all
2121	cpuset-enabled cgroups.
2122
2123	It lists the onlined CPUs that are actually granted to this
2124	cgroup by its parent.  These CPUs are allowed to be used by
2125	tasks within the current cgroup.
2126
2127	If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
2128	all the CPUs from the parent cgroup that can be available to
2129	be used by this cgroup.  Otherwise, it should be a subset of
2130	"cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
2131	can be granted.  In this case, it will be treated just like an
2132	empty "cpuset.cpus".
2133
2134	Its value will be affected by CPU hotplug events.
2135
2136  cpuset.mems
2137	A read-write multiple values file which exists on non-root
2138	cpuset-enabled cgroups.
2139
2140	It lists the requested memory nodes to be used by tasks within
2141	this cgroup.  The actual list of memory nodes granted, however,
2142	is subjected to constraints imposed by its parent and can differ
2143	from the requested memory nodes.
2144
2145	The memory node numbers are comma-separated numbers or ranges.
2146	For example::
2147
2148	  # cat cpuset.mems
2149	  0-1,3
2150
2151	An empty value indicates that the cgroup is using the same
2152	setting as the nearest cgroup ancestor with a non-empty
2153	"cpuset.mems" or all the available memory nodes if none
2154	is found.
2155
2156	The value of "cpuset.mems" stays constant until the next update
2157	and won't be affected by any memory nodes hotplug events.
2158
2159	Setting a non-empty value to "cpuset.mems" causes memory of
2160	tasks within the cgroup to be migrated to the designated nodes if
2161	they are currently using memory outside of the designated nodes.
2162
2163	There is a cost for this memory migration.  The migration
2164	may not be complete and some memory pages may be left behind.
2165	So it is recommended that "cpuset.mems" should be set properly
2166	before spawning new tasks into the cpuset.  Even if there is
2167	a need to change "cpuset.mems" with active tasks, it shouldn't
2168	be done frequently.
2169
2170  cpuset.mems.effective
2171	A read-only multiple values file which exists on all
2172	cpuset-enabled cgroups.
2173
2174	It lists the onlined memory nodes that are actually granted to
2175	this cgroup by its parent. These memory nodes are allowed to
2176	be used by tasks within the current cgroup.
2177
2178	If "cpuset.mems" is empty, it shows all the memory nodes from the
2179	parent cgroup that will be available to be used by this cgroup.
2180	Otherwise, it should be a subset of "cpuset.mems" unless none of
2181	the memory nodes listed in "cpuset.mems" can be granted.  In this
2182	case, it will be treated just like an empty "cpuset.mems".
2183
2184	Its value will be affected by memory nodes hotplug events.
2185
2186  cpuset.cpus.partition
2187	A read-write single value file which exists on non-root
2188	cpuset-enabled cgroups.  This flag is owned by the parent cgroup
2189	and is not delegatable.
2190
2191	It accepts only the following input values when written to.
2192
2193	  ==========	=====================================
2194	  "member"	Non-root member of a partition
2195	  "root"	Partition root
2196	  "isolated"	Partition root without load balancing
2197	  ==========	=====================================
2198
2199	The root cgroup is always a partition root and its state
2200	cannot be changed.  All other non-root cgroups start out as
2201	"member".
2202
2203	When set to "root", the current cgroup is the root of a new
2204	partition or scheduling domain that comprises itself and all
2205	its descendants except those that are separate partition roots
2206	themselves and their descendants.
2207
2208	When set to "isolated", the CPUs in that partition root will
2209	be in an isolated state without any load balancing from the
2210	scheduler.  Tasks placed in such a partition with multiple
2211	CPUs should be carefully distributed and bound to each of the
2212	individual CPUs for optimal performance.
2213
2214	The value shown in "cpuset.cpus.effective" of a partition root
2215	is the CPUs that the partition root can dedicate to a potential
2216	new child partition root. The new child subtracts available
2217	CPUs from its parent "cpuset.cpus.effective".
2218
2219	A partition root ("root" or "isolated") can be in one of the
2220	two possible states - valid or invalid.  An invalid partition
2221	root is in a degraded state where some state information may
2222	be retained, but behaves more like a "member".
2223
2224	All possible state transitions among "member", "root" and
2225	"isolated" are allowed.
2226
2227	On read, the "cpuset.cpus.partition" file can show the following
2228	values.
2229
2230	  =============================	=====================================
2231	  "member"			Non-root member of a partition
2232	  "root"			Partition root
2233	  "isolated"			Partition root without load balancing
2234	  "root invalid (<reason>)"	Invalid partition root
2235	  "isolated invalid (<reason>)"	Invalid isolated partition root
2236	  =============================	=====================================
2237
2238	In the case of an invalid partition root, a descriptive string on
2239	why the partition is invalid is included within parentheses.
2240
2241	For a partition root to become valid, the following conditions
2242	must be met.
2243
2244	1) The "cpuset.cpus" is exclusive with its siblings , i.e. they
2245	   are not shared by any of its siblings (exclusivity rule).
2246	2) The parent cgroup is a valid partition root.
2247	3) The "cpuset.cpus" is not empty and must contain at least
2248	   one of the CPUs from parent's "cpuset.cpus", i.e. they overlap.
2249	4) The "cpuset.cpus.effective" cannot be empty unless there is
2250	   no task associated with this partition.
2251
2252	External events like hotplug or changes to "cpuset.cpus" can
2253	cause a valid partition root to become invalid and vice versa.
2254	Note that a task cannot be moved to a cgroup with empty
2255	"cpuset.cpus.effective".
2256
2257	For a valid partition root with the sibling cpu exclusivity
2258	rule enabled, changes made to "cpuset.cpus" that violate the
2259	exclusivity rule will invalidate the partition as well as its
2260	sibiling partitions with conflicting cpuset.cpus values. So
2261	care must be taking in changing "cpuset.cpus".
2262
2263	A valid non-root parent partition may distribute out all its CPUs
2264	to its child partitions when there is no task associated with it.
2265
2266	Care must be taken to change a valid partition root to
2267	"member" as all its child partitions, if present, will become
2268	invalid causing disruption to tasks running in those child
2269	partitions. These inactivated partitions could be recovered if
2270	their parent is switched back to a partition root with a proper
2271	set of "cpuset.cpus".
2272
2273	Poll and inotify events are triggered whenever the state of
2274	"cpuset.cpus.partition" changes.  That includes changes caused
2275	by write to "cpuset.cpus.partition", cpu hotplug or other
2276	changes that modify the validity status of the partition.
2277	This will allow user space agents to monitor unexpected changes
2278	to "cpuset.cpus.partition" without the need to do continuous
2279	polling.
2280
2281
2282Device controller
2283-----------------
2284
2285Device controller manages access to device files. It includes both
2286creation of new device files (using mknod), and access to the
2287existing device files.
2288
2289Cgroup v2 device controller has no interface files and is implemented
2290on top of cgroup BPF. To control access to device files, a user may
2291create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach
2292them to cgroups with BPF_CGROUP_DEVICE flag. On an attempt to access a
2293device file, corresponding BPF programs will be executed, and depending
2294on the return value the attempt will succeed or fail with -EPERM.
2295
2296A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
2297bpf_cgroup_dev_ctx structure, which describes the device access attempt:
2298access type (mknod/read/write) and device (type, major and minor numbers).
2299If the program returns 0, the attempt fails with -EPERM, otherwise it
2300succeeds.
2301
2302An example of BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
2303tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.
2304
2305
2306RDMA
2307----
2308
2309The "rdma" controller regulates the distribution and accounting of
2310RDMA resources.
2311
2312RDMA Interface Files
2313~~~~~~~~~~~~~~~~~~~~
2314
2315  rdma.max
2316	A readwrite nested-keyed file that exists for all the cgroups
2317	except root that describes current configured resource limit
2318	for a RDMA/IB device.
2319
2320	Lines are keyed by device name and are not ordered.
2321	Each line contains space separated resource name and its configured
2322	limit that can be distributed.
2323
2324	The following nested keys are defined.
2325
2326	  ==========	=============================
2327	  hca_handle	Maximum number of HCA Handles
2328	  hca_object 	Maximum number of HCA Objects
2329	  ==========	=============================
2330
2331	An example for mlx4 and ocrdma device follows::
2332
2333	  mlx4_0 hca_handle=2 hca_object=2000
2334	  ocrdma1 hca_handle=3 hca_object=max
2335
2336  rdma.current
2337	A read-only file that describes current resource usage.
2338	It exists for all the cgroup except root.
2339
2340	An example for mlx4 and ocrdma device follows::
2341
2342	  mlx4_0 hca_handle=1 hca_object=20
2343	  ocrdma1 hca_handle=1 hca_object=23
2344
2345HugeTLB
2346-------
2347
2348The HugeTLB controller allows to limit the HugeTLB usage per control group and
2349enforces the controller limit during page fault.
2350
2351HugeTLB Interface Files
2352~~~~~~~~~~~~~~~~~~~~~~~
2353
2354  hugetlb.<hugepagesize>.current
2355	Show current usage for "hugepagesize" hugetlb.  It exists for all
2356	the cgroup except root.
2357
2358  hugetlb.<hugepagesize>.max
2359	Set/show the hard limit of "hugepagesize" hugetlb usage.
2360	The default value is "max".  It exists for all the cgroup except root.
2361
2362  hugetlb.<hugepagesize>.events
2363	A read-only flat-keyed file which exists on non-root cgroups.
2364
2365	  max
2366		The number of allocation failure due to HugeTLB limit
2367
2368  hugetlb.<hugepagesize>.events.local
2369	Similar to hugetlb.<hugepagesize>.events but the fields in the file
2370	are local to the cgroup i.e. not hierarchical. The file modified event
2371	generated on this file reflects only the local events.
2372
2373  hugetlb.<hugepagesize>.numa_stat
2374	Similar to memory.numa_stat, it shows the numa information of the
2375        hugetlb pages of <hugepagesize> in this cgroup.  Only active in
2376        use hugetlb pages are included.  The per-node values are in bytes.
2377
2378Misc
2379----
2380
2381The Miscellaneous cgroup provides the resource limiting and tracking
2382mechanism for the scalar resources which cannot be abstracted like the other
2383cgroup resources. Controller is enabled by the CONFIG_CGROUP_MISC config
2384option.
2385
2386A resource can be added to the controller via enum misc_res_type{} in the
2387include/linux/misc_cgroup.h file and the corresponding name via misc_res_name[]
2388in the kernel/cgroup/misc.c file. Provider of the resource must set its
2389capacity prior to using the resource by calling misc_cg_set_capacity().
2390
2391Once a capacity is set then the resource usage can be updated using charge and
2392uncharge APIs. All of the APIs to interact with misc controller are in
2393include/linux/misc_cgroup.h.
2394
2395Misc Interface Files
2396~~~~~~~~~~~~~~~~~~~~
2397
2398Miscellaneous controller provides 3 interface files. If two misc resources (res_a and res_b) are registered then:
2399
2400  misc.capacity
2401        A read-only flat-keyed file shown only in the root cgroup.  It shows
2402        miscellaneous scalar resources available on the platform along with
2403        their quantities::
2404
2405	  $ cat misc.capacity
2406	  res_a 50
2407	  res_b 10
2408
2409  misc.current
2410        A read-only flat-keyed file shown in the non-root cgroups.  It shows
2411        the current usage of the resources in the cgroup and its children.::
2412
2413	  $ cat misc.current
2414	  res_a 3
2415	  res_b 0
2416
2417  misc.max
2418        A read-write flat-keyed file shown in the non root cgroups. Allowed
2419        maximum usage of the resources in the cgroup and its children.::
2420
2421	  $ cat misc.max
2422	  res_a max
2423	  res_b 4
2424
2425	Limit can be set by::
2426
2427	  # echo res_a 1 > misc.max
2428
2429	Limit can be set to max by::
2430
2431	  # echo res_a max > misc.max
2432
2433        Limits can be set higher than the capacity value in the misc.capacity
2434        file.
2435
2436  misc.events
2437	A read-only flat-keyed file which exists on non-root cgroups. The
2438	following entries are defined. Unless specified otherwise, a value
2439	change in this file generates a file modified event. All fields in
2440	this file are hierarchical.
2441
2442	  max
2443		The number of times the cgroup's resource usage was
2444		about to go over the max boundary.
2445
2446Migration and Ownership
2447~~~~~~~~~~~~~~~~~~~~~~~
2448
2449A miscellaneous scalar resource is charged to the cgroup in which it is used
2450first, and stays charged to that cgroup until that resource is freed. Migrating
2451a process to a different cgroup does not move the charge to the destination
2452cgroup where the process has moved.
2453
2454Others
2455------
2456
2457perf_event
2458~~~~~~~~~~
2459
2460perf_event controller, if not mounted on a legacy hierarchy, is
2461automatically enabled on the v2 hierarchy so that perf events can
2462always be filtered by cgroup v2 path.  The controller can still be
2463moved to a legacy hierarchy after v2 hierarchy is populated.
2464
2465
2466Non-normative information
2467-------------------------
2468
2469This section contains information that isn't considered to be a part of
2470the stable kernel API and so is subject to change.
2471
2472
2473CPU controller root cgroup process behaviour
2474~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2475
2476When distributing CPU cycles in the root cgroup each thread in this
2477cgroup is treated as if it was hosted in a separate child cgroup of the
2478root cgroup. This child cgroup weight is dependent on its thread nice
2479level.
2480
2481For details of this mapping see sched_prio_to_weight array in
2482kernel/sched/core.c file (values from this array should be scaled
2483appropriately so the neutral - nice 0 - value is 100 instead of 1024).
2484
2485
2486IO controller root cgroup process behaviour
2487~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2488
2489Root cgroup processes are hosted in an implicit leaf child node.
2490When distributing IO resources this implicit child node is taken into
2491account as if it was a normal child cgroup of the root cgroup with a
2492weight value of 200.
2493
2494
2495Namespace
2496=========
2497
2498Basics
2499------
2500
2501cgroup namespace provides a mechanism to virtualize the view of the
2502"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP clone
2503flag can be used with clone(2) and unshare(2) to create a new cgroup
2504namespace.  The process running inside the cgroup namespace will have
2505its "/proc/$PID/cgroup" output restricted to cgroupns root.  The
2506cgroupns root is the cgroup of the process at the time of creation of
2507the cgroup namespace.
2508
2509Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
2510complete path of the cgroup of a process.  In a container setup where
2511a set of cgroups and namespaces are intended to isolate processes the
2512"/proc/$PID/cgroup" file may leak potential system level information
2513to the isolated processes.  For example::
2514
2515  # cat /proc/self/cgroup
2516  0::/batchjobs/container_id1
2517
2518The path '/batchjobs/container_id1' can be considered as system-data
2519and undesirable to expose to the isolated processes.  cgroup namespace
2520can be used to restrict visibility of this path.  For example, before
2521creating a cgroup namespace, one would see::
2522
2523  # ls -l /proc/self/ns/cgroup
2524  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
2525  # cat /proc/self/cgroup
2526  0::/batchjobs/container_id1
2527
2528After unsharing a new namespace, the view changes::
2529
2530  # ls -l /proc/self/ns/cgroup
2531  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
2532  # cat /proc/self/cgroup
2533  0::/
2534
2535When some thread from a multi-threaded process unshares its cgroup
2536namespace, the new cgroupns gets applied to the entire process (all
2537the threads).  This is natural for the v2 hierarchy; however, for the
2538legacy hierarchies, this may be unexpected.
2539
2540A cgroup namespace is alive as long as there are processes inside or
2541mounts pinning it.  When the last usage goes away, the cgroup
2542namespace is destroyed.  The cgroupns root and the actual cgroups
2543remain.
2544
2545
2546The Root and Views
2547------------------
2548
2549The 'cgroupns root' for a cgroup namespace is the cgroup in which the
2550process calling unshare(2) is running.  For example, if a process in
2551/batchjobs/container_id1 cgroup calls unshare, cgroup
2552/batchjobs/container_id1 becomes the cgroupns root.  For the
2553init_cgroup_ns, this is the real root ('/') cgroup.
2554
2555The cgroupns root cgroup does not change even if the namespace creator
2556process later moves to a different cgroup::
2557
2558  # ~/unshare -c # unshare cgroupns in some cgroup
2559  # cat /proc/self/cgroup
2560  0::/
2561  # mkdir sub_cgrp_1
2562  # echo 0 > sub_cgrp_1/cgroup.procs
2563  # cat /proc/self/cgroup
2564  0::/sub_cgrp_1
2565
2566Each process gets its namespace-specific view of "/proc/$PID/cgroup"
2567
2568Processes running inside the cgroup namespace will be able to see
2569cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
2570From within an unshared cgroupns::
2571
2572  # sleep 100000 &
2573  [1] 7353
2574  # echo 7353 > sub_cgrp_1/cgroup.procs
2575  # cat /proc/7353/cgroup
2576  0::/sub_cgrp_1
2577
2578From the initial cgroup namespace, the real cgroup path will be
2579visible::
2580
2581  $ cat /proc/7353/cgroup
2582  0::/batchjobs/container_id1/sub_cgrp_1
2583
2584From a sibling cgroup namespace (that is, a namespace rooted at a
2585different cgroup), the cgroup path relative to its own cgroup
2586namespace root will be shown.  For instance, if PID 7353's cgroup
2587namespace root is at '/batchjobs/container_id2', then it will see::
2588
2589  # cat /proc/7353/cgroup
2590  0::/../container_id2/sub_cgrp_1
2591
2592Note that the relative path always starts with '/' to indicate that
2593its relative to the cgroup namespace root of the caller.
2594
2595
2596Migration and setns(2)
2597----------------------
2598
2599Processes inside a cgroup namespace can move into and out of the
2600namespace root if they have proper access to external cgroups.  For
2601example, from inside a namespace with cgroupns root at
2602/batchjobs/container_id1, and assuming that the global hierarchy is
2603still accessible inside cgroupns::
2604
2605  # cat /proc/7353/cgroup
2606  0::/sub_cgrp_1
2607  # echo 7353 > batchjobs/container_id2/cgroup.procs
2608  # cat /proc/7353/cgroup
2609  0::/../container_id2
2610
2611Note that this kind of setup is not encouraged.  A task inside cgroup
2612namespace should only be exposed to its own cgroupns hierarchy.
2613
2614setns(2) to another cgroup namespace is allowed when:
2615
2616(a) the process has CAP_SYS_ADMIN against its current user namespace
2617(b) the process has CAP_SYS_ADMIN against the target cgroup
2618    namespace's userns
2619
2620No implicit cgroup changes happen with attaching to another cgroup
2621namespace.  It is expected that the someone moves the attaching
2622process under the target cgroup namespace root.
2623
2624
2625Interaction with Other Namespaces
2626---------------------------------
2627
2628Namespace specific cgroup hierarchy can be mounted by a process
2629running inside a non-init cgroup namespace::
2630
2631  # mount -t cgroup2 none $MOUNT_POINT
2632
2633This will mount the unified cgroup hierarchy with cgroupns root as the
2634filesystem root.  The process needs CAP_SYS_ADMIN against its user and
2635mount namespaces.
2636
2637The virtualization of /proc/self/cgroup file combined with restricting
2638the view of cgroup hierarchy by namespace-private cgroupfs mount
2639provides a properly isolated cgroup view inside the container.
2640
2641
2642Information on Kernel Programming
2643=================================
2644
2645This section contains kernel programming information in the areas
2646where interacting with cgroup is necessary.  cgroup core and
2647controllers are not covered.
2648
2649
2650Filesystem Support for Writeback
2651--------------------------------
2652
2653A filesystem can support cgroup writeback by updating
2654address_space_operations->writepage[s]() to annotate bio's using the
2655following two functions.
2656
2657  wbc_init_bio(@wbc, @bio)
2658	Should be called for each bio carrying writeback data and
2659	associates the bio with the inode's owner cgroup and the
2660	corresponding request queue.  This must be called after
2661	a queue (device) has been associated with the bio and
2662	before submission.
2663
2664  wbc_account_cgroup_owner(@wbc, @page, @bytes)
2665	Should be called for each data segment being written out.
2666	While this function doesn't care exactly when it's called
2667	during the writeback session, it's the easiest and most
2668	natural to call it as data segments are added to a bio.
2669
2670With writeback bio's annotated, cgroup support can be enabled per
2671super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows for
2672selective disabling of cgroup writeback support which is helpful when
2673certain filesystem features, e.g. journaled data mode, are
2674incompatible.
2675
2676wbc_init_bio() binds the specified bio to its cgroup.  Depending on
2677the configuration, the bio may be executed at a lower priority and if
2678the writeback session is holding shared resources, e.g. a journal
2679entry, may lead to priority inversion.  There is no one easy solution
2680for the problem.  Filesystems can try to work around specific problem
2681cases by skipping wbc_init_bio() and using bio_associate_blkg()
2682directly.
2683
2684
2685Deprecated v1 Core Features
2686===========================
2687
2688- Multiple hierarchies including named ones are not supported.
2689
2690- All v1 mount options are not supported.
2691
2692- The "tasks" file is removed and "cgroup.procs" is not sorted.
2693
2694- "cgroup.clone_children" is removed.
2695
2696- /proc/cgroups is meaningless for v2.  Use "cgroup.controllers" file
2697  at the root instead.
2698
2699
2700Issues with v1 and Rationales for v2
2701====================================
2702
2703Multiple Hierarchies
2704--------------------
2705
2706cgroup v1 allowed an arbitrary number of hierarchies and each
2707hierarchy could host any number of controllers.  While this seemed to
2708provide a high level of flexibility, it wasn't useful in practice.
2709
2710For example, as there is only one instance of each controller, utility
2711type controllers such as freezer which can be useful in all
2712hierarchies could only be used in one.  The issue is exacerbated by
2713the fact that controllers couldn't be moved to another hierarchy once
2714hierarchies were populated.  Another issue was that all controllers
2715bound to a hierarchy were forced to have exactly the same view of the
2716hierarchy.  It wasn't possible to vary the granularity depending on
2717the specific controller.
2718
2719In practice, these issues heavily limited which controllers could be
2720put on the same hierarchy and most configurations resorted to putting
2721each controller on its own hierarchy.  Only closely related ones, such
2722as the cpu and cpuacct controllers, made sense to be put on the same
2723hierarchy.  This often meant that userland ended up managing multiple
2724similar hierarchies repeating the same steps on each hierarchy
2725whenever a hierarchy management operation was necessary.
2726
2727Furthermore, support for multiple hierarchies came at a steep cost.
2728It greatly complicated cgroup core implementation but more importantly
2729the support for multiple hierarchies restricted how cgroup could be
2730used in general and what controllers was able to do.
2731
2732There was no limit on how many hierarchies there might be, which meant
2733that a thread's cgroup membership couldn't be described in finite
2734length.  The key might contain any number of entries and was unlimited
2735in length, which made it highly awkward to manipulate and led to
2736addition of controllers which existed only to identify membership,
2737which in turn exacerbated the original problem of proliferating number
2738of hierarchies.
2739
2740Also, as a controller couldn't have any expectation regarding the
2741topologies of hierarchies other controllers might be on, each
2742controller had to assume that all other controllers were attached to
2743completely orthogonal hierarchies.  This made it impossible, or at
2744least very cumbersome, for controllers to cooperate with each other.
2745
2746In most use cases, putting controllers on hierarchies which are
2747completely orthogonal to each other isn't necessary.  What usually is
2748called for is the ability to have differing levels of granularity
2749depending on the specific controller.  In other words, hierarchy may
2750be collapsed from leaf towards root when viewed from specific
2751controllers.  For example, a given configuration might not care about
2752how memory is distributed beyond a certain level while still wanting
2753to control how CPU cycles are distributed.
2754
2755
2756Thread Granularity
2757------------------
2758
2759cgroup v1 allowed threads of a process to belong to different cgroups.
2760This didn't make sense for some controllers and those controllers
2761ended up implementing different ways to ignore such situations but
2762much more importantly it blurred the line between API exposed to
2763individual applications and system management interface.
2764
2765Generally, in-process knowledge is available only to the process
2766itself; thus, unlike service-level organization of processes,
2767categorizing threads of a process requires active participation from
2768the application which owns the target process.
2769
2770cgroup v1 had an ambiguously defined delegation model which got abused
2771in combination with thread granularity.  cgroups were delegated to
2772individual applications so that they can create and manage their own
2773sub-hierarchies and control resource distributions along them.  This
2774effectively raised cgroup to the status of a syscall-like API exposed
2775to lay programs.
2776
2777First of all, cgroup has a fundamentally inadequate interface to be
2778exposed this way.  For a process to access its own knobs, it has to
2779extract the path on the target hierarchy from /proc/self/cgroup,
2780construct the path by appending the name of the knob to the path, open
2781and then read and/or write to it.  This is not only extremely clunky
2782and unusual but also inherently racy.  There is no conventional way to
2783define transaction across the required steps and nothing can guarantee
2784that the process would actually be operating on its own sub-hierarchy.
2785
2786cgroup controllers implemented a number of knobs which would never be
2787accepted as public APIs because they were just adding control knobs to
2788system-management pseudo filesystem.  cgroup ended up with interface
2789knobs which were not properly abstracted or refined and directly
2790revealed kernel internal details.  These knobs got exposed to
2791individual applications through the ill-defined delegation mechanism
2792effectively abusing cgroup as a shortcut to implementing public APIs
2793without going through the required scrutiny.
2794
2795This was painful for both userland and kernel.  Userland ended up with
2796misbehaving and poorly abstracted interfaces and kernel exposing and
2797locked into constructs inadvertently.
2798
2799
2800Competition Between Inner Nodes and Threads
2801-------------------------------------------
2802
2803cgroup v1 allowed threads to be in any cgroups which created an
2804interesting problem where threads belonging to a parent cgroup and its
2805children cgroups competed for resources.  This was nasty as two
2806different types of entities competed and there was no obvious way to
2807settle it.  Different controllers did different things.
2808
2809The cpu controller considered threads and cgroups as equivalents and
2810mapped nice levels to cgroup weights.  This worked for some cases but
2811fell flat when children wanted to be allocated specific ratios of CPU
2812cycles and the number of internal threads fluctuated - the ratios
2813constantly changed as the number of competing entities fluctuated.
2814There also were other issues.  The mapping from nice level to weight
2815wasn't obvious or universal, and there were various other knobs which
2816simply weren't available for threads.
2817
2818The io controller implicitly created a hidden leaf node for each
2819cgroup to host the threads.  The hidden leaf had its own copies of all
2820the knobs with ``leaf_`` prefixed.  While this allowed equivalent
2821control over internal threads, it was with serious drawbacks.  It
2822always added an extra layer of nesting which wouldn't be necessary
2823otherwise, made the interface messy and significantly complicated the
2824implementation.
2825
2826The memory controller didn't have a way to control what happened
2827between internal tasks and child cgroups and the behavior was not
2828clearly defined.  There were attempts to add ad-hoc behaviors and
2829knobs to tailor the behavior to specific workloads which would have
2830led to problems extremely difficult to resolve in the long term.
2831
2832Multiple controllers struggled with internal tasks and came up with
2833different ways to deal with it; unfortunately, all the approaches were
2834severely flawed and, furthermore, the widely different behaviors
2835made cgroup as a whole highly inconsistent.
2836
2837This clearly is a problem which needs to be addressed from cgroup core
2838in a uniform way.
2839
2840
2841Other Interface Issues
2842----------------------
2843
2844cgroup v1 grew without oversight and developed a large number of
2845idiosyncrasies and inconsistencies.  One issue on the cgroup core side
2846was how an empty cgroup was notified - a userland helper binary was
2847forked and executed for each event.  The event delivery wasn't
2848recursive or delegatable.  The limitations of the mechanism also led
2849to in-kernel event delivery filtering mechanism further complicating
2850the interface.
2851
2852Controller interfaces were problematic too.  An extreme example is
2853controllers completely ignoring hierarchical organization and treating
2854all cgroups as if they were all located directly under the root
2855cgroup.  Some controllers exposed a large amount of inconsistent
2856implementation details to userland.
2857
2858There also was no consistency across controllers.  When a new cgroup
2859was created, some controllers defaulted to not imposing extra
2860restrictions while others disallowed any resource usage until
2861explicitly configured.  Configuration knobs for the same type of
2862control used widely differing naming schemes and formats.  Statistics
2863and information knobs were named arbitrarily and used different
2864formats and units even in the same controller.
2865
2866cgroup v2 establishes common conventions where appropriate and updates
2867controllers so that they expose minimal and consistent interfaces.
2868
2869
2870Controller Issues and Remedies
2871------------------------------
2872
2873Memory
2874~~~~~~
2875
2876The original lower boundary, the soft limit, is defined as a limit
2877that is per default unset.  As a result, the set of cgroups that
2878global reclaim prefers is opt-in, rather than opt-out.  The costs for
2879optimizing these mostly negative lookups are so high that the
2880implementation, despite its enormous size, does not even provide the
2881basic desirable behavior.  First off, the soft limit has no
2882hierarchical meaning.  All configured groups are organized in a global
2883rbtree and treated like equal peers, regardless where they are located
2884in the hierarchy.  This makes subtree delegation impossible.  Second,
2885the soft limit reclaim pass is so aggressive that it not just
2886introduces high allocation latencies into the system, but also impacts
2887system performance due to overreclaim, to the point where the feature
2888becomes self-defeating.
2889
2890The memory.low boundary on the other hand is a top-down allocated
2891reserve.  A cgroup enjoys reclaim protection when it's within its
2892effective low, which makes delegation of subtrees possible. It also
2893enjoys having reclaim pressure proportional to its overage when
2894above its effective low.
2895
2896The original high boundary, the hard limit, is defined as a strict
2897limit that can not budge, even if the OOM killer has to be called.
2898But this generally goes against the goal of making the most out of the
2899available memory.  The memory consumption of workloads varies during
2900runtime, and that requires users to overcommit.  But doing that with a
2901strict upper limit requires either a fairly accurate prediction of the
2902working set size or adding slack to the limit.  Since working set size
2903estimation is hard and error prone, and getting it wrong results in
2904OOM kills, most users tend to err on the side of a looser limit and
2905end up wasting precious resources.
2906
2907The memory.high boundary on the other hand can be set much more
2908conservatively.  When hit, it throttles allocations by forcing them
2909into direct reclaim to work off the excess, but it never invokes the
2910OOM killer.  As a result, a high boundary that is chosen too
2911aggressively will not terminate the processes, but instead it will
2912lead to gradual performance degradation.  The user can monitor this
2913and make corrections until the minimal memory footprint that still
2914gives acceptable performance is found.
2915
2916In extreme cases, with many concurrent allocations and a complete
2917breakdown of reclaim progress within the group, the high boundary can
2918be exceeded.  But even then it's mostly better to satisfy the
2919allocation from the slack available in other groups or the rest of the
2920system than killing the group.  Otherwise, memory.max is there to
2921limit this type of spillover and ultimately contain buggy or even
2922malicious applications.
2923
2924Setting the original memory.limit_in_bytes below the current usage was
2925subject to a race condition, where concurrent charges could cause the
2926limit setting to fail. memory.max on the other hand will first set the
2927limit to prevent new charges, and then reclaim and OOM kill until the
2928new limit is met - or the task writing to memory.max is killed.
2929
2930The combined memory+swap accounting and limiting is replaced by real
2931control over swap space.
2932
2933The main argument for a combined memory+swap facility in the original
2934cgroup design was that global or parental pressure would always be
2935able to swap all anonymous memory of a child group, regardless of the
2936child's own (possibly untrusted) configuration.  However, untrusted
2937groups can sabotage swapping by other means - such as referencing its
2938anonymous memory in a tight loop - and an admin can not assume full
2939swappability when overcommitting untrusted jobs.
2940
2941For trusted jobs, on the other hand, a combined counter is not an
2942intuitive userspace interface, and it flies in the face of the idea
2943that cgroup controllers should account and limit specific physical
2944resources.  Swap space is a resource like all others in the system,
2945and that's why unified hierarchy allows distributing it separately.
2946