.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2. It describes all userland-visible aspects
of cgroup including core and specific controller behaviors. All
future changes must be reflected in this document. Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. HugeTLB
       5-8-1. HugeTLB Interface Files
     5-9. Misc
       5-9-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized. The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers". When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes. A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup. All threads of a process belong to the
same cgroup.
On creation, all processes are put in the cgroup that
the parent process belongs to at the time. A process can be migrated
to another cgroup. Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup. All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups constituting the inclusive
sub-hierarchy of the cgroup. When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further. The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies. This allows mixing v2 hierarchy with the
legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy. Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use. It is recommended to decide
the hierarchies and controller associations before starting using the
controllers after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.

cgroup v2 currently supports the following mount options.

  nsdelegate
        Consider cgroup namespaces as delegation boundaries. This
        option is system wide and can only be set on mount or modified
        through remount from the init namespace. The mount option is
        ignored on non-init namespace mounts. Please refer to the
        Delegation section for details.

  memory_localevents
        Only populate memory.events with data for the current cgroup,
        and not any subtrees. This is legacy behaviour; the default
        behaviour without this option is to include subtree counts.
        This option is system wide and can only be set on mount or
        modified through remount from the init namespace. The mount
        option is ignored on non-init namespace mounts.

  memory_recursiveprot
        Recursively apply memory.min and memory.low protection to
        entire subtrees, without requiring explicit downward
        propagation into leaf cgroups. This allows protecting entire
        subtrees from one another, while retaining free competition
        within those subtrees. This should have been the default
        behavior but is a mount-option to avoid regressing setups
        relying on the original semantics (e.g. specifying bogusly
        high 'bypass' protection values at higher tree levels).
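
As a sketch of a typical setup, the mount point /sys/fs/cgroup, the
use of the nsdelegate option and the controller list shown below are
illustrative assumptions, not requirements::

  # mount -t cgroup2 -o nsdelegate none /sys/fs/cgroup
  # stat -fc %T /sys/fs/cgroup
  cgroup2fs
  # cat /sys/fs/cgroup/cgroup.controllers
  cpu io memory pids

coreutils' stat maps the 0x63677270 magic number to the name
"cgroup2fs", which is a quick way to verify that the v2 hierarchy,
rather than a legacy v1 one, is mounted at the given path.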


Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure. Each cgroup has a read-writable interface file
"cgroup.procs". When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line. The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file. Only one process can be migrated
on a single write(2) call. If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.
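
As a minimal sketch, assuming a pure v2 setup mounted at
/sys/fs/cgroup and a made-up cgroup named "test", a shell can migrate
itself as follows::

  # mkdir /sys/fs/cgroup/test
  # echo $$ > /sys/fs/cgroup/test/cgroup.procs
  # cat /proc/self/cgroup
  0::/test

The final read reports "0::/test" because cat is forked from the
shell after the migration and is born into the shell's new cgroup.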
When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation. After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory. Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy. The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes. By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread. The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup. The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy. The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded. Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file. The
operation is one-way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again. To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children. The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state. Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains. C can't be used until it is turned into a
threaded cgroup. "cgroup.type" file will report "domain (invalid)" in
these cases. Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.
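
For instance, the following sketch builds a small threaded subtree;
the names "pool", "t0" and "t1" are made up for illustration and the
commands are assumed to run from the root of the hierarchy::

  # mkdir pool pool/t0 pool/t1
  # echo threaded > pool/t0/cgroup.type
  # echo threaded > pool/t1/cgroup.type
  # cat pool/cgroup.type
  domain threaded
  # cat pool/t0/cgroup.type
  threaded

Turning t0 threaded is what switches "pool" from a plain domain into
the threaded domain of the subtree.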

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup. Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs". While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree. When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants. All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups. Each
threaded controller defines how such competitions are handled.


[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it. Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1. poll and [id]notify
events are triggered when the value changes. This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited. The populated state updates and
notifications are recursive. Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's would be 0.
After the one process in C exits, B and C's "populated" fields would
flip to "0" and file modified events will be generated on the
"cgroup.events" files of both cgroups.
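
As a sketch of the clean-up pattern, assuming the inotifywait utility
from inotify-tools and a made-up cgroup named "test", an agent can
block until a state change is reported, re-check the field, and then
remove the emptied cgroup::

  # inotifywait -e modify /sys/fs/cgroup/test/cgroup.events
  # grep populated /sys/fs/cgroup/test/cgroup.events
  populated 0
  # rmdir /sys/fs/cgroup/test

The re-check after waking up matters because a modify event only says
that some field in "cgroup.events" changed, not which one.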


Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default. Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled. When multiple operations are specified as above, either they
all succeed or all fail. If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy. The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B. As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups. In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D. Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent. This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file. A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own. In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves. This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction. Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers. How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control". This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup. To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
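
A sketch of that workflow, using a made-up cgroup "parent" which
already contains processes and assuming the memory controller is
available to it::

  # cd /sys/fs/cgroup/parent
  # mkdir leaf
  # for pid in $(cat cgroup.procs); do echo $pid > leaf/cgroup.procs; done
  # echo "+memory" > cgroup.subtree_control

A real agent would need to repeat the transfer until "cgroup.procs"
reads empty, since processes may fork concurrently while the loop is
running.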


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them. For the first method, this is
achieved by not granting access to these files. For the second, the
kernel rejects writes to all files other than "cgroup.procs" and
"cgroup.subtree_control" on a namespace root from inside the
namespace.

The end results are equivalent for both delegation types. Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent. The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.
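
For the first method, a minimal sketch of handing a subtree to a
hypothetical user "u0" follows directly from the file list above; the
cgroup name is made up for illustration::

  # mkdir /sys/fs/cgroup/u0
  # chown u0 /sys/fs/cgroup/u0 \
             /sys/fs/cgroup/u0/cgroup.procs \
             /sys/fs/cgroup/u0/cgroup.threads \
             /sys/fs/cgroup/u0/cgroup.subtree_control

Note that no other interface files in the directory are chowned, so
the delegatee can organize the subtree but can't change how the
parent's resources are distributed to it.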


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy it can't pull
in from or push out to outside the sub-hierarchy.

As an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs". U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" files and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration. If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process. This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged. A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its children cgroups occupy the same
directory and it is possible to create children cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lowercase letters and '_'s
but never begins with an '_' so it can be used as the prefix character
for collision avoidance. Also, interface file names won't start or
end with terms which are often used in categorizing workloads such as
job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases. This section
describes major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum. As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving. Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100. This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.
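
As a worked example, assume a parent with "cpu" enabled and two
made-up children "a" and "b". Leaving "b" at the default weight of
100 and doubling "a" gives "a" 200 / (200 + 100), i.e. about 67% of
the parent's CPU cycles while both are runnable::

  # echo 200 > a/cpu.weight
  # cat a/cpu.weight b/cpu.weight
  200
  100

If "b" goes idle, "a" may consume everything - the model is
work-conserving, so unused shares are not wasted.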


Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
no-op.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.


Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels. Protections can be hard guarantees or best effort
soft boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
no-op.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource. Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

        VAL0\n
        VAL1\n
        ...

  Space separated values
  (when read-only or multiple values can be written at once)

        VAL0 VAL1 ...\n

  Flat keyed

        KEY0 VAL0\n
        KEY1 VAL1\n
        ...

  Nested keyed

        KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
        KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
        ...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time. For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.
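
For instance, with a nested keyed file such as "io.max" (described in
the IO section), one key - i.e. one device line - can be updated at a
time and only the sub keys of interest need to be given; the device
number and values below are made up for illustration::

  # echo "8:16 riops=1000" > io.max
  # cat io.max
  8:16 rbps=max wbps=max riops=1000 wiops=max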


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds. If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  two-digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default. The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively. If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override. Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should be
  generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
        A read-write single value file which exists on non-root
        cgroups.

        When read, it indicates the current type of the cgroup, which
        can be one of the following values.

        - "domain" : A normal valid domain cgroup.

        - "domain threaded" : A threaded domain cgroup which is
          serving as the root of a threaded subtree.

        - "domain invalid" : A cgroup which is in an invalid state.
          It can't be populated or have controllers enabled. It may
          be allowed to become a threaded cgroup.

        - "threaded" : A threaded cgroup which is a member of a
          threaded subtree.

        A cgroup can be turned into a threaded cgroup by writing
        "threaded" to this file.

  cgroup.procs
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the PIDs of all processes which belong to
        the cgroup one-per-line. The PIDs are not ordered and the
        same PID may show up more than once if the process got moved
        to another cgroup and then back or the PID got recycled while
        reading.

        A PID can be written to migrate the process associated with
        the PID to the cgroup. The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.procs" file.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

        In a threaded cgroup, reading this file fails with EOPNOTSUPP
        as all the processes belong to the thread root. Writing is
        supported and moves every thread of the process to the cgroup.

  cgroup.threads
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the TIDs of all threads which belong to
        the cgroup one-per-line. The TIDs are not ordered and the
        same TID may show up more than once if the thread got moved to
        another cgroup and then back or the TID got recycled while
        reading.

        A TID can be written to migrate the thread associated with the
        TID to the cgroup. The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.threads" file.

        - The cgroup that the thread is currently in must be in the
          same resource domain as the destination cgroup.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

  cgroup.controllers
        A read-only space separated values file which exists on all
        cgroups.

        It shows space separated list of all controllers available to
        the cgroup. The controllers are not ordered.

  cgroup.subtree_control
        A read-write space separated values file which exists on all
        cgroups. Starts out empty.

        When read, it shows space separated list of the controllers
        which are enabled to control resource distribution from the
        cgroup to its children.

        Space separated list of controllers prefixed with '+' or '-'
        can be written to enable or disable controllers. A controller
        name prefixed with '+' enables the controller and '-'
        disables. If a controller appears more than once on the list,
        the last one is effective. When multiple enable and disable
        operations are specified, either all succeed or all fail.

  cgroup.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined. Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          populated
                1 if the cgroup or its descendants contains any live
                processes; otherwise, 0.
          frozen
                1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
        A read-write single value file. The default is "max".

        Maximum allowed number of descendant cgroups.
        If the actual number of descendants is equal or larger,
        an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
        A read-write single value file. The default is "max".

        Maximum allowed descent depth below the current cgroup.
        If the actual descent depth is equal or larger,
        an attempt to create a new child cgroup will fail.

  cgroup.stat
        A read-only flat-keyed file with the following entries:

          nr_descendants
                Total number of visible descendant cgroups.

          nr_dying_descendants
                Total number of dying descendant cgroups. A cgroup
                becomes dying after being deleted by a user. The
                cgroup will remain in dying state for some undefined
                time (which can depend on system load) before being
                completely destroyed.

                A process can't enter a dying cgroup under any
                circumstances, and a dying cgroup can't revive.

                A dying cgroup can consume system resources not
                exceeding limits, which were active at the moment of
                cgroup deletion.

  cgroup.freeze
        A read-write single value file which exists on non-root
        cgroups. Allowed values are "0" and "1". The default is "0".

        Writing "1" to the file causes freezing of the cgroup and all
        descendant cgroups. This means that all belonging processes
        will be stopped and will not run until the cgroup is
        explicitly unfrozen. Freezing of the cgroup may take some
        time; when this action is completed, the "frozen" value in the
        cgroup.events control file will be updated to "1" and the
        corresponding notification will be issued.

        A cgroup can be frozen either by its own settings, or by
        settings of any ancestor cgroups. If any ancestor cgroup is
        frozen, the cgroup will remain frozen.

        Processes in the frozen cgroup can be killed by a fatal
        signal. They also can enter and leave a frozen cgroup: either
        by an explicit move by a user, or if freezing of the cgroup
        races with fork(). If a process is moved to a frozen cgroup,
        it stops. If a process is moved out of a frozen cgroup, it
        becomes running.

        Frozen status of a cgroup doesn't affect any cgroup tree
        operations: it's possible to delete a frozen (and empty)
        cgroup, as well as create new sub-cgroups.
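
        As a sketch, freezing a made-up cgroup "test" and checking
        the state once freezing has completed might look like::

          # echo 1 > /sys/fs/cgroup/test/cgroup.freeze
          # cat /sys/fs/cgroup/test/cgroup.events
          populated 1
          frozen 1

        Note that "frozen" flips to 1 only after all the processes
        have actually stopped, so an immediate read may still show 0.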

Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates distribution of CPU cycles. This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and absolute bandwidth allocation model for
realtime scheduling policy.

In all the above models, cycles distribution is defined only on a
temporal base and it does not account for the frequency at which tasks
are executed. The (optional) utilization clamping support allows
hinting the schedutil cpufreq governor about the minimum desired
frequency which should always be provided by a CPU, as well as the
maximum desired frequency, which should not be exceeded by a CPU.

WARNING: cgroup2 doesn't yet support control of realtime processes and
the cpu controller can only be enabled when all RT processes are in
the root cgroup. Be aware that system management software may already
have placed RT processes into nonroot cgroups during the system boot
process, and these processes may need to be moved to the root cgroup
before the cpu controller can be enabled.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
        A read-only flat-keyed file.
        This file exists whether the controller is enabled or not.

        It always reports the following three stats:

        - usage_usec
        - user_usec
        - system_usec

        and the following three when the controller is enabled:

        - nr_periods
        - nr_throttled
        - throttled_usec

  cpu.weight
        A read-write single value file which exists on non-root
        cgroups. The default is "100".

        The weight in the range [1, 10000].

  cpu.weight.nice
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        The nice value is in the range [-20, 19].

        This interface file is an alternative interface for
        "cpu.weight" and allows reading and setting weight using the
        same values used by nice(2). Because the range is smaller and
        granularity is coarser for the nice values, the read value is
        the closest approximation of the current weight.

  cpu.max
        A read-write two value file which exists on non-root cgroups.
        The default is "max 100000".

        The maximum bandwidth limit. It's in the following format::

          $MAX $PERIOD

        which indicates that the group may consume up to $MAX in each
        $PERIOD duration. "max" for $MAX indicates no limit. If only
        one number is written, $MAX is updated. (An example follows
        this section.)

  cpu.pressure
        A read-write nested-keyed file.

        Shows pressure stall information for CPU. See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.

  cpu.uclamp.min
        A read-write single value file which exists on non-root
        cgroups. The default is "0", i.e. no utilization boosting.

        The requested minimum utilization (protection) as a percentage
        rational number, e.g. 12.34 for 12.34%.

        This interface allows reading and setting minimum utilization
        clamp values similar to sched_setattr(2). This minimum
        utilization value is used to clamp the task specific minimum
        utilization clamp.

        The requested minimum utilization (protection) is always
        capped by the current value for the maximum utilization
        (limit), i.e. `cpu.uclamp.max`.

  cpu.uclamp.max
        A read-write single value file which exists on non-root
        cgroups. The default is "max", i.e. no utilization capping.

        The requested maximum utilization (limit) as a percentage
        rational number, e.g. 98.76 for 98.76%.

        This interface allows reading and setting maximum utilization
        clamp values similar to sched_setattr(2). This maximum
        utilization value is used to clamp the task specific maximum
        utilization clamp.
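
As an example of the two main knobs, the following sketch, with
made-up values and run in a cgroup whose parent has the "cpu"
controller enabled, gives the group twice the default weight and caps
it at half a CPU, i.e. 50ms of runtime per 100ms period::

  # echo 200 > cpu.weight
  # echo "50000 100000" > cpu.max
  # cat cpu.max
  50000 100000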


Memory
------

The "memory" controller regulates distribution of memory. Memory is
stateful and implements both limit and protection models. Due to the
intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent. Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes. If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of memory currently being used by the cgroup
        and its descendants.

  memory.min
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        Hard memory protection. If the memory usage of a cgroup
        is within its effective min boundary, the cgroup's memory
        won't be reclaimed under any conditions. If there is no
        unprotected reclaimable memory available, OOM killer
        is invoked. Above the effective min boundary (or
        effective low boundary if it is higher), pages are reclaimed
        proportionally to the overage, reducing reclaim pressure for
        smaller overages.

        Effective min boundary is limited by memory.min values of
        all ancestor cgroups. If there is memory.min overcommitment
        (child cgroup or cgroups are requiring more protected memory
        than parent will allow), then each child cgroup will get
        the part of parent's protection proportional to its
        actual memory usage below memory.min.

        Putting more memory than generally available under this
        protection is discouraged and may lead to constant OOMs.

        If a memory cgroup is not populated with processes,
        its memory.min is ignored.

  memory.low
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        Best-effort memory protection. If the memory usage of a
        cgroup is within its effective low boundary, the cgroup's
        memory won't be reclaimed unless there is no reclaimable
        memory available in unprotected cgroups.
        Above the effective low boundary (or
        effective min boundary if it is higher), pages are reclaimed
        proportionally to the overage, reducing reclaim pressure for
        smaller overages.

        Effective low boundary is limited by memory.low values of
        all ancestor cgroups. If there is memory.low overcommitment
        (child cgroup or cgroups are requiring more protected memory
        than parent will allow), then each child cgroup will get
        the part of parent's protection proportional to its
        actual memory usage below memory.low.

        Putting more memory than generally available under this
        protection is discouraged.
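
        For example, a sketch protecting roughly a workload's
        expected working set; the 4G figure is made up and suffixes
        like "G" are accepted, with values rounded to PAGE_SIZE
        multiples::

          # echo 4G > memory.low
          # cat memory.low
          4294967296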

  memory.high
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Memory usage throttle limit. This is the main mechanism to
        control memory usage of a cgroup. If a cgroup's usage goes
        over the high boundary, the processes of the cgroup are
        throttled and put under heavy reclaim pressure.

        Going over the high limit never invokes the OOM killer and
        under extreme conditions the limit may be breached.

  memory.max
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Memory usage hard limit. This is the final protection
        mechanism. If a cgroup's memory usage reaches this limit and
        can't be reduced, the OOM killer is invoked in the cgroup.
        Under certain circumstances, the usage may go over the limit
        temporarily.

        In the default configuration, regular 0-order allocations
        always succeed unless the OOM killer chooses the current task
        as a victim.

        Some kinds of allocations don't invoke the OOM killer.
        The caller may retry them differently, return -ENOMEM to
        userspace, or silently ignore the failure in cases like disk
        readahead.

        This is the ultimate protection mechanism. As long as the
        high limit is used and monitored properly, this limit's
        utility is limited to providing the final safety net.

  memory.oom.group
        A read-write single value file which exists on non-root
        cgroups. The default value is "0".

        Determines whether the cgroup should be treated as
        an indivisible workload by the OOM killer. If set,
        all tasks belonging to the cgroup or to its descendants
        (if the memory cgroup is not a leaf cgroup) are killed
        together or not at all. This can be used to avoid
        partial kills to guarantee workload integrity.

        Tasks with the OOM protection (oom_score_adj set to -1000)
        are treated as an exception and are never killed.

        If the OOM killer is invoked in a cgroup, it's not going
        to kill any tasks outside of this cgroup, regardless of
        the memory.oom.group values of ancestor cgroups.

  memory.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined. Unless specified
        otherwise, a value change in this file generates a file
        modified event.

        Note that all fields in this file are hierarchical and the
        file modified event can be generated due to an event down the
        hierarchy. For the local events at the cgroup level, see
        memory.events.local.

          low
                The number of times the cgroup is reclaimed due to
                high memory pressure even though its usage is under
                the low boundary. This usually indicates that the low
                boundary is over-committed.

          high
                The number of times processes of the cgroup are
                throttled and routed to perform direct memory reclaim
                because the high memory boundary was exceeded. For a
                cgroup whose memory usage is capped by the high limit
                rather than global memory pressure, this event's
                occurrences are expected.

          max
                The number of times the cgroup's memory usage was
                about to go over the max boundary. If direct reclaim
                fails to bring it down, the cgroup goes to OOM state.

          oom
                The number of times the cgroup's memory usage reached
                the limit and allocation was about to fail.

                This event is not raised if the OOM killer is not
                considered as an option, e.g. for failed high-order
                allocations or if the caller asked to not retry
                attempts.

          oom_kill
                The number of processes belonging to this cgroup
                killed by any kind of OOM killer.

  memory.events.local
        Similar to memory.events but the fields in the file are local
        to the cgroup, i.e. not hierarchical. The file modified event
        generated on this file reflects only the local events.

  memory.stat
        A read-only flat-keyed file which exists on non-root cgroups.

        This breaks down the cgroup's memory footprint into different
        types of memory, type-specific details, and other information
        on the state and past events of the memory management system.

        All memory amounts are in bytes.

        The entries are ordered to be human readable, and new entries
        can show up in the middle. Don't rely on items remaining in a
        fixed position; use the keys to look up specific values!

        Entries which have no per-node counter, and therefore do not
        show up in memory.numa_stat, are tagged with 'npn'
        (non-per-node).

          anon
                Amount of memory used in anonymous mappings such as
                brk(), sbrk(), and mmap(MAP_ANONYMOUS)

          file
                Amount of memory used to cache filesystem data,
                including tmpfs and shared memory.

          kernel_stack
                Amount of memory allocated to kernel stacks.

          pagetables
                Amount of memory allocated for page tables.

          percpu (npn)
                Amount of memory used for storing per-cpu kernel
                data structures.

          sock (npn)
                Amount of memory used in network transmission buffers

          shmem
                Amount of cached filesystem data that is swap-backed,
                such as tmpfs, shm segments, shared anonymous mmap()s

          file_mapped
                Amount of cached filesystem data mapped with mmap()

          file_dirty
                Amount of cached filesystem data that was modified but
                not yet written back to disk

          file_writeback
                Amount of cached filesystem data that was modified and
                is currently being written back to disk

          swapcached
                Amount of swap cached in memory. The swapcache is
                accounted against both memory and swap usage.

          anon_thp
                Amount of memory used in anonymous mappings backed by
                transparent hugepages

          file_thp
                Amount of cached filesystem data backed by transparent
                hugepages

          shmem_thp
                Amount of shm, tmpfs, shared anonymous mmap()s backed
                by transparent hugepages

          inactive_anon, active_anon, inactive_file, active_file, unevictable
                Amount of memory, swap-backed and filesystem-backed,
                on the internal memory management lists used by the
                page reclaim algorithm.

                As these represent internal list state (eg. shmem
                pages are on anon memory management lists),
                inactive_foo + active_foo may not be equal to the
                value for the foo counter, since the foo counter is
                type-based, not list-based.

          slab_reclaimable
                Part of "slab" that might be reclaimed, such as
                dentries and inodes.

          slab_unreclaimable
                Part of "slab" that cannot be reclaimed on memory
                pressure.

          slab (npn)
                Amount of memory used for storing in-kernel data
                structures.

          workingset_refault_anon
                Number of refaults of previously evicted anonymous
                pages.

          workingset_refault_file
                Number of refaults of previously evicted file pages.

          workingset_activate_anon
                Number of refaulted anonymous pages that were
                immediately activated.

          workingset_activate_file
                Number of refaulted file pages that were immediately
                activated.

          workingset_restore_anon
                Number of restored anonymous pages which have been
                detected as an active workingset before they got
                reclaimed.

          workingset_restore_file
                Number of restored file pages which have been detected
                as an active workingset before they got reclaimed.

          workingset_nodereclaim
                Number of times a shadow node has been reclaimed

          pgfault (npn)
                Total number of page faults incurred

          pgmajfault (npn)
                Number of major page faults incurred

          pgrefill (npn)
                Amount of scanned pages (in an active LRU list)

          pgscan (npn)
                Amount of scanned pages (in an inactive LRU list)

          pgsteal (npn)
                Amount of reclaimed pages

          pgactivate (npn)
                Amount of pages moved to the active LRU list

          pgdeactivate (npn)
                Amount of pages moved to the inactive LRU list

          pglazyfree (npn)
                Amount of pages postponed to be freed under memory
                pressure

          pglazyfreed (npn)
                Amount of reclaimed lazyfree pages

          thp_fault_alloc (npn)
                Number of transparent hugepages which were allocated
                to satisfy a page fault. This counter is not present
                when CONFIG_TRANSPARENT_HUGEPAGE is not set.

          thp_collapse_alloc (npn)
                Number of transparent hugepages which were allocated
                to allow collapsing an existing range of pages. This
                counter is not present when
                CONFIG_TRANSPARENT_HUGEPAGE is not set.

  memory.numa_stat
        A read-only nested-keyed file which exists on non-root
        cgroups.

        This breaks down the cgroup's memory footprint into different
        types of memory, type-specific details, and other information
        per node on the state of the memory management system.

        This is useful for providing visibility into the NUMA locality
        information within a memcg since the pages are allowed to be
        allocated from any physical node. One use case is evaluating
        application performance by combining this information with the
        application's CPU allocation.

        All memory amounts are in bytes.

        The output format of memory.numa_stat is::

          type N0=<bytes in node 0> N1=<bytes in node 1> ...

        The entries are ordered to be human readable, and new entries
        can show up in the middle. Don't rely on items remaining in a
        fixed position; use the keys to look up specific values!

        The entries correspond to those in memory.stat.

  memory.swap.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of swap currently being used by the cgroup
        and its descendants.

  memory.swap.high
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Swap usage throttle limit. If a cgroup's swap usage exceeds
        this limit, all its further allocations will be throttled to
        allow userspace to implement custom out-of-memory procedures.

        This limit marks a point of no return for the cgroup. It is
        NOT designed to manage the amount of swapping a workload does
        during regular operation. Compare to memory.swap.max, which
        prohibits swapping past a set amount, but lets the cgroup
        continue unimpeded as long as other memory can be reclaimed.

        Healthy workloads are not expected to reach this limit.

  memory.swap.max
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Swap usage hard limit. If a cgroup's swap usage reaches this
        limit, anonymous memory of the cgroup will not be swapped out.

  memory.swap.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined. Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          high
                The number of times the cgroup's swap usage was over
                the high threshold.

          max
                The number of times the cgroup's swap usage was about
                to go over the max boundary and swap allocation
                failed.

          fail
                The number of times swap allocation failed either
                because of running out of swap system-wide or the max
                limit.

        When reduced under the current usage, the existing swap
        entries are reclaimed gradually and the swap usage may stay
        higher than the limit for an extended period of time. This
        reduces the impact on the workload and memory management.

  memory.pressure
        A read-only nested-keyed file.

        Shows pressure stall information for memory. See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.
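
Putting the main knobs together, a common configuration for a
workload cgroup might look like the following sketch; all values are
made up for illustration. The expected working set is protected with
"memory.low", the intended ceiling is enforced with "memory.high" and
"memory.max" is kept somewhat above it as a safety net::

  # echo 4G > memory.low
  # echo 8G > memory.high
  # echo 9G > memory.max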

  memory.swap.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined.  Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          high
                The number of times the cgroup's swap usage was over
                the high threshold.

          max
                The number of times the cgroup's swap usage was about
                to go over the max boundary and swap allocation
                failed.

          fail
                The number of times swap allocation failed either
                because of running out of swap system-wide or the max
                limit.

        When reduced under the current usage, the existing swap
        entries are reclaimed gradually and the swap usage may stay
        higher than the limit for an extended period of time.  This
        reduces the impact on the workload and memory management.

  memory.pressure
        A read-only nested-keyed file.

        Shows pressure stall information for memory.  See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.


Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
usage is a viable strategy.

Because breach of the high limit doesn't trigger the OOM killer but
throttles the offending cgroup, a management agent has ample
opportunities to monitor and take appropriate actions such as granting
more memory or terminating the workload.

Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory.  For example, a workload which writes data received from
network to a file can use all available memory but can also operate
just as performantly with a small amount of memory.  A measure of
memory pressure - how much the workload is being impacted due to lack
of memory - is necessary to determine whether a workload needs more
memory; the "memory.pressure" interface described above exposes
exactly such a measure.
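A minimal sketch of the over-commit strategy described above, with
made-up cgroup names and values: give two workloads high limits whose
sum exceeds what is actually available and let reclaim pressure
arbitrate::

  # echo 6G > /sys/fs/cgroup/job-a/memory.high
  # echo 6G > /sys/fs/cgroup/job-b/memory.high

A management agent can then watch "memory.pressure" and
"memory.events" of each cgroup and grow, shrink or terminate
workloads accordingly.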


Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and stays
charged to the cgroup until the area is released.  Migrating a process
to a different cgroup doesn't move the memory usages that it
instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different cgroups.
Which cgroup the area will be charged to is indeterminate; however,
over time, the memory area is likely to end up in a cgroup which has
enough memory allowance to avoid high reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is expected
to be accessed repeatedly by other cgroups, it may make sense to use
POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
belonging to the affected files to ensure correct memory ownership.


IO
--

The "io" controller regulates the distribution of IO resources.  This
controller implements both weight based and absolute bandwidth or IOPS
limit distribution; however, weight based distribution is available
only if cfq-iosched is in use and neither scheme is available for
blk-mq devices.


IO Interface Files
~~~~~~~~~~~~~~~~~~

  io.stat
        A read-only nested-keyed file.

        Lines are keyed by $MAJ:$MIN device numbers and not ordered.
        The following nested keys are defined.

          ======  =====================
          rbytes  Bytes read
          wbytes  Bytes written
          rios    Number of read IOs
          wios    Number of write IOs
          dbytes  Bytes discarded
          dios    Number of discard IOs
          ======  =====================

        An example read output follows::

          8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
          8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021

  io.cost.qos
        A read-write nested-keyed file which exists only on the root
        cgroup.

        This file configures the Quality of Service of the IO cost
        model based controller (CONFIG_BLK_CGROUP_IOCOST) which
        currently implements "io.weight" proportional control.  Lines
        are keyed by $MAJ:$MIN device numbers and not ordered.  The
        line for a given device is populated on the first write for
        the device on "io.cost.qos" or "io.cost.model".  The following
        nested keys are defined.

          ======  =====================================
          enable  Weight-based control enable
          ctrl    "auto" or "user"
          rpct    Read latency percentile [0, 100]
          rlat    Read latency threshold
          wpct    Write latency percentile [0, 100]
          wlat    Write latency threshold
          min     Minimum scaling percentage [1, 10000]
          max     Maximum scaling percentage [1, 10000]
          ======  =====================================

        The controller is disabled by default and can be enabled by
        setting "enable" to 1.  "rpct" and "wpct" parameters default
        to zero and the controller uses internal device saturation
        state to adjust the overall IO rate between "min" and "max".

        When a better control quality is needed, latency QoS
        parameters can be configured.  For example::

          8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.0

        shows that on sdb, the controller is enabled, will consider
        the device saturated if the 95th percentile of read completion
        latencies is above 75ms or of write completion latencies above
        150ms, and will adjust the overall IO issue rate between 50%
        and 150% accordingly.

        The lower the saturation point, the better the latency QoS at
        the cost of aggregate bandwidth.  The narrower the allowed
        adjustment range between "min" and "max", the more conformant
        to the cost model the IO behavior.  Note that the IO issue
        base rate may be far off from 100% and setting "min" and "max"
        blindly can lead to a significant loss of device capacity or
        control quality.  "min" and "max" are useful for regulating
        devices which show wide temporary behavior changes - e.g. an
        SSD which accepts writes at line speed for a while and
        then completely stalls for multiple seconds.

        When "ctrl" is "auto", the parameters are controlled by the
        kernel and may change automatically.  Setting "ctrl" to "user"
        or setting any of the percentile and latency parameters puts
        it into "user" mode and disables the automatic changes.  The
        automatic mode can be restored by setting "ctrl" to "auto".
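        A configuration along the lines of the read output above could
        be produced by a write like the following sketch (the device
        number and all parameter values are illustrative only)::

          # echo "8:16 enable=1 rpct=95.00 rlat=75000 wpct=95.00 wlat=150000" > io.cost.qos

        Note that, per the behavior described above, writing latency
        QoS parameters this way also switches "ctrl" to "user".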
"max" can be specified as the value 1698 to remove a specific limit. If the same key is specified 1699 multiple times, the outcome is undefined. 1700 1701 BPS and IOPS are measured in each IO direction and IOs are 1702 delayed if limit is reached. Temporary bursts are allowed. 1703 1704 Setting read limit at 2M BPS and write at 120 IOPS for 8:16:: 1705 1706 echo "8:16 rbps=2097152 wiops=120" > io.max 1707 1708 Reading returns the following:: 1709 1710 8:16 rbps=2097152 wbps=max riops=max wiops=120 1711 1712 Write IOPS limit can be removed by writing the following:: 1713 1714 echo "8:16 wiops=max" > io.max 1715 1716 Reading now returns the following:: 1717 1718 8:16 rbps=2097152 wbps=max riops=max wiops=max 1719 1720 io.pressure 1721 A read-only nested-keyed file. 1722 1723 Shows pressure stall information for IO. See 1724 :ref:`Documentation/accounting/psi.rst <psi>` for details. 1725 1726 1727Writeback 1728~~~~~~~~~ 1729 1730Page cache is dirtied through buffered writes and shared mmaps and 1731written asynchronously to the backing filesystem by the writeback 1732mechanism. Writeback sits between the memory and IO domains and 1733regulates the proportion of dirty memory by balancing dirtying and 1734write IOs. 1735 1736The io controller, in conjunction with the memory controller, 1737implements control of page cache writeback IOs. The memory controller 1738defines the memory domain that dirty memory ratio is calculated and 1739maintained for and the io controller defines the io domain which 1740writes out dirty pages for the memory domain. Both system-wide and 1741per-cgroup dirty memory states are examined and the more restrictive 1742of the two is enforced. 1743 1744cgroup writeback requires explicit support from the underlying 1745filesystem. Currently, cgroup writeback is implemented on ext2, ext4, 1746btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are 1747attributed to the root cgroup. 1748 1749There are inherent differences in memory and writeback management 1750which affects how cgroup ownership is tracked. Memory is tracked per 1751page while writeback per inode. For the purpose of writeback, an 1752inode is assigned to a cgroup and all IO requests to write dirty pages 1753from the inode are attributed to that cgroup. 1754 1755As cgroup ownership for memory is tracked per page, there can be pages 1756which are associated with different cgroups than the one the inode is 1757associated with. These are called foreign pages. The writeback 1758constantly keeps track of foreign pages and, if a particular foreign 1759cgroup becomes the majority over a certain period of time, switches 1760the ownership of the inode to that cgroup. 1761 1762While this model is enough for most use cases where a given inode is 1763mostly dirtied by a single cgroup even when the main writing cgroup 1764changes over time, use cases where multiple cgroups write to a single 1765inode simultaneously are not supported well. In such circumstances, a 1766significant portion of IOs are likely to be attributed incorrectly. 1767As memory controller assigns page ownership on the first use and 1768doesn't update it until the page is released, even if writeback 1769strictly follows page ownership, multiple cgroups dirtying overlapping 1770areas wouldn't work as expected. It's recommended to avoid such usage 1771patterns. 1772 1773The sysctl knobs which affect writeback behavior are applied to cgroup 1774writeback as follows. 
1775 1776 vm.dirty_background_ratio, vm.dirty_ratio 1777 These ratios apply the same to cgroup writeback with the 1778 amount of available memory capped by limits imposed by the 1779 memory controller and system-wide clean memory. 1780 1781 vm.dirty_background_bytes, vm.dirty_bytes 1782 For cgroup writeback, this is calculated into ratio against 1783 total available memory and applied the same way as 1784 vm.dirty[_background]_ratio. 1785 1786 1787IO Latency 1788~~~~~~~~~~ 1789 1790This is a cgroup v2 controller for IO workload protection. You provide a group 1791with a latency target, and if the average latency exceeds that target the 1792controller will throttle any peers that have a lower latency target than the 1793protected workload. 1794 1795The limits are only applied at the peer level in the hierarchy. This means that 1796in the diagram below, only groups A, B, and C will influence each other, and 1797groups D and F will influence each other. Group G will influence nobody:: 1798 1799 [root] 1800 / | \ 1801 A B C 1802 / \ | 1803 D F G 1804 1805 1806So the ideal way to configure this is to set io.latency in groups A, B, and C. 1807Generally you do not want to set a value lower than the latency your device 1808supports. Experiment to find the value that works best for your workload. 1809Start at higher than the expected latency for your device and watch the 1810avg_lat value in io.stat for your workload group to get an idea of the 1811latency you see during normal operation. Use the avg_lat value as a basis for 1812your real setting, setting at 10-15% higher than the value in io.stat. 1813 1814How IO Latency Throttling Works 1815~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1816 1817io.latency is work conserving; so as long as everybody is meeting their latency 1818target the controller doesn't do anything. Once a group starts missing its 1819target it begins throttling any peer group that has a higher target than itself. 1820This throttling takes 2 forms: 1821 1822- Queue depth throttling. This is the number of outstanding IO's a group is 1823 allowed to have. We will clamp down relatively quickly, starting at no limit 1824 and going all the way down to 1 IO at a time. 1825 1826- Artificial delay induction. There are certain types of IO that cannot be 1827 throttled without possibly adversely affecting higher priority groups. This 1828 includes swapping and metadata IO. These types of IO are allowed to occur 1829 normally, however they are "charged" to the originating group. If the 1830 originating group is being throttled you will see the use_delay and delay 1831 fields in io.stat increase. The delay value is how many microseconds that are 1832 being added to any process that runs in this group. Because this number can 1833 grow quite large if there is a lot of swapping or metadata IO occurring we 1834 limit the individual delay events to 1 second at a time. 1835 1836Once the victimized group starts meeting its latency target again it will start 1837unthrottling any peer groups that were throttled previously. If the victimized 1838group simply stops doing IO the global counter will unthrottle appropriately. 1839 1840IO Latency Interface Files 1841~~~~~~~~~~~~~~~~~~~~~~~~~~ 1842 1843 io.latency 1844 This takes a similar format as the other controllers. 1845 1846 "MAJOR:MINOR target=<target time in microseconds" 1847 1848 io.stat 1849 If the controller is enabled you will see extra stats in io.stat in 1850 addition to the normal ones. 
1851 1852 depth 1853 This is the current queue depth for the group. 1854 1855 avg_lat 1856 This is an exponential moving average with a decay rate of 1/exp 1857 bound by the sampling interval. The decay rate interval can be 1858 calculated by multiplying the win value in io.stat by the 1859 corresponding number of samples based on the win value. 1860 1861 win 1862 The sampling window size in milliseconds. This is the minimum 1863 duration of time between evaluation events. Windows only elapse 1864 with IO activity. Idle periods extend the most recent window. 1865 1866PID 1867--- 1868 1869The process number controller is used to allow a cgroup to stop any 1870new tasks from being fork()'d or clone()'d after a specified limit is 1871reached. 1872 1873The number of tasks in a cgroup can be exhausted in ways which other 1874controllers cannot prevent, thus warranting its own controller. For 1875example, a fork bomb is likely to exhaust the number of tasks before 1876hitting memory restrictions. 1877 1878Note that PIDs used in this controller refer to TIDs, process IDs as 1879used by the kernel. 1880 1881 1882PID Interface Files 1883~~~~~~~~~~~~~~~~~~~ 1884 1885 pids.max 1886 A read-write single value file which exists on non-root 1887 cgroups. The default is "max". 1888 1889 Hard limit of number of processes. 1890 1891 pids.current 1892 A read-only single value file which exists on all cgroups. 1893 1894 The number of processes currently in the cgroup and its 1895 descendants. 1896 1897Organisational operations are not blocked by cgroup policies, so it is 1898possible to have pids.current > pids.max. This can be done by either 1899setting the limit to be smaller than pids.current, or attaching enough 1900processes to the cgroup such that pids.current is larger than 1901pids.max. However, it is not possible to violate a cgroup PID policy 1902through fork() or clone(). These will return -EAGAIN if the creation 1903of a new process would cause a cgroup policy to be violated. 1904 1905 1906Cpuset 1907------ 1908 1909The "cpuset" controller provides a mechanism for constraining 1910the CPU and memory node placement of tasks to only the resources 1911specified in the cpuset interface files in a task's current cgroup. 1912This is especially valuable on large NUMA systems where placing jobs 1913on properly sized subsets of the systems with careful processor and 1914memory placement to reduce cross-node memory access and contention 1915can improve overall system performance. 1916 1917The "cpuset" controller is hierarchical. That means the controller 1918cannot use CPUs or memory nodes not allowed in its parent. 1919 1920 1921Cpuset Interface Files 1922~~~~~~~~~~~~~~~~~~~~~~ 1923 1924 cpuset.cpus 1925 A read-write multiple values file which exists on non-root 1926 cpuset-enabled cgroups. 1927 1928 It lists the requested CPUs to be used by tasks within this 1929 cgroup. The actual list of CPUs to be granted, however, is 1930 subjected to constraints imposed by its parent and can differ 1931 from the requested CPUs. 1932 1933 The CPU numbers are comma-separated numbers or ranges. 1934 For example:: 1935 1936 # cat cpuset.cpus 1937 0-4,6,8-10 1938 1939 An empty value indicates that the cgroup is using the same 1940 setting as the nearest cgroup ancestor with a non-empty 1941 "cpuset.cpus" or all the available CPUs if none is found. 1942 1943 The value of "cpuset.cpus" stays constant until the next update 1944 and won't be affected by any CPU hotplug events. 
1945 1946 cpuset.cpus.effective 1947 A read-only multiple values file which exists on all 1948 cpuset-enabled cgroups. 1949 1950 It lists the onlined CPUs that are actually granted to this 1951 cgroup by its parent. These CPUs are allowed to be used by 1952 tasks within the current cgroup. 1953 1954 If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows 1955 all the CPUs from the parent cgroup that can be available to 1956 be used by this cgroup. Otherwise, it should be a subset of 1957 "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus" 1958 can be granted. In this case, it will be treated just like an 1959 empty "cpuset.cpus". 1960 1961 Its value will be affected by CPU hotplug events. 1962 1963 cpuset.mems 1964 A read-write multiple values file which exists on non-root 1965 cpuset-enabled cgroups. 1966 1967 It lists the requested memory nodes to be used by tasks within 1968 this cgroup. The actual list of memory nodes granted, however, 1969 is subjected to constraints imposed by its parent and can differ 1970 from the requested memory nodes. 1971 1972 The memory node numbers are comma-separated numbers or ranges. 1973 For example:: 1974 1975 # cat cpuset.mems 1976 0-1,3 1977 1978 An empty value indicates that the cgroup is using the same 1979 setting as the nearest cgroup ancestor with a non-empty 1980 "cpuset.mems" or all the available memory nodes if none 1981 is found. 1982 1983 The value of "cpuset.mems" stays constant until the next update 1984 and won't be affected by any memory nodes hotplug events. 1985 1986 cpuset.mems.effective 1987 A read-only multiple values file which exists on all 1988 cpuset-enabled cgroups. 1989 1990 It lists the onlined memory nodes that are actually granted to 1991 this cgroup by its parent. These memory nodes are allowed to 1992 be used by tasks within the current cgroup. 1993 1994 If "cpuset.mems" is empty, it shows all the memory nodes from the 1995 parent cgroup that will be available to be used by this cgroup. 1996 Otherwise, it should be a subset of "cpuset.mems" unless none of 1997 the memory nodes listed in "cpuset.mems" can be granted. In this 1998 case, it will be treated just like an empty "cpuset.mems". 1999 2000 Its value will be affected by memory nodes hotplug events. 2001 2002 cpuset.cpus.partition 2003 A read-write single value file which exists on non-root 2004 cpuset-enabled cgroups. This flag is owned by the parent cgroup 2005 and is not delegatable. 2006 2007 It accepts only the following input values when written to. 2008 2009 ======== ================================ 2010 "root" a partition root 2011 "member" a non-root member of a partition 2012 ======== ================================ 2013 2014 When set to be a partition root, the current cgroup is the 2015 root of a new partition or scheduling domain that comprises 2016 itself and all its descendants except those that are separate 2017 partition roots themselves and their descendants. The root 2018 cgroup is always a partition root. 2019 2020 There are constraints on where a partition root can be set. 2021 It can only be set in a cgroup if all the following conditions 2022 are true. 2023 2024 1) The "cpuset.cpus" is not empty and the list of CPUs are 2025 exclusive, i.e. they are not shared by any of its siblings. 2026 2) The parent cgroup is a partition root. 2027 3) The "cpuset.cpus" is also a proper subset of the parent's 2028 "cpuset.cpus.effective". 2029 4) There is no child cgroups with cpuset enabled. 
This is for 2030 eliminating corner cases that have to be handled if such a 2031 condition is allowed. 2032 2033 Setting it to partition root will take the CPUs away from the 2034 effective CPUs of the parent cgroup. Once it is set, this 2035 file cannot be reverted back to "member" if there are any child 2036 cgroups with cpuset enabled. 2037 2038 A parent partition cannot distribute all its CPUs to its 2039 child partitions. There must be at least one cpu left in the 2040 parent partition. 2041 2042 Once becoming a partition root, changes to "cpuset.cpus" is 2043 generally allowed as long as the first condition above is true, 2044 the change will not take away all the CPUs from the parent 2045 partition and the new "cpuset.cpus" value is a superset of its 2046 children's "cpuset.cpus" values. 2047 2048 Sometimes, external factors like changes to ancestors' 2049 "cpuset.cpus" or cpu hotplug can cause the state of the partition 2050 root to change. On read, the "cpuset.sched.partition" file 2051 can show the following values. 2052 2053 ============== ============================== 2054 "member" Non-root member of a partition 2055 "root" Partition root 2056 "root invalid" Invalid partition root 2057 ============== ============================== 2058 2059 It is a partition root if the first 2 partition root conditions 2060 above are true and at least one CPU from "cpuset.cpus" is 2061 granted by the parent cgroup. 2062 2063 A partition root can become invalid if none of CPUs requested 2064 in "cpuset.cpus" can be granted by the parent cgroup or the 2065 parent cgroup is no longer a partition root itself. In this 2066 case, it is not a real partition even though the restriction 2067 of the first partition root condition above will still apply. 2068 The cpu affinity of all the tasks in the cgroup will then be 2069 associated with CPUs in the nearest ancestor partition. 2070 2071 An invalid partition root can be transitioned back to a 2072 real partition root if at least one of the requested CPUs 2073 can now be granted by its parent. In this case, the cpu 2074 affinity of all the tasks in the formerly invalid partition 2075 will be associated to the CPUs of the newly formed partition. 2076 Changing the partition state of an invalid partition root to 2077 "member" is always allowed even if child cpusets are present. 2078 2079 2080Device controller 2081----------------- 2082 2083Device controller manages access to device files. It includes both 2084creation of new device files (using mknod), and access to the 2085existing device files. 2086 2087Cgroup v2 device controller has no interface files and is implemented 2088on top of cgroup BPF. To control access to device files, a user may 2089create bpf programs of the BPF_CGROUP_DEVICE type and attach them 2090to cgroups. On an attempt to access a device file, corresponding 2091BPF programs will be executed, and depending on the return value 2092the attempt will succeed or fail with -EPERM. 2093 2094A BPF_CGROUP_DEVICE program takes a pointer to the bpf_cgroup_dev_ctx 2095structure, which describes the device access attempt: access type 2096(mknod/read/write) and device (type, major and minor numbers). 2097If the program returns 0, the attempt fails with -EPERM, otherwise 2098it succeeds. 2099 2100An example of BPF_CGROUP_DEVICE program may be found in the kernel 2101source tree in the tools/testing/selftests/bpf/progs/dev_cgroup.c file. 
2102 2103 2104RDMA 2105---- 2106 2107The "rdma" controller regulates the distribution and accounting of 2108RDMA resources. 2109 2110RDMA Interface Files 2111~~~~~~~~~~~~~~~~~~~~ 2112 2113 rdma.max 2114 A readwrite nested-keyed file that exists for all the cgroups 2115 except root that describes current configured resource limit 2116 for a RDMA/IB device. 2117 2118 Lines are keyed by device name and are not ordered. 2119 Each line contains space separated resource name and its configured 2120 limit that can be distributed. 2121 2122 The following nested keys are defined. 2123 2124 ========== ============================= 2125 hca_handle Maximum number of HCA Handles 2126 hca_object Maximum number of HCA Objects 2127 ========== ============================= 2128 2129 An example for mlx4 and ocrdma device follows:: 2130 2131 mlx4_0 hca_handle=2 hca_object=2000 2132 ocrdma1 hca_handle=3 hca_object=max 2133 2134 rdma.current 2135 A read-only file that describes current resource usage. 2136 It exists for all the cgroup except root. 2137 2138 An example for mlx4 and ocrdma device follows:: 2139 2140 mlx4_0 hca_handle=1 hca_object=20 2141 ocrdma1 hca_handle=1 hca_object=23 2142 2143HugeTLB 2144------- 2145 2146The HugeTLB controller allows to limit the HugeTLB usage per control group and 2147enforces the controller limit during page fault. 2148 2149HugeTLB Interface Files 2150~~~~~~~~~~~~~~~~~~~~~~~ 2151 2152 hugetlb.<hugepagesize>.current 2153 Show current usage for "hugepagesize" hugetlb. It exists for all 2154 the cgroup except root. 2155 2156 hugetlb.<hugepagesize>.max 2157 Set/show the hard limit of "hugepagesize" hugetlb usage. 2158 The default value is "max". It exists for all the cgroup except root. 2159 2160 hugetlb.<hugepagesize>.events 2161 A read-only flat-keyed file which exists on non-root cgroups. 2162 2163 max 2164 The number of allocation failure due to HugeTLB limit 2165 2166 hugetlb.<hugepagesize>.events.local 2167 Similar to hugetlb.<hugepagesize>.events but the fields in the file 2168 are local to the cgroup i.e. not hierarchical. The file modified event 2169 generated on this file reflects only the local events. 2170 2171Misc 2172---- 2173 2174perf_event 2175~~~~~~~~~~ 2176 2177perf_event controller, if not mounted on a legacy hierarchy, is 2178automatically enabled on the v2 hierarchy so that perf events can 2179always be filtered by cgroup v2 path. The controller can still be 2180moved to a legacy hierarchy after v2 hierarchy is populated. 2181 2182 2183Non-normative information 2184------------------------- 2185 2186This section contains information that isn't considered to be a part of 2187the stable kernel API and so is subject to change. 2188 2189 2190CPU controller root cgroup process behaviour 2191~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2192 2193When distributing CPU cycles in the root cgroup each thread in this 2194cgroup is treated as if it was hosted in a separate child cgroup of the 2195root cgroup. This child cgroup weight is dependent on its thread nice 2196level. 2197 2198For details of this mapping see sched_prio_to_weight array in 2199kernel/sched/core.c file (values from this array should be scaled 2200appropriately so the neutral - nice 0 - value is 100 instead of 1024). 2201 2202 2203IO controller root cgroup process behaviour 2204~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2205 2206Root cgroup processes are hosted in an implicit leaf child node. 
2207When distributing IO resources this implicit child node is taken into 2208account as if it was a normal child cgroup of the root cgroup with a 2209weight value of 200. 2210 2211 2212Namespace 2213========= 2214 2215Basics 2216------ 2217 2218cgroup namespace provides a mechanism to virtualize the view of the 2219"/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone 2220flag can be used with clone(2) and unshare(2) to create a new cgroup 2221namespace. The process running inside the cgroup namespace will have 2222its "/proc/$PID/cgroup" output restricted to cgroupns root. The 2223cgroupns root is the cgroup of the process at the time of creation of 2224the cgroup namespace. 2225 2226Without cgroup namespace, the "/proc/$PID/cgroup" file shows the 2227complete path of the cgroup of a process. In a container setup where 2228a set of cgroups and namespaces are intended to isolate processes the 2229"/proc/$PID/cgroup" file may leak potential system level information 2230to the isolated processes. For example:: 2231 2232 # cat /proc/self/cgroup 2233 0::/batchjobs/container_id1 2234 2235The path '/batchjobs/container_id1' can be considered as system-data 2236and undesirable to expose to the isolated processes. cgroup namespace 2237can be used to restrict visibility of this path. For example, before 2238creating a cgroup namespace, one would see:: 2239 2240 # ls -l /proc/self/ns/cgroup 2241 lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835] 2242 # cat /proc/self/cgroup 2243 0::/batchjobs/container_id1 2244 2245After unsharing a new namespace, the view changes:: 2246 2247 # ls -l /proc/self/ns/cgroup 2248 lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183] 2249 # cat /proc/self/cgroup 2250 0::/ 2251 2252When some thread from a multi-threaded process unshares its cgroup 2253namespace, the new cgroupns gets applied to the entire process (all 2254the threads). This is natural for the v2 hierarchy; however, for the 2255legacy hierarchies, this may be unexpected. 2256 2257A cgroup namespace is alive as long as there are processes inside or 2258mounts pinning it. When the last usage goes away, the cgroup 2259namespace is destroyed. The cgroupns root and the actual cgroups 2260remain. 2261 2262 2263The Root and Views 2264------------------ 2265 2266The 'cgroupns root' for a cgroup namespace is the cgroup in which the 2267process calling unshare(2) is running. For example, if a process in 2268/batchjobs/container_id1 cgroup calls unshare, cgroup 2269/batchjobs/container_id1 becomes the cgroupns root. For the 2270init_cgroup_ns, this is the real root ('/') cgroup. 2271 2272The cgroupns root cgroup does not change even if the namespace creator 2273process later moves to a different cgroup:: 2274 2275 # ~/unshare -c # unshare cgroupns in some cgroup 2276 # cat /proc/self/cgroup 2277 0::/ 2278 # mkdir sub_cgrp_1 2279 # echo 0 > sub_cgrp_1/cgroup.procs 2280 # cat /proc/self/cgroup 2281 0::/sub_cgrp_1 2282 2283Each process gets its namespace-specific view of "/proc/$PID/cgroup" 2284 2285Processes running inside the cgroup namespace will be able to see 2286cgroup paths (in /proc/self/cgroup) only inside their root cgroup. 
2287From within an unshared cgroupns:: 2288 2289 # sleep 100000 & 2290 [1] 7353 2291 # echo 7353 > sub_cgrp_1/cgroup.procs 2292 # cat /proc/7353/cgroup 2293 0::/sub_cgrp_1 2294 2295From the initial cgroup namespace, the real cgroup path will be 2296visible:: 2297 2298 $ cat /proc/7353/cgroup 2299 0::/batchjobs/container_id1/sub_cgrp_1 2300 2301From a sibling cgroup namespace (that is, a namespace rooted at a 2302different cgroup), the cgroup path relative to its own cgroup 2303namespace root will be shown. For instance, if PID 7353's cgroup 2304namespace root is at '/batchjobs/container_id2', then it will see:: 2305 2306 # cat /proc/7353/cgroup 2307 0::/../container_id2/sub_cgrp_1 2308 2309Note that the relative path always starts with '/' to indicate that 2310its relative to the cgroup namespace root of the caller. 2311 2312 2313Migration and setns(2) 2314---------------------- 2315 2316Processes inside a cgroup namespace can move into and out of the 2317namespace root if they have proper access to external cgroups. For 2318example, from inside a namespace with cgroupns root at 2319/batchjobs/container_id1, and assuming that the global hierarchy is 2320still accessible inside cgroupns:: 2321 2322 # cat /proc/7353/cgroup 2323 0::/sub_cgrp_1 2324 # echo 7353 > batchjobs/container_id2/cgroup.procs 2325 # cat /proc/7353/cgroup 2326 0::/../container_id2 2327 2328Note that this kind of setup is not encouraged. A task inside cgroup 2329namespace should only be exposed to its own cgroupns hierarchy. 2330 2331setns(2) to another cgroup namespace is allowed when: 2332 2333(a) the process has CAP_SYS_ADMIN against its current user namespace 2334(b) the process has CAP_SYS_ADMIN against the target cgroup 2335 namespace's userns 2336 2337No implicit cgroup changes happen with attaching to another cgroup 2338namespace. It is expected that the someone moves the attaching 2339process under the target cgroup namespace root. 2340 2341 2342Interaction with Other Namespaces 2343--------------------------------- 2344 2345Namespace specific cgroup hierarchy can be mounted by a process 2346running inside a non-init cgroup namespace:: 2347 2348 # mount -t cgroup2 none $MOUNT_POINT 2349 2350This will mount the unified cgroup hierarchy with cgroupns root as the 2351filesystem root. The process needs CAP_SYS_ADMIN against its user and 2352mount namespaces. 2353 2354The virtualization of /proc/self/cgroup file combined with restricting 2355the view of cgroup hierarchy by namespace-private cgroupfs mount 2356provides a properly isolated cgroup view inside the container. 2357 2358 2359Information on Kernel Programming 2360================================= 2361 2362This section contains kernel programming information in the areas 2363where interacting with cgroup is necessary. cgroup core and 2364controllers are not covered. 2365 2366 2367Filesystem Support for Writeback 2368-------------------------------- 2369 2370A filesystem can support cgroup writeback by updating 2371address_space_operations->writepage[s]() to annotate bio's using the 2372following two functions. 2373 2374 wbc_init_bio(@wbc, @bio) 2375 Should be called for each bio carrying writeback data and 2376 associates the bio with the inode's owner cgroup and the 2377 corresponding request queue. This must be called after 2378 a queue (device) has been associated with the bio and 2379 before submission. 2380 2381 wbc_account_cgroup_owner(@wbc, @page, @bytes) 2382 Should be called for each data segment being written out. 
2383 While this function doesn't care exactly when it's called 2384 during the writeback session, it's the easiest and most 2385 natural to call it as data segments are added to a bio. 2386 2387With writeback bio's annotated, cgroup support can be enabled per 2388super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for 2389selective disabling of cgroup writeback support which is helpful when 2390certain filesystem features, e.g. journaled data mode, are 2391incompatible. 2392 2393wbc_init_bio() binds the specified bio to its cgroup. Depending on 2394the configuration, the bio may be executed at a lower priority and if 2395the writeback session is holding shared resources, e.g. a journal 2396entry, may lead to priority inversion. There is no one easy solution 2397for the problem. Filesystems can try to work around specific problem 2398cases by skipping wbc_init_bio() and using bio_associate_blkg() 2399directly. 2400 2401 2402Deprecated v1 Core Features 2403=========================== 2404 2405- Multiple hierarchies including named ones are not supported. 2406 2407- All v1 mount options are not supported. 2408 2409- The "tasks" file is removed and "cgroup.procs" is not sorted. 2410 2411- "cgroup.clone_children" is removed. 2412 2413- /proc/cgroups is meaningless for v2. Use "cgroup.controllers" file 2414 at the root instead. 2415 2416 2417Issues with v1 and Rationales for v2 2418==================================== 2419 2420Multiple Hierarchies 2421-------------------- 2422 2423cgroup v1 allowed an arbitrary number of hierarchies and each 2424hierarchy could host any number of controllers. While this seemed to 2425provide a high level of flexibility, it wasn't useful in practice. 2426 2427For example, as there is only one instance of each controller, utility 2428type controllers such as freezer which can be useful in all 2429hierarchies could only be used in one. The issue is exacerbated by 2430the fact that controllers couldn't be moved to another hierarchy once 2431hierarchies were populated. Another issue was that all controllers 2432bound to a hierarchy were forced to have exactly the same view of the 2433hierarchy. It wasn't possible to vary the granularity depending on 2434the specific controller. 2435 2436In practice, these issues heavily limited which controllers could be 2437put on the same hierarchy and most configurations resorted to putting 2438each controller on its own hierarchy. Only closely related ones, such 2439as the cpu and cpuacct controllers, made sense to be put on the same 2440hierarchy. This often meant that userland ended up managing multiple 2441similar hierarchies repeating the same steps on each hierarchy 2442whenever a hierarchy management operation was necessary. 2443 2444Furthermore, support for multiple hierarchies came at a steep cost. 2445It greatly complicated cgroup core implementation but more importantly 2446the support for multiple hierarchies restricted how cgroup could be 2447used in general and what controllers was able to do. 2448 2449There was no limit on how many hierarchies there might be, which meant 2450that a thread's cgroup membership couldn't be described in finite 2451length. The key might contain any number of entries and was unlimited 2452in length, which made it highly awkward to manipulate and led to 2453addition of controllers which existed only to identify membership, 2454which in turn exacerbated the original problem of proliferating number 2455of hierarchies. 
2456 2457Also, as a controller couldn't have any expectation regarding the 2458topologies of hierarchies other controllers might be on, each 2459controller had to assume that all other controllers were attached to 2460completely orthogonal hierarchies. This made it impossible, or at 2461least very cumbersome, for controllers to cooperate with each other. 2462 2463In most use cases, putting controllers on hierarchies which are 2464completely orthogonal to each other isn't necessary. What usually is 2465called for is the ability to have differing levels of granularity 2466depending on the specific controller. In other words, hierarchy may 2467be collapsed from leaf towards root when viewed from specific 2468controllers. For example, a given configuration might not care about 2469how memory is distributed beyond a certain level while still wanting 2470to control how CPU cycles are distributed. 2471 2472 2473Thread Granularity 2474------------------ 2475 2476cgroup v1 allowed threads of a process to belong to different cgroups. 2477This didn't make sense for some controllers and those controllers 2478ended up implementing different ways to ignore such situations but 2479much more importantly it blurred the line between API exposed to 2480individual applications and system management interface. 2481 2482Generally, in-process knowledge is available only to the process 2483itself; thus, unlike service-level organization of processes, 2484categorizing threads of a process requires active participation from 2485the application which owns the target process. 2486 2487cgroup v1 had an ambiguously defined delegation model which got abused 2488in combination with thread granularity. cgroups were delegated to 2489individual applications so that they can create and manage their own 2490sub-hierarchies and control resource distributions along them. This 2491effectively raised cgroup to the status of a syscall-like API exposed 2492to lay programs. 2493 2494First of all, cgroup has a fundamentally inadequate interface to be 2495exposed this way. For a process to access its own knobs, it has to 2496extract the path on the target hierarchy from /proc/self/cgroup, 2497construct the path by appending the name of the knob to the path, open 2498and then read and/or write to it. This is not only extremely clunky 2499and unusual but also inherently racy. There is no conventional way to 2500define transaction across the required steps and nothing can guarantee 2501that the process would actually be operating on its own sub-hierarchy. 2502 2503cgroup controllers implemented a number of knobs which would never be 2504accepted as public APIs because they were just adding control knobs to 2505system-management pseudo filesystem. cgroup ended up with interface 2506knobs which were not properly abstracted or refined and directly 2507revealed kernel internal details. These knobs got exposed to 2508individual applications through the ill-defined delegation mechanism 2509effectively abusing cgroup as a shortcut to implementing public APIs 2510without going through the required scrutiny. 2511 2512This was painful for both userland and kernel. Userland ended up with 2513misbehaving and poorly abstracted interfaces and kernel exposing and 2514locked into constructs inadvertently. 
2515 2516 2517Competition Between Inner Nodes and Threads 2518------------------------------------------- 2519 2520cgroup v1 allowed threads to be in any cgroups which created an 2521interesting problem where threads belonging to a parent cgroup and its 2522children cgroups competed for resources. This was nasty as two 2523different types of entities competed and there was no obvious way to 2524settle it. Different controllers did different things. 2525 2526The cpu controller considered threads and cgroups as equivalents and 2527mapped nice levels to cgroup weights. This worked for some cases but 2528fell flat when children wanted to be allocated specific ratios of CPU 2529cycles and the number of internal threads fluctuated - the ratios 2530constantly changed as the number of competing entities fluctuated. 2531There also were other issues. The mapping from nice level to weight 2532wasn't obvious or universal, and there were various other knobs which 2533simply weren't available for threads. 2534 2535The io controller implicitly created a hidden leaf node for each 2536cgroup to host the threads. The hidden leaf had its own copies of all 2537the knobs with ``leaf_`` prefixed. While this allowed equivalent 2538control over internal threads, it was with serious drawbacks. It 2539always added an extra layer of nesting which wouldn't be necessary 2540otherwise, made the interface messy and significantly complicated the 2541implementation. 2542 2543The memory controller didn't have a way to control what happened 2544between internal tasks and child cgroups and the behavior was not 2545clearly defined. There were attempts to add ad-hoc behaviors and 2546knobs to tailor the behavior to specific workloads which would have 2547led to problems extremely difficult to resolve in the long term. 2548 2549Multiple controllers struggled with internal tasks and came up with 2550different ways to deal with it; unfortunately, all the approaches were 2551severely flawed and, furthermore, the widely different behaviors 2552made cgroup as a whole highly inconsistent. 2553 2554This clearly is a problem which needs to be addressed from cgroup core 2555in a uniform way. 2556 2557 2558Other Interface Issues 2559---------------------- 2560 2561cgroup v1 grew without oversight and developed a large number of 2562idiosyncrasies and inconsistencies. One issue on the cgroup core side 2563was how an empty cgroup was notified - a userland helper binary was 2564forked and executed for each event. The event delivery wasn't 2565recursive or delegatable. The limitations of the mechanism also led 2566to in-kernel event delivery filtering mechanism further complicating 2567the interface. 2568 2569Controller interfaces were problematic too. An extreme example is 2570controllers completely ignoring hierarchical organization and treating 2571all cgroups as if they were all located directly under the root 2572cgroup. Some controllers exposed a large amount of inconsistent 2573implementation details to userland. 2574 2575There also was no consistency across controllers. When a new cgroup 2576was created, some controllers defaulted to not imposing extra 2577restrictions while others disallowed any resource usage until 2578explicitly configured. Configuration knobs for the same type of 2579control used widely differing naming schemes and formats. Statistics 2580and information knobs were named arbitrarily and used different 2581formats and units even in the same controller. 
2582 2583cgroup v2 establishes common conventions where appropriate and updates 2584controllers so that they expose minimal and consistent interfaces. 2585 2586 2587Controller Issues and Remedies 2588------------------------------ 2589 2590Memory 2591~~~~~~ 2592 2593The original lower boundary, the soft limit, is defined as a limit 2594that is per default unset. As a result, the set of cgroups that 2595global reclaim prefers is opt-in, rather than opt-out. The costs for 2596optimizing these mostly negative lookups are so high that the 2597implementation, despite its enormous size, does not even provide the 2598basic desirable behavior. First off, the soft limit has no 2599hierarchical meaning. All configured groups are organized in a global 2600rbtree and treated like equal peers, regardless where they are located 2601in the hierarchy. This makes subtree delegation impossible. Second, 2602the soft limit reclaim pass is so aggressive that it not just 2603introduces high allocation latencies into the system, but also impacts 2604system performance due to overreclaim, to the point where the feature 2605becomes self-defeating. 2606 2607The memory.low boundary on the other hand is a top-down allocated 2608reserve. A cgroup enjoys reclaim protection when it's within its 2609effective low, which makes delegation of subtrees possible. It also 2610enjoys having reclaim pressure proportional to its overage when 2611above its effective low. 2612 2613The original high boundary, the hard limit, is defined as a strict 2614limit that can not budge, even if the OOM killer has to be called. 2615But this generally goes against the goal of making the most out of the 2616available memory. The memory consumption of workloads varies during 2617runtime, and that requires users to overcommit. But doing that with a 2618strict upper limit requires either a fairly accurate prediction of the 2619working set size or adding slack to the limit. Since working set size 2620estimation is hard and error prone, and getting it wrong results in 2621OOM kills, most users tend to err on the side of a looser limit and 2622end up wasting precious resources. 2623 2624The memory.high boundary on the other hand can be set much more 2625conservatively. When hit, it throttles allocations by forcing them 2626into direct reclaim to work off the excess, but it never invokes the 2627OOM killer. As a result, a high boundary that is chosen too 2628aggressively will not terminate the processes, but instead it will 2629lead to gradual performance degradation. The user can monitor this 2630and make corrections until the minimal memory footprint that still 2631gives acceptable performance is found. 2632 2633In extreme cases, with many concurrent allocations and a complete 2634breakdown of reclaim progress within the group, the high boundary can 2635be exceeded. But even then it's mostly better to satisfy the 2636allocation from the slack available in other groups or the rest of the 2637system than killing the group. Otherwise, memory.max is there to 2638limit this type of spillover and ultimately contain buggy or even 2639malicious applications. 2640 2641Setting the original memory.limit_in_bytes below the current usage was 2642subject to a race condition, where concurrent charges could cause the 2643limit setting to fail. memory.max on the other hand will first set the 2644limit to prevent new charges, and then reclaim and OOM kill until the 2645new limit is met - or the task writing to memory.max is killed. 
2646 2647The combined memory+swap accounting and limiting is replaced by real 2648control over swap space. 2649 2650The main argument for a combined memory+swap facility in the original 2651cgroup design was that global or parental pressure would always be 2652able to swap all anonymous memory of a child group, regardless of the 2653child's own (possibly untrusted) configuration. However, untrusted 2654groups can sabotage swapping by other means - such as referencing its 2655anonymous memory in a tight loop - and an admin can not assume full 2656swappability when overcommitting untrusted jobs. 2657 2658For trusted jobs, on the other hand, a combined counter is not an 2659intuitive userspace interface, and it flies in the face of the idea 2660that cgroup controllers should account and limit specific physical 2661resources. Swap space is a resource like all others in the system, 2662and that's why unified hierarchy allows distributing it separately. 2663