.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2. It describes all userland-visible aspects
of cgroup including core and specific controller behaviors. All
future changes must be reflected in this document. Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. HugeTLB
       5-8-1. HugeTLB Interface Files
     5-9. Misc
       5-9-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized. The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers". When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes. A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup. All threads of a process belong to the
same cgroup.
On creation, all processes are put in the cgroup that
the parent process belongs to at the time. A process can be migrated
to another cgroup. Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup. All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
sub-hierarchy of the cgroup. When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further. The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

The cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies. This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy. Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use. It is recommended to decide
the hierarchies and controller associations before starting to use the
controllers after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.
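
For example, booting with the following kernel parameter makes every
controller unavailable to v1 hierarchies and therefore always
available in v2 (a list of individual controller names can be given
instead of "all")::

  cgroup_no_v1=all
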
cgroup v2 currently supports the following mount options.

  nsdelegate
        Consider cgroup namespaces as delegation boundaries. This
        option is system wide and can only be set on mount or modified
        through remount from the init namespace. The mount option is
        ignored on non-init namespace mounts. Please refer to the
        Delegation section for details.

  memory_localevents
        Only populate memory.events with data for the current cgroup,
        and not any subtrees. This is legacy behaviour; the default
        behaviour without this option is to include subtree counts.
        This option is system wide and can only be set on mount or
        modified through remount from the init namespace. The mount
        option is ignored on non-init namespace mounts.

  memory_recursiveprot
        Recursively apply memory.min and memory.low protection to
        entire subtrees, without requiring explicit downward
        propagation into leaf cgroups. This allows protecting entire
        subtrees from one another, while retaining free competition
        within those subtrees. This should have been the default
        behavior but is a mount option to avoid regressing setups
        relying on the original semantics (e.g. specifying bogusly
        high 'bypass' protection values at higher tree levels).


Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure. Each cgroup has a read-writable interface file
"cgroup.procs". When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line. The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file. Only one process can be migrated
on a single write(2) call. If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation. After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory. Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy. The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes. By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread. The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup.
The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy. The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded. Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file. The
operation is one-way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again. To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children. The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state. Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains. C can't be used until it is turned into a
threaded cgroup. The "cgroup.type" file will report "domain (invalid)"
in these cases. Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup. Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs". While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.
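
As an illustrative sketch, the following builds a small threaded
subtree and moves one thread into a leaf. All names are hypothetical
and the current directory is assumed to be a cgroup which can host a
threaded subtree::

  # mkdir workers
  # echo threaded > workers/cgroup.type         # workers joins this cgroup's resource domain
  # mkdir workers/poll workers/crunch           # created as "domain (invalid)"
  # echo threaded > workers/poll/cgroup.type    # turn the leaves threaded
  # echo threaded > workers/crunch/cgroup.type
  # echo $TID > workers/poll/cgroup.threads     # move a single thread into a leaf
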

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree. When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants. All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups. Each
threaded controller defines how such competitions are handled.


[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it. Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1. poll and [id]notify
events are triggered when the value changes. This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited. The populated state updates and
notifications are recursive. Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of
processes in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's would be 0.
After the one process in C exits, B and C's "populated" fields would
flip to "0" and file modified events will be generated on the
"cgroup.events" files of both cgroups.


Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default. Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled. When multiple operations are specified as above, either they
all succeed or all fail. If multiple operations on the same controller
are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy. The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B. As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.
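
As a sketch, the configuration above could be created as follows,
assuming the v2 hierarchy is mounted at /sys/fs/cgroup and "cpu" and
"memory" are already enabled in the root's "cgroup.subtree_control"::

  # cd /sys/fs/cgroup
  # mkdir -p A/B/C A/B/D
  # echo "+cpu +memory" > A/cgroup.subtree_control
  # echo "+memory" > A/B/cgroup.subtree_control
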

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups. In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D. Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent. This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file. A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own. In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves. This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction. Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers. How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control". This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup. To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them. For the first method, this is
achieved by not granting access to these files. For the second, the
kernel rejects writes to all files other than "cgroup.procs" and
"cgroup.subtree_control" on a namespace root from inside the
namespace.

The end results are equivalent for both delegation types. Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent. The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.
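
A minimal sketch of the first delegation method follows; the user name
and path are hypothetical::

  # mkdir /sys/fs/cgroup/delegated
  # chown u0 /sys/fs/cgroup/delegated
  # chown u0 /sys/fs/cgroup/delegated/cgroup.procs
  # chown u0 /sys/fs/cgroup/delegated/cgroup.threads
  # chown u0 /sys/fs/cgroup/delegated/cgroup.subtree_control

The delegatee can now create sub-cgroups and organize its processes
under the delegated directory, while the remaining interface files,
which control the distribution of the parent's resources, stay
writable only by the delegator.
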

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy, it can't pull
in from or push out to outside the sub-hierarchy.

For an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs". U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" files and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration. If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process. This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged. A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its children cgroups occupy the same
directory and it is possible to create children cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lower case alphabets and
'_'s but never begins with an '_' so it can be used as the prefix
character for collision avoidance. Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases. This section
describes major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum. As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving. Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100. This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.
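
To illustrate the arithmetic: if three siblings are active with
weights 100, 100 and 200, they receive 25%, 25% and 50% of the
parent's CPU cycles respectively (100/400, 100/400 and 200/400). A
sketch with hypothetical cgroup names::

  # echo 200 > /sys/fs/cgroup/parent/heavy/cpu.weight
  # cat /sys/fs/cgroup/parent/light1/cpu.weight
  100
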

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
noop.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.


Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels. Protections can be hard guarantees or best effort
soft boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
noop.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource. Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

        VAL0\n
        VAL1\n
        ...

  Space separated values
  (when read-only or multiple values can be written at once)

        VAL0 VAL1 ...\n

  Flat keyed

        KEY0 VAL0\n
        KEY1 VAL1\n
        ...

  Nested keyed

        KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
        KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
        ...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time. For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds. If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  two digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default. The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively. If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override. Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should be
  generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
        A read-write single value file which exists on non-root
        cgroups.

        When read, it indicates the current type of the cgroup, which
        can be one of the following values.

        - "domain" : A normal valid domain cgroup.

        - "domain threaded" : A threaded domain cgroup which is
          serving as the root of a threaded subtree.

        - "domain invalid" : A cgroup which is in an invalid state.
          It can't be populated or have controllers enabled. It may
          be allowed to become a threaded cgroup.

        - "threaded" : A threaded cgroup which is a member of a
          threaded subtree.

        A cgroup can be turned into a threaded cgroup by writing
        "threaded" to this file.

  cgroup.procs
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the PIDs of all processes which belong to
        the cgroup one-per-line. The PIDs are not ordered and the
        same PID may show up more than once if the process got moved
        to another cgroup and then back or the PID got recycled while
        reading.

        A PID can be written to migrate the process associated with
        the PID to the cgroup. The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.procs" file.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

        In a threaded cgroup, reading this file fails with EOPNOTSUPP
        as all the processes belong to the thread root. Writing is
        supported and moves every thread of the process to the cgroup.

  cgroup.threads
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the TIDs of all threads which belong to
        the cgroup one-per-line. The TIDs are not ordered and the
        same TID may show up more than once if the thread got moved to
        another cgroup and then back or the TID got recycled while
        reading.

        A TID can be written to migrate the thread associated with the
        TID to the cgroup. The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.threads" file.

        - The cgroup that the thread is currently in must be in the
          same resource domain as the destination cgroup.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

  cgroup.controllers
        A read-only space separated values file which exists on all
        cgroups.

        It shows a space separated list of all controllers available
        to the cgroup. The controllers are not ordered.

  cgroup.subtree_control
        A read-write space separated values file which exists on all
        cgroups. Starts out empty.

        When read, it shows a space separated list of the controllers
        which are enabled to control resource distribution from the
        cgroup to its children.

        A space separated list of controllers prefixed with '+' or '-'
        can be written to enable or disable controllers. A controller
        name prefixed with '+' enables the controller and '-'
        disables. If a controller appears more than once on the list,
        the last one is effective. When multiple enable and disable
        operations are specified, either all succeed or all fail.

  cgroup.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined. Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          populated
                1 if the cgroup or its descendants contains any live
                processes; otherwise, 0.
          frozen
                1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
        A read-write single value file. The default is "max".

        Maximum allowed number of descendant cgroups.
        If the actual number of descendants is equal or larger,
        an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
        A read-write single value file. The default is "max".

        Maximum allowed descent depth below the current cgroup.
        If the actual descent depth is equal or larger,
        an attempt to create a new child cgroup will fail.

  cgroup.stat
        A read-only flat-keyed file with the following entries:

          nr_descendants
                Total number of visible descendant cgroups.

          nr_dying_descendants
                Total number of dying descendant cgroups. A cgroup
                becomes dying after being deleted by a user. The
                cgroup will remain in the dying state for some
                undefined amount of time (which can depend on system
                load) before being completely destroyed.

                A process can't enter a dying cgroup under any
                circumstances, and a dying cgroup can't revive.

                A dying cgroup can consume system resources not
                exceeding limits, which were active at the moment of
                cgroup deletion.

  cgroup.freeze
        A read-write single value file which exists on non-root
        cgroups. Allowed values are "0" and "1". The default is "0".

        Writing "1" to the file causes freezing of the cgroup and all
        descendant cgroups. This means that all belonging processes
        will be stopped and will not run until the cgroup is
        explicitly unfrozen. Freezing of the cgroup may take some
        time; when this action is completed, the "frozen" value in the
        cgroup.events control file will be updated to "1" and the
        corresponding notification will be issued.

        A cgroup can be frozen either by its own settings, or by
        settings of any ancestor cgroups. If any of the ancestor
        cgroups is frozen, the cgroup will remain frozen.

        Processes in the frozen cgroup can be killed by a fatal
        signal. They also can enter and leave a frozen cgroup: either
        by an explicit move by a user, or if freezing of the cgroup
        races with fork(). If a process is moved to a frozen cgroup,
        it stops. If a process is moved out of a frozen cgroup, it
        becomes running.

        The frozen status of a cgroup doesn't affect any cgroup tree
        operations: it's possible to delete a frozen (and empty)
        cgroup, as well as create new sub-cgroups.
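
        For example, a management agent could freeze a hierarchy and
        wait for the operation to complete with a sketch like the
        following. The path is hypothetical, and a real agent would
        watch "cgroup.events" with inotify rather than poll::

          # echo 1 > /sys/fs/cgroup/workload/cgroup.freeze
          # until grep -q "frozen 1" /sys/fs/cgroup/workload/cgroup.events; do sleep 0.1; done
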

Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates distribution of CPU cycles. This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and absolute bandwidth allocation model for
realtime scheduling policy.

In all the above models, cycles distribution is defined only on a
temporal base and it does not account for the frequency at which tasks
are executed. The (optional) utilization clamping support allows
hinting the schedutil cpufreq governor about the minimum desired
frequency which should always be provided by a CPU, as well as the
maximum desired frequency, which should not be exceeded by a CPU.

WARNING: cgroup2 doesn't yet support control of realtime processes and
the cpu controller can only be enabled when all RT processes are in
the root cgroup. Be aware that system management software may already
have placed RT processes into non-root cgroups during the system boot
process, and these processes may need to be moved to the root cgroup
before the cpu controller can be enabled.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
        A read-only flat-keyed file.
        This file exists whether the controller is enabled or not.

        It always reports the following three stats:

        - usage_usec
        - user_usec
        - system_usec

        and the following three when the controller is enabled:

        - nr_periods
        - nr_throttled
        - throttled_usec

  cpu.weight
        A read-write single value file which exists on non-root
        cgroups. The default is "100".

        The weight in the range [1, 10000].

  cpu.weight.nice
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        The nice value is in the range [-20, 19].

        This interface file is an alternative interface for
        "cpu.weight" and allows reading and setting weight using the
        same values used by nice(2). Because the range is smaller and
        granularity is coarser for the nice values, the read value is
        the closest approximation of the current weight.

  cpu.max
        A read-write two value file which exists on non-root cgroups.
        The default is "max 100000".

        The maximum bandwidth limit. It's in the following format::

          $MAX $PERIOD

        which indicates that the group may consume up to $MAX in each
        $PERIOD duration. "max" for $MAX indicates no limit. If only
        one number is written, $MAX is updated.

  cpu.pressure
        A read-only nested-keyed file which exists on non-root
        cgroups.

        Shows pressure stall information for CPU. See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.

  cpu.uclamp.min
        A read-write single value file which exists on non-root
        cgroups. The default is "0", i.e. no utilization boosting.

        The requested minimum utilization (protection) as a percentage
        rational number, e.g. 12.34 for 12.34%.

        This interface allows reading and setting minimum utilization
        clamp values similar to sched_setattr(2). This minimum
        utilization value is used to clamp the task specific minimum
        utilization clamp.

        The requested minimum utilization (protection) is always
        capped by the current value for the maximum utilization
        (limit), i.e. `cpu.uclamp.max`.

  cpu.uclamp.max
        A read-write single value file which exists on non-root
        cgroups. The default is "max", i.e. no utilization capping.

        The requested maximum utilization (limit) as a percentage
        rational number, e.g. 98.76 for 98.76%.

        This interface allows reading and setting maximum utilization
        clamp values similar to sched_setattr(2). This maximum
        utilization value is used to clamp the task specific maximum
        utilization clamp.
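
As a sketch of "cpu.max" usage: allowing 50ms of runtime in every
100ms period caps a cgroup at half a CPU, and writing "max" as $MAX
removes the limit again::

  # echo "50000 100000" > cpu.max
  # echo "max 100000" > cpu.max
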


Memory
------

The "memory" controller regulates distribution of memory. Memory is
stateful and implements both limit and protection models. Due to the
intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent. Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes. If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of memory currently being used by the cgroup
        and its descendants.

  memory.min
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        Hard memory protection. If the memory usage of a cgroup
        is within its effective min boundary, the cgroup's memory
        won't be reclaimed under any conditions. If there is no
        unprotected reclaimable memory available, the OOM killer
        is invoked. Above the effective min boundary (or
        effective low boundary if it is higher), pages are reclaimed
        proportionally to the overage, reducing reclaim pressure for
        smaller overages.

        The effective min boundary is limited by memory.min values of
        all ancestor cgroups. If there is memory.min overcommitment
        (child cgroup or cgroups are requiring more protected memory
        than the parent will allow), then each child cgroup will get
        the part of the parent's protection proportional to its
        actual memory usage below memory.min.

        Putting more memory than generally available under this
        protection is discouraged and may lead to constant OOMs.

        If a memory cgroup is not populated with processes,
        its memory.min is ignored.

  memory.low
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        Best-effort memory protection. If the memory usage of a
        cgroup is within its effective low boundary, the cgroup's
        memory won't be reclaimed unless there is no reclaimable
        memory available in unprotected cgroups.
        Above the effective low boundary (or
        effective min boundary if it is higher), pages are reclaimed
        proportionally to the overage, reducing reclaim pressure for
        smaller overages.

        The effective low boundary is limited by memory.low values of
        all ancestor cgroups.
        If there is memory.low overcommitment
        (child cgroup or cgroups are requiring more protected memory
        than the parent will allow), then each child cgroup will get
        the part of the parent's protection proportional to its
        actual memory usage below memory.low.

        Putting more memory than generally available under this
        protection is discouraged.

  memory.high
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Memory usage throttle limit. This is the main mechanism to
        control memory usage of a cgroup. If a cgroup's usage goes
        over the high boundary, the processes of the cgroup are
        throttled and put under heavy reclaim pressure.

        Going over the high limit never invokes the OOM killer and
        under extreme conditions the limit may be breached.

  memory.max
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Memory usage hard limit. This is the final protection
        mechanism. If a cgroup's memory usage reaches this limit and
        can't be reduced, the OOM killer is invoked in the cgroup.
        Under certain circumstances, the usage may go over the limit
        temporarily.

        In the default configuration regular 0-order allocations
        always succeed unless the OOM killer chooses the current task
        as a victim.

        Some kinds of allocations don't invoke the OOM killer.
        The caller could retry them differently, return into userspace
        as -ENOMEM or silently ignore in cases like disk readahead.

        This is the ultimate protection mechanism. As long as the
        high limit is used and monitored properly, this limit's
        utility is limited to providing the final safety net.

  memory.oom.group
        A read-write single value file which exists on non-root
        cgroups. The default value is "0".

        Determines whether the cgroup should be treated as
        an indivisible workload by the OOM killer. If set,
        all tasks belonging to the cgroup or to its descendants
        (if the memory cgroup is not a leaf cgroup) are killed
        together or not at all. This can be used to avoid
        partial kills to guarantee workload integrity.

        Tasks with OOM protection (oom_score_adj set to -1000)
        are treated as an exception and are never killed.

        If the OOM killer is invoked in a cgroup, it's not going
        to kill any tasks outside of this cgroup, regardless of the
        memory.oom.group values of ancestor cgroups.

  memory.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined. Unless specified
        otherwise, a value change in this file generates a file
        modified event.

        Note that all fields in this file are hierarchical and the
        file modified event can be generated due to an event down the
        hierarchy. For the local events at the cgroup level see
        memory.events.local.

          low
                The number of times the cgroup is reclaimed due to
                high memory pressure even though its usage is under
                the low boundary. This usually indicates that the low
                boundary is over-committed.

          high
                The number of times processes of the cgroup are
                throttled and routed to perform direct memory reclaim
                because the high memory boundary was exceeded. For a
                cgroup whose memory usage is capped by the high limit
                rather than global memory pressure, this event's
                occurrences are expected.

          max
                The number of times the cgroup's memory usage was
                about to go over the max boundary. If direct reclaim
                fails to bring it down, the cgroup goes to OOM state.

          oom
                The number of times the cgroup's memory usage reached
                the limit and allocation was about to fail.

                This event is not raised if the OOM killer is not
                considered as an option, e.g. for failed high-order
                allocations or if the caller asked to not retry
                attempts.

          oom_kill
                The number of processes belonging to this cgroup
                killed by any kind of OOM killer.

  memory.events.local
        Similar to memory.events but the fields in the file are local
        to the cgroup i.e. not hierarchical. The file modified event
        generated on this file reflects only the local events.

  memory.stat
        A read-only flat-keyed file which exists on non-root cgroups.

        This breaks down the cgroup's memory footprint into different
        types of memory, type-specific details, and other information
        on the state and past events of the memory management system.

        All memory amounts are in bytes.

        The entries are ordered to be human readable, and new entries
        can show up in the middle. Don't rely on items remaining in a
        fixed position; use the keys to look up specific values!

        If an entry has no per-node counter (or is not shown in
        memory.numa_stat), we use 'npn' (non-per-node) as the tag
        to indicate that it will not show up in memory.numa_stat.

          anon
                Amount of memory used in anonymous mappings such as
                brk(), sbrk(), and mmap(MAP_ANONYMOUS)

          file
                Amount of memory used to cache filesystem data,
                including tmpfs and shared memory.

          kernel_stack
                Amount of memory allocated to kernel stacks.

          pagetables
                Amount of memory allocated for page tables.

          percpu (npn)
                Amount of memory used for storing per-cpu kernel
                data structures.

          sock (npn)
                Amount of memory used in network transmission buffers

          shmem
                Amount of cached filesystem data that is swap-backed,
                such as tmpfs, shm segments, shared anonymous mmap()s

          file_mapped
                Amount of cached filesystem data mapped with mmap()

          file_dirty
                Amount of cached filesystem data that was modified but
                not yet written back to disk

          file_writeback
                Amount of cached filesystem data that was modified and
                is currently being written back to disk

          anon_thp
                Amount of memory used in anonymous mappings backed by
                transparent hugepages

          file_thp
                Amount of cached filesystem data backed by transparent
                hugepages

          shmem_thp
                Amount of shm, tmpfs, shared anonymous mmap()s backed
                by transparent hugepages

          inactive_anon, active_anon, inactive_file, active_file, unevictable
                Amount of memory, swap-backed and filesystem-backed,
                on the internal memory management lists used by the
                page reclaim algorithm.

                As these represent internal list state (eg. shmem
                pages are on anon memory management lists),
                inactive_foo + active_foo may not be equal to the
                value for the foo counter, since the foo counter is
                type-based, not list-based.

          slab_reclaimable
                Part of "slab" that might be reclaimed, such as
                dentries and inodes.

          slab_unreclaimable
                Part of "slab" that cannot be reclaimed on memory
                pressure.

          slab (npn)
                Amount of memory used for storing in-kernel data
                structures.

          workingset_refault_anon
                Number of refaults of previously evicted anonymous
                pages.

          workingset_refault_file
                Number of refaults of previously evicted file pages.

          workingset_activate_anon
                Number of refaulted anonymous pages that were
                immediately activated.

          workingset_activate_file
                Number of refaulted file pages that were immediately
                activated.

          workingset_restore_anon
                Number of restored anonymous pages which have been
                detected as an active workingset before they got
                reclaimed.

          workingset_restore_file
                Number of restored file pages which have been detected
                as an active workingset before they got reclaimed.

          workingset_nodereclaim
                Number of times a shadow node has been reclaimed

          pgfault (npn)
                Total number of page faults incurred

          pgmajfault (npn)
                Number of major page faults incurred

          pgrefill (npn)
                Amount of scanned pages (in an active LRU list)

          pgscan (npn)
                Amount of scanned pages (in an inactive LRU list)

          pgsteal (npn)
                Amount of reclaimed pages

          pgactivate (npn)
                Amount of pages moved to the active LRU list

          pgdeactivate (npn)
                Amount of pages moved to the inactive LRU list

          pglazyfree (npn)
                Amount of pages postponed to be freed under memory
                pressure

          pglazyfreed (npn)
                Amount of reclaimed lazyfree pages

          thp_fault_alloc (npn)
                Number of transparent hugepages which were allocated
                to satisfy a page fault. This counter is not present
                when CONFIG_TRANSPARENT_HUGEPAGE is not set.

          thp_collapse_alloc (npn)
                Number of transparent hugepages which were allocated
                to allow collapsing an existing range of pages. This
                counter is not present when
                CONFIG_TRANSPARENT_HUGEPAGE is not set.

  memory.numa_stat
        A read-only nested-keyed file which exists on non-root
        cgroups.

        This breaks down the cgroup's memory footprint into different
        types of memory, type-specific details, and other information
        per node on the state of the memory management system.

        This is useful for providing visibility into the NUMA locality
        information within a memcg since the pages are allowed to be
        allocated from any physical node. One of the use cases is
        evaluating application performance by combining this
        information with the application's CPU allocation.

        All memory amounts are in bytes.

        The output format of memory.numa_stat is::

          type N0=<bytes in node 0> N1=<bytes in node 1> ...

        The entries are ordered to be human readable, and new entries
        can show up in the middle. Don't rely on items remaining in a
        fixed position; use the keys to look up specific values!

        For the meaning of each entry, refer to memory.stat.

  memory.swap.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of swap currently being used by the cgroup
        and its descendants.

  memory.swap.high
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Swap usage throttle limit. If a cgroup's swap usage exceeds
        this limit, all its further allocations will be throttled to
        allow userspace to implement custom out-of-memory procedures.

        This limit marks a point of no return for the cgroup. It is
        NOT designed to manage the amount of swapping a workload does
        during regular operation. Compare to memory.swap.max, which
        prohibits swapping past a set amount, but lets the cgroup
        continue unimpeded as long as other memory can be reclaimed.

        Healthy workloads are not expected to reach this limit.

  memory.swap.max
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Swap usage hard limit. If a cgroup's swap usage reaches this
        limit, anonymous memory of the cgroup will not be swapped out.

  memory.swap.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined. Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          high
                The number of times the cgroup's swap usage was over
                the high threshold.

          max
                The number of times the cgroup's swap usage was about
                to go over the max boundary and swap allocation
                failed.

          fail
                The number of times swap allocation failed either
                because of running out of swap system-wide or the max
                limit.

        When reduced under the current usage, the existing swap
        entries are reclaimed gradually and the swap usage may stay
        higher than the limit for an extended period of time. This
        reduces the impact on the workload and memory management.

  memory.pressure
        A read-only nested-keyed file which exists on non-root
        cgroups.

        Shows pressure stall information for memory. See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.


Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage.
Over-committing on the high limit (sum of high limits > available
memory) and letting global memory pressure distribute memory according
to usage is a viable strategy.
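
For instance, a sketch of such a configuration with hypothetical
values: a throttle point below a hard cap, leaving the management
agent headroom to react before the OOM killer does::

  # echo 1073741824 > memory.high    # throttle above 1 GiB
  # echo 1342177280 > memory.max     # hard cap at 1.25 GiB
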


IO
--

The "io" controller regulates the distribution of IO resources. This
controller implements both weight based and absolute bandwidth or IOPS
limit distribution; however, weight based distribution is available
only if cfq-iosched is in use and neither scheme is available for
blk-mq devices.


IO Interface Files
~~~~~~~~~~~~~~~~~~

  io.stat
        A read-only nested-keyed file.

        Lines are keyed by $MAJ:$MIN device numbers and not ordered.
        The following nested keys are defined.

          ====== =====================
          rbytes Bytes read
          wbytes Bytes written
          rios   Number of read IOs
          wios   Number of write IOs
          dbytes Bytes discarded
          dios   Number of discard IOs
          ====== =====================

        An example read output follows::

          8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
          8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021

  io.cost.qos
        A read-write nested-keyed file which exists only on the root
        cgroup.

        This file configures the Quality of Service of the IO cost
        model based controller (CONFIG_BLK_CGROUP_IOCOST) which
        currently implements "io.weight" proportional control. Lines
        are keyed by $MAJ:$MIN device numbers and not ordered. The
        line for a given device is populated on the first write for
        the device on "io.cost.qos" or "io.cost.model". The following
        nested keys are defined.

          ====== =====================================
          enable Weight-based control enable
          ctrl   "auto" or "user"
          rpct   Read latency percentile [0, 100]
          rlat   Read latency threshold
          wpct   Write latency percentile [0, 100]
          wlat   Write latency threshold
          min    Minimum scaling percentage [1, 10000]
          max    Maximum scaling percentage [1, 10000]
          ====== =====================================

        The controller is disabled by default and can be enabled by
        setting "enable" to 1. "rpct" and "wpct" parameters default
        to zero and the controller uses internal device saturation
        state to adjust the overall IO rate between "min" and "max".

        When better control quality is needed, latency QoS parameters
        can be configured. For example::

          8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00

        shows that on sdb, the controller is enabled, will consider
        the device saturated if the 95th percentile of read completion
        latencies is above 75ms or the 95th percentile of write
        completion latencies is above 150ms, and will adjust the
        overall IO issue rate between 50% and 150% accordingly.

        The lower the saturation point, the better the latency QoS at
        the cost of aggregate bandwidth. The narrower the allowed
        adjustment range between "min" and "max", the more conformant
        to the cost model the IO behavior. Note that the IO issue
        base rate may be far off from 100% and setting "min" and "max"
        blindly can lead to a significant loss of device capacity or
        control quality. "min" and "max" are useful for regulating
        devices which show wide temporary behavior changes - e.g. an
        SSD which accepts writes at the line speed for a while and
        then completely stalls for multiple seconds.

        When "ctrl" is "auto", the parameters are controlled by the
        kernel and may change automatically. Setting "ctrl" to "user"
        or setting any of the percentile and latency parameters puts
        it into "user" mode and disables the automatic changes. The
        automatic mode can be restored by setting "ctrl" to "auto".

  io.cost.model
        A read-write nested-keyed file which exists only on the root
        cgroup.

        This file configures the cost model of the IO cost model based
        controller (CONFIG_BLK_CGROUP_IOCOST) which currently
        implements "io.weight" proportional control. Lines are keyed
        by $MAJ:$MIN device numbers and not ordered. The line for a
        given device is populated on the first write for the device on
        "io.cost.qos" or "io.cost.model". The following nested keys
        are defined.

          ===== ================================
          ctrl  "auto" or "user"
          model The cost model in use - "linear"
          ===== ================================

        When "ctrl" is "auto", the kernel may change all parameters
        dynamically. When "ctrl" is set to "user" or when any other
        parameter is written to, "ctrl" becomes "user" and the
        automatic changes are disabled.

        When "model" is "linear", the following model parameters are
        defined.

          ============= ========================================
          [r|w]bps      The maximum sequential IO throughput
          [r|w]seqiops  The maximum 4k sequential IOs per second
          [r|w]randiops The maximum 4k random IOs per second
          ============= ========================================

        From the above, the builtin linear model determines the base
        costs of a sequential and random IO and the cost coefficient
        for the IO size. While simple, this model can cover most
        common device classes acceptably.

        The IO cost model isn't expected to be accurate in an absolute
        sense and is scaled to the device behavior dynamically.

        If needed, tools/cgroup/iocost_coef_gen.py can be used to
        generate device-specific coefficients.

  io.weight
        A read-write flat-keyed file which exists on non-root cgroups.
        The default is "default 100".

        The first line is the default weight applied to devices
        without specific override. The rest are overrides keyed by
        $MAJ:$MIN device numbers and not ordered. The weights are in
        the range [1, 10000] and specify the relative amount of IO
        time the cgroup can use in relation to its siblings.

        The default weight can be updated by writing either "default
        $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing
        "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".

        An example read output follows::

          default 100
          8:16 200
          8:0 50

  io.max
        A read-write nested-keyed file which exists on non-root
        cgroups.

        BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN
        device numbers and not ordered. The following nested keys are
        defined.

          ===== ==================================
          rbps  Max read bytes per second
          wbps  Max write bytes per second
          riops Max read IO operations per second
          wiops Max write IO operations per second
          ===== ==================================

        When writing, any number of nested key-value pairs can be
        specified in any order.
"max" can be specified as the value 1694 to remove a specific limit. If the same key is specified 1695 multiple times, the outcome is undefined. 1696 1697 BPS and IOPS are measured in each IO direction and IOs are 1698 delayed if limit is reached. Temporary bursts are allowed. 1699 1700 Setting read limit at 2M BPS and write at 120 IOPS for 8:16:: 1701 1702 echo "8:16 rbps=2097152 wiops=120" > io.max 1703 1704 Reading returns the following:: 1705 1706 8:16 rbps=2097152 wbps=max riops=max wiops=120 1707 1708 Write IOPS limit can be removed by writing the following:: 1709 1710 echo "8:16 wiops=max" > io.max 1711 1712 Reading now returns the following:: 1713 1714 8:16 rbps=2097152 wbps=max riops=max wiops=max 1715 1716 io.pressure 1717 A read-only nested-key file which exists on non-root cgroups. 1718 1719 Shows pressure stall information for IO. See 1720 :ref:`Documentation/accounting/psi.rst <psi>` for details. 1721 1722 1723Writeback 1724~~~~~~~~~ 1725 1726Page cache is dirtied through buffered writes and shared mmaps and 1727written asynchronously to the backing filesystem by the writeback 1728mechanism. Writeback sits between the memory and IO domains and 1729regulates the proportion of dirty memory by balancing dirtying and 1730write IOs. 1731 1732The io controller, in conjunction with the memory controller, 1733implements control of page cache writeback IOs. The memory controller 1734defines the memory domain that dirty memory ratio is calculated and 1735maintained for and the io controller defines the io domain which 1736writes out dirty pages for the memory domain. Both system-wide and 1737per-cgroup dirty memory states are examined and the more restrictive 1738of the two is enforced. 1739 1740cgroup writeback requires explicit support from the underlying 1741filesystem. Currently, cgroup writeback is implemented on ext2, ext4, 1742btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are 1743attributed to the root cgroup. 1744 1745There are inherent differences in memory and writeback management 1746which affects how cgroup ownership is tracked. Memory is tracked per 1747page while writeback per inode. For the purpose of writeback, an 1748inode is assigned to a cgroup and all IO requests to write dirty pages 1749from the inode are attributed to that cgroup. 1750 1751As cgroup ownership for memory is tracked per page, there can be pages 1752which are associated with different cgroups than the one the inode is 1753associated with. These are called foreign pages. The writeback 1754constantly keeps track of foreign pages and, if a particular foreign 1755cgroup becomes the majority over a certain period of time, switches 1756the ownership of the inode to that cgroup. 1757 1758While this model is enough for most use cases where a given inode is 1759mostly dirtied by a single cgroup even when the main writing cgroup 1760changes over time, use cases where multiple cgroups write to a single 1761inode simultaneously are not supported well. In such circumstances, a 1762significant portion of IOs are likely to be attributed incorrectly. 1763As memory controller assigns page ownership on the first use and 1764doesn't update it until the page is released, even if writeback 1765strictly follows page ownership, multiple cgroups dirtying overlapping 1766areas wouldn't work as expected. It's recommended to avoid such usage 1767patterns. 1768 1769The sysctl knobs which affect writeback behavior are applied to cgroup 1770writeback as follows. 


IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection. You
provide a group with a latency target, and if the average latency
exceeds that target the controller will throttle any peers that have a
higher latency target than the protected workload.

The limits are only applied at the peer level in the hierarchy. This
means that in the diagram below, only groups A, B, and C will
influence each other, and groups D and F will influence each other.
Group G will influence nobody::

                        [root]
                /          |            \
                A          B            C
               /  \        |
              D    F       G


So the ideal way to configure this is to set io.latency in groups A,
B, and C. Generally you do not want to set a value lower than the
latency your device supports. Experiment to find the value that works
best for your workload. Start at higher than the expected latency for
your device and watch the avg_lat value in io.stat for your workload
group to get an idea of the latency you see during normal operation.
Use the avg_lat value as a basis for your real setting, setting at
10-15% higher than the value in io.stat.

How IO Latency Throttling Works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

io.latency is work conserving; so as long as everybody is meeting
their latency target the controller doesn't do anything. Once a group
starts missing its target it begins throttling any peer group that has
a higher target than itself. This throttling takes two forms:

- Queue depth throttling. This is the number of outstanding IOs a
  group is allowed to have. We will clamp down relatively quickly,
  starting at no limit and going all the way down to 1 IO at a time.

- Artificial delay induction. There are certain types of IO that
  cannot be throttled without possibly adversely affecting higher
  priority groups. This includes swapping and metadata IO. These
  types of IO are allowed to occur normally, however they are
  "charged" to the originating group. If the originating group is
  being throttled you will see the use_delay and delay fields in
  io.stat increase. The delay value is the number of microseconds
  being added to any process that runs in this group. Because this
  number can grow quite large if there is a lot of swapping or
  metadata IO occurring, we limit the individual delay events to 1
  second at a time.

Once the victimized group starts meeting its latency target again it
will start unthrottling any peer groups that were throttled
previously. If the victimized group simply stops doing IO the global
counter will unthrottle appropriately.

IO Latency Interface Files
~~~~~~~~~~~~~~~~~~~~~~~~~~

  io.latency
        This takes a format similar to the other controllers.

          "MAJOR:MINOR target=<target time in microseconds>"

  io.stat
        If the controller is enabled you will see extra stats in
        io.stat in addition to the normal ones.

          depth
                This is the current queue depth for the group.

          avg_lat
                This is an exponential moving average with a decay
                rate of 1/exp bound by the sampling interval. The
                decay rate interval can be calculated by multiplying
                the win value in io.stat by the corresponding number
                of samples based on the win value.

          win
                The sampling window size in milliseconds. This is the
                minimum duration of time between evaluation events.
                Windows only elapse with IO activity. Idle periods
                extend the most recent window.
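
As a minimal sketch (the device numbers and the 75ms target are
hypothetical), protecting a workload group could look like::

  # echo "8:16 target=75000" > io.latency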

PID
---

The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.

The number of tasks in a cgroup can be exhausted in ways which other
controllers cannot prevent, thus warranting its own controller. For
example, a fork bomb is likely to exhaust the number of tasks before
hitting memory restrictions.

Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.


PID Interface Files
~~~~~~~~~~~~~~~~~~~

  pids.max
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Hard limit of number of processes.

  pids.current
        A read-only single value file which exists on all cgroups.

        The number of processes currently in the cgroup and its
        descendants.

Organisational operations are not blocked by cgroup policies, so it is
possible to have pids.current > pids.max. This can be done by either
setting the limit to be smaller than pids.current, or attaching enough
processes to the cgroup such that pids.current is larger than
pids.max. However, it is not possible to violate a cgroup PID policy
through fork() or clone(). These will return -EAGAIN if the creation
of a new process would cause a cgroup policy to be violated.


Cpuset
------

The "cpuset" controller provides a mechanism for constraining the CPU
and memory node placement of tasks to only the resources specified in
the cpuset interface files in a task's current cgroup. This is
especially valuable on large NUMA systems where placing jobs on
properly sized subsets of the system with careful processor and memory
placement to reduce cross-node memory access and contention can
improve overall system performance.

The "cpuset" controller is hierarchical. That means a cgroup cannot
use CPUs or memory nodes which are not allowed in its parent.


Cpuset Interface Files
~~~~~~~~~~~~~~~~~~~~~~

  cpuset.cpus
        A read-write multiple values file which exists on non-root
        cpuset-enabled cgroups.

        It lists the requested CPUs to be used by tasks within this
        cgroup. The actual list of CPUs to be granted, however, is
        subject to constraints imposed by its parent and can differ
        from the requested CPUs.

        The CPU numbers are comma-separated numbers or ranges.
        For example::

          # cat cpuset.cpus
          0-4,6,8-10

        An empty value indicates that the cgroup is using the same
        setting as the nearest cgroup ancestor with a non-empty
        "cpuset.cpus" or all the available CPUs if none is found.

        The value of "cpuset.cpus" stays constant until the next
        update and won't be affected by any CPU hotplug events.
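
        CPUs are requested by writing the same list format (the
        values here are illustrative only)::

          # echo "0-4,6,8-10" > cpuset.cpus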

  cpuset.cpus.effective
        A read-only multiple values file which exists on all
        cpuset-enabled cgroups.

        It lists the onlined CPUs that are actually granted to this
        cgroup by its parent. These CPUs are allowed to be used by
        tasks within the current cgroup.

        If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file
        shows all the CPUs from the parent cgroup that are available
        to be used by this cgroup. Otherwise, it should be a subset
        of "cpuset.cpus" unless none of the CPUs listed in
        "cpuset.cpus" can be granted. In this case, it will be
        treated just like an empty "cpuset.cpus".

        Its value will be affected by CPU hotplug events.

  cpuset.mems
        A read-write multiple values file which exists on non-root
        cpuset-enabled cgroups.

        It lists the requested memory nodes to be used by tasks within
        this cgroup. The actual list of memory nodes granted,
        however, is subject to constraints imposed by its parent and
        can differ from the requested memory nodes.

        The memory node numbers are comma-separated numbers or ranges.
        For example::

          # cat cpuset.mems
          0-1,3

        An empty value indicates that the cgroup is using the same
        setting as the nearest cgroup ancestor with a non-empty
        "cpuset.mems" or all the available memory nodes if none is
        found.

        The value of "cpuset.mems" stays constant until the next
        update and won't be affected by any memory node hotplug
        events.

  cpuset.mems.effective
        A read-only multiple values file which exists on all
        cpuset-enabled cgroups.

        It lists the onlined memory nodes that are actually granted to
        this cgroup by its parent. These memory nodes are allowed to
        be used by tasks within the current cgroup.

        If "cpuset.mems" is empty, it shows all the memory nodes from
        the parent cgroup that will be available to be used by this
        cgroup. Otherwise, it should be a subset of "cpuset.mems"
        unless none of the memory nodes listed in "cpuset.mems" can be
        granted. In this case, it will be treated just like an empty
        "cpuset.mems".

        Its value will be affected by memory node hotplug events.

  cpuset.cpus.partition
        A read-write single value file which exists on non-root
        cpuset-enabled cgroups. This flag is owned by the parent
        cgroup and is not delegatable.

        It accepts only the following input values when written to.

          ======== ================================
          "root"   a partition root
          "member" a non-root member of a partition
          ======== ================================

        When set to be a partition root, the current cgroup is the
        root of a new partition or scheduling domain that comprises
        itself and all its descendants except those that are separate
        partition roots themselves and their descendants. The root
        cgroup is always a partition root.

        There are constraints on where a partition root can be set.
        It can only be set in a cgroup if all the following conditions
        are true.

        1) The "cpuset.cpus" is not empty and the list of CPUs is
           exclusive, i.e. they are not shared by any of its siblings.
        2) The parent cgroup is a partition root.
        3) The "cpuset.cpus" is also a proper subset of the parent's
           "cpuset.cpus.effective".
        4) There are no child cgroups with cpuset enabled. This
           restriction eliminates corner cases that would otherwise
           have to be handled if such a condition were allowed.
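
        As an illustration (the cgroup layout and CPU numbers are
        hypothetical), a child cgroup owning exclusive CPUs can be
        turned into a partition root as follows::

          # echo "2-3" > child/cpuset.cpus
          # echo root > child/cpuset.cpus.partition
          # cat child/cpuset.cpus.partition
          root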

        Setting it to partition root will take the CPUs away from the
        effective CPUs of the parent cgroup. Once it is set, this
        file cannot be reverted back to "member" if there are any
        child cgroups with cpuset enabled.

        A parent partition cannot distribute all its CPUs to its child
        partitions. There must be at least one cpu left in the parent
        partition.

        Once it becomes a partition root, changes to "cpuset.cpus" are
        generally allowed as long as the first condition above holds,
        the change does not take away all the CPUs from the parent
        partition, and the new "cpuset.cpus" value is a superset of
        its children's "cpuset.cpus" values.

        Sometimes, external factors like changes to ancestors'
        "cpuset.cpus" or cpu hotplug can cause the state of the
        partition root to change. On read, the
        "cpuset.cpus.partition" file can show the following values.

          ============== ==============================
          "member"       Non-root member of a partition
          "root"         Partition root
          "root invalid" Invalid partition root
          ============== ==============================

        It is a partition root if the first two partition root
        conditions above are true and at least one CPU from
        "cpuset.cpus" is granted by the parent cgroup.

        A partition root can become invalid if none of the CPUs
        requested in "cpuset.cpus" can be granted by the parent cgroup
        or the parent cgroup is no longer a partition root itself. In
        this case, it is not a real partition even though the
        restriction of the first partition root condition above will
        still apply. The cpu affinity of all the tasks in the cgroup
        will then be associated with CPUs in the nearest ancestor
        partition.

        An invalid partition root can be transitioned back to a real
        partition root if at least one of the requested CPUs can now
        be granted by its parent. In this case, the cpu affinity of
        all the tasks in the formerly invalid partition will be
        associated with the CPUs of the newly formed partition.
        Changing the partition state of an invalid partition root to
        "member" is always allowed even if child cpusets are present.


Device controller
-----------------

The device controller manages access to device files. It includes
both creation of new device files (using mknod) and access to the
existing device files.

The cgroup v2 device controller has no interface files and is
implemented on top of cgroup BPF. To control access to device files,
a user may create BPF programs of the BPF_CGROUP_DEVICE type and
attach them to cgroups. On an attempt to access a device file,
corresponding BPF programs will be executed, and depending on the
return value the attempt will succeed or fail with -EPERM.

A BPF_CGROUP_DEVICE program takes a pointer to the bpf_cgroup_dev_ctx
structure, which describes the device access attempt: access type
(mknod/read/write) and device (type, major and minor numbers). If the
program returns 0, the attempt fails with -EPERM, otherwise it
succeeds.

An example BPF_CGROUP_DEVICE program may be found in the kernel source
tree in the tools/testing/selftests/bpf/dev_cgroup.c file.
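
As a hedged sketch (the object file and pin paths are hypothetical),
such a program can be loaded and attached to a cgroup with bpftool::

  # bpftool prog load dev_cgroup.o /sys/fs/bpf/dev_prog
  # bpftool cgroup attach /sys/fs/cgroup/app device pinned /sys/fs/bpf/dev_prog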


RDMA
----

The "rdma" controller regulates the distribution and accounting of
RDMA resources.

RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~

  rdma.max
        A read-write nested-keyed file which exists for all cgroups
        except the root and describes the currently configured
        resource limits for RDMA/IB devices.

        Lines are keyed by device name and are not ordered. Each line
        contains space separated resource names and their configured
        limits that can be distributed.

        The following nested keys are defined.

          ========== =============================
          hca_handle Maximum number of HCA Handles
          hca_object Maximum number of HCA Objects
          ========== =============================

        An example for mlx4 and ocrdma devices follows::

          mlx4_0 hca_handle=2 hca_object=2000
          ocrdma1 hca_handle=3 hca_object=max

  rdma.current
        A read-only file that describes current resource usage. It
        exists for all cgroups except the root.

        An example for mlx4 and ocrdma devices follows::

          mlx4_0 hca_handle=1 hca_object=20
          ocrdma1 hca_handle=1 hca_object=23

HugeTLB
-------

The HugeTLB controller allows limiting HugeTLB usage per control group
and enforces the limit during page fault.

HugeTLB Interface Files
~~~~~~~~~~~~~~~~~~~~~~~

  hugetlb.<hugepagesize>.current
        Shows current usage for "hugepagesize" hugetlb. It exists for
        all cgroups except the root.

  hugetlb.<hugepagesize>.max
        Sets/shows the hard limit of "hugepagesize" hugetlb usage.
        The default value is "max". It exists for all cgroups except
        the root.

  hugetlb.<hugepagesize>.events
        A read-only flat-keyed file which exists on non-root cgroups.

          max
                The number of allocation failures due to the HugeTLB
                limit.

  hugetlb.<hugepagesize>.events.local
        Similar to hugetlb.<hugepagesize>.events but the fields in the
        file are local to the cgroup, i.e. not hierarchical. The file
        modified event generated on this file reflects only the local
        events.

Misc
----

perf_event
~~~~~~~~~~

The perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path. The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.


Non-normative information
-------------------------

This section contains information that isn't considered to be a part
of the stable kernel API and so is subject to change.


CPU controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When distributing CPU cycles in the root cgroup each thread in this
cgroup is treated as if it were hosted in a separate child cgroup of
the root cgroup. The weight of this child cgroup depends on the
thread's nice level.

For details of this mapping see the sched_prio_to_weight array in the
kernel/sched/core.c file (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of 1024).
For example, since that array maps nice 0 to 1024 and nice -20 to
88761, the corresponding scaled weights are 100 and roughly 8668.


IO controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Root cgroup processes are hosted in an implicit leaf child node. When
distributing IO resources this implicit child node is taken into
account as if it was a normal child cgroup of the root cgroup with a
weight value of 200.


Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
"/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone
flag can be used with clone(2) and unshare(2) to create a new cgroup
namespace. The process running inside the cgroup namespace will have
its "/proc/$PID/cgroup" output restricted to the cgroupns root. The
cgroupns root is the cgroup of the process at the time of creation of
the cgroup namespace.

Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process. In a container setup where
a set of cgroups and namespaces are intended to isolate processes, the
"/proc/$PID/cgroup" file may leak potential system level information
to the isolated processes. For example::

  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

The path '/batchjobs/container_id1' can be considered system data
which is undesirable to expose to the isolated processes. cgroup
namespace can be used to restrict visibility of this path. For
example, before creating a cgroup namespace, one would see::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

After unsharing a new namespace, the view changes::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
  # cat /proc/self/cgroup
  0::/

When some thread from a multi-threaded process unshares its cgroup
namespace, the new cgroupns gets applied to the entire process (all
the threads). This is natural for the v2 hierarchy; however, for the
legacy hierarchies, this may be unexpected.

A cgroup namespace is alive as long as there are processes inside or
mounts pinning it. When the last usage goes away, the cgroup
namespace is destroyed. The cgroupns root and the actual cgroups
remain.


The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
process calling unshare(2) is running. For example, if a process in
/batchjobs/container_id1 cgroup calls unshare, cgroup
/batchjobs/container_id1 becomes the cgroupns root. For the
init_cgroup_ns, this is the real root ('/') cgroup.

The cgroupns root cgroup does not change even if the namespace creator
process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of "/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown. For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with '/' to indicate that
it's relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups. For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside the cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged. A task inside a
cgroup namespace should only be exposed to its own cgroupns hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen with attaching to another cgroup
namespace. It is expected that someone moves the attaching process
under the target cgroup namespace root.


Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with the cgroupns root as
the filesystem root. The process needs CAP_SYS_ADMIN against its user
and mount namespaces.

The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary. cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bios using the
following two functions.

  wbc_init_bio(@wbc, @bio)
        Should be called for each bio carrying writeback data and
        associates the bio with the inode's owner cgroup and the
        corresponding request queue. This must be called after a
        queue (device) has been associated with the bio and before
        submission.

  wbc_account_cgroup_owner(@wbc, @page, @bytes)
        Should be called for each data segment being written out.
        While this function doesn't care exactly when it's called
        during the writeback session, it's the easiest and most
        natural to call it as data segments are added to a bio.

With writeback bios annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for
selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

wbc_init_bio() binds the specified bio to its cgroup. Depending on
the configuration, the bio may be executed at a lower priority and if
the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion. There is no one easy solution
for the problem. Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.


Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options are supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2. Use the "cgroup.controllers"
  file at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers. While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller, utility
type controllers such as freezer which can be useful in all
hierarchies could only be used in one. The issue is exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated. Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy. It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy and most configurations resorted to putting
each controller on its own hierarchy. Only closely related ones, such
as the cpu and cpuacct controllers, made sense to be put on the same
hierarchy. This often meant that userland ended up managing multiple
similar hierarchies repeating the same steps on each hierarchy
whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated cgroup core implementation but more importantly
the support for multiple hierarchies restricted how cgroup could be
used in general and what controllers were able to do.

There was no limit on how many hierarchies there might be, which meant
that a thread's cgroup membership couldn't be described in finite
length. The key might contain any number of entries and was unlimited
in length, which made it highly awkward to manipulate and led to the
addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of a proliferating
number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies. This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary. What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller. In other words, hierarchy may
be collapsed from leaf towards root when viewed from specific
controllers. For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.


Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers and those controllers
ended up implementing different ways to ignore such situations but
much more importantly it blurred the line between the API exposed to
individual applications and the system management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity. cgroups were delegated to
individual applications so that they could create and manage their own
sub-hierarchies and control resource distributions along them. This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way. For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path, open
and then read and/or write to it. This is not only extremely clunky
and unusual but also inherently racy. There is no conventional way to
define a transaction across the required steps and nothing can
guarantee that the process would actually be operating on its own
sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem. cgroup ended up with interface
knobs which were not properly abstracted or refined and directly
revealed kernel internal details. These knobs got exposed to
individual applications through the ill-defined delegation mechanism,
effectively abusing cgroup as a shortcut to implementing public APIs
without going through the required scrutiny.

This was painful for both userland and kernel. Userland ended up with
misbehaving and poorly abstracted interfaces and the kernel
inadvertently exposed and became locked into constructs.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroups which created an
interesting problem where threads belonging to a parent cgroup and its
children cgroups competed for resources. This was nasty as two
different types of entities competed and there was no obvious way to
settle it. Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights. This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues. The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads. The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed. While this allowed equivalent
control over internal threads, it came with serious drawbacks. It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined. There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with it; unfortunately, all the approaches were
severely flawed and, furthermore, the widely different behaviors made
cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from cgroup core
in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies. One issue on the cgroup core side
was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event. The event delivery wasn't
recursive or delegatable. The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism further
complicating the interface.

Controller interfaces were problematic too. An extreme example is
controllers completely ignoring hierarchical organization and treating
all cgroups as if they were all located directly under the root
cgroup. Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers. When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured. Configuration knobs for the same type of
control used widely differing naming schemes and formats. Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and updates
controllers so that they expose minimal and consistent interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is unset by default. As a result, the set of cgroups that global
reclaim prefers is opt-in, rather than opt-out. The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior. First off, the soft limit has no
hierarchical meaning. All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located in the hierarchy. This makes subtree delegation impossible.
Second, the soft limit reclaim pass is so aggressive that it not just
introduces high allocation latencies into the system, but also impacts
system performance due to overreclaim, to the point where the feature
becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve. A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible. It also
enjoys having reclaim pressure proportional to its overage when above
its effective low.

The original high boundary, the hard limit, is defined as a strict
limit that cannot budge, even if the OOM killer has to be called. But
this generally goes against the goal of making the most out of the
available memory. The memory consumption of workloads varies during
runtime, and that requires users to overcommit. But doing that with a
strict upper limit requires either a fairly accurate prediction of the
working set size or adding slack to the limit. Since working set size
estimation is hard and error prone, and getting it wrong results in
OOM kills, most users tend to err on the side of a looser limit and
end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively. When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer. As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation. The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded. But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than killing the group. Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail. memory.max on the other hand will first set
the limit to prevent new charges, and then reclaim and OOM kill until
the new limit is met - or the task writing to memory.max is killed.

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin cannot assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources. Swap space is a resource like all others in the system,
and that's why the unified hierarchy allows distributing it
separately.