================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2. It describes all userland-visible aspects
of cgroup including core and specific controller behaviors. All
future changes must be reflected in this document. Documentation for
v1 is available under Documentation/cgroup-v1/.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. Misc
       5-8-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized. The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers". When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes. A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy,
although there are utility controllers which serve purposes other
than resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup. All threads of a process belong to the
same cgroup. On creation, all processes are put in the cgroup that
the parent process belongs to at the time.
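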
A process can be migrated
to another cgroup. Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled
or disabled selectively on a cgroup. All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
sub-hierarchy of the cgroup. When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further. The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies. This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy. Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use. It is recommended to decide
the hierarchies and controller associations before starting to use the
controllers after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and makes them always available in v2.

cgroup v2 currently supports the following mount options.

  nsdelegate
        Consider cgroup namespaces as delegation boundaries. This
        option is system wide and can only be set on mount or modified
        through remount from the init namespace. The mount option is
        ignored on non-init namespace mounts. Please refer to the
        Delegation section for details.
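
For example, a minimal sketch of mounting the hierarchy with the
nsdelegate option (the mount point shown is conventional but
illustrative)::

  # mount -t cgroup2 -o nsdelegate none /sys/fs/cgroup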
Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure. Each cgroup has a read-writable interface file
"cgroup.procs". When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line. The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file. Only one process can be migrated
on a single write(2) call. If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.

For example (the PID is illustrative), a process can be moved into a
child cgroup and the move verified as follows::

  # echo 842 > $CGROUP_NAME/cgroup.procs
  # grep 842 $CGROUP_NAME/cgroup.procs
  842

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation. After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory. Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy. The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes. By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread. The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup. The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy. The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded. Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file. The
operation is one way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again. To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children. The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state. Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains. C can't be used until it is turned into a
threaded cgroup. "cgroup.type" file will report "domain (invalid)" in
these cases. Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup. Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs". While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree. When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants. All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups. Each
threaded controller defines how such competitions are handled.


[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it. Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1. poll and [id]notify
events are triggered when the value changes. This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited. The populated state updates and
notifications are recursive. Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of
processes in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's is 0. After the
one process in C exits, B and C's "populated" fields would flip to "0"
and file modified events will be generated on the "cgroup.events"
files of both cgroups.
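
For example, a cleanup agent could block on the file modification
event and then check the flag; a minimal sketch using inotifywait (a
common inotify CLI, shown here as one possibility)::

  # inotifywait -qq -e modify $CGROUP_NAME/cgroup.events
  # grep populated $CGROUP_NAME/cgroup.events
  populated 0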


Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default. Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled. When multiple operations are specified as above, either they
all succeed or all fail. If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy. The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B. As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups. In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D. Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent. This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file. A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own. In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves. This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction. Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers. How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control". This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup. To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
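
A minimal sketch of that sequence, assuming a populated cgroup
"parent" and the memory controller (names illustrative); note that
each PID has to be written with a separate write(2)::

  # mkdir parent/leaf
  # for pid in $(cat parent/cgroup.procs); do
  >     echo $pid > parent/leaf/cgroup.procs
  > done
  # echo "+memory" > parent/cgroup.subtree_control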


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them. For the first method, this is
achieved by not granting access to these files. For the second, the
kernel rejects writes to all files other than "cgroup.procs" and
"cgroup.subtree_control" on a namespace root from inside the
namespace.

The end results are equivalent for both delegation types. Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent. The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.
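
For the first method, a minimal sketch (path and UID illustrative) of
handing a sub-hierarchy to an unprivileged user::

  # mkdir /sys/fs/cgroup/delegated
  # chown 1000 /sys/fs/cgroup/delegated
  # chown 1000 /sys/fs/cgroup/delegated/cgroup.procs
  # chown 1000 /sys/fs/cgroup/delegated/cgroup.threads
  # chown 1000 /sys/fs/cgroup/delegated/cgroup.subtree_control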


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy, it can't pull
in from or push out to outside the sub-hierarchy.

As an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs". U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" files and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration. If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process. This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged. A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its child cgroups occupy the same
directory and it is possible to create child cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lowercase letters and '_'s
but never begins with an '_', so it can be used as the prefix
character for collision avoidance. Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases. This section
describes major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum. As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving. Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100. This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.
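
For example, if three children are actively competing with weights
100, 200 and 700 (values illustrative), they receive 10%, 20% and 70%
of the parent's resource respectively. If the weight-700 child goes
idle, the remaining two split the whole resource in the same 1:2
ratio, i.e. roughly 33% and 67%.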


Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
no-op.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.


Protections
-----------

A cgroup is protected to be allocated up to the configured amount of
the resource if the usages of all its ancestors are under their
protected levels. Protections can be hard guarantees or best effort
soft boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
no-op.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource. Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

        VAL0\n
        VAL1\n
        ...

  Space separated values
  (when read-only or multiple values can be written at once)

        VAL0 VAL1 ...\n

  Flat keyed

        KEY0 VAL0\n
        KEY1 VAL1\n
        ...

  Nested keyed

        KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
        KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
        ...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time. For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.
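
As concrete instances (values illustrative), "cgroup.events" described
later in this document is flat keyed while "io.max" is nested keyed::

  # cat cgroup.events
  populated 1
  frozen 0

  # cat io.max
  8:16 rbps=2097152 wbps=max riops=max wiops=120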


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files. Also,
  informational files on the root cgroup which end up showing global
  information available elsewhere shouldn't exist.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default. The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively. If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override. Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, file modified event should be
  generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
        A read-write single value file which exists on non-root
        cgroups.

        When read, it indicates the current type of the cgroup, which
        can be one of the following values.

        - "domain" : A normal valid domain cgroup.

        - "domain threaded" : A threaded domain cgroup which is
          serving as the root of a threaded subtree.

        - "domain invalid" : A cgroup which is in an invalid state.
          It can't be populated or have controllers enabled. It may
          be allowed to become a threaded cgroup.

        - "threaded" : A threaded cgroup which is a member of a
          threaded subtree.

        A cgroup can be turned into a threaded cgroup by writing
        "threaded" to this file.

  cgroup.procs
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the PIDs of all processes which belong to
        the cgroup one-per-line. The PIDs are not ordered and the
        same PID may show up more than once if the process got moved
        to another cgroup and then back or the PID got recycled while
        reading.

        A PID can be written to migrate the process associated with
        the PID to the cgroup.
        The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.procs" file.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

        In a threaded cgroup, reading this file fails with EOPNOTSUPP
        as all the processes belong to the thread root. Writing is
        supported and moves every thread of the process to the cgroup.

  cgroup.threads
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the TIDs of all threads which belong to
        the cgroup one-per-line. The TIDs are not ordered and the
        same TID may show up more than once if the thread got moved
        to another cgroup and then back or the TID got recycled while
        reading.

        A TID can be written to migrate the thread associated with
        the TID to the cgroup. The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.threads" file.

        - The cgroup that the thread is currently in must be in the
          same resource domain as the destination cgroup.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

  cgroup.controllers
        A read-only space separated values file which exists on all
        cgroups.

        It shows space separated list of all controllers available to
        the cgroup. The controllers are not ordered.

  cgroup.subtree_control
        A read-write space separated values file which exists on all
        cgroups. Starts out empty.

        When read, it shows space separated list of the controllers
        which are enabled to control resource distribution from the
        cgroup to its children.

        Space separated list of controllers prefixed with '+' or '-'
        can be written to enable or disable controllers. A controller
        name prefixed with '+' enables the controller and '-'
        disables. If a controller appears more than once on the
        list, the last one is effective. When multiple enable and
        disable operations are specified, either all succeed or all
        fail.

  cgroup.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined. Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          populated
                1 if the cgroup or its descendants contains any live
                processes; otherwise, 0.
          frozen
                1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
        A read-write single value file. The default is "max".

        Maximum allowed number of descendant cgroups.
        If the actual number of descendants is equal or larger,
        an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
        A read-write single value file. The default is "max".

        Maximum allowed descent depth below the current cgroup.
        If the actual descent depth is equal or larger,
        an attempt to create a new child cgroup will fail.
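
        For example (limits illustrative), a delegated subtree can be
        capped so a misbehaving workload can't create unbounded
        numbers of cgroups::

          # echo 100 > cgroup.max.descendants
          # echo 5 > cgroup.max.depth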

  cgroup.stat
        A read-only flat-keyed file with the following entries:

          nr_descendants
                Total number of visible descendant cgroups.

          nr_dying_descendants
                Total number of dying descendant cgroups. A cgroup
                becomes dying after being deleted by a user. The
                cgroup will remain in the dying state for some
                undefined time (which can depend on system load)
                before being completely destroyed.

                A process can't enter a dying cgroup under any
                circumstances, and a dying cgroup can't revive.

                A dying cgroup can consume system resources not
                exceeding limits, which were active at the moment of
                cgroup deletion.

  cgroup.freeze
        A read-write single value file which exists on non-root
        cgroups. Allowed values are "0" and "1". The default is "0".

        Writing "1" to the file causes freezing of the cgroup and all
        descendant cgroups. This means that all belonging processes
        will be stopped and will not run until the cgroup is
        explicitly unfrozen. Freezing of the cgroup may take some
        time; when this action is completed, the "frozen" value in
        the cgroup.events control file will be updated to "1" and the
        corresponding notification will be issued.

        A cgroup can be frozen either by its own settings, or by
        settings of any ancestor cgroups. If any ancestor cgroup is
        frozen, the cgroup will remain frozen.

        Processes in the frozen cgroup can be killed by a fatal
        signal. They also can enter and leave a frozen cgroup:
        either by an explicit move by a user, or if freezing of the
        cgroup races with fork(). If a process is moved to a frozen
        cgroup, it stops. If a process is moved out of a frozen
        cgroup, it becomes running.

        The frozen status of a cgroup doesn't affect any cgroup tree
        operations: it's possible to delete a frozen (and empty)
        cgroup, as well as create new sub-cgroups.
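
        A minimal sketch (group name illustrative); because the
        "frozen" flag flips asynchronously, a real agent would wait
        for the file modified event before reading it back::

          # echo 1 > $CGROUP_NAME/cgroup.freeze
          # cat $CGROUP_NAME/cgroup.events
          populated 1
          frozen 1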


Controllers
===========

CPU
---

The "cpu" controller regulates distribution of CPU cycles. This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and absolute bandwidth allocation model for
realtime scheduling policy.

WARNING: cgroup2 doesn't yet support control of realtime processes and
the cpu controller can only be enabled when all RT processes are in
the root cgroup. Be aware that system management software may already
have placed RT processes into non-root cgroups during the system boot
process, and these processes may need to be moved to the root cgroup
before the cpu controller can be enabled.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
        A read-only flat-keyed file which exists on non-root cgroups.
        This file exists whether the controller is enabled or not.

        It always reports the following three stats:

        - usage_usec
        - user_usec
        - system_usec

        and the following three when the controller is enabled:

        - nr_periods
        - nr_throttled
        - throttled_usec

  cpu.weight
        A read-write single value file which exists on non-root
        cgroups. The default is "100".

        The weight in the range [1, 10000].

  cpu.weight.nice
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        The nice value is in the range [-20, 19].

        This interface file is an alternative interface for
        "cpu.weight" and allows reading and setting weight using the
        same values used by nice(2). Because the range is smaller and
        granularity is coarser for the nice values, the read value is
        the closest approximation of the current weight.

  cpu.max
        A read-write two value file which exists on non-root cgroups.
        The default is "max 100000".

        The maximum bandwidth limit. It's in the following format::

          $MAX $PERIOD

        which indicates that the group may consume up to $MAX in each
        $PERIOD duration. "max" for $MAX indicates no limit. If only
        one number is written, $MAX is updated.
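
        For example (values illustrative), limiting a group to half a
        CPU with the default 100ms period and then lifting just the
        $MAX part again::

          # echo "50000 100000" > cpu.max
          # echo max > cpu.max
          # cat cpu.max
          max 100000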

  cpu.pressure
        A read-only nested-key file which exists on non-root cgroups.

        Shows pressure stall information for CPU. See
        Documentation/accounting/psi.txt for details.


Memory
------

The "memory" controller regulates distribution of memory. Memory is
stateful and implements both limit and protection models. Due to the
intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent. Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes. If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of memory currently being used by the cgroup
        and its descendants.

  memory.min
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        Hard memory protection. If the memory usage of a cgroup
        is within its effective min boundary, the cgroup's memory
        won't be reclaimed under any conditions. If there is no
        unprotected reclaimable memory available, the OOM killer
        is invoked.

        The effective min boundary is limited by memory.min values of
        all ancestor cgroups. If there is memory.min overcommitment
        (a child cgroup or cgroups are requiring more protected
        memory than the parent will allow), then each child cgroup
        will get the part of the parent's protection proportional to
        its actual memory usage below memory.min.

        Putting more memory than generally available under this
        protection is discouraged and may lead to constant OOMs.

        If a memory cgroup is not populated with processes,
        its memory.min is ignored.

  memory.low
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        Best-effort memory protection. If the memory usage of a
        cgroup is within its effective low boundary, the cgroup's
        memory won't be reclaimed unless memory can be reclaimed
        from unprotected cgroups.

        The effective low boundary is limited by memory.low values of
        all ancestor cgroups. If there is memory.low overcommitment
        (a child cgroup or cgroups are requiring more protected
        memory than the parent will allow), then each child cgroup
        will get the part of the parent's protection proportional to
        its actual memory usage below memory.low.

        Putting more memory than generally available under this
        protection is discouraged.

  memory.high
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Memory usage throttle limit. This is the main mechanism to
        control memory usage of a cgroup. If a cgroup's usage goes
        over the high boundary, the processes of the cgroup are
        throttled and put under heavy reclaim pressure.

        Going over the high limit never invokes the OOM killer and
        under extreme conditions the limit may be breached.

  memory.max
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Memory usage hard limit. This is the final protection
        mechanism. If a cgroup's memory usage reaches this limit and
        can't be reduced, the OOM killer is invoked in the cgroup.
        Under certain circumstances, the usage may go over the limit
        temporarily.

        This is the ultimate protection mechanism. As long as the
        high limit is used and monitored properly, this limit's
        utility is limited to providing the final safety net.
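
        A minimal sketch (sizes illustrative) of the recommended
        configuration, with "memory.high" as the working limit and
        "memory.max" as a strictly higher safety net; amounts are
        plain bytes to match the documented unit::

          # echo 1073741824 > memory.high    # throttle above 1 GiB
          # echo 1342177280 > memory.max     # hard cap at 1.25 GiB
          # cat memory.high memory.max
          1073741824
          1342177280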

  memory.oom.group
        A read-write single value file which exists on non-root
        cgroups. The default value is "0".

        Determines whether the cgroup should be treated as
        an indivisible workload by the OOM killer. If set,
        all tasks belonging to the cgroup or to its descendants
        (if the memory cgroup is not a leaf cgroup) are killed
        together or not at all. This can be used to avoid
        partial kills to guarantee workload integrity.

        Tasks with the OOM protection (oom_score_adj set to -1000)
        are treated as an exception and are never killed.

        If the OOM killer is invoked in a cgroup, it's not going
        to kill any tasks outside of this cgroup, regardless of
        the memory.oom.group values of ancestor cgroups.

  memory.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined. Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          low
                The number of times the cgroup is reclaimed due to
                high memory pressure even though its usage is under
                the low boundary. This usually indicates that the
                low boundary is over-committed.

          high
                The number of times processes of the cgroup are
                throttled and routed to perform direct memory reclaim
                because the high memory boundary was exceeded. For a
                cgroup whose memory usage is capped by the high limit
                rather than global memory pressure, this event's
                occurrences are expected.

          max
                The number of times the cgroup's memory usage was
                about to go over the max boundary. If direct reclaim
                fails to bring it down, the cgroup goes to OOM state.

          oom
                The number of times the cgroup's memory usage reached
                the limit and allocation was about to fail.

                Depending on context, the result could be invocation
                of the OOM killer and retrying allocation, or failing
                the allocation.

                A failed allocation, in its turn, could be returned
                to userspace as -ENOMEM or silently ignored in cases
                like disk readahead. For now, OOM in a memory cgroup
                kills tasks iff the shortage has happened inside a
                page fault.

                This event is not raised if the OOM killer is not
                considered as an option, e.g. for failed high-order
                allocations.

          oom_kill
                The number of processes belonging to this cgroup
                killed by any kind of OOM killer.
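
        For example (counts illustrative), a group that regularly
        hits its high boundary but has never been OOM killed might
        read::

          # cat memory.events
          low 0
          high 1337
          max 12
          oom 0
          oom_kill 0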

  memory.stat
        A read-only flat-keyed file which exists on non-root cgroups.

        This breaks down the cgroup's memory footprint into different
        types of memory, type-specific details, and other information
        on the state and past events of the memory management system.

        All memory amounts are in bytes.

        The entries are ordered to be human readable, and new entries
        can show up in the middle. Don't rely on items remaining in a
        fixed position; use the keys to look up specific values!

          anon
                Amount of memory used in anonymous mappings such as
                brk(), sbrk(), and mmap(MAP_ANONYMOUS)

          file
                Amount of memory used to cache filesystem data,
                including tmpfs and shared memory.

          kernel_stack
                Amount of memory allocated to kernel stacks.

          slab
                Amount of memory used for storing in-kernel data
                structures.

          sock
                Amount of memory used in network transmission buffers

          shmem
                Amount of cached filesystem data that is swap-backed,
                such as tmpfs, shm segments, shared anonymous mmap()s

          file_mapped
                Amount of cached filesystem data mapped with mmap()

          file_dirty
                Amount of cached filesystem data that was modified
                but not yet written back to disk

          file_writeback
                Amount of cached filesystem data that was modified
                and is currently being written back to disk

          anon_thp
                Amount of memory used in anonymous mappings backed by
                transparent hugepages

          inactive_anon, active_anon, inactive_file, active_file, unevictable
                Amount of memory, swap-backed and filesystem-backed,
                on the internal memory management lists used by the
                page reclaim algorithm

          slab_reclaimable
                Part of "slab" that might be reclaimed, such as
                dentries and inodes.

          slab_unreclaimable
                Part of "slab" that cannot be reclaimed on memory
                pressure.

          pgfault
                Total number of page faults incurred

          pgmajfault
                Number of major page faults incurred

          workingset_refault
                Number of refaults of previously evicted pages

          workingset_activate
                Number of refaulted pages that were immediately
                activated

          workingset_nodereclaim
                Number of times a shadow node has been reclaimed

          pgrefill
                Amount of scanned pages (in an active LRU list)

          pgscan
                Amount of scanned pages (in an inactive LRU list)

          pgsteal
                Amount of reclaimed pages

          pgactivate
                Amount of pages moved to the active LRU list

          pgdeactivate
                Amount of pages moved to the inactive LRU list

          pglazyfree
                Amount of pages postponed to be freed under memory
                pressure

          pglazyfreed
                Amount of reclaimed lazyfree pages

          thp_fault_alloc
                Number of transparent hugepages which were allocated
                to satisfy a page fault, including COW faults. This
                counter is not present when
                CONFIG_TRANSPARENT_HUGEPAGE is not set.

          thp_collapse_alloc
                Number of transparent hugepages which were allocated
                to allow collapsing an existing range of pages. This
                counter is not present when
                CONFIG_TRANSPARENT_HUGEPAGE is not set.

  memory.swap.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of swap currently being used by the cgroup
        and its descendants.

  memory.swap.max
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Swap usage hard limit. If a cgroup's swap usage reaches this
        limit, anonymous memory of the cgroup will not be swapped
        out.

  memory.swap.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined. Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          max
                The number of times the cgroup's swap usage was about
                to go over the max boundary and swap allocation
                failed.

          fail
                The number of times swap allocation failed either
                because of running out of swap system-wide or the max
                limit.

        When reduced under the current usage, the existing swap
        entries are reclaimed gradually and the swap usage may stay
        higher than the limit for an extended period of time. This
        reduces the impact on the workload and memory management.

  memory.pressure
        A read-only nested-key file which exists on non-root cgroups.

        Shows pressure stall information for memory. See
        Documentation/accounting/psi.txt for details.
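
        The PSI files share a common format; a read might look like
        the following (numbers illustrative; see the PSI
        documentation for the exact semantics of each field)::

          # cat memory.pressure
          some avg10=0.12 avg60=0.05 avg300=0.01 total=34577
          full avg10=0.00 avg60=0.00 avg300=0.00 total=2113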


Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
usage is a viable strategy.

Because breach of the high limit doesn't trigger the OOM killer but
throttles the offending cgroup, a management agent has ample
opportunities to monitor and take appropriate actions such as granting
more memory or terminating the workload.

Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory. For example, a workload which writes data received from
network to a file can use all available memory but can also operate
just as performantly with a small amount of memory. A measure of
memory pressure - how much the workload is being impacted due to lack
of memory - is necessary to determine whether a workload needs more
memory; the pressure stall information exported through
"memory.pressure" can serve as such a measure.


Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and stays
charged to the cgroup until the area is released. Migrating a process
to a different cgroup doesn't move the memory usages that it
instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different cgroups.
To which cgroup the area will be charged is non-deterministic;
however, over time, the memory area is likely to end up in a cgroup
which has enough memory allowance to avoid high reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is expected
to be accessed repeatedly by other cgroups, it may make sense to use
POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
belonging to the affected files to ensure correct memory ownership.


IO
--

The "io" controller regulates the distribution of IO resources. This
controller implements both weight based and absolute bandwidth or IOPS
limit distribution; however, weight based distribution is available
only if cfq-iosched is in use and neither scheme is available for
blk-mq devices.
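
Whether weight based control is available therefore depends on the
device's IO scheduler; one way to check it (device path
illustrative)::

  # cat /sys/block/sda/queue/scheduler
  noop deadline [cfq]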


IO Interface Files
~~~~~~~~~~~~~~~~~~

  io.stat
        A read-only nested-keyed file which exists on non-root
        cgroups.

        Lines are keyed by $MAJ:$MIN device numbers and not ordered.
        The following nested keys are defined.

          ======        =====================
          rbytes        Bytes read
          wbytes        Bytes written
          rios          Number of read IOs
          wios          Number of write IOs
          dbytes        Bytes discarded
          dios          Number of discard IOs
          ======        =====================

        An example read output follows::

          8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
          8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021

  io.weight
        A read-write flat-keyed file which exists on non-root
        cgroups. The default is "default 100".

        The first line is the default weight applied to devices
        without specific override. The rest are overrides keyed by
        $MAJ:$MIN device numbers and not ordered. The weights are in
        the range [1, 10000] and specify the relative amount of IO
        time the cgroup can use in relation to its siblings.

        The default weight can be updated by writing either "default
        $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing
        "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".

        An example read output follows::

          default 100
          8:16 200
          8:0 50

  io.max
        A read-write nested-keyed file which exists on non-root
        cgroups.

        BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN
        device numbers and not ordered. The following nested keys
        are defined.

          =====         ==================================
          rbps          Max read bytes per second
          wbps          Max write bytes per second
          riops         Max read IO operations per second
          wiops         Max write IO operations per second
          =====         ==================================

        When writing, any number of nested key-value pairs can be
        specified in any order. "max" can be specified as the value
        to remove a specific limit. If the same key is specified
        multiple times, the outcome is undefined.

        BPS and IOPS are measured in each IO direction and IOs are
        delayed if the limit is reached. Temporary bursts are
        allowed.

        Setting read limit at 2M BPS and write at 120 IOPS for 8:16::

          echo "8:16 rbps=2097152 wiops=120" > io.max

        Reading returns the following::

          8:16 rbps=2097152 wbps=max riops=max wiops=120

        Write IOPS limit can be removed by writing the following::

          echo "8:16 wiops=max" > io.max

        Reading now returns the following::

          8:16 rbps=2097152 wbps=max riops=max wiops=max

  io.pressure
        A read-only nested-key file which exists on non-root cgroups.

        Shows pressure stall information for IO. See
        Documentation/accounting/psi.txt for details.


Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism. Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.

The io controller, in conjunction with the memory controller,
implements control of page cache writeback IOs. The memory controller
defines the memory domain that dirty memory ratio is calculated and
maintained for and the io controller defines the io domain which
writes out dirty pages for the memory domain. Both system-wide and
per-cgroup dirty memory states are examined and the more restrictive
of the two is enforced.

cgroup writeback requires explicit support from the underlying
filesystem. Currently, cgroup writeback is implemented on ext2, ext4
and btrfs. On other filesystems, all writeback IOs are attributed to
the root cgroup.

There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked. Memory is tracked per
page while writeback is tracked per inode. For the purpose of
writeback, an inode is assigned to a cgroup and all IO requests to
write dirty pages from the inode are attributed to that cgroup.

As cgroup ownership for memory is tracked per page, there can be pages
which are associated with different cgroups than the one the inode is
associated with. These are called foreign pages. The writeback
constantly keeps track of foreign pages and, if a particular foreign
cgroup becomes the majority over a certain period of time, switches
the ownership of the inode to that cgroup.

While this model is enough for most use cases where a given inode is
mostly dirtied by a single cgroup even when the main writing cgroup
changes over time, use cases where multiple cgroups write to a single
inode simultaneously are not supported well. In such circumstances, a
significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
doesn't update it until the page is released, even if writeback
strictly follows page ownership, multiple cgroups dirtying overlapping
areas wouldn't work as expected. It's recommended to avoid such usage
patterns.

The sysctl knobs which affect writeback behavior are applied to cgroup
writeback as follows.

  vm.dirty_background_ratio, vm.dirty_ratio
        These ratios apply the same to cgroup writeback with the
        amount of available memory capped by limits imposed by the
        memory controller and system-wide clean memory.

  vm.dirty_background_bytes, vm.dirty_bytes
        For cgroup writeback, this is calculated into a ratio against
        total available memory and applied the same way as
        vm.dirty[_background]_ratio.


IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection. You
provide a group with a latency target, and if the average latency
exceeds that target the controller will throttle any peers that have
a higher latency target than the protected workload.

The limits are only applied at the peer level in the hierarchy.


IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection.  You
provide a group with a latency target, and if the average latency
exceeds that target the controller will throttle any peers that have a
higher (less strict) latency target than the protected workload.

The limits are only applied at the peer level in the hierarchy.  This
means that in the diagram below, only groups A, B, and C will
influence each other, and groups D and F will influence each other.
Group G will influence nobody::

              [root]
            /    |    \
           A     B     C
          / \    |
         D   F   G


So the ideal way to configure this is to set io.latency in groups A,
B, and C.  Generally you do not want to set a value lower than the
latency your device supports.  Experiment to find the value that works
best for your workload.  Start at higher than the expected latency for
your device and watch the avg_lat value in io.stat for your workload
group to get an idea of the latency you see during normal operation.
Use the avg_lat value as a basis for your real setting, setting it
10-15% higher than the value in io.stat.

How IO Latency Throttling Works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

io.latency is work conserving: as long as everybody is meeting their
latency target the controller doesn't do anything.  Once a group
starts missing its target it begins throttling any peer group that has
a higher target than itself.  This throttling takes two forms:

- Queue depth throttling.  This is the number of outstanding IOs a
  group is allowed to have.  We will clamp down relatively quickly,
  starting at no limit and going all the way down to 1 IO at a time.

- Artificial delay induction.  There are certain types of IO that
  cannot be throttled without possibly adversely affecting higher
  priority groups.  This includes swapping and metadata IO.  These
  types of IO are allowed to occur normally; however, they are
  "charged" to the originating group.  If the originating group is
  being throttled you will see the use_delay and delay fields in
  io.stat increase.  The delay value is how many microseconds are
  being added to any process that runs in this group.  Because this
  number can grow quite large if there is a lot of swapping or
  metadata IO occurring, individual delay events are limited to 1
  second at a time.

Once the victimized group starts meeting its latency target again it
will start unthrottling any peer groups that were throttled
previously.  If the victimized group simply stops doing IO the global
counter will unthrottle appropriately.

IO Latency Interface Files
~~~~~~~~~~~~~~~~~~~~~~~~~~

  io.latency
        This takes the same format as the other controllers (a write
        example follows at the end of this section)::

          "MAJOR:MINOR target=<target time in microseconds>"

  io.stat
        If the controller is enabled you will see extra stats in
        io.stat in addition to the normal ones.

          depth
                This is the current queue depth for the group.

          avg_lat
                This is an exponential moving average with a decay
                rate of 1/exp bound by the sampling interval.  The
                decay rate interval can be calculated by multiplying
                the win value in io.stat by the corresponding number
                of samples based on the win value.

          win
                The sampling window size in milliseconds.  This is the
                minimum duration of time between evaluation events.
                Windows only elapse with IO activity.  Idle periods
                extend the most recent window.
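
For example, a sketch (the device numbers and target value are
illustrative) setting a 50ms latency target for device 8:16 from
inside the protected cgroup's directory::

  # echo "8:16 target=50000" > io.latency

Per the guideline above, a real target would be derived from the
avg_lat observed in io.stat plus 10-15%.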

PID
---

The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.

The number of tasks in a cgroup can be exhausted in ways which other
controllers cannot prevent, thus warranting its own controller.  For
example, a fork bomb is likely to exhaust the number of tasks before
hitting memory restrictions.

Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.


PID Interface Files
~~~~~~~~~~~~~~~~~~~

  pids.max
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Hard limit of number of processes.

  pids.current
        A read-only single value file which exists on all cgroups.

        The number of processes currently in the cgroup and its
        descendants.

Organisational operations are not blocked by cgroup policies, so it is
possible to have pids.current > pids.max.  This can be done by either
setting the limit to be smaller than pids.current, or attaching enough
processes to the cgroup such that pids.current is larger than
pids.max.  However, it is not possible to violate a cgroup PID policy
through fork() or clone().  These will return -EAGAIN if the creation
of a new process would cause a cgroup policy to be violated.
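
A minimal sketch of the fork() failure mode described above, assuming
a shell and one other process are already attached to the cgroup (the
exact shell error message may vary)::

  # echo 2 > pids.max
  # cat pids.current
  2
  # sleep 100 &
  sh: fork: Resource temporarily unavailable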


Cpuset
------

The "cpuset" controller provides a mechanism for constraining the CPU
and memory node placement of tasks to only the resources specified in
the cpuset interface files in a task's current cgroup.  This is
especially valuable on large NUMA systems where placing jobs on
properly sized subsets of the system, with careful processor and
memory placement to reduce cross-node memory access and contention,
can improve overall system performance.

The "cpuset" controller is hierarchical.  That means a cgroup cannot
use CPUs or memory nodes which are not allowed in its parent.


Cpuset Interface Files
~~~~~~~~~~~~~~~~~~~~~~

  cpuset.cpus
        A read-write multiple values file which exists on non-root
        cpuset-enabled cgroups.

        It lists the requested CPUs to be used by tasks within this
        cgroup.  The actual list of CPUs to be granted, however, is
        subject to constraints imposed by its parent and can differ
        from the requested CPUs.

        The CPU numbers are comma-separated numbers or ranges.
        For example::

          # cat cpuset.cpus
          0-4,6,8-10

        An empty value indicates that the cgroup is using the same
        setting as the nearest cgroup ancestor with a non-empty
        "cpuset.cpus" or all the available CPUs if none is found.

        The value of "cpuset.cpus" stays constant until the next
        update and won't be affected by any CPU hotplug events.

  cpuset.cpus.effective
        A read-only multiple values file which exists on all
        cpuset-enabled cgroups.

        It lists the onlined CPUs that are actually granted to this
        cgroup by its parent.  These CPUs are allowed to be used by
        tasks within the current cgroup.

        If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file
        shows all the CPUs from the parent cgroup that are available
        to be used by this cgroup.  Otherwise, it should be a subset
        of "cpuset.cpus" unless none of the CPUs listed in
        "cpuset.cpus" can be granted.  In this case, it will be
        treated just like an empty "cpuset.cpus".

        Its value will be affected by CPU hotplug events.

  cpuset.mems
        A read-write multiple values file which exists on non-root
        cpuset-enabled cgroups.

        It lists the requested memory nodes to be used by tasks within
        this cgroup.  The actual list of memory nodes granted,
        however, is subject to constraints imposed by its parent and
        can differ from the requested memory nodes.

        The memory node numbers are comma-separated numbers or ranges.
        For example::

          # cat cpuset.mems
          0-1,3

        An empty value indicates that the cgroup is using the same
        setting as the nearest cgroup ancestor with a non-empty
        "cpuset.mems" or all the available memory nodes if none is
        found.

        The value of "cpuset.mems" stays constant until the next
        update and won't be affected by any memory node hotplug
        events.

  cpuset.mems.effective
        A read-only multiple values file which exists on all
        cpuset-enabled cgroups.

        It lists the onlined memory nodes that are actually granted to
        this cgroup by its parent.  These memory nodes are allowed to
        be used by tasks within the current cgroup.

        If "cpuset.mems" is empty, it shows all the memory nodes from
        the parent cgroup that are available to be used by this
        cgroup.  Otherwise, it should be a subset of "cpuset.mems"
        unless none of the memory nodes listed in "cpuset.mems" can be
        granted.  In this case, it will be treated just like an empty
        "cpuset.mems".

        Its value will be affected by memory node hotplug events.

  cpuset.cpus.partition
        A read-write single value file which exists on non-root
        cpuset-enabled cgroups.  This flag is owned by the parent
        cgroup and is not delegatable.

        It accepts only the following input values when written to.

          "root"   - a partition root
          "member" - a non-root member of a partition

        When set to be a partition root, the current cgroup is the
        root of a new partition or scheduling domain that comprises
        itself and all its descendants except those that are separate
        partition roots themselves and their descendants.  The root
        cgroup is always a partition root.

        There are constraints on where a partition root can be set.
        It can only be set in a cgroup if all the following conditions
        are true.

        1) The "cpuset.cpus" is not empty and the list of CPUs is
           exclusive, i.e. they are not shared by any of its siblings.
        2) The parent cgroup is a partition root.
        3) The "cpuset.cpus" is also a proper subset of the parent's
           "cpuset.cpus.effective".
        4) There are no child cgroups with cpuset enabled.  This
           eliminates corner cases that would have to be handled if
           such a condition were allowed.

        Setting it to partition root will take the CPUs away from the
        effective CPUs of the parent cgroup.  Once it is set, this
        file cannot be reverted back to "member" if there are any
        child cgroups with cpuset enabled.

        A parent partition cannot distribute all its CPUs to its child
        partitions.  There must be at least one CPU left in the parent
        partition.

        Once it becomes a partition root, changes to "cpuset.cpus" are
        generally allowed as long as the first condition above is
        true, the change does not take away all the CPUs from the
        parent partition and the new "cpuset.cpus" value is a superset
        of its children's "cpuset.cpus" values.

        Sometimes, external factors like changes to ancestors'
        "cpuset.cpus" or CPU hotplug can cause the state of the
        partition root to change.  On read, the
        "cpuset.cpus.partition" file can show the following values.

          "member"       Non-root member of a partition
          "root"         Partition root
          "root invalid" Invalid partition root

        It is a partition root if the first two partition root
        conditions above are true and at least one CPU from
        "cpuset.cpus" is granted by the parent cgroup.

        A partition root can become invalid if none of the CPUs
        requested in "cpuset.cpus" can be granted by the parent cgroup
        or the parent cgroup is no longer a partition root itself.  In
        this case, it is not a real partition even though the
        restriction of the first partition root condition above will
        still apply.  The CPU affinity of all the tasks in the cgroup
        will then be associated with CPUs in the nearest ancestor
        partition.

        An invalid partition root can be transitioned back to a real
        partition root if at least one of the requested CPUs can now
        be granted by its parent.  In this case, the CPU affinity of
        all the tasks in the formerly invalid partition will be
        associated with the CPUs of the newly formed partition.
        Changing the partition state of an invalid partition root to
        "member" is always allowed even if child cpusets are present.
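
As a short sketch tying the interface files above together (the CPU
numbers are hypothetical and the parent is assumed to satisfy the
partition root conditions)::

  # echo "0-3" > cpuset.cpus
  # cat cpuset.cpus.effective
  0-3
  # echo root > cpuset.cpus.partition
  # cat cpuset.cpus.partition
  root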


Device controller
-----------------

The device controller manages access to device files.  It includes
both creation of new device files (using mknod), and access to the
existing device files.

The cgroup v2 device controller has no interface files and is
implemented on top of cgroup BPF.  To control access to device files,
a user may create bpf programs of the BPF_CGROUP_DEVICE type and
attach them to cgroups.  On an attempt to access a device file, the
corresponding BPF programs will be executed, and depending on the
return value the attempt will succeed or fail with -EPERM.

A BPF_CGROUP_DEVICE program takes a pointer to the bpf_cgroup_dev_ctx
structure, which describes the device access attempt: access type
(mknod/read/write) and device (type, major and minor numbers).
If the program returns 0, the attempt fails with -EPERM, otherwise
it succeeds.

An example of a BPF_CGROUP_DEVICE program may be found in the kernel
source tree in the tools/testing/selftests/bpf/dev_cgroup.c file.


RDMA
----

The "rdma" controller regulates the distribution and accounting of
RDMA resources.

RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~

  rdma.max
        A read-write nested-keyed file which exists on all cgroups
        except the root and describes the current configured resource
        limits for RDMA/IB devices (a write example follows below).

        Lines are keyed by device name and are not ordered.  Each line
        contains space separated resource names and their configured
        limits that can be distributed.

        The following nested keys are defined.

          ==========    =============================
          hca_handle    Maximum number of HCA Handles
          hca_object    Maximum number of HCA Objects
          ==========    =============================

        An example for mlx4 and ocrdma devices follows::

          mlx4_0 hca_handle=2 hca_object=2000
          ocrdma1 hca_handle=3 hca_object=max

  rdma.current
        A read-only file that describes current resource usage.  It
        exists on all cgroups except the root.

        An example for mlx4 and ocrdma devices follows::

          mlx4_0 hca_handle=1 hca_object=20
          ocrdma1 hca_handle=1 hca_object=23
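
The limits shown above can be configured by writing device-keyed
nested key-value pairs to "rdma.max".  A sketch (the device names
follow the hypothetical examples above)::

  # echo "mlx4_0 hca_handle=2 hca_object=2000" > rdma.max
  # echo "ocrdma1 hca_handle=3" > rdma.max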


Misc
----

perf_event
~~~~~~~~~~

The perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path.  The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.


Non-normative information
-------------------------

This section contains information that isn't considered to be a part
of the stable kernel API and so is subject to change.


CPU controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When distributing CPU cycles in the root cgroup each thread in this
cgroup is treated as if it was hosted in a separate child cgroup of
the root cgroup.  This child cgroup's weight is dependent on its
thread's nice level.

For details of this mapping see the sched_prio_to_weight array in
kernel/sched/core.c (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of 1024).
For example, a thread at nice 10 has the weight 110 in that array,
which would scale to roughly 11.


IO controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Root cgroup processes are hosted in an implicit leaf child node.
When distributing IO resources this implicit child node is taken into
account as if it was a normal child cgroup of the root cgroup with a
weight value of 200.


Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP clone
flag can be used with clone(2) and unshare(2) to create a new cgroup
namespace.  The process running inside the cgroup namespace will have
its "/proc/$PID/cgroup" output restricted to the cgroupns root.  The
cgroupns root is the cgroup of the process at the time of creation of
the cgroup namespace.

Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process.  In a container setup where
a set of cgroups and namespaces are intended to isolate processes the
"/proc/$PID/cgroup" file may leak potential system level information
to the isolated processes.  For example::

  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

The path '/batchjobs/container_id1' can be considered as system-data
and undesirable to expose to the isolated processes.  cgroup namespace
can be used to restrict visibility of this path.  For example, before
creating a cgroup namespace, one would see::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

After unsharing a new namespace, the view changes::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
  # cat /proc/self/cgroup
  0::/

When some thread from a multi-threaded process unshares its cgroup
namespace, the new cgroupns gets applied to the entire process (all
the threads).  This is natural for the v2 hierarchy; however, for the
legacy hierarchies, this may be unexpected.

A cgroup namespace is alive as long as there are processes inside or
mounts pinning it.  When the last usage goes away, the cgroup
namespace is destroyed.  The cgroupns root and the actual cgroups
remain.


The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
process calling unshare(2) is running.  For example, if a process in
/batchjobs/container_id1 cgroup calls unshare, cgroup
/batchjobs/container_id1 becomes the cgroupns root.  For the
init_cgroup_ns, this is the real root ('/') cgroup.

The cgroupns root cgroup does not change even if the namespace creator
process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of "/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown.  For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups.  For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged.  A task inside cgroup
namespace should only be exposed to its own cgroupns hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen with attaching to another cgroup
namespace.  It is expected that someone moves the attaching process
under the target cgroup namespace root.
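
For example, using nsenter(1) from util-linux (assuming it is
available and that $TARGET_PID runs in the target cgroup namespace), a
privileged process can attach to that namespace and observe the
virtualized view::

  # nsenter --cgroup --target $TARGET_PID cat /proc/self/cgroup

Since attaching does not move the process, the path shown is relative
to the new namespace's root and, as described above, may start with
"/../" if the attaching process's cgroup lies outside that root.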


Interaction with Other Namespaces
---------------------------------

Namespace specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with the cgroupns root as
the filesystem root.  The process needs CAP_SYS_ADMIN against its user
and mount namespaces.

The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary.  cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bios using the
following two functions.

  wbc_init_bio(@wbc, @bio)
        Should be called for each bio carrying writeback data and
        associates the bio with the inode's owner cgroup and the
        corresponding request queue.  This must be called after
        a queue (device) has been associated with the bio and
        before submission.

  wbc_account_io(@wbc, @page, @bytes)
        Should be called for each data segment being written out.
        While this function doesn't care exactly when it's called
        during the writeback session, it's the easiest and most
        natural to call it as data segments are added to a bio.

With writeback bios annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows for
selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

wbc_init_bio() binds the specified bio to its cgroup.  Depending on
the configuration, the bio may be executed at a lower priority and if
the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion.  There is no one easy solution
for the problem.  Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.


Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options are supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2.  Use the "cgroup.controllers"
  file at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers.  While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller, utility
type controllers such as freezer which can be useful in all
hierarchies could only be used in one.  The issue is exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated.  Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy.  It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy and most configurations resorted to putting
each controller on its own hierarchy.  Only closely related ones, such
as the cpu and cpuacct controllers, made sense to be put on the same
hierarchy.  This often meant that userland ended up managing multiple
similar hierarchies, repeating the same steps on each hierarchy
whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated the cgroup core implementation but, more
importantly, it restricted how cgroup could be used in general and
what controllers were able to do.

There was no limit on how many hierarchies there might be, which meant
that a thread's cgroup membership couldn't be described in finite
length.  The key might contain any number of entries and was unlimited
in length, which made it highly awkward to manipulate and led to the
addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of a proliferating
number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies.  This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary.  What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller.  In other words, the hierarchy
may be collapsed from leaf towards root when viewed from specific
controllers.  For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.


Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers and those controllers
ended up implementing different ways to ignore such situations but
much more importantly it blurred the line between the API exposed to
individual applications and the system management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity.  cgroups were delegated to
individual applications so that they could create and manage their own
sub-hierarchies and control resource distributions along them.  This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way.  For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path, and
then open and read and/or write to it.
This is not only extremely
clunky and unusual but also inherently racy.  There is no conventional
way to define a transaction across the required steps and nothing can
guarantee that the process would actually be operating on its own
sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem.  cgroup ended up with interface
knobs which were not properly abstracted or refined and directly
revealed kernel internal details.  These knobs got exposed to
individual applications through the ill-defined delegation mechanism,
effectively abusing cgroup as a shortcut to implementing public APIs
without going through the required scrutiny.

This was painful for both userland and kernel.  Userland ended up with
misbehaving and poorly abstracted interfaces while the kernel
inadvertently exposed and became locked into constructs.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroups which created an
interesting problem where threads belonging to a parent cgroup and its
children cgroups competed for resources.  This was nasty as two
different types of entities competed and there was no obvious way to
settle it.  Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights.  This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues.  The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads.  The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed.  While this allowed equivalent
control over internal threads, it came with serious drawbacks.  It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined.  There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with it; unfortunately, all the approaches were
severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from cgroup core
in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies.  One issue on the cgroup core side
was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event.
The event delivery wasn't
recursive or delegatable.  The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.

Controller interfaces were problematic too.  An extreme example is
controllers completely ignoring hierarchical organization and treating
all cgroups as if they were all located directly under the root
cgroup.  Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers.  When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured.  Configuration knobs for the same type of
control used widely differing naming schemes and formats.  Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and updates
controllers so that they expose minimal and consistent interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is unset by default.  As a result, the set of cgroups that global
reclaim prefers is opt-in, rather than opt-out.  The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior.  First off, the soft limit has no
hierarchical meaning.  All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located in the hierarchy.  This makes subtree delegation impossible.
Second, the soft limit reclaim pass is so aggressive that it not only
introduces high allocation latencies into the system, but also impacts
system performance due to overreclaim, to the point where the feature
becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve.  A cgroup enjoys reclaim protection when it's within its low,
which makes delegation of subtrees possible.

The original high boundary, the hard limit, is defined as a strict
limit that can not budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of the
available memory.  The memory consumption of workloads varies during
runtime, and that requires users to overcommit.  But doing that with a
strict upper limit requires either a fairly accurate prediction of the
working set size or adding slack to the limit.  Since working set size
estimation is hard and error prone, and getting it wrong results in
OOM kills, most users tend to err on the side of a looser limit and
end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively.  When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer.  As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation.
The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded.  But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than killing the group.  Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail.  memory.max on the other hand will first set
the limit to prevent new charges, and then reclaim and OOM kill until
the new limit is met - or the task writing to memory.max is killed.

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration.  However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin cannot assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources.  Swap space is a resource like all others in the system,
and that's why the unified hierarchy allows distributing it
separately.