.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2. It describes all userland-visible aspects
of cgroup including core and specific controller behaviors. All
future changes must be reflected in this document. Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. HugeTLB
       5-8-1. HugeTLB Interface Files
     5-9. Misc
       5-9-1. Miscellaneous cgroup Interface Files
       5-9-2. Migration and Ownership
     5-10. Others
       5-10-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized. The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers". When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes. A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup. All threads of a process belong to the
same cgroup. On creation, all processes are put in the cgroup that
the parent process belongs to at the time. A process can be migrated
to another cgroup. Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup. All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups constituting the inclusive
sub-hierarchy of the cgroup. When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further. The
restrictions set closer to the root in the hierarchy cannot be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

The cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies. This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy. Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use. It is recommended to decide
the hierarchies and controller associations before starting to use the
controllers after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.

cgroup v2 currently supports the following mount options; a combined
example follows the list.

  nsdelegate
    Consider cgroup namespaces as delegation boundaries. This
    option is system wide and can only be set on mount or modified
    through remount from the init namespace. The mount option is
    ignored on non-init namespace mounts. Please refer to the
    Delegation section for details.

  favordynmods
    Reduce the latencies of dynamic cgroup modifications such as
    task migrations and controller on/offs at the cost of making
    hot path operations such as forks and exits more expensive.
    The static usage pattern of creating a cgroup, enabling
    controllers, and then seeding it with CLONE_INTO_CGROUP is
    not affected by this option.

  memory_localevents
    Only populate memory.events with data for the current cgroup,
    and not any subtrees. This is legacy behaviour; the default
    behaviour without this option is to include subtree counts.
    This option is system wide and can only be set on mount or
    modified through remount from the init namespace. The mount
    option is ignored on non-init namespace mounts.

  memory_recursiveprot
    Recursively apply memory.min and memory.low protection to
    entire subtrees, without requiring explicit downward
    propagation into leaf cgroups. This allows protecting entire
    subtrees from one another, while retaining free competition
    within those subtrees. This should have been the default
    behavior but is a mount-option to avoid regressing setups
    relying on the original semantics (e.g. specifying bogusly
    high 'bypass' protection values at higher tree levels).
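
As a combined sketch, the options above can be applied at mount time
or adjusted later through a remount from the init namespace
(/sys/fs/cgroup is the conventional mount point, not a requirement)::

  # mount -t cgroup2 -o nsdelegate,memory_recursiveprot none /sys/fs/cgroup
  # mount -o remount,nsdelegate,memory_recursiveprot,favordynmods none /sys/fs/cgroup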


Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure. Each cgroup has a read-writable interface file
"cgroup.procs". When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line. The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file. Only one process can be migrated
on a single write(2) call. If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation. After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory. Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy. The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)
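
As an illustrative sketch tying the above together, the following
moves the current shell into a newly created cgroup and verifies the
membership (the "test" name and the mount point are arbitrary
examples)::

  # mkdir /sys/fs/cgroup/test
  # echo $$ > /sys/fs/cgroup/test/cgroup.procs
  # cat /proc/self/cgroup
  0::/test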


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes. By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread. The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup. The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy. The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded. Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file. The
operation is one-way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again. To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children. The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state. Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains. C can't be used until it is turned into a
threaded cgroup. The "cgroup.type" file will report "domain (invalid)"
in these cases. Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup. Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs". While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.
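
Putting the pieces together, the following is a minimal sketch of
building a threaded subtree: making "sub0" threaded turns the current
cgroup into a threaded domain, after which a threaded controller such
as "cpu" can be enabled even though processes are present, and
individual threads can be placed by TID (the names and the TID
variable are examples)::

  # mkdir sub0 sub1
  # echo threaded > sub0/cgroup.type
  # echo threaded > sub1/cgroup.type
  # echo +cpu > cgroup.subtree_control
  # echo $TID > sub0/cgroup.threads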

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree. When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants. All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups. Each
threaded controller defines how such competitions are handled.


[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it. Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1. poll and [id]notify
events are triggered when the value changes. This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited. The populated state updates and
notifications are recursive. Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's would be 0. After
the one process in C exits, B and C's "populated" fields would flip to
"0" and file modified events will be generated on the "cgroup.events"
files of both cgroups.
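
Because the state change is delivered as a file modified event, a
cleanup agent can simply block on the file with any inotify-based
tool. As a sketch, using inotifywait from the inotify-tools package
(an external tool used here purely as an example)::

  # inotifywait -e modify /sys/fs/cgroup/test/cgroup.events
  # grep populated /sys/fs/cgroup/test/cgroup.events
  populated 0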


Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default. Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled. When multiple operations are specified as above, either they
all succeed or all fail. If multiple operations on the same controller
are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy. The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B. As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups. In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D. Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent. This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file. A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own. In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves. This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction. Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers. How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control". This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup. To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
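
As a sketch, a populated cgroup which wants to start controlling
memory distribution can first push its processes into a leaf child
and only then enable the controller ("leaf" is an arbitrary example
name; races with forking are ignored here)::

  # mkdir leaf
  # while read pid; do echo $pid > leaf/cgroup.procs; done < cgroup.procs
  # echo +memory > cgroup.subtree_control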


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them. For the first method, this is
achieved by not granting access to these files. For the second, the
kernel rejects writes to all files other than "cgroup.procs" and
"cgroup.subtree_control" on a namespace root from inside the
namespace.

The end results are equivalent for both delegation types. Once
delegated, the user can build a sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent. The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.
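
For the first method, delegating an existing cgroup to a user boils
down to handing over the directory and the three files listed above.
As a sketch (the user name and path are examples)::

  # chown u0 /sys/fs/cgroup/u0-slice
  # chown u0 /sys/fs/cgroup/u0-slice/cgroup.procs
  # chown u0 /sys/fs/cgroup/u0-slice/cgroup.threads
  # chown u0 /sys/fs/cgroup/u0-slice/cgroup.subtree_control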


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy it can't pull
in from or push out to outside the sub-hierarchy.

As an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs". U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" file and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration. If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process. This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged. A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its child cgroups occupy the same
directory and it is possible to create child cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lowercase letters and
'_'s but never begins with an '_' so it can be used as the prefix
character for collision avoidance. Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases. This section
describes the major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum. As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving. Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100. This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.


.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
no-op.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.

.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels. Protections can be hard guarantees or best effort
soft boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
no-op.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.
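
The three models above map directly onto writes to the corresponding
interface files. As a sketch, with both children active, c1 below
would receive 100 / (100 + 300) = 25% of the CPU cycles and c2 75%,
while c1 additionally enjoys best-effort protection of its first 1G
of memory (the paths and values are illustrative)::

  # echo 100 > c1/cpu.weight
  # echo 300 > c2/cpu.weight
  # echo 1G > c1/memory.low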


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource. Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which means no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

    VAL0\n
    VAL1\n
    ...

  Space separated values
  (when read-only or multiple values can be written at once)

    VAL0 VAL1 ...\n

  Flat keyed

    KEY0 VAL0\n
    KEY1 VAL1\n
    ...

  Nested keyed

    KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
    KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
    ...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time. For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds. If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  a two-digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default. The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively. If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override. Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should be
  generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
    A read-write single value file which exists on non-root
    cgroups.

    When read, it indicates the current type of the cgroup, which
    can be one of the following values.

    - "domain" : A normal valid domain cgroup.

    - "domain threaded" : A threaded domain cgroup which is
      serving as the root of a threaded subtree.

    - "domain invalid" : A cgroup which is in an invalid state.
      It can't be populated or have controllers enabled. It may
      be allowed to become a threaded cgroup.

    - "threaded" : A threaded cgroup which is a member of a
      threaded subtree.

    A cgroup can be turned into a threaded cgroup by writing
    "threaded" to this file.

  cgroup.procs
    A read-write new-line separated values file which exists on
    all cgroups.

    When read, it lists the PIDs of all processes which belong to
    the cgroup one-per-line. The PIDs are not ordered and the
    same PID may show up more than once if the process got moved
    to another cgroup and then back or the PID got recycled while
    reading.

    A PID can be written to migrate the process associated with
    the PID to the cgroup. The writer should match all of the
    following conditions.

    - It must have write access to the "cgroup.procs" file.

    - It must have write access to the "cgroup.procs" file of the
      common ancestor of the source and destination cgroups.

    When delegating a sub-hierarchy, write access to this file
    should be granted along with the containing directory.

    In a threaded cgroup, reading this file fails with EOPNOTSUPP
    as all the processes belong to the thread root. Writing is
    supported and moves every thread of the process to the cgroup.

  cgroup.threads
    A read-write new-line separated values file which exists on
    all cgroups.

    When read, it lists the TIDs of all threads which belong to
    the cgroup one-per-line. The TIDs are not ordered and the
    same TID may show up more than once if the thread got moved to
    another cgroup and then back or the TID got recycled while
    reading.

    A TID can be written to migrate the thread associated with the
    TID to the cgroup. The writer should match all of the
    following conditions.

    - It must have write access to the "cgroup.threads" file.

    - The cgroup that the thread is currently in must be in the
      same resource domain as the destination cgroup.

    - It must have write access to the "cgroup.procs" file of the
      common ancestor of the source and destination cgroups.

    When delegating a sub-hierarchy, write access to this file
    should be granted along with the containing directory.

  cgroup.controllers
    A read-only space separated values file which exists on all
    cgroups.

    It shows a space separated list of all controllers available
    to the cgroup. The controllers are not ordered.

  cgroup.subtree_control
    A read-write space separated values file which exists on all
    cgroups. Starts out empty.

    When read, it shows a space separated list of the controllers
    which are enabled to control resource distribution from the
    cgroup to its children.

    A space separated list of controllers prefixed with '+' or '-'
    can be written to enable or disable controllers. A controller
    name prefixed with '+' enables the controller and '-'
    disables. If a controller appears more than once on the list,
    the last one is effective. When multiple enable and disable
    operations are specified, either all succeed or all fail.

  cgroup.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined. Unless specified
    otherwise, a value change in this file generates a file
    modified event.

      populated
        1 if the cgroup or its descendants contains any live
        processes; otherwise, 0.
      frozen
        1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
    A read-write single value file. The default is "max".

    Maximum allowed number of descendant cgroups. If the actual
    number of descendants is equal or larger, an attempt to create
    a new cgroup in the hierarchy will fail.

  cgroup.max.depth
    A read-write single value file. The default is "max".

    Maximum allowed descent depth below the current cgroup. If
    the actual descent depth is equal or larger, an attempt to
    create a new child cgroup will fail.

  cgroup.stat
    A read-only flat-keyed file with the following entries:

      nr_descendants
        Total number of visible descendant cgroups.

      nr_dying_descendants
        Total number of dying descendant cgroups. A cgroup becomes
        dying after being deleted by a user. The cgroup will remain
        in the dying state for some undefined time (which can depend
        on system load) before being completely destroyed.

        A process can't enter a dying cgroup under any circumstances,
        and a dying cgroup can't be revived.

        A dying cgroup can consume system resources not exceeding
        the limits which were active at the moment of cgroup
        deletion.

  cgroup.freeze
    A read-write single value file which exists on non-root cgroups.
    Allowed values are "0" and "1". The default is "0".

    Writing "1" to the file causes freezing of the cgroup and all
    descendant cgroups. This means that all the belonging processes
    will be stopped and will not run until the cgroup is explicitly
    unfrozen. Freezing of the cgroup may take some time; when this
    action is completed, the "frozen" value in the cgroup.events
    control file will be updated to "1" and the corresponding
    notification will be issued.

    A cgroup can be frozen either by its own settings, or by settings
    of any ancestor cgroups. If any ancestor cgroup is frozen, the
    cgroup will remain frozen.

    Processes in the frozen cgroup can be killed by a fatal signal.
    They can also enter and leave a frozen cgroup: either by an
    explicit move by a user, or if freezing of the cgroup races with
    fork(). If a process is moved to a frozen cgroup, it stops. If
    a process is moved out of a frozen cgroup, it resumes running.

    The frozen status of a cgroup doesn't affect any cgroup tree
    operations: it's possible to delete a frozen (and empty) cgroup,
    as well as create new sub-cgroups.
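
    As a usage sketch, freezing a sub-hierarchy and waiting until the
    state change is reported might look as follows (inotifywait from
    inotify-tools is an external example tool, not part of cgroup)::

      # echo 1 > /sys/fs/cgroup/test/cgroup.freeze
      # inotifywait -e modify /sys/fs/cgroup/test/cgroup.events
      # grep frozen /sys/fs/cgroup/test/cgroup.events
      frozen 1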

  cgroup.kill
    A write-only single value file which exists on non-root cgroups.
    The only allowed value is "1".

    Writing "1" to the file causes the cgroup and all descendant
    cgroups to be killed. This means that all processes located in
    the affected cgroup tree will be killed via SIGKILL.

    Killing a cgroup tree will deal with concurrent forks
    appropriately and is protected against migrations.

    In a threaded cgroup, writing this file fails with EOPNOTSUPP as
    killing cgroups is a process directed operation, i.e. it affects
    the whole thread-group.

  cgroup.pressure
    A read-write single value file. Allowed values are "0" and "1".
    The default is "1".

    Writing "0" to the file will disable the cgroup PSI accounting.
    Writing "1" to the file will re-enable the cgroup PSI accounting.

    This control attribute is not hierarchical, so disabling or
    enabling PSI accounting in a cgroup does not affect PSI
    accounting in descendants and doesn't need to pass enablement
    via ancestors from the root.

    The reason this control attribute exists is that PSI accounts
    stalls for each cgroup separately and aggregates them at each
    level of the hierarchy. This may cause non-negligible overhead
    for some workloads deep in the hierarchy, in which case this
    control attribute can be used to disable PSI accounting in the
    non-leaf cgroups.

  irq.pressure
    A read-write nested-keyed file.

    Shows pressure stall information for IRQ/SOFTIRQ. See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.


Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates distribution of CPU cycles. This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and an absolute bandwidth allocation model
for realtime scheduling policy.

In all the above models, cycles distribution is defined only on a
temporal basis and it does not account for the frequency at which
tasks are executed. The (optional) utilization clamping support
allows hinting the schedutil cpufreq governor about the minimum
desired frequency which should always be provided by a CPU, as well as
the maximum desired frequency, which should not be exceeded by a CPU.

WARNING: cgroup2 doesn't yet support control of realtime processes and
the cpu controller can only be enabled when all RT processes are in
the root cgroup. Be aware that system management software may already
have placed RT processes into nonroot cgroups during the system boot
process, and these processes may need to be moved to the root cgroup
before the cpu controller can be enabled.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
    A read-only flat-keyed file.
    This file exists whether the controller is enabled or not.

    It always reports the following three stats:

    - usage_usec
    - user_usec
    - system_usec

    and the following five when the controller is enabled:

    - nr_periods
    - nr_throttled
    - throttled_usec
    - nr_bursts
    - burst_usec

  cpu.weight
    A read-write single value file which exists on non-root
    cgroups. The default is "100".

    The weight in the range [1, 10000].

  cpu.weight.nice
    A read-write single value file which exists on non-root
    cgroups. The default is "0".

    The nice value is in the range [-20, 19].

    This interface file is an alternative interface for
    "cpu.weight" and allows reading and setting weight using the
    same values used by nice(2). Because the range is smaller and
    granularity is coarser for the nice values, the read value is
    the closest approximation of the current weight.

  cpu.max
    A read-write two value file which exists on non-root cgroups.
    The default is "max 100000".

    The maximum bandwidth limit. It's in the following format::

      $MAX $PERIOD

    which indicates that the group may consume up to $MAX in each
    $PERIOD duration. "max" for $MAX indicates no limit. If only
    one number is written, $MAX is updated.

  cpu.max.burst
    A read-write single value file which exists on non-root
    cgroups. The default is "0".

    The burst in the range [0, $MAX].

  cpu.pressure
    A read-write nested-keyed file.

    Shows pressure stall information for CPU. See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.

  cpu.uclamp.min
    A read-write single value file which exists on non-root cgroups.
    The default is "0", i.e. no utilization boosting.

    The requested minimum utilization (protection) as a percentage
    rational number, e.g. 12.34 for 12.34%.

    This interface allows reading and setting minimum utilization
    clamp values similar to sched_setattr(2). This minimum
    utilization value is used to clamp the task specific minimum
    utilization clamp.

    The requested minimum utilization (protection) is always capped
    by the current value for the maximum utilization (limit), i.e.
    `cpu.uclamp.max`.

  cpu.uclamp.max
    A read-write single value file which exists on non-root cgroups.
    The default is "max", i.e. no utilization capping.

    The requested maximum utilization (limit) as a percentage
    rational number, e.g. 98.76 for 98.76%.

    This interface allows reading and setting maximum utilization
    clamp values similar to sched_setattr(2). This maximum
    utilization value is used to clamp the task specific maximum
    utilization clamp.
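
Tying the bandwidth and weight files together, the following sketch
caps a cgroup at half of one CPU per period, grants a modest burst
allowance, and doubles its relative weight (all values are
illustrative)::

  # echo "50000 100000" > cpu.max
  # echo 25000 > cpu.max.burst
  # echo 200 > cpu.weight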


Memory
------

The "memory" controller regulates distribution of memory. Memory is
stateful and implements both limit and protection models. Due to the
intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent. Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes. If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
    A read-only single value file which exists on non-root
    cgroups.

    The total amount of memory currently being used by the cgroup
    and its descendants.

  memory.min
    A read-write single value file which exists on non-root
    cgroups. The default is "0".

    Hard memory protection. If the memory usage of a cgroup
    is within its effective min boundary, the cgroup's memory
    won't be reclaimed under any conditions. If there is no
    unprotected reclaimable memory available, the OOM killer
    is invoked. Above the effective min boundary (or
    effective low boundary if it is higher), pages are reclaimed
    proportionally to the overage, reducing reclaim pressure for
    smaller overages.

    The effective min boundary is limited by the memory.min values
    of all ancestor cgroups. If there is memory.min overcommitment
    (child cgroup or cgroups are requiring more protected memory
    than the parent will allow), then each child cgroup will get
    the part of the parent's protection proportional to its
    actual memory usage below memory.min.

    Putting more memory than generally available under this
    protection is discouraged and may lead to constant OOMs.

    If a memory cgroup is not populated with processes,
    its memory.min is ignored.

  memory.low
    A read-write single value file which exists on non-root
    cgroups. The default is "0".

    Best-effort memory protection. If the memory usage of a
    cgroup is within its effective low boundary, the cgroup's
    memory won't be reclaimed unless there is no reclaimable
    memory available in unprotected cgroups. Above the effective
    low boundary (or effective min boundary if it is higher),
    pages are reclaimed proportionally to the overage, reducing
    reclaim pressure for smaller overages.

    The effective low boundary is limited by the memory.low values
    of all ancestor cgroups. If there is memory.low overcommitment
    (child cgroup or cgroups are requiring more protected memory
    than the parent will allow), then each child cgroup will get
    the part of the parent's protection proportional to its
    actual memory usage below memory.low.

    Putting more memory than generally available under this
    protection is discouraged.

  memory.high
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Memory usage throttle limit. If a cgroup's usage goes
    over the high boundary, the processes of the cgroup are
    throttled and put under heavy reclaim pressure.

    Going over the high limit never invokes the OOM killer and
    under extreme conditions the limit may be breached. The high
    limit should be used in scenarios where an external process
    monitors the limited cgroup to alleviate heavy reclaim
    pressure.

  memory.max
    A read-write single value file which exists on non-root
    cgroups. The default is "max".

    Memory usage hard limit. This is the main mechanism to limit
    memory usage of a cgroup. If a cgroup's memory usage reaches
    this limit and can't be reduced, the OOM killer is invoked in
    the cgroup. Under certain circumstances, the usage may go
    over the limit temporarily.

    In the default configuration, regular 0-order allocations
    always succeed unless the OOM killer chooses the current task
    as a victim.

    Some kinds of allocations don't invoke the OOM killer. The
    caller could retry them differently, return -ENOMEM to
    userspace, or silently ignore the failure in cases like disk
    readahead.
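
    As a configuration sketch, a workload that should be protected
    up to 2G, throttled beyond 4G and hard-capped at 5G could
    combine this file with the protection and throttle knobs above
    (the values are illustrative; suffixed values are parsed as
    bytes)::

      echo 2G > memory.low
      echo 4G > memory.high
      echo 5G > memory.max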

  memory.reclaim
    A write-only nested-keyed file which exists for all cgroups.

    This is a simple interface to trigger memory reclaim in the
    target cgroup.

    This file accepts a single key, the number of bytes to reclaim.
    No nested keys are currently supported.

    Example::

      echo "1G" > memory.reclaim

    The interface can be later extended with nested keys to
    configure the reclaim behavior. For example, specify the
    type of memory to reclaim from (anon, file, ..).

    Please note that the kernel can over or under reclaim from
    the target cgroup. If fewer bytes are reclaimed than the
    specified amount, -EAGAIN is returned.

    Please note that proactive reclaim (triggered by this
    interface) is not meant to indicate memory pressure on the
    memory cgroup. Therefore socket memory balancing triggered by
    the memory reclaim normally is not exercised in this case.
    This means that the networking layer will not adapt based on
    reclaim induced by memory.reclaim.

  memory.peak
    A read-only single value file which exists on non-root
    cgroups.

    The max memory usage recorded for the cgroup and its
    descendants since the creation of the cgroup.

  memory.oom.group
    A read-write single value file which exists on non-root
    cgroups. The default value is "0".

    Determines whether the cgroup should be treated as
    an indivisible workload by the OOM killer. If set,
    all tasks belonging to the cgroup or to its descendants
    (if the memory cgroup is not a leaf cgroup) are killed
    together or not at all. This can be used to avoid
    partial kills to guarantee workload integrity.

    Tasks with the OOM protection (oom_score_adj set to -1000)
    are treated as an exception and are never killed.

    If the OOM killer is invoked in a cgroup, it's not going
    to kill any tasks outside of this cgroup, regardless of
    the memory.oom.group values of ancestor cgroups.

  memory.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined. Unless specified
    otherwise, a value change in this file generates a file
    modified event.

    Note that all fields in this file are hierarchical and the
    file modified event can be generated due to an event down the
    hierarchy. For the local events at the cgroup level see
    memory.events.local.

      low
        The number of times the cgroup is reclaimed due to
        high memory pressure even though its usage is under
        the low boundary. This usually indicates that the low
        boundary is over-committed.

      high
        The number of times processes of the cgroup are
        throttled and routed to perform direct memory reclaim
        because the high memory boundary was exceeded.
        For a cgroup whose memory usage is capped by the high
        limit rather than global memory pressure, this event's
        occurrences are expected.

      max
        The number of times the cgroup's memory usage was
        about to go over the max boundary. If direct reclaim
        fails to bring it down, the cgroup goes to OOM state.

      oom
        The number of times the cgroup's memory usage reached
        the limit and allocation was about to fail.

        This event is not raised if the OOM killer is not
        considered as an option, e.g. for failed high-order
        allocations or if the caller asked not to retry.

      oom_kill
        The number of processes belonging to this cgroup
        killed by any kind of OOM killer.

      oom_group_kill
        The number of times a group OOM has occurred.

  memory.events.local
    Similar to memory.events but the fields in the file are local
    to the cgroup, i.e. not hierarchical. The file modified event
    generated on this file reflects only the local events.

  memory.stat
    A read-only flat-keyed file which exists on non-root cgroups.

    This breaks down the cgroup's memory footprint into different
    types of memory, type-specific details, and other information
    on the state and past events of the memory management system.

    All memory amounts are in bytes.

    The entries are ordered to be human readable, and new entries
    can show up in the middle. Don't rely on items remaining in a
    fixed position; use the keys to look up specific values!

    Entries which have no per-node counter (and thus do not show
    up in memory.numa_stat) are tagged with 'npn' (non-per-node).

      anon
        Amount of memory used in anonymous mappings such as
        brk(), sbrk(), and mmap(MAP_ANONYMOUS)

      file
        Amount of memory used to cache filesystem data,
        including tmpfs and shared memory.

      kernel (npn)
        Amount of total kernel memory, including
        (kernel_stack, pagetables, percpu, vmalloc, slab) in
        addition to other kernel memory use cases.

      kernel_stack
        Amount of memory allocated to kernel stacks.

      pagetables
        Amount of memory allocated for page tables.

      sec_pagetables
        Amount of memory allocated for secondary page tables;
        this currently includes KVM mmu allocations on x86
        and arm64.

      percpu (npn)
        Amount of memory used for storing per-cpu kernel
        data structures.

      sock (npn)
        Amount of memory used in network transmission buffers.

      vmalloc (npn)
        Amount of memory used for vmap backed memory.

      shmem
        Amount of cached filesystem data that is swap-backed,
        such as tmpfs, shm segments, shared anonymous mmap()s

      zswap
        Amount of memory consumed by the zswap compression
        backend.

      zswapped
        Amount of application memory swapped out to zswap.

      file_mapped
        Amount of cached filesystem data mapped with mmap()

      file_dirty
        Amount of cached filesystem data that was modified but
        not yet written back to disk

      file_writeback
        Amount of cached filesystem data that was modified and
        is currently being written back to disk

      swapcached
        Amount of swap cached in memory. The swapcache is
        accounted against both memory and swap usage.

      anon_thp
        Amount of memory used in anonymous mappings backed by
        transparent hugepages

      file_thp
        Amount of cached filesystem data backed by transparent
        hugepages

      shmem_thp
        Amount of shm, tmpfs, shared anonymous mmap()s backed
        by transparent hugepages

      inactive_anon, active_anon, inactive_file, active_file, unevictable
        Amount of memory, swap-backed and filesystem-backed,
        on the internal memory management lists used by the
        page reclaim algorithm.

        As these represent internal list state (e.g. shmem
        pages are on anon memory management lists),
        inactive_foo + active_foo may not be equal to the
        value for the foo counter, since the foo counter is
        type-based, not list-based.

      slab_reclaimable
        Part of "slab" that might be reclaimed, such as
        dentries and inodes.

      slab_unreclaimable
        Part of "slab" that cannot be reclaimed on memory
        pressure.

      slab (npn)
        Amount of memory used for storing in-kernel data
        structures.

      workingset_refault_anon
        Number of refaults of previously evicted anonymous
        pages.

      workingset_refault_file
        Number of refaults of previously evicted file pages.

      workingset_activate_anon
        Number of refaulted anonymous pages that were
        immediately activated.

      workingset_activate_file
        Number of refaulted file pages that were immediately
        activated.

      workingset_restore_anon
        Number of restored anonymous pages which have been
        detected as an active workingset before they got
        reclaimed.

      workingset_restore_file
        Number of restored file pages which have been detected
        as an active workingset before they got reclaimed.

      workingset_nodereclaim
        Number of times a shadow node has been reclaimed.

      pgscan (npn)
        Amount of scanned pages (in an inactive LRU list)

      pgsteal (npn)
        Amount of reclaimed pages

      pgscan_kswapd (npn)
        Amount of scanned pages by kswapd (in an inactive LRU
        list)

      pgscan_direct (npn)
        Amount of scanned pages directly (in an inactive LRU
        list)

      pgscan_khugepaged (npn)
        Amount of scanned pages by khugepaged (in an inactive
        LRU list)

      pgsteal_kswapd (npn)
        Amount of reclaimed pages by kswapd

      pgsteal_direct (npn)
        Amount of reclaimed pages directly

      pgsteal_khugepaged (npn)
        Amount of reclaimed pages by khugepaged

      pgfault (npn)
        Total number of page faults incurred

      pgmajfault (npn)
        Number of major page faults incurred

      pgrefill (npn)
        Amount of scanned pages (in an active LRU list)

      pgactivate (npn)
        Amount of pages moved to the active LRU list

      pgdeactivate (npn)
        Amount of pages moved to the inactive LRU list

      pglazyfree (npn)
        Amount of pages postponed to be freed under memory
        pressure

      pglazyfreed (npn)
        Amount of reclaimed lazyfree pages

      thp_fault_alloc (npn)
        Number of transparent hugepages which were allocated
        to satisfy a page fault. This counter is not present
        when CONFIG_TRANSPARENT_HUGEPAGE is not set.

      thp_collapse_alloc (npn)
        Number of transparent hugepages which were allocated
        to allow collapsing an existing range of pages. This
        counter is not present when CONFIG_TRANSPARENT_HUGEPAGE
        is not set.
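
    Since the ordering is not stable, consumers should look entries
    up by key rather than by position; as a sketch, extracting a
    single counter::

      grep '^workingset_refault_file ' memory.stat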

  memory.numa_stat
    A read-only nested-keyed file which exists on non-root
    cgroups.

    This breaks down the cgroup's memory footprint into different
    types of memory, type-specific details, and other information
    per node on the state of the memory management system.

    This is useful for providing visibility into the NUMA locality
    information within a memcg since the pages are allowed to be
    allocated from any physical node.  One use case is evaluating
    application performance by combining this information with the
    application's CPU allocation.

    All memory amounts are in bytes.

    The output format of memory.numa_stat is::

      type N0=<bytes in node 0> N1=<bytes in node 1> ...

    The entries are ordered to be human readable, and new entries
    can show up in the middle.  Don't rely on items remaining in a
    fixed position; use the keys to look up specific values!

    The entries are defined the same as in memory.stat.

  memory.swap.current
    A read-only single value file which exists on non-root
    cgroups.

    The total amount of swap currently being used by the cgroup
    and its descendants.

  memory.swap.high
    A read-write single value file which exists on non-root
    cgroups.  The default is "max".

    Swap usage throttle limit.  If a cgroup's swap usage exceeds
    this limit, all its further allocations will be throttled to
    allow userspace to implement custom out-of-memory procedures.

    This limit marks a point of no return for the cgroup.  It is
    NOT designed to manage the amount of swapping a workload does
    during regular operation.  Compare to memory.swap.max, which
    prohibits swapping past a set amount, but lets the cgroup
    continue unimpeded as long as other memory can be reclaimed.

    Healthy workloads are not expected to reach this limit.

  memory.swap.peak
    A read-only single value file which exists on non-root
    cgroups.

    The max swap usage recorded for the cgroup and its
    descendants since the creation of the cgroup.

  memory.swap.max
    A read-write single value file which exists on non-root
    cgroups.  The default is "max".

    Swap usage hard limit.  If a cgroup's swap usage reaches this
    limit, anonymous memory of the cgroup will not be swapped out.

  memory.swap.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined.  Unless specified
    otherwise, a value change in this file generates a file
    modified event.

      high
        The number of times the cgroup's swap usage was over
        the high threshold.

      max
        The number of times the cgroup's swap usage was about
        to go over the max boundary and swap allocation
        failed.

      fail
        The number of times swap allocation failed either
        because of running out of swap system-wide or the max
        limit.

    When reduced under the current usage, the existing swap
    entries are reclaimed gradually and the swap usage may stay
    higher than the limit for an extended period of time.  This
    reduces the impact on the workload and memory management.

  memory.zswap.current
    A read-only single value file which exists on non-root
    cgroups.

    The total amount of memory consumed by the zswap compression
    backend.
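
    Comparing this value with the "zswapped" entry in memory.stat
    gives a rough view of the compression ratio zswap is achieving
    for the cgroup.  A short shell sketch; the output values below
    are hypothetical::

      $ cat memory.zswap.current
      1048576
      $ grep -E '^(zswap|zswapped) ' memory.stat
      zswap 1048576
      zswapped 4194304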

  memory.zswap.max
    A read-write single value file which exists on non-root
    cgroups.  The default is "max".

    Zswap usage hard limit.  If a cgroup's zswap pool reaches this
    limit, it will refuse to take any more stores before existing
    entries fault back in or are written out to disk.

  memory.pressure
    A read-only nested-keyed file.

    Shows pressure stall information for memory.  See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.


Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage.
Over-committing on the high limit (sum of high limits > available
memory) and letting global memory pressure distribute memory
according to usage is a viable strategy.

Because breach of the high limit doesn't trigger the OOM killer but
throttles the offending cgroup, a management agent has ample
opportunities to monitor and take appropriate actions such as
granting more memory or terminating the workload.

Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory.  For example, a workload which writes data received
from the network to a file can use all available memory but can
also operate equally performant with a small amount of memory.  A
measure of memory pressure - how much the workload is being
impacted due to lack of memory - is necessary to determine whether
a workload needs more memory; unfortunately, a memory pressure
monitoring mechanism isn't implemented yet.


Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and
stays charged to the cgroup until the area is released.  Migrating
a process to a different cgroup doesn't move the memory usages that
it instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different
cgroups.  Which cgroup the area will be charged to is
indeterminate; however, over time, the memory area is likely to end
up in a cgroup which has enough memory allowance to avoid high
reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is
expected to be accessed repeatedly by other cgroups, it may make
sense to use POSIX_FADV_DONTNEED to relinquish the ownership of
memory areas belonging to the affected files to ensure correct
memory ownership.


IO
--

The "io" controller regulates the distribution of IO resources.
This controller implements both weight based and absolute bandwidth
or IOPS limit distribution; however, weight based distribution is
available only if cfq-iosched is in use and neither scheme is
available for blk-mq devices.


IO Interface Files
~~~~~~~~~~~~~~~~~~

  io.stat
    A read-only nested-keyed file.

    Lines are keyed by $MAJ:$MIN device numbers and not ordered.
    The following nested keys are defined.

      ======  =====================
      rbytes  Bytes read
      wbytes  Bytes written
      rios    Number of read IOs
      wios    Number of write IOs
      dbytes  Bytes discarded
      dios    Number of discard IOs
      ======  =====================

    An example read output follows::

      8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
      8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021

  io.cost.qos
    A read-write nested-keyed file which exists only on the root
    cgroup.

    This file configures the Quality of Service of the IO cost
    model based controller (CONFIG_BLK_CGROUP_IOCOST) which
    currently implements "io.weight" proportional control.  Lines
    are keyed by $MAJ:$MIN device numbers and not ordered.  The
    line for a given device is populated on the first write for
    the device on "io.cost.qos" or "io.cost.model".  The following
    nested keys are defined.

      ======  =====================================
      enable  Weight-based control enable
      ctrl    "auto" or "user"
      rpct    Read latency percentile [0, 100]
      rlat    Read latency threshold
      wpct    Write latency percentile [0, 100]
      wlat    Write latency threshold
      min     Minimum scaling percentage [1, 10000]
      max     Maximum scaling percentage [1, 10000]
      ======  =====================================

    The controller is disabled by default and can be enabled by
    setting "enable" to 1.  "rpct" and "wpct" parameters default
    to zero and the controller uses internal device saturation
    state to adjust the overall IO rate between "min" and "max".

    When better control quality is needed, latency QoS parameters
    can be configured.  For example::

      8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00

    shows that on sdb, the controller is enabled, will consider
    the device saturated if the 95th percentile of read completion
    latencies is above 75ms or that of write completions above
    150ms, and will adjust the overall IO issue rate between 50%
    and 150% accordingly.

    The lower the saturation point, the better the latency QoS at
    the cost of aggregate bandwidth.  The narrower the allowed
    adjustment range between "min" and "max", the more conformant
    to the cost model the IO behavior.  Note that the IO issue
    base rate may be far off from 100% and setting "min" and "max"
    blindly can lead to a significant loss of device capacity or
    control quality.  "min" and "max" are useful for regulating
    devices which show wide temporary behavior changes - e.g. an
    SSD which accepts writes at the line speed for a while and
    then completely stalls for multiple seconds.

    When "ctrl" is "auto", the parameters are controlled by the
    kernel and may change automatically.  Setting "ctrl" to "user"
    or setting any of the percentile and latency parameters puts
    it into "user" mode and disables the automatic changes.  The
    automatic mode can be restored by setting "ctrl" to "auto".

  io.cost.model
    A read-write nested-keyed file which exists only on the root
    cgroup.

    This file configures the cost model of the IO cost model based
    controller (CONFIG_BLK_CGROUP_IOCOST) which currently
    implements "io.weight" proportional control.  Lines are keyed
    by $MAJ:$MIN device numbers and not ordered.
    The line for a given device is populated on the first write
    for the device on "io.cost.qos" or "io.cost.model".  The
    following nested keys are defined.

      =====  ================================
      ctrl   "auto" or "user"
      model  The cost model in use - "linear"
      =====  ================================

    When "ctrl" is "auto", the kernel may change all parameters
    dynamically.  When "ctrl" is set to "user" or any other
    parameter is written to, "ctrl" becomes "user" and the
    automatic changes are disabled.

    When "model" is "linear", the following model parameters are
    defined.

      =============  ========================================
      [r|w]bps       The maximum sequential IO throughput
      [r|w]seqiops   The maximum 4k sequential IOs per second
      [r|w]randiops  The maximum 4k random IOs per second
      =============  ========================================

    From the above, the builtin linear model determines the base
    costs of a sequential and random IO and the cost coefficient
    for the IO size.  While simple, this model can cover most
    common device classes acceptably.

    The IO cost model isn't expected to be accurate in an absolute
    sense and is scaled to the device behavior dynamically.

    If needed, tools/cgroup/iocost_coef_gen.py can be used to
    generate device-specific coefficients.

  io.weight
    A read-write flat-keyed file which exists on non-root cgroups.
    The default is "default 100".

    The first line is the default weight applied to devices
    without specific override.  The rest are overrides keyed by
    $MAJ:$MIN device numbers and not ordered.  The weights are in
    the range [1, 10000] and specify the relative amount of IO
    time the cgroup can use in relation to its siblings.

    The default weight can be updated by writing either "default
    $WEIGHT" or simply "$WEIGHT".  Overrides can be set by writing
    "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".

    An example read output follows::

      default 100
      8:16 200
      8:0 50

  io.max
    A read-write nested-keyed file which exists on non-root
    cgroups.

    BPS and IOPS based IO limit.  Lines are keyed by $MAJ:$MIN
    device numbers and not ordered.  The following nested keys are
    defined.

      =====  ==================================
      rbps   Max read bytes per second
      wbps   Max write bytes per second
      riops  Max read IO operations per second
      wiops  Max write IO operations per second
      =====  ==================================

    When writing, any number of nested key-value pairs can be
    specified in any order.  "max" can be specified as the value
    to remove a specific limit.  If the same key is specified
    multiple times, the outcome is undefined.

    BPS and IOPS are measured in each IO direction and IOs are
    delayed if the limit is reached.  Temporary bursts are
    allowed.

    Setting the read limit at 2M BPS and the write limit at 120
    IOPS for 8:16::

      echo "8:16 rbps=2097152 wiops=120" > io.max

    Reading returns the following::

      8:16 rbps=2097152 wbps=max riops=max wiops=120

    The write IOPS limit can be removed by writing the following::

      echo "8:16 wiops=max" > io.max

    Reading now returns the following::

      8:16 rbps=2097152 wbps=max riops=max wiops=max

  io.pressure
    A read-only nested-keyed file.

    Shows pressure stall information for IO.  See
    :ref:`Documentation/accounting/psi.rst <psi>` for details.


Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism.  Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.

The io controller, in conjunction with the memory controller,
implements control of page cache writeback IOs.  The memory
controller defines the memory domain that dirty memory ratio is
calculated and maintained for and the io controller defines the io
domain which writes out dirty pages for the memory domain.  Both
system-wide and per-cgroup dirty memory states are examined and the
more restrictive of the two is enforced.

cgroup writeback requires explicit support from the underlying
filesystem.  Currently, cgroup writeback is implemented on ext2,
ext4, btrfs, f2fs, and xfs.  On other filesystems, all writeback
IOs are attributed to the root cgroup.

There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked.  Memory is tracked
per page while writeback is tracked per inode.  For the purpose of
writeback, an inode is assigned to a cgroup and all IO requests to
write dirty pages from the inode are attributed to that cgroup.

As cgroup ownership for memory is tracked per page, there can be
pages which are associated with different cgroups than the one the
inode is associated with.  These are called foreign pages.
Writeback constantly keeps track of foreign pages and, if a
particular foreign cgroup becomes the majority over a certain
period of time, switches the ownership of the inode to that cgroup.

While this model is enough for most use cases where a given inode
is mostly dirtied by a single cgroup even when the main writing
cgroup changes over time, use cases where multiple cgroups write
to a single inode simultaneously are not supported well.  In such
circumstances, a significant portion of IOs are likely to be
attributed incorrectly.  As the memory controller assigns page
ownership on the first use and doesn't update it until the page is
released, even if writeback strictly follows page ownership,
multiple cgroups dirtying overlapping areas wouldn't work as
expected.  It's recommended to avoid such usage patterns.

The sysctl knobs which affect writeback behavior are applied to
cgroup writeback as follows.

  vm.dirty_background_ratio, vm.dirty_ratio
    These ratios apply the same to cgroup writeback with the
    amount of available memory capped by limits imposed by the
    memory controller and system-wide clean memory.

  vm.dirty_background_bytes, vm.dirty_bytes
    For cgroup writeback, these are calculated into a ratio
    against total available memory and applied the same way as
    vm.dirty[_background]_ratio.


IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection.  You
provide a group with a latency target, and if the average latency
exceeds that target the controller will throttle any peers that
have a lower latency target than the protected workload.

The limits are only applied at the peer level in the hierarchy.
This means that in the diagram below, only groups A, B, and C will
influence each other, and groups D and F will influence each other.
Group G will influence nobody::

        [root]
       /  |   \
      A   B    C
     / \       |
    D   F      G


So the ideal way to configure this is to set io.latency in groups
A, B, and C.  Generally you do not want to set a value lower than
the latency your device supports.  Experiment to find the value
that works best for your workload.  Start at higher than the
expected latency for your device and watch the avg_lat value in
io.stat for your workload group to get an idea of the latency you
see during normal operation.  Use the avg_lat value as a basis for
your real setting, setting it 10-15% higher than the value in
io.stat.

How IO Latency Throttling Works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

io.latency is work conserving; so as long as everybody is meeting
their latency target the controller doesn't do anything.  Once a
group starts missing its target it begins throttling any peer
group that has a higher target than itself.  This throttling takes
2 forms:

- Queue depth throttling.  This is the number of outstanding IOs a
  group is allowed to have.  We will clamp down relatively
  quickly, starting at no limit and going all the way down to 1 IO
  at a time.

- Artificial delay induction.  There are certain types of IO that
  cannot be throttled without possibly adversely affecting higher
  priority groups.  This includes swapping and metadata IO.  These
  types of IO are allowed to occur normally, however they are
  "charged" to the originating group.  If the originating group is
  being throttled you will see the use_delay and delay fields in
  io.stat increase.  The delay value is how many microseconds that
  are being added to any process that runs in this group.  Because
  this number can grow quite large if there is a lot of swapping
  or metadata IO occurring, we limit the individual delay events
  to 1 second at a time.

Once the victimized group starts meeting its latency target again
it will start unthrottling any peer groups that were throttled
previously.  If the victimized group simply stops doing IO, the
global counter will unthrottle appropriately.

IO Latency Interface Files
~~~~~~~~~~~~~~~~~~~~~~~~~~

  io.latency
    This takes a similar format as the other controllers.

      "MAJOR:MINOR target=<target time in microseconds>"

  io.stat
    If the controller is enabled you will see extra stats in
    io.stat in addition to the normal ones.

      depth
        This is the current queue depth for the group.

      avg_lat
        This is an exponential moving average with a decay rate of
        1/exp bound by the sampling interval.  The decay rate
        interval can be calculated by multiplying the win value in
        io.stat by the corresponding number of samples based on
        the win value.

      win
        The sampling window size in milliseconds.  This is the
        minimum duration of time between evaluation events.
        Windows only elapse with IO activity.  Idle periods extend
        the most recent window.

IO Priority
~~~~~~~~~~~

A single attribute controls the behavior of the I/O priority
cgroup policy, namely the blkio.prio.class attribute.  The
following values are accepted for that attribute:

  no-change
    Do not modify the I/O priority class.

  promote-to-rt
    For requests that have a non-RT I/O priority class, change it
    into RT.  Also change the priority level of these requests to
    4.  Do not modify the I/O priority of requests that have
    priority class RT.

  restrict-to-be
    For requests that do not have an I/O priority class or that
    have I/O priority class RT, change it into BE.  Also change
    the priority level of these requests to 0.  Do not modify the
    I/O priority class of requests that have priority class IDLE.

  idle
    Change the I/O priority class of all requests into IDLE, the
    lowest I/O priority class.

  none-to-rt
    Deprecated.  Just an alias for promote-to-rt.

The following numerical values are associated with the I/O priority
policies:

+----------------+---+
| no-change      | 0 |
+----------------+---+
| promote-to-rt  | 1 |
+----------------+---+
| restrict-to-be | 2 |
+----------------+---+
| idle           | 3 |
+----------------+---+

The numerical value that corresponds to each I/O priority class is
as follows:

+-------------------------------+---+
| IOPRIO_CLASS_NONE             | 0 |
+-------------------------------+---+
| IOPRIO_CLASS_RT (real-time)   | 1 |
+-------------------------------+---+
| IOPRIO_CLASS_BE (best effort) | 2 |
+-------------------------------+---+
| IOPRIO_CLASS_IDLE             | 3 |
+-------------------------------+---+

The algorithm to set the I/O priority class for a request is as
follows:

- If the I/O priority class policy is promote-to-rt, change the
  request I/O priority class to IOPRIO_CLASS_RT and change the
  request I/O priority level to 4.
- If the I/O priority class policy is not promote-to-rt, translate
  the I/O priority class policy into a number, then change the
  request I/O priority class into the maximum of the I/O priority
  class policy number and the numerical I/O priority class.

PID
---

The process number controller is used to allow a cgroup to stop
any new tasks from being fork()'d or clone()'d after a specified
limit is reached.

The number of tasks in a cgroup can be exhausted in ways which
other controllers cannot prevent, thus warranting its own
controller.  For example, a fork bomb is likely to exhaust the
number of tasks before hitting memory restrictions.

Note that PIDs used in this controller refer to TIDs, process IDs
as used by the kernel.


PID Interface Files
~~~~~~~~~~~~~~~~~~~

  pids.max
    A read-write single value file which exists on non-root
    cgroups.  The default is "max".

    Hard limit of number of processes.

  pids.current
    A read-only single value file which exists on all cgroups.

    The number of processes currently in the cgroup and its
    descendants.

Organisational operations are not blocked by cgroup policies, so
it is possible to have pids.current > pids.max.  This can be done
by either setting the limit to be smaller than pids.current, or
attaching enough processes to the cgroup such that pids.current is
larger than pids.max.  However, it is not possible to violate a
cgroup PID policy through fork() or clone().  These will return
-EAGAIN if the creation of a new process would cause a cgroup
policy to be violated.
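
As an illustration, a minimal sketch of capping a cgroup at 16
processes; the cgroup path and the values shown are hypothetical::

  # echo 16 > /sys/fs/cgroup/workload/pids.max
  # cat /sys/fs/cgroup/workload/pids.current
  3

Once 16 tasks exist in the subtree, further fork() or clone() calls
from within it fail with -EAGAIN, while organisational operations,
such as migrating an extra process into the cgroup, still succeed.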


Cpuset
------

The "cpuset" controller provides a mechanism for constraining the
CPU and memory node placement of tasks to only the resources
specified in the cpuset interface files in a task's current cgroup.
This is especially valuable on large NUMA systems where placing
jobs on properly sized subsets of the system with careful processor
and memory placement to reduce cross-node memory access and
contention can improve overall system performance.

The "cpuset" controller is hierarchical.  That means the controller
cannot use CPUs or memory nodes not allowed in its parent.


Cpuset Interface Files
~~~~~~~~~~~~~~~~~~~~~~

  cpuset.cpus
    A read-write multiple values file which exists on non-root
    cpuset-enabled cgroups.

    It lists the requested CPUs to be used by tasks within this
    cgroup.  The actual list of CPUs to be granted, however, is
    subject to constraints imposed by its parent and can differ
    from the requested CPUs.

    The CPU numbers are comma-separated numbers or ranges.
    For example::

      # cat cpuset.cpus
      0-4,6,8-10

    An empty value indicates that the cgroup is using the same
    setting as the nearest cgroup ancestor with a non-empty
    "cpuset.cpus" or all the available CPUs if none is found.

    The value of "cpuset.cpus" stays constant until the next
    update and won't be affected by any CPU hotplug events.

  cpuset.cpus.effective
    A read-only multiple values file which exists on all
    cpuset-enabled cgroups.

    It lists the onlined CPUs that are actually granted to this
    cgroup by its parent.  These CPUs are allowed to be used by
    tasks within the current cgroup.

    If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file
    shows all the CPUs from the parent cgroup that can be
    available to be used by this cgroup.  Otherwise, it should be
    a subset of "cpuset.cpus" unless none of the CPUs listed in
    "cpuset.cpus" can be granted.  In this case, it will be
    treated just like an empty "cpuset.cpus".

    Its value will be affected by CPU hotplug events.

  cpuset.mems
    A read-write multiple values file which exists on non-root
    cpuset-enabled cgroups.

    It lists the requested memory nodes to be used by tasks within
    this cgroup.  The actual list of memory nodes granted,
    however, is subject to constraints imposed by its parent and
    can differ from the requested memory nodes.

    The memory node numbers are comma-separated numbers or ranges.
    For example::

      # cat cpuset.mems
      0-1,3

    An empty value indicates that the cgroup is using the same
    setting as the nearest cgroup ancestor with a non-empty
    "cpuset.mems" or all the available memory nodes if none is
    found.

    The value of "cpuset.mems" stays constant until the next
    update and won't be affected by any memory node hotplug
    events.

    Setting a non-empty value to "cpuset.mems" causes memory of
    tasks within the cgroup to be migrated to the designated nodes
    if they are currently using memory outside of the designated
    nodes.

    There is a cost for this memory migration.  The migration may
    not be complete and some memory pages may be left behind.  So
    it is recommended that "cpuset.mems" should be set properly
    before spawning new tasks into the cpuset.
    Even if there is a need to change "cpuset.mems" with active
    tasks, it shouldn't be done frequently.

  cpuset.mems.effective
    A read-only multiple values file which exists on all
    cpuset-enabled cgroups.

    It lists the onlined memory nodes that are actually granted to
    this cgroup by its parent.  These memory nodes are allowed to
    be used by tasks within the current cgroup.

    If "cpuset.mems" is empty, it shows all the memory nodes from
    the parent cgroup that will be available to be used by this
    cgroup.  Otherwise, it should be a subset of "cpuset.mems"
    unless none of the memory nodes listed in "cpuset.mems" can be
    granted.  In this case, it will be treated just like an empty
    "cpuset.mems".

    Its value will be affected by memory node hotplug events.

  cpuset.cpus.partition
    A read-write single value file which exists on non-root
    cpuset-enabled cgroups.  This flag is owned by the parent
    cgroup and is not delegatable.

    It accepts only the following input values when written to.

      ==========  =====================================
      "member"    Non-root member of a partition
      "root"      Partition root
      "isolated"  Partition root without load balancing
      ==========  =====================================

    The root cgroup is always a partition root and its state
    cannot be changed.  All other non-root cgroups start out as
    "member".

    When set to "root", the current cgroup is the root of a new
    partition or scheduling domain that comprises itself and all
    its descendants except those that are separate partition roots
    themselves and their descendants.

    When set to "isolated", the CPUs in that partition root will
    be in an isolated state without any load balancing from the
    scheduler.  Tasks placed in such a partition with multiple
    CPUs should be carefully distributed and bound to each of the
    individual CPUs for optimal performance.

    The value shown in "cpuset.cpus.effective" of a partition root
    is the CPUs that the partition root can dedicate to a
    potential new child partition root.  The new child subtracts
    available CPUs from its parent's "cpuset.cpus.effective".

    A partition root ("root" or "isolated") can be in one of the
    two possible states - valid or invalid.  An invalid partition
    root is in a degraded state where some state information may
    be retained, but behaves more like a "member".

    All possible state transitions among "member", "root" and
    "isolated" are allowed.

    On read, the "cpuset.cpus.partition" file can show the
    following values.

      =============================  =====================================
      "member"                       Non-root member of a partition
      "root"                         Partition root
      "isolated"                     Partition root without load balancing
      "root invalid (<reason>)"      Invalid partition root
      "isolated invalid (<reason>)"  Invalid isolated partition root
      =============================  =====================================

    In the case of an invalid partition root, a descriptive string
    on why the partition is invalid is included within
    parentheses.

    For a partition root to become valid, the following conditions
    must be met.

    1) The "cpuset.cpus" is exclusive with its siblings, i.e. they
       are not shared by any of its siblings (exclusivity rule).
    2) The parent cgroup is a valid partition root.
    3) The "cpuset.cpus" is not empty and must contain at least
       one of the CPUs from the parent's "cpuset.cpus", i.e. they
       overlap.
    4) The "cpuset.cpus.effective" cannot be empty unless there is
       no task associated with this partition.

    External events like hotplug or changes to "cpuset.cpus" can
    cause a valid partition root to become invalid and vice versa.
    Note that a task cannot be moved to a cgroup with an empty
    "cpuset.cpus.effective".

    For a valid partition root with the sibling cpu exclusivity
    rule enabled, changes made to "cpuset.cpus" that violate the
    exclusivity rule will invalidate the partition as well as its
    sibling partitions with conflicting cpuset.cpus values.  So
    care must be taken when changing "cpuset.cpus".

    A valid non-root parent partition may distribute out all its
    CPUs to its child partitions when there is no task associated
    with it.

    Care must be taken when changing a valid partition root to
    "member" as all its child partitions, if present, will become
    invalid, causing disruption to tasks running in those child
    partitions.  These inactivated partitions could be recovered
    if their parent is switched back to a partition root with a
    proper set of "cpuset.cpus".

    Poll and inotify events are triggered whenever the state of
    "cpuset.cpus.partition" changes.  That includes changes caused
    by writes to "cpuset.cpus.partition", CPU hotplug or other
    changes that modify the validity status of the partition.
    This allows user space agents to monitor unexpected changes
    to "cpuset.cpus.partition" without the need to do continuous
    polling.


Device controller
-----------------

The device controller manages access to device files.  It includes
both creation of new device files (using mknod), and access to the
existing device files.

The cgroup v2 device controller has no interface files and is
implemented on top of cgroup BPF.  To control access to device
files, a user may create BPF programs of type
BPF_PROG_TYPE_CGROUP_DEVICE and attach them to cgroups with the
BPF_CGROUP_DEVICE flag.  On an attempt to access a device file,
the corresponding BPF programs will be executed, and depending on
the return value the attempt will succeed or fail with -EPERM.

A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
bpf_cgroup_dev_ctx structure, which describes the device access
attempt: access type (mknod/read/write) and device (type, major
and minor numbers).  If the program returns 0, the attempt fails
with -EPERM, otherwise it succeeds.

An example of a BPF_PROG_TYPE_CGROUP_DEVICE program may be found
in tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel
source tree.


RDMA
----

The "rdma" controller regulates the distribution and accounting of
RDMA resources.

RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~

  rdma.max
    A read-write nested-keyed file that exists for all the cgroups
    except the root and that describes the currently configured
    resource limits for a RDMA/IB device.

    Lines are keyed by device name and are not ordered.  Each line
    contains a space-separated resource name and its configured
    limit that can be distributed.

    The following nested keys are defined.

      ==========  =============================
      hca_handle  Maximum number of HCA Handles
      hca_object  Maximum number of HCA Objects
      ==========  =============================

    An example for mlx4 and ocrdma devices follows::

      mlx4_0 hca_handle=2 hca_object=2000
      ocrdma1 hca_handle=3 hca_object=max

  rdma.current
    A read-only file that describes current resource usage.  It
    exists for all the cgroups except the root.

    An example for mlx4 and ocrdma devices follows::

      mlx4_0 hca_handle=1 hca_object=20
      ocrdma1 hca_handle=1 hca_object=23

HugeTLB
-------

The HugeTLB controller allows limiting the HugeTLB usage per
control group and enforces the controller limit during page fault.

HugeTLB Interface Files
~~~~~~~~~~~~~~~~~~~~~~~

  hugetlb.<hugepagesize>.current
    Shows the current usage for "hugepagesize" hugetlb.  It exists
    for all the cgroups except the root.

  hugetlb.<hugepagesize>.max
    Sets/shows the hard limit of "hugepagesize" hugetlb usage.
    The default value is "max".  It exists for all the cgroups
    except the root.

  hugetlb.<hugepagesize>.events
    A read-only flat-keyed file which exists on non-root cgroups.

      max
        The number of allocation failures due to the HugeTLB
        limit

  hugetlb.<hugepagesize>.events.local
    Similar to hugetlb.<hugepagesize>.events but the fields in the
    file are local to the cgroup, i.e. not hierarchical.  The file
    modified event generated on this file reflects only the local
    events.

  hugetlb.<hugepagesize>.numa_stat
    Similar to memory.numa_stat, it shows the numa information of
    the hugetlb pages of <hugepagesize> in this cgroup.  Only
    active in-use hugetlb pages are included.  The per-node values
    are in bytes.

Misc
----

The Miscellaneous cgroup provides the resource limiting and
tracking mechanism for the scalar resources which cannot be
abstracted like the other cgroup resources.  The controller is
enabled by the CONFIG_CGROUP_MISC config option.

A resource can be added to the controller via enum misc_res_type{}
in the include/linux/misc_cgroup.h file and the corresponding name
via misc_res_name[] in the kernel/cgroup/misc.c file.  The provider
of the resource must set its capacity prior to using the resource
by calling misc_cg_set_capacity().

Once a capacity is set, the resource usage can be updated using
the charge and uncharge APIs.  All of the APIs to interact with
the misc controller are in include/linux/misc_cgroup.h.

Misc Interface Files
~~~~~~~~~~~~~~~~~~~~

The Miscellaneous controller provides 3 interface files.  If two
misc resources (res_a and res_b) are registered then:

  misc.capacity
    A read-only flat-keyed file shown only in the root cgroup.  It
    shows miscellaneous scalar resources available on the platform
    along with their quantities::

      $ cat misc.capacity
      res_a 50
      res_b 10

  misc.current
    A read-only flat-keyed file shown in all cgroups.  It shows
    the current usage of the resources in the cgroup and its
    children::

      $ cat misc.current
      res_a 3
      res_b 0

  misc.max
    A read-write flat-keyed file shown in the non-root cgroups.
    Allowed maximum usage of the resources in the cgroup and its
    children::

      $ cat misc.max
      res_a max
      res_b 4

    A limit can be set by::

      # echo res_a 1 > misc.max

    A limit can be set to max by::

      # echo res_a max > misc.max

    Limits can be set higher than the capacity value in the
    misc.capacity file.

  misc.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined.  Unless specified
    otherwise, a value change in this file generates a file
    modified event.  All fields in this file are hierarchical.

      max
        The number of times the cgroup's resource usage was
        about to go over the max boundary.

Migration and Ownership
~~~~~~~~~~~~~~~~~~~~~~~

A miscellaneous scalar resource is charged to the cgroup in which
it is used first, and stays charged to that cgroup until that
resource is freed.  Migrating a process to a different cgroup does
not move the charge to the destination cgroup where the process
has moved.

Others
------

perf_event
~~~~~~~~~~

The perf_event controller, if not mounted on a legacy hierarchy,
is automatically enabled on the v2 hierarchy so that perf events
can always be filtered by cgroup v2 path.  The controller can
still be moved to a legacy hierarchy after the v2 hierarchy is
populated.


Non-normative information
-------------------------

This section contains information that isn't considered to be a
part of the stable kernel API and so is subject to change.


CPU controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When distributing CPU cycles in the root cgroup each thread in
this cgroup is treated as if it was hosted in a separate child
cgroup of the root cgroup.  This child cgroup's weight is
dependent on its thread's nice level.

For details of this mapping see the sched_prio_to_weight array in
the kernel/sched/core.c file (values from this array should be
scaled appropriately so the neutral - nice 0 - value is 100
instead of 1024).


IO controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Root cgroup processes are hosted in an implicit leaf child node.
When distributing IO resources this implicit child node is taken
into account as if it was a normal child cgroup of the root cgroup
with a weight value of 200.


Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of
the "/proc/$PID/cgroup" file and cgroup mounts.  The
CLONE_NEWCGROUP clone flag can be used with clone(2) and
unshare(2) to create a new cgroup namespace.  The process running
inside the cgroup namespace will have its "/proc/$PID/cgroup"
output restricted to the cgroupns root.  The cgroupns root is the
cgroup of the process at the time of creation of the cgroup
namespace.

Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process.  In a container setup
where a set of cgroups and namespaces are intended to isolate
processes, the "/proc/$PID/cgroup" file may leak potential system
level information to the isolated processes.
For example::

  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

The path '/batchjobs/container_id1' can be considered system data
that is undesirable to expose to the isolated processes.  cgroup
namespace can be used to restrict visibility of this path.  For
example, before creating a cgroup namespace, one would see::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

After unsharing a new namespace, the view changes::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
  # cat /proc/self/cgroup
  0::/

When some thread from a multi-threaded process unshares its cgroup
namespace, the new cgroupns gets applied to the entire process
(all the threads).  This is natural for the v2 hierarchy; however,
for the legacy hierarchies, this may be unexpected.

A cgroup namespace is alive as long as there are processes inside
or mounts pinning it.  When the last usage goes away, the cgroup
namespace is destroyed.  The cgroupns root and the actual cgroups
remain.


The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which
the process calling unshare(2) is running.  For example, if a
process in the /batchjobs/container_id1 cgroup calls unshare,
cgroup /batchjobs/container_id1 becomes the cgroupns root.  For
the init_cgroup_ns, this is the real root ('/') cgroup.

The cgroupns root cgroup does not change even if the namespace
creator process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of
"/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown.  For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will
see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with '/' to indicate
that it's relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups.
For example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy
is still accessible inside cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged.  A task inside a
cgroup namespace should only be exposed to its own cgroupns
hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user
    namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen with attaching to another cgroup
namespace.  It is expected that someone moves the attaching
process under the target cgroup namespace root.


Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with the cgroupns
root as the filesystem root.  The process needs CAP_SYS_ADMIN
against its user and mount namespaces.

The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by a
namespace-private cgroupfs mount provides a properly isolated
cgroup view inside the container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary.  cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bios using
the following two functions.

  wbc_init_bio(@wbc, @bio)
    Should be called for each bio carrying writeback data and
    associates the bio with the inode's owner cgroup and the
    corresponding request queue.  This must be called after a
    queue (device) has been associated with the bio and before
    submission.

  wbc_account_cgroup_owner(@wbc, @page, @bytes)
    Should be called for each data segment being written out.
    While this function doesn't care exactly when it's called
    during the writeback session, it's the easiest and most
    natural to call it as data segments are added to a bio.

With writeback bios annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows
for selective disabling of cgroup writeback support which is
helpful when certain filesystem features, e.g. journaled data
mode, are incompatible.

wbc_init_bio() binds the specified bio to its cgroup.  Depending
on the configuration, the bio may be executed at a lower priority
and if the writeback session is holding shared resources, e.g. a
journal entry, may lead to priority inversion.  There is no one
easy solution for the problem.  Filesystems can try to work around
specific problem cases by skipping wbc_init_bio() and using
bio_associate_blkg() directly.


Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options is supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2.  Use the
  "cgroup.controllers" file at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers.  While this seemed
to provide a high level of flexibility, it wasn't useful in
practice.

For example, as there is only one instance of each controller,
utility type controllers such as freezer which can be useful in
all hierarchies could only be used in one.  The issue is
exacerbated by the fact that controllers couldn't be moved to
another hierarchy once hierarchies were populated.  Another issue
was that all controllers bound to a hierarchy were forced to have
exactly the same view of the hierarchy.  It wasn't possible to
vary the granularity depending on the specific controller.

In practice, these issues heavily limited which controllers could
be put on the same hierarchy and most configurations resorted to
putting each controller on its own hierarchy.  Only closely
related ones, such as the cpu and cpuacct controllers, made sense
to be put on the same hierarchy.  This often meant that userland
ended up managing multiple similar hierarchies repeating the same
steps on each hierarchy whenever a hierarchy management operation
was necessary.

Furthermore, support for multiple hierarchies came at a steep
cost.  It greatly complicated cgroup core implementation but more
importantly the support for multiple hierarchies restricted how
cgroup could be used in general and what controllers were able to
do.

There was no limit on how many hierarchies there might be, which
meant that a thread's cgroup membership couldn't be described in
finite length.  The key might contain any number of entries and
was unlimited in length, which made it highly awkward to
manipulate and led to the addition of controllers which existed
only to identify membership, which in turn exacerbated the
original problem of the proliferating number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached
to completely orthogonal hierarchies.  This made it impossible, or
at least very cumbersome, for controllers to cooperate with each
other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary.  What usually
is called for is the ability to have differing levels of
granularity depending on the specific controller.  In other words,
the hierarchy may be collapsed from leaf towards root when viewed
from specific controllers.  For example, a given configuration
might not care about how memory is distributed beyond a certain
level while still wanting to control how CPU cycles are
distributed.


Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different
cgroups.  This didn't make sense for some controllers and those
controllers ended up implementing different ways to ignore such
situations but much more importantly it blurred the line between
the API exposed to individual applications and the system
management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation
from the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got
abused in combination with thread granularity.  cgroups were
delegated to individual applications so that they could create and
manage their own sub-hierarchies and control resource
distributions along them.  This effectively raised cgroup to the
status of a syscall-like API exposed to lay programs.

First of all, cgroup has a fundamentally inadequate interface to
be exposed this way.  For a process to access its own knobs, it
has to extract the path on the target hierarchy from
/proc/self/cgroup, construct the path by appending the name of the
knob to the path, open and then read and/or write to it.  This is
not only extremely clunky and unusual but also inherently racy.
There is no conventional way to define a transaction across the
required steps and nothing can guarantee that the process would
actually be operating on its own sub-hierarchy.

cgroup controllers implemented a number of knobs which would never
be accepted as public APIs because they were just adding control
knobs to a system-management pseudo filesystem.  cgroup ended up
with interface knobs which were not properly abstracted or refined
and directly revealed kernel internal details.  These knobs got
exposed to individual applications through the ill-defined
delegation mechanism, effectively abusing cgroup as a shortcut to
implementing public APIs without going through the required
scrutiny.

This was painful for both userland and kernel.  Userland ended up
with misbehaving and poorly abstracted interfaces and the kernel
inadvertently exposed and became locked into constructs.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroups which created an
interesting problem where threads belonging to a parent cgroup and
its children cgroups competed for resources.  This was nasty as
two different types of entities competed and there was no obvious
way to settle it.  Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents
and mapped nice levels to cgroup weights.  This worked for some
cases but fell flat when children wanted to be allocated specific
ratios of CPU cycles and the number of internal threads fluctuated
- the ratios constantly changed as the number of competing
entities fluctuated.  There also were other issues.  The mapping
from nice level to weight wasn't obvious or universal, and there
were various other knobs which simply weren't available for
threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads.
The hidden leaf had its own copies of all the knobs with ``leaf_``
prefixed.  While this allowed equivalent control over internal
threads, it came with serious drawbacks.  It always added an extra
layer of nesting which wouldn't be necessary otherwise, made the
interface messy and significantly complicated the implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined.  There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would
have led to problems extremely difficult to resolve in the long
term.

Multiple controllers struggled with internal tasks and came up
with different ways to deal with it; unfortunately, all the
approaches were severely flawed and, furthermore, the widely
different behaviors made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from cgroup
core in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies.  One issue on the cgroup core
side was how an empty cgroup was notified - a userland helper
binary was forked and executed for each event.  The event delivery
wasn't recursive or delegatable.  The limitations of the mechanism
also led to an in-kernel event delivery filtering mechanism,
further complicating the interface.

Controller interfaces were problematic too.  An extreme example is
controllers completely ignoring hierarchical organization and
treating all cgroups as if they were all located directly under
the root cgroup.  Some controllers exposed a large amount of
inconsistent implementation details to userland.

There also was no consistency across controllers.  When a new
cgroup was created, some controllers defaulted to not imposing
extra restrictions while others disallowed any resource usage
until explicitly configured.  Configuration knobs for the same
type of control used widely differing naming schemes and formats.
Statistics and information knobs were named arbitrarily and used
different formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and
updates controllers so that they expose minimal and consistent
interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is by default unset.  As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out.  The costs
for optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide
the basic desirable behavior.  First off, the soft limit has no
hierarchical meaning.  All configured groups are organized in a
global rbtree and treated like equal peers, regardless of where
they are located in the hierarchy.  This makes subtree delegation
impossible.  Second, the soft limit reclaim pass is so aggressive
that it not just introduces high allocation latencies into the
system, but also impacts system performance due to overreclaim, to
the point where the feature becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve.  A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible.  It
also enjoys having reclaim pressure proportional to its overage
when above its effective low.

The original high boundary, the hard limit, is defined as a strict
limit that cannot budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of
the available memory.  The memory consumption of workloads varies
during runtime, and that requires users to overcommit.  But doing
that with a strict upper limit requires either a fairly accurate
prediction of the working set size or adding slack to the limit.
Since working set size estimation is hard and error prone, and
getting it wrong results in OOM kills, most users tend to err on
the side of a looser limit and end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively.  When hit, it throttles allocations by forcing
them into direct reclaim to work off the excess, but it never
invokes the OOM killer.  As a result, a high boundary that is
chosen too aggressively will not terminate the processes, but
instead it will lead to gradual performance degradation.  The user
can monitor this and make corrections until the minimal memory
footprint that still gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary
can be exceeded.  But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of
the system than killing the group.  Otherwise, memory.max is there
to limit this type of spillover and ultimately contain buggy or
even malicious applications.

Setting the original memory.limit_in_bytes below the current usage
was subject to a race condition, where concurrent charges could
cause the limit setting to fail.  memory.max on the other hand
will first set the limit to prevent new charges, and then reclaim
and OOM kill until the new limit is met - or the task writing to
memory.max is killed.

The combined memory+swap accounting and limiting is replaced by
real control over swap space.

The main argument for a combined memory+swap facility in the
original cgroup design was that global or parental pressure would
always be able to swap all anonymous memory of a child group,
regardless of the child's own (possibly untrusted) configuration.
However, untrusted groups can sabotage swapping by other means -
such as referencing their anonymous memory in a tight loop - and
an admin cannot assume full swappability when overcommitting
untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the
idea that cgroup controllers should account and limit specific
physical resources.  Swap space is a resource like all others in
the system, and that's why the unified hierarchy allows
distributing it separately.