What:           /sys/devices/system/cpu/
Date:           pre-git history
Contact:        Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:    A collection of both global and individual CPU attributes

                Individual CPU attributes are contained in subdirectories
                named by the kernel's logical CPU number, e.g.:

                /sys/devices/system/cpu/cpu#/

What:           /sys/devices/system/cpu/kernel_max
                /sys/devices/system/cpu/offline
                /sys/devices/system/cpu/online
                /sys/devices/system/cpu/possible
                /sys/devices/system/cpu/present
Date:           December 2008
Contact:        Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:    CPU topology files that describe kernel limits related to
                hotplug. Briefly:

                kernel_max: the maximum CPU index allowed by the kernel
                configuration.

                offline: CPUs that are not online because they have been
                hotplugged off or exceed the limit of CPUs allowed by the
                kernel configuration (kernel_max above).

                online: CPUs that are online and being scheduled.

                possible: CPUs that have been allocated resources and can
                be brought online if they are present.

                present: CPUs that have been identified as being present
                in the system.

                See Documentation/cputopology.txt for more information.
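
                The offline, online, possible and present files use the
                kernel's "cpulist" format: a comma-separated list of
                decimal CPU numbers and ranges, e.g. "0-3,5"; kernel_max
                holds a single decimal index. As an illustration only
                (not part of the ABI), a minimal Python sketch that
                expands the list format:

                # Illustrative sketch, not part of the kernel ABI.
                def parse_cpulist(text):
                    """Expand a cpulist string such as "0-3,5"."""
                    cpus = []
                    text = text.strip()
                    if not text:              # "offline" may be empty
                        return cpus
                    for chunk in text.split(","):
                        if "-" in chunk:
                            lo, hi = chunk.split("-")
                            cpus.extend(range(int(lo), int(hi) + 1))
                        else:
                            cpus.append(int(chunk))
                    return cpus

                with open("/sys/devices/system/cpu/online") as f:
                    print(parse_cpulist(f.read()))  # e.g. [0, 1, 2, 3]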


What:           /sys/devices/system/cpu/probe
                /sys/devices/system/cpu/release
Date:           November 2009
Contact:        Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:    Dynamic addition and removal of CPUs. This is not hotplug
                removal; it is meant for complete removal/addition of the
                CPU from the system.

                probe: writes to this file will dynamically add a CPU to
                the system. Information written to the file to add CPUs
                is architecture specific.

                release: writes to this file will dynamically remove a
                CPU from the system. Information written to the file to
                remove CPUs is architecture specific.

What:           /sys/devices/system/cpu/cpu#/node
Date:           October 2009
Contact:        Linux memory management mailing list <linux-mm@kvack.org>
Description:    Discover the NUMA node a CPU belongs to

                When CONFIG_NUMA is enabled, this is a symbolic link that
                points to the corresponding NUMA node directory.

                For example, the following symlink is created for cpu42
                in NUMA node 2:

                /sys/devices/system/cpu/cpu42/node2 -> ../../node/node2


What:           /sys/devices/system/cpu/cpu#/topology/core_id
                /sys/devices/system/cpu/cpu#/topology/core_siblings
                /sys/devices/system/cpu/cpu#/topology/core_siblings_list
                /sys/devices/system/cpu/cpu#/topology/physical_package_id
                /sys/devices/system/cpu/cpu#/topology/thread_siblings
                /sys/devices/system/cpu/cpu#/topology/thread_siblings_list
Date:           December 2008
Contact:        Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:    CPU topology files that describe a logical CPU's
                relationship to other cores and threads in the same
                physical package.

                One cpu# directory is created per logical CPU in the
                system, e.g. /sys/devices/system/cpu/cpu42/.

                Briefly, the files above are:

                core_id: the CPU core ID of cpu#. Typically it is the
                hardware platform's identifier (rather than the kernel's).
                The actual value is architecture and platform dependent.

                core_siblings: internal kernel map of cpu#'s hardware
                threads within the same physical_package_id.

                core_siblings_list: human-readable list of the logical
                CPU numbers within the same physical_package_id as cpu#.

                physical_package_id: physical package id of cpu#.
                Typically corresponds to a physical socket number, but
                the actual value is architecture and platform dependent.

                thread_siblings: internal kernel map of cpu#'s hardware
                threads within the same core as cpu#.

                thread_siblings_list: human-readable list of cpu#'s
                hardware threads within the same core as cpu#.

                See Documentation/cputopology.txt for more information.
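
                As an aside (not part of the ABI text): the *_siblings
                files hold hexadecimal CPU masks, while the *_list
                variants use the same cpulist format as the online and
                offline files above, so sibling threads can be grouped
                with a few lines of code. A minimal Python sketch,
                assuming sysfs is mounted at /sys:

                # Illustrative sketch, not part of the kernel ABI.
                import glob

                cores = set()
                for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*"
                                      "/topology/thread_siblings_list"):
                    with open(path) as f:
                        # Identical contents identify one physical core,
                        # e.g. "0,4" for the two SMT threads of core 0.
                        cores.add(f.read().strip())

                for core in sorted(cores):
                    print("core with hardware threads:", core)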


What:           /sys/devices/system/cpu/cpuidle/current_driver
                /sys/devices/system/cpu/cpuidle/current_governor_ro
Date:           September 2007
Contact:        Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:    Discover cpuidle policy and mechanism

                Various CPUs today support multiple idle levels that are
                differentiated by varying exit latencies and power
                consumption during idle.

                Idle policy (governor) is differentiated from idle
                mechanism (driver).

                current_driver: displays the current idle mechanism.

                current_governor_ro: displays the current idle policy.

                See files in Documentation/cpuidle/ for more information.


What:           /sys/devices/system/cpu/cpu#/cpufreq/*
Date:           pre-git history
Contact:        cpufreq@vger.kernel.org
Description:    Discover and change clock speed of CPUs

                Clock scaling allows you to change the clock speed of the
                CPUs on the fly. This is a nice method to save battery
                power, because the lower the clock speed, the less power
                the CPU consumes.

                There are many knobs to tweak in this directory.

                See files in Documentation/cpu-freq/ for more information.

                In particular, read Documentation/cpu-freq/user-guide.txt
                to learn how to control the knobs.


What:           /sys/devices/system/cpu/cpu#/cpufreq/freqdomain_cpus
Date:           June 2013
Contact:        cpufreq@vger.kernel.org
Description:    Discover CPUs in the same CPU frequency coordination
                domain

                freqdomain_cpus is the list of CPUs (online+offline) that
                share the same clock/freq domain (possibly at the
                hardware level). That information may be hidden from the
                cpufreq core and the value of related_cpus may be
                different from freqdomain_cpus. This attribute is useful
                for user space DVFS controllers to get better
                power/performance results for platforms using
                acpi-cpufreq.

                This file is only present if the acpi-cpufreq driver is
                in use.


What:           /sys/devices/system/cpu/cpu*/cache/index3/cache_disable_{0,1}
Date:           August 2008
KernelVersion:  2.6.27
Contact:        discuss@x86-64.org
Description:    Disable L3 cache indices

                These files exist in every CPU's cache/index3 directory.
                Each cache_disable_{0,1} file corresponds to one disable
                slot which can be used to disable a cache index. Reading
                from these files on a processor with this functionality
                will return the currently disabled index for that node.
                There is one L3 structure per node, or per internal node
                on MCM machines. Writing a valid index to one of these
                files will cause the specified cache index to be
                disabled.

                All AMD processors with L3 caches provide this
                functionality. For details, see the BKDGs at
                http://developer.amd.com/documentation/guides/Pages/default.aspx


What:           /sys/devices/system/cpu/cpufreq/boost
Date:           August 2012
Contact:        Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:    Processor frequency boosting control

                This switch controls the boost setting for the whole
                system. Boosting allows the CPU and the firmware to run
                at a frequency beyond its nominal limit.

                More details can be found in
                Documentation/cpu-freq/boost.txt


What:           /sys/devices/system/cpu/cpu#/crash_notes
                /sys/devices/system/cpu/cpu#/crash_notes_size
Date:           April 2013
Contact:        kexec@lists.infradead.org
Description:    Address and size of the percpu note.

                crash_notes: the physical address of the memory that
                holds the note of cpu#.

                crash_notes_size: size of the note of cpu#.
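
                These files are typically consumed by kexec/kdump
                userspace. As an illustration only (not part of the ABI
                text), a minimal Python sketch that dumps the per-CPU
                note locations, assuming crash_notes prints a hexadecimal
                physical address and crash_notes_size a decimal byte
                count:

                # Illustrative sketch, not part of the kernel ABI.
                import glob, os

                for cpu in sorted(glob.glob(
                        "/sys/devices/system/cpu/cpu[0-9]*")):
                    notes = os.path.join(cpu, "crash_notes")
                    if not os.path.exists(notes):
                        continue
                    with open(notes) as f:
                        addr = int(f.read().strip(), 16)
                    with open(os.path.join(cpu, "crash_notes_size")) as f:
                        nbytes = int(f.read().strip())
                    print("%s: note at 0x%x, %d bytes"
                          % (os.path.basename(cpu), addr, nbytes))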