What:		/sys/devices/system/cpu/
Date:		pre-git history
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:
		A collection of both global and individual CPU attributes

		Individual CPU attributes are contained in subdirectories
		named by the kernel's logical CPU number, e.g.:

		/sys/devices/system/cpu/cpu#/

What:		/sys/devices/system/cpu/kernel_max
		/sys/devices/system/cpu/offline
		/sys/devices/system/cpu/online
		/sys/devices/system/cpu/possible
		/sys/devices/system/cpu/present
Date:		December 2008
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	CPU topology files that describe kernel limits related to
		hotplug. Briefly:

		kernel_max: the maximum cpu index allowed by the kernel
		configuration.

		offline: cpus that are not online because they have been
		HOTPLUGGED off or exceed the limit of cpus allowed by the
		kernel configuration (kernel_max above).

		online: cpus that are online and being scheduled.

		possible: cpus that have been allocated resources and can be
		brought online if they are present.

		present: cpus that have been identified as being present in
		the system.

		See Documentation/cputopology.txt for more information.
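
		For example, the list syntax used by these files ("0-7" or
		"0,2-4" style ranges) can be expanded from user space with a
		small sketch in Python such as:

		  def cpulist(path):
		      cpus = set()
		      text = open(path).read().strip()
		      if text:
		          for part in text.split(","):
		              lo, _, hi = part.partition("-")
		              cpus.update(range(int(lo), int(hi or lo) + 1))
		      return cpus

		  print("online: ", sorted(cpulist("/sys/devices/system/cpu/online")))
		  print("offline:", sorted(cpulist("/sys/devices/system/cpu/offline")))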


What:		/sys/devices/system/cpu/probe
		/sys/devices/system/cpu/release
Date:		November 2009
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	Dynamic addition and removal of CPUs.  This is not hotplug
		removal; this is meant to be a complete removal/addition of
		the CPU from the system.

		probe: writes to this file will dynamically add a CPU to the
		system.  Information written to the file to add CPUs is
		architecture specific.

		release: writes to this file will dynamically remove a CPU
		from the system.  Information written to the file to remove
		CPUs is architecture specific.
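
		As a rough illustration only (these files exist only on
		architectures that implement them, the write requires root,
		and the "0x1" string below is a purely hypothetical
		placeholder for whatever the architecture expects), adding a
		CPU could look like:

		  # "0x1" is a made-up placeholder; the real format is
		  # architecture specific.
		  with open("/sys/devices/system/cpu/probe", "w") as f:
		      f.write("0x1")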

What:		/sys/devices/system/cpu/cpu#/node
Date:		October 2009
Contact:	Linux memory management mailing list <linux-mm@kvack.org>
Description:	Discover NUMA node a CPU belongs to

		When CONFIG_NUMA is enabled, a symbolic link that points
		to the corresponding NUMA node directory.

		For example, the following symlink is created for cpu42
		in NUMA node 2:

		/sys/devices/system/cpu/cpu42/node2 -> ../../node/node2
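
		One way to discover the node from a script (a sketch in
		Python, assuming CONFIG_NUMA is enabled and cpu42 exists) is
		to look for the single node* entry in the CPU directory:

		  import glob, os
		  links = glob.glob("/sys/devices/system/cpu/cpu42/node*")
		  if links:
		      # e.g. ".../cpu42/node2" means cpu42 is on NUMA node 2
		      print("cpu42 is on", os.path.basename(links[0]))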


What:		/sys/devices/system/cpu/cpu#/topology/core_id
		/sys/devices/system/cpu/cpu#/topology/core_siblings
		/sys/devices/system/cpu/cpu#/topology/core_siblings_list
		/sys/devices/system/cpu/cpu#/topology/physical_package_id
		/sys/devices/system/cpu/cpu#/topology/thread_siblings
		/sys/devices/system/cpu/cpu#/topology/thread_siblings_list
Date:		December 2008
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	CPU topology files that describe a logical CPU's relationship
		to other cores and threads in the same physical package.

		One cpu# directory is created per logical CPU in the system,
		e.g. /sys/devices/system/cpu/cpu42/.

		Briefly, the files above are:

		core_id: the CPU core ID of cpu#. Typically it is the
		hardware platform's identifier (rather than the kernel's).
		The actual value is architecture and platform dependent.

		core_siblings: internal kernel map of cpu#'s hardware threads
		within the same physical_package_id.

		core_siblings_list: human-readable list of the logical CPU
		numbers within the same physical_package_id as cpu#.

		physical_package_id: physical package id of cpu#. Typically
		corresponds to a physical socket number, but the actual value
		is architecture and platform dependent.

		thread_siblings: internal kernel map of cpu#'s hardware
		threads within the same core as cpu#.

		thread_siblings_list: human-readable list of cpu#'s hardware
		threads within the same core as cpu#.

		See Documentation/cputopology.txt for more information.
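
		For example, a small Python sketch (assuming cpu0 exists) that
		prints these attributes for one CPU:

		  topo = "/sys/devices/system/cpu/cpu0/topology/"
		  for name in ("core_id", "physical_package_id",
		               "core_siblings_list", "thread_siblings_list"):
		      with open(topo + name) as f:
		          print(name, "=", f.read().strip())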


What:		/sys/devices/system/cpu/cpuidle/current_driver
		/sys/devices/system/cpu/cpuidle/current_governor_ro
Date:		September 2007
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	Discover cpuidle policy and mechanism

		Various CPUs today support multiple idle levels that are
		differentiated by varying exit latencies and power
		consumption during idle.

		Idle policy (governor) is differentiated from idle mechanism
		(driver).

		current_driver: displays current idle mechanism

		current_governor_ro: displays current idle policy

		See files in Documentation/cpuidle/ for more information.
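
		Both attributes are plain text and read-only; a minimal sketch
		in Python to display them:

		  base = "/sys/devices/system/cpu/cpuidle/"
		  for name in ("current_driver", "current_governor_ro"):
		      with open(base + name) as f:
		          print(name, "=", f.read().strip())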


What:		/sys/devices/system/cpu/cpu#/cpufreq/*
Date:		pre-git history
Contact:	cpufreq@vger.kernel.org
Description:	Discover and change clock speed of CPUs

		Clock scaling allows you to change the clock speed of the
		CPUs on the fly. This is a nice method to save battery
		power, because the lower the clock speed, the less power
		the CPU consumes.

		There are many knobs to tweak in this directory.

		See files in Documentation/cpu-freq/ for more information.

		In particular, read Documentation/cpu-freq/user-guide.txt
		to learn how to control the knobs.
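
		As one common example (a sketch; scaling_cur_freq and
		scaling_governor are two of the knobs found here, reporting
		the current frequency in kHz and the active governor name),
		the state of cpu0 can be read with:

		  cpufreq = "/sys/devices/system/cpu/cpu0/cpufreq/"
		  for name in ("scaling_cur_freq", "scaling_governor"):
		      with open(cpufreq + name) as f:
		          print(name, "=", f.read().strip())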


What:		/sys/devices/system/cpu/cpu*/cache/index3/cache_disable_{0,1}
Date:		August 2008
KernelVersion:	2.6.27
Contact:	discuss@x86-64.org
Description:	Disable L3 cache indices

		These files exist in every CPU's cache/index3 directory. Each
		cache_disable_{0,1} file corresponds to one disable slot which
		can be used to disable a cache index. Reading from these files
		on a processor with this functionality will return the currently
		disabled index for that node. There is one L3 structure per
		node, or per internal node on MCM machines. Writing a valid
		index to one of these files will cause the specified cache
		index to be disabled.

		All AMD processors with L3 caches provide this functionality.
		For details, see BKDGs at
		http://developer.amd.com/documentation/guides/Pages/default.aspx
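
		As an illustration only (a sketch; the index value 12 is an
		arbitrary example, root privileges are required, and the files
		are present only on processors with this feature), disable
		slot 0 could be inspected and used like this:

		  slot = "/sys/devices/system/cpu/cpu0/cache/index3/cache_disable_0"
		  with open(slot) as f:        # read currently disabled index
		      print("currently disabled:", f.read().strip())
		  with open(slot, "w") as f:   # disable an (example) index
		      f.write("12")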


What:		/sys/devices/system/cpu/cpufreq/boost
Date:		August 2012
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	Processor frequency boosting control

		This switch controls the boost setting for the whole system.
		Boosting allows the CPU and the firmware to run at a frequency
		beyond its nominal limit.
		More details can be found in Documentation/cpu-freq/boost.txt
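
		The switch holds "0" or "1"; for example, boosting can be
		turned off system-wide with (a sketch, root privileges
		assumed):

		  with open("/sys/devices/system/cpu/cpufreq/boost", "w") as f:
		      f.write("0")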