perf-stat(1)
============

NAME
----
perf-stat - Run a command and gather performance counter statistics

SYNOPSIS
--------
[verse]
'perf stat' [-e <EVENT> | --event=EVENT] [-a] <command>
'perf stat' [-e <EVENT> | --event=EVENT] [-a] -- <command> [<options>]
'perf stat' [-e <EVENT> | --event=EVENT] [-a] record [-o file] -- <command> [<options>]
'perf stat' report [-i file]

DESCRIPTION
-----------
This command runs a command and gathers performance counter statistics
from it.


OPTIONS
-------
<command>...::
	Any command you can specify in a shell.

record::
	See STAT RECORD.

report::
	See STAT REPORT.

-e::
--event=::
	Select the PMU event. Selection can be:

	- a symbolic event name (use 'perf list' to list all events)

	- a raw PMU event (eventsel+umask) in the form of rNNN where NNN is a
	  hexadecimal event descriptor.

	- a symbolically formed event like 'pmu/param1=0x3,param2/' where
	  param1 and param2 are defined as formats for the PMU in
	  /sys/bus/event_source/devices/<pmu>/format/*

	  'percore' is an event qualifier that sums up the event counts for both
	  hardware threads in a core. For example:
	  perf stat -A -a -e cpu/event,percore=1/,otherevent ...

	- a symbolically formed event like 'pmu/config=M,config1=N,config2=K/'
	  where M, N, K are numbers (in decimal, hex, octal format).
	  Acceptable values for each of 'config', 'config1' and 'config2'
	  parameters are defined by corresponding entries in
	  /sys/bus/event_source/devices/<pmu>/format/*

	Note that the last two syntaxes support prefix and glob matching in
	the PMU name to simplify creation of events across multiple instances
	of the same type of PMU in large systems (e.g. memory controller PMUs).
	Multiple PMU instances are typical for uncore PMUs, so the prefix
	'uncore_' is also ignored when performing this match.
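
	For example, to count two symbolic events together with a raw event
	for a command (the raw descriptor r003c is CPU model specific and
	shown only as an illustration; 'perf list' shows the event names
	valid on the system):

	  perf stat -e cycles,instructions -e r003c -- sleep 1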
61
62
63-i::
64--no-inherit::
65        child tasks do not inherit counters
66-p::
67--pid=<pid>::
68        stat events on existing process id (comma separated list)
69
70-t::
71--tid=<tid>::
72        stat events on existing thread id (comma separated list)
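
For example, to count cycles in an already-running process for ten
seconds (the pid 1234 is only a placeholder):

        perf stat -e cycles -p 1234 sleep 10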

ifdef::HAVE_LIBPFM[]
--pfm-events events::
Select a PMU event using libpfm4 syntax (see http://perfmon2.sf.net)
including support for event filters. For example '--pfm-events
inst_retired:any_p:u:c=1:i'. More than one event can be passed to the
option using the comma separator. Hardware events and generic hardware
events cannot be mixed together. The latter must be used with the -e
option. The -e option and this one can be mixed and matched.  Events
can be grouped using the {} notation.
endif::HAVE_LIBPFM[]

-a::
--all-cpus::
        system-wide collection from all CPUs (default if no target is specified)

--no-scale::
	Don't scale/normalize counter values

-d::
--detailed::
	print more detailed statistics, can be specified up to 3 times

	      -d:     detailed events, L1 and LLC data cache
	   -d -d:     more detailed events, dTLB and iTLB events
	-d -d -d:     very detailed events, adding prefetch events

-r::
--repeat=<n>::
	repeat command and print average + stddev (max: 100). 0 means forever.

-B::
--big-num::
        print large numbers with thousands' separators according to locale.
	Enabled by default. Use "--no-big-num" to disable.
	Default setting can be changed with "perf config stat.big-num=false".

-C::
--cpu=::
Count only on the list of CPUs provided. Multiple CPUs can be provided as a
comma-separated list with no space: 0,1. Ranges of CPUs are specified with -: 0-2.
In per-thread mode, this option is ignored. The -a option is still necessary
to activate system-wide monitoring. Default is to count on all CPUs.
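
For example, to count cycles only on CPUs 0 and 2-4, system-wide:

	perf stat -e cycles -a -C 0,2-4 sleep 2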

-A::
--no-aggr::
Do not aggregate counts across all monitored CPUs.

-n::
--null::
        null run - don't start any counters

-v::
--verbose::
        be more verbose (show counter open errors, etc)

-x SEP::
--field-separator SEP::
print counts using a CSV-style output to make it easy to import directly into
spreadsheets. Columns are separated by the string specified in SEP.
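
Since commas in the output are not quoted (see CSV FORMAT below), a
separator other than ',' is safer for parsing, e.g.:

	perf stat -x \; -e cycles -a sleep 1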

--table:: Display time for each run (-r option), in a table format, e.g.:

  $ perf stat --null -r 5 --table perf bench sched pipe

   Performance counter stats for 'perf bench sched pipe' (5 runs):

             # Table of individual measurements:
             5.189 (-0.293) #
             5.189 (-0.294) #
             5.186 (-0.296) #
             5.663 (+0.181) ##
             6.186 (+0.703) ####

             # Final result:
             5.483 +- 0.198 seconds time elapsed  ( +-  3.62% )

-G name::
--cgroup name::
monitor only in the container (cgroup) called "name". This option is available only
in per-cpu mode. The cgroup filesystem must be mounted. All threads belonging to
container "name" are monitored when they run on the monitored CPUs. Multiple cgroups
can be provided. Each cgroup is applied to the corresponding event, i.e., first cgroup
to first event, second cgroup to second event and so on. It is possible to provide
an empty cgroup (monitor all the time) using, e.g., -G foo,,bar. Cgroups must have
corresponding events, i.e., they always refer to events defined earlier on the command
line. If the user wants to track multiple events for a specific cgroup, the user can
use '-e e1 -e e2 -G foo,foo' or just use '-e e1 -e e2 -G foo'.

If wanting to monitor, say, 'cycles' for a cgroup and also system wide, this
command line can be used: 'perf stat -e cycles -G cgroup_name -a -e cycles'.

-o file::
--output file::
Print the output into the designated file.

--append::
Append to the output file designated with the -o option. Ignored if -o is not specified.

--log-fd::
Log output to fd, instead of stderr.  Complementary to --output, and mutually exclusive
with it.  --append may be used here.  Examples:
     3>results  perf stat --log-fd 3          -- $cmd
     3>>results perf stat --log-fd 3 --append -- $cmd

--control fd:ctl-fd[,ack-fd]::
Listen on ctl-fd descriptor for commands to control measurement ('enable': enable events,
'disable': disable events). Measurements can be started with events disabled using
the --delay=-1 option. Optionally send control command completion ('ack\n') to ack-fd
descriptor to synchronize with the controlling process. Example of a bash shell script
to enable and disable events during measurements:

#!/bin/bash

ctl_dir=/tmp/

# Create a fifo for sending control commands to perf.
ctl_fifo=${ctl_dir}perf_ctl.fifo
test -p ${ctl_fifo} && unlink ${ctl_fifo}
mkfifo ${ctl_fifo}
exec {ctl_fd}<>${ctl_fifo}

# Create a second fifo on which perf acknowledges each command.
ctl_ack_fifo=${ctl_dir}perf_ctl_ack.fifo
test -p ${ctl_ack_fifo} && unlink ${ctl_ack_fifo}
mkfifo ${ctl_ack_fifo}
exec {ctl_fd_ack}<>${ctl_ack_fifo}

# Start counting with events disabled (-D -1); perf reads commands
# from ctl_fd and acknowledges each of them on ctl_fd_ack.
perf stat -D -1 -e cpu-cycles -a -I 1000       \
          --control fd:${ctl_fd},${ctl_fd_ack} \
          -- sleep 30 &
perf_pid=$!

# Enable the events after 5 seconds, disable them again 10 seconds later.
sleep 5  && echo 'enable' >&${ctl_fd} && read -u ${ctl_fd_ack} e1 && echo "enabled(${e1})"
sleep 10 && echo 'disable' >&${ctl_fd} && read -u ${ctl_fd_ack} d1 && echo "disabled(${d1})"

# Close and remove the fifos.
exec {ctl_fd_ack}>&-
unlink ${ctl_ack_fifo}

exec {ctl_fd}>&-
unlink ${ctl_fifo}

wait -n ${perf_pid}
exit $?


--pre::
--post::
	Pre and post measurement hooks, e.g.:

perf stat --repeat 10 --null --sync --pre 'make -s O=defconfig-build/clean' -- make -s -j64 O=defconfig-build/ bzImage

-I msecs::
--interval-print msecs::
Print count deltas every N milliseconds (minimum: 1ms).
The overhead percentage could be high in some cases, for instance with small, sub 100ms intervals.  Use with caution.
	example: 'perf stat -I 1000 -e cycles -a sleep 5'

If the metric exists, it is calculated by the counts generated in this interval and the metric is printed after #.

--interval-count times::
Print count deltas for a fixed number of times.
This option should be used together with the "-I" option.
	example: 'perf stat -I 1000 --interval-count 2 -e cycles -a'

--interval-clear::
Clear the screen before the next interval.

--timeout msecs::
Stop the 'perf stat' session and print count deltas after N milliseconds (minimum: 10 ms).
This option is not supported with the "-I" option.
	example: 'perf stat --timeout 2000 -e cycles -a'

--metric-only::
Only print computed metrics. Print them in a single line.
Don't show any raw values. Not supported with --per-thread.
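
For example, to print only the computed metrics for a short
system-wide run:

	perf stat --metric-only -a -- sleep 1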

--per-socket::
Aggregate counts per processor socket for system-wide mode measurements.  This
is a useful mode to detect imbalance between sockets.  To enable this mode,
use --per-socket in addition to -a (system-wide).  The output includes the
socket number and the number of online processors on that socket. This is
useful to gauge the amount of aggregation.

--per-die::
Aggregate counts per processor die for system-wide mode measurements.  This
is a useful mode to detect imbalance between dies.  To enable this mode,
use --per-die in addition to -a (system-wide).  The output includes the
die number and the number of online processors on that die. This is
useful to gauge the amount of aggregation.

--per-core::
Aggregate counts per physical processor for system-wide mode measurements.  This
is a useful mode to detect imbalance between physical cores.  To enable this mode,
use --per-core in addition to -a (system-wide).  The output includes the
core number and the number of online logical processors on that physical processor.

--per-thread::
Aggregate counts per monitored thread, when monitoring threads (-t option)
or processes (-p option).

--per-node::
Aggregate counts per NUMA node for system-wide mode measurements. This
is a useful mode to detect imbalance between NUMA nodes. To enable this
mode, use --per-node in addition to -a (system-wide).
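
For example, to look for cycle-count imbalance across sockets:

	perf stat --per-socket -a -e cycles sleep 5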

-D msecs::
--delay msecs::
After starting the program, wait msecs before measuring (-1: start with events
disabled). This is useful to filter out the startup phase of the program,
which is often very different.

-T::
--transaction::
Print statistics of transactional execution if supported.

--metric-no-group::
By default, events to compute a metric are placed in weak groups. The
group tries to enforce scheduling all or none of the events. The
--metric-no-group option places events outside of groups and may
increase the chance of the events being scheduled, leading to more
accuracy. However, as the events may not be scheduled together,
accuracy for metrics like instructions per cycle can be lower, since
the component events may no longer be measured at the same time.

--metric-no-merge::
By default metric events in different weak groups can be shared if one
group contains all the events needed by another. In such cases one
group will be eliminated, reducing event multiplexing and making it so
that certain groups of metrics sum to 100%. A downside to sharing a
group is that the group may require multiplexing, and so accuracy for a
small group that need not have multiplexing is lowered. This option
forbids the event merging logic from sharing events between groups and
may be used to increase accuracy in this case.
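
For example, assuming 'perf list' reports an 'IPC' metric on the
system, the following measures it without event grouping:

	perf stat --metric-no-group -M IPC -a sleep 1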

STAT RECORD
-----------
Stores stat data into a perf data file.

-o file::
--output file::
Output file name.
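
For example, to save counts for later reporting (the file name is
arbitrary; perf.data is the default):

	perf stat record -e cycles -o stat.data -- sleep 1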

STAT REPORT
-----------
Reads and reports stat data from a perf data file.

-i file::
--input file::
Input file name.
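
For example, to report the counts recorded above, aggregated per core:

	perf stat report -i stat.data --per-core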

--per-socket::
Aggregate counts per processor socket for system-wide mode measurements.

--per-die::
Aggregate counts per processor die for system-wide mode measurements.

--per-core::
Aggregate counts per physical processor for system-wide mode measurements.

-M::
--metrics::
Print metrics or metricgroups specified in a comma separated list.
For a group all metrics from the group are added.
The events from the metrics are automatically measured.
See perf list output for the possible metrics and metricgroups.
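
For example ('perf list' shows the metric and metricgroup names valid
on the system; 'CPI' is common):

	perf stat -M CPI -a sleep 1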

-A::
--no-aggr::
Do not aggregate counts across all monitored CPUs.

--topdown::
Print top down level 1 metrics if supported by the CPU. This allows
determining bottlenecks in the CPU pipeline for CPU bound workloads,
by breaking down the consumed cycles into frontend bound, backend bound,
bad speculation and retiring.

Frontend bound means that the CPU cannot fetch and decode instructions fast
enough. Backend bound means that computation or memory access is the
bottleneck. Bad Speculation means that the CPU wasted cycles due to branch
mispredictions and similar issues. Retiring means that the CPU computed without
an apparent bottleneck. The bottleneck is only the real bottleneck
if the workload is actually bound by the CPU and not by something else.

For best results it is usually a good idea to use it with interval
mode like -I 1000, as the bottleneck of workloads can change often.

The top down metrics are collected per core instead of per
CPU thread. Per core mode is automatically enabled
and -a (global monitoring) is needed, requiring root rights or
kernel.perf_event_paranoid=-1.

Topdown uses the full Performance Monitoring Unit, and needs
disabling of the NMI watchdog (as root) for best results:

	echo 0 > /proc/sys/kernel/nmi_watchdog

Otherwise the bottlenecks may be inconsistent on workloads with
changing phases.

This enables --metric-only, unless overridden with --no-metric-only.

To interpret the results it is usually necessary to know on which
CPUs the workload runs. If needed the CPUs can be forced using
taskset.
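
For example, to collect level 1 topdown metrics system-wide in
interval mode:

	perf stat --topdown -a -I 1000 sleep 10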

--no-merge::
Do not merge results from same PMUs.

When multiple events are created from a single event specification,
stat will, by default, aggregate the event counts and show the result
in a single row. This option disables that behavior and shows
the individual events and counts.

Multiple events are created from a single event specification when:
1. Prefix or glob matching is used for the PMU name.
2. Aliases, which are listed immediately after the Kernel PMU events
   by perf list, are used.

--smi-cost::
Measure SMI cost if msr/aperf/ and msr/smi/ events are supported.

During the measurement, /sys/devices/cpu/freeze_on_smi will be set to
freeze core counters on SMI.
The aperf counter will not be affected by the setting.
The cost of SMI can be measured by (aperf - unhalted core cycles).

In practice, the percentage of SMI cycles is very useful for performance
oriented analysis. --metric-only will be applied by default.
The output is SMI cycles%, which equals (aperf - unhalted core cycles) / aperf.

Users who want to get the actual value can apply --no-metric-only.

--all-kernel::
Configure all used events to run in kernel space.

--all-user::
Configure all used events to run in user space.

--percore-show-thread::
The event qualifier "percore" sums up the event counts for all hardware
threads in a core and shows the counts per core.

With this option, the per-core counts produced by the "percore"
qualifier are shown for each hardware thread instead of once per core.
This is essentially a replacement for the any bit and convenient for
post processing.
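
For example (following the placeholder syntax of the "percore" example
under -e above):

	perf stat -A -a -e cpu/event,percore=1/ --percore-show-thread -- sleep 1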

EXAMPLES
--------

$ perf stat -- make

   Performance counter stats for 'make':

        83723.452481      task-clock:u (msec)       #    1.004 CPUs utilized
                   0      context-switches:u        #    0.000 K/sec
                   0      cpu-migrations:u          #    0.000 K/sec
           3,228,188      page-faults:u             #    0.039 M/sec
     229,570,665,834      cycles:u                  #    2.742 GHz
     313,163,853,778      instructions:u            #    1.36  insn per cycle
      69,704,684,856      branches:u                #  832.559 M/sec
       2,078,861,393      branch-misses:u           #    2.98% of all branches

        83.409183620 seconds time elapsed

        74.684747000 seconds user
         8.739217000 seconds sys

TIMINGS
-------
As shown in the example above, we can display 3 types of timings.
We always display the time the counters were enabled/alive:

        83.409183620 seconds time elapsed

For workload sessions we also display the time the workloads spent in
user/system lands:

        74.684747000 seconds user
         8.739217000 seconds sys

Those times are the very same as displayed by the 'time' tool.

CSV FORMAT
----------

With -x, perf stat is able to output a not-quite-CSV format.
Commas in the output are not put into "". To make it easy to parse,
it is recommended to use a different character, like -x \;

The fields are in this order:

	- optional usec time stamp in fractions of second (with -I xxx)
	- optional CPU, core, or socket identifier
	- optional number of logical CPUs aggregated
	- counter value
	- unit of the counter value or empty
	- event name
	- run time of counter
	- percentage of measurement time the counter was running
	- optional variance if multiple values are collected with -r
	- optional metric value
	- optional unit of metric

Additional metrics may be printed with all earlier fields being empty.
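
For example, interval mode with a ';' separator produces fields in the
order just described (which columns appear depends on the options
used):

	perf stat -x \; -I 1000 -e cycles -a sleep 2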

SEE ALSO
--------
linkperf:perf-top[1], linkperf:perf-list[1]