perf-stat(1)
============

NAME
----
perf-stat - Run a command and gather performance counter statistics

SYNOPSIS
--------
[verse]
'perf stat' [-e <EVENT> | --event=EVENT] [-a] <command>
'perf stat' [-e <EVENT> | --event=EVENT] [-a] -- <command> [<options>]
'perf stat' [-e <EVENT> | --event=EVENT] [-a] record [-o file] -- <command> [<options>]
'perf stat' report [-i file]

DESCRIPTION
-----------
This command runs a command and gathers performance counter statistics
from it.

OPTIONS
-------
<command>...::
	Any command you can specify in a shell.

record::
	See STAT RECORD.

report::
	See STAT REPORT.

-e::
--event=::
	Select the PMU event. Selection can be:

	- a symbolic event name (use 'perf list' to list all events)

	- a raw PMU event (eventsel+umask) in the form of rNNN where NNN is a
	  hexadecimal event descriptor.

	- a symbolically formed event like 'pmu/param1=0x3,param2/' where
	  param1 and param2 are defined as formats for the PMU in
	  /sys/bus/event_source/devices/<pmu>/format/*

	- a symbolically formed event like 'pmu/config=M,config1=N,config2=K/'
	  where M, N, K are numbers (in decimal, hex, octal format).
	  Acceptable values for each of 'config', 'config1' and 'config2'
	  parameters are defined by corresponding entries in
	  /sys/bus/event_source/devices/<pmu>/format/*

	Note that the last two syntaxes support prefix and glob matching in
	the PMU name to simplify creation of events across multiple instances
	of the same type of PMU in large systems (e.g. memory controller PMUs).
	Multiple PMU instances are typical for uncore PMUs, so the prefix
	'uncore_' is also ignored when performing this match.

-i::
--no-inherit::
	Child tasks do not inherit counters.

-p::
--pid=<pid>::
	Stat events on an existing process id (comma separated list).

-t::
--tid=<tid>::
	Stat events on an existing thread id (comma separated list).

-a::
--all-cpus::
	System-wide collection from all CPUs (default if no target is specified).

--no-scale::
	Don't scale/normalize counter values.

-d::
--detailed::
	Print more detailed statistics. Can be specified up to 3 times:

	   -d:       detailed events, L1 and LLC data cache
	   -d -d:    more detailed events, dTLB and iTLB events
	   -d -d -d: very detailed events, adding prefetch events

-r::
--repeat=<n>::
	Repeat the command and print average + stddev (max: 100). 0 means forever.

-B::
--big-num::
	Print large numbers with thousands' separators according to locale.

-C::
--cpu=::
Count only on the list of CPUs provided. Multiple CPUs can be provided as a
comma-separated list with no space: 0,1. Ranges of CPUs are specified with -: 0-2.
In per-thread mode, this option is ignored. The -a option is still necessary
to activate system-wide monitoring. Default is to count on all CPUs.

-A::
--no-aggr::
Do not aggregate counts across all monitored CPUs.

-n::
--null::
	Null run - don't start any counters.

-v::
--verbose::
	Be more verbose (show counter open errors, etc).

-x SEP::
--field-separator SEP::
Print counts using a CSV-style output to make it easy to import directly into
spreadsheets. Columns are separated by the string specified in SEP.
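
For example, a semicolon-separated run over an arbitrary workload (the events
chosen here are only illustrative) could look like:

	$ perf stat -x \; -e cycles,instructions -- sleep 1

See the CSV FORMAT section below for the order of the emitted fields.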

--table:: Display time for each run (-r option), in a table format, e.g.:

  $ perf stat --null -r 5 --table perf bench sched pipe

   Performance counter stats for 'perf bench sched pipe' (5 runs):

           # Table of individual measurements:
           5.189 (-0.293) #
           5.189 (-0.294) #
           5.186 (-0.296) #
           5.663 (+0.181) ##
           6.186 (+0.703) ####

           # Final result:
           5.483 +- 0.198 seconds time elapsed  ( +- 3.62% )

-G name::
--cgroup name::
Monitor only in the container (cgroup) called "name". This option is available only
in per-cpu mode. The cgroup filesystem must be mounted. All threads belonging to
container "name" are monitored when they run on the monitored CPUs. Multiple cgroups
can be provided. Each cgroup is applied to the corresponding event, i.e., first cgroup
to first event, second cgroup to second event and so on. It is possible to provide
an empty cgroup (monitor all the time) using, e.g., -G foo,,bar. Cgroups must have
corresponding events, i.e., they always refer to events defined earlier on the command
line. If the user wants to track multiple events for a specific cgroup, the user can
use '-e e1 -e e2 -G foo,foo' or just use '-e e1 -e e2 -G foo'.

If you want to monitor, say, 'cycles' for a cgroup and also system-wide, this
command line can be used: 'perf stat -e cycles -G cgroup_name -a -e cycles'.

-o file::
--output file::
Print the output into the designated file.

--append::
Append to the output file designated with the -o option. Ignored if -o is not specified.

--log-fd::
Log output to fd, instead of stderr. An alternative to --output, and mutually exclusive
with it. --append may be used here. Examples:

     3>results  perf stat --log-fd 3          -- $cmd
     3>>results perf stat --log-fd 3 --append -- $cmd

--pre::
--post::
	Pre and post measurement hooks, e.g.:

perf stat --repeat 10 --null --sync --pre 'make -s O=defconfig-build/clean' -- make -s -j64 O=defconfig-build/ bzImage

-I msecs::
--interval-print msecs::
Print count deltas every N milliseconds (minimum: 1ms).
The overhead percentage could be high in some cases, for instance with small, sub-100ms intervals. Use with caution.
	example: 'perf stat -I 1000 -e cycles -a sleep 5'

--interval-count times::
Print count deltas a fixed number of times.
This option should be used together with the "-I" option.
	example: 'perf stat -I 1000 --interval-count 2 -e cycles -a'

--interval-clear::
Clear the screen before the next interval.

--timeout msecs::
Stop the 'perf stat' session and print count deltas after N milliseconds (minimum: 10 ms).
This option is not supported with the "-I" option.
	example: 'perf stat --timeout 2000 -e cycles -a'

--metric-only::
Only print computed metrics. Print them in a single line.
Don't show any raw values. Not supported with --per-thread.

--per-socket::
Aggregate counts per processor socket for system-wide mode measurements. This
is a useful mode to detect imbalance between sockets. To enable this mode,
use --per-socket in addition to -a (system-wide). The output includes the
socket number and the number of online processors on that socket. This is
useful to gauge the amount of aggregation.

--per-core::
Aggregate counts per physical processor for system-wide mode measurements. This
is a useful mode to detect imbalance between physical cores. To enable this mode,
use --per-core in addition to -a (system-wide). The output includes the
core number and the number of online logical processors on that physical processor.
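
As an illustration (the event and the 'sleep 5' workload below are arbitrary
placeholders), system-wide cycle counts aggregated per physical core for about
five seconds can be requested with:

	$ perf stat -a --per-core -e cycles -- sleep 5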

--per-thread::
Aggregate counts per monitored thread, when monitoring threads (-t option)
or processes (-p option).

-D msecs::
--delay msecs::
After starting the program, wait msecs before measuring. This is useful to
filter out the startup phase of the program, which is often very different.

-T::
--transaction::
Print statistics of transactional execution if supported.

STAT RECORD
-----------
Stores stat data into a perf data file.

-o file::
--output file::
Output file name.

STAT REPORT
-----------
Reads and reports stat data from a perf data file.

-i file::
--input file::
Input file name.

--per-socket::
Aggregate counts per processor socket for system-wide mode measurements.

--per-core::
Aggregate counts per physical processor for system-wide mode measurements.

-M::
--metrics::
Print metrics or metricgroups specified in a comma separated list.
For a group all metrics from the group are added.
The events from the metrics are automatically measured.
See perf list output for the possible metrics and metricgroups.

-A::
--no-aggr::
Do not aggregate counts across all monitored CPUs.

--topdown::
Print top down level 1 metrics if supported by the CPU. This makes it possible
to determine bottlenecks in the CPU pipeline for CPU bound workloads,
by breaking the cycles consumed down into frontend bound, backend bound,
bad speculation and retiring.

Frontend bound means that the CPU cannot fetch and decode instructions fast
enough. Backend bound means that computation or memory access is the
bottleneck. Bad Speculation means that the CPU wasted cycles due to branch
mispredictions and similar issues. Retiring means that the CPU computed without
an apparent bottleneck. The bottleneck is only the real bottleneck
if the workload is actually bound by the CPU and not by something else.

For best results it is usually a good idea to use it with interval
mode like -I 1000, as the bottleneck of workloads can change often.

The top down metrics are collected per core instead of per
CPU thread. Per core mode is automatically enabled
and -a (global monitoring) is needed, requiring root rights or
kernel.perf_event_paranoid=-1.

Topdown uses the full Performance Monitoring Unit, and for best results
the NMI watchdog needs to be disabled (as root):

	echo 0 > /proc/sys/kernel/nmi_watchdog

Otherwise the bottlenecks may be inconsistent on workloads with changing phases.

This enables --metric-only, unless overridden with --no-metric-only.

To interpret the results it is usually necessary to know which CPUs the
workload runs on. If needed, the workload can be pinned to specific CPUs
using taskset.

--no-merge::
Do not merge results from the same PMUs.

When multiple events are created from a single event specification,
stat will, by default, aggregate the event counts and show the result
in a single row. This option disables that behavior and shows
the individual events and counts.

Multiple events are created from a single event specification when:

1. Prefix or glob matching is used for the PMU name.
2. Aliases, which are listed immediately after the Kernel PMU events
   by perf list, are used.
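
As a sketch (the uncore PMU and event name below are platform dependent and
only assumed here), showing memory controller reads separately for each
matched PMU instance might look like:

	$ perf stat -a --no-merge -e 'uncore_imc/data_reads/' -- sleep 1

Without --no-merge, counts from all PMU instances matched by the specification
would be summed into a single row.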

--smi-cost::
Measure SMI cost if msr/aperf/ and msr/smi/ events are supported.

During the measurement, /sys/devices/cpu/freeze_on_smi will be set to
freeze core counters on SMI.
The aperf counter will not be affected by the setting.
The cost of SMI can be measured as (aperf - unhalted core cycles).

In practice, the percentage of SMI cycles is very useful for performance
oriented analysis. --metric-only will be applied by default.
The output is SMI cycles%, equal to (aperf - unhalted core cycles) / aperf.

Users who want to get the actual value can apply --no-metric-only.

EXAMPLES
--------

$ perf stat -- make

   Performance counter stats for 'make':

        83723.452481      task-clock:u (msec)       #    1.004 CPUs utilized
                   0      context-switches:u        #    0.000 K/sec
                   0      cpu-migrations:u          #    0.000 K/sec
           3,228,188      page-faults:u             #    0.039 M/sec
     229,570,665,834      cycles:u                  #    2.742 GHz
     313,163,853,778      instructions:u            #    1.36  insn per cycle
      69,704,684,856      branches:u                #  832.559 M/sec
       2,078,861,393      branch-misses:u           #    2.98% of all branches

        83.409183620 seconds time elapsed

        74.684747000 seconds user
         8.739217000 seconds sys

TIMINGS
-------
As shown in the example above, perf stat can display three types of timings.
It always displays the time the counters were enabled/alive:

        83.409183620 seconds time elapsed

For workload sessions it also displays the time the workload spent in
user and system land:

        74.684747000 seconds user
         8.739217000 seconds sys

Those times are the same as displayed by the 'time' tool.

CSV FORMAT
----------

With -x, perf stat is able to output a not-quite-CSV format; commas in the
output are not quoted with "". To make the output easy to parse, it is
recommended to use a different character, like -x \;

The fields are in this order:

	- optional usec time stamp in fractions of second (with -I xxx)
	- optional CPU, core, or socket identifier
	- optional number of logical CPUs aggregated
	- counter value
	- unit of the counter value or empty
	- event name
	- run time of counter
	- percentage of measurement time the counter was running
	- optional variance if multiple values are collected with -r
	- optional metric value
	- optional unit of metric

Additional metrics may be printed with all earlier fields being empty.

SEE ALSO
--------
linkperf:perf-top[1], linkperf:perf-list[1]