perf-stat(1)
============

NAME
----
perf-stat - Run a command and gather performance counter statistics

SYNOPSIS
--------
[verse]
'perf stat' [-e <EVENT> | --event=EVENT] [-a] <command>
'perf stat' [-e <EVENT> | --event=EVENT] [-a] -- <command> [<options>]
'perf stat' [-e <EVENT> | --event=EVENT] [-a] record [-o file] -- <command> [<options>]
'perf stat' report [-i file]

DESCRIPTION
-----------
This command runs a command and gathers performance counter statistics
from it.


OPTIONS
-------
<command>...::
	Any command you can specify in a shell.

record::
	See STAT RECORD.

report::
	See STAT REPORT.

-e::
--event=::
	Select the PMU event. Selection can be:

	- a symbolic event name (use 'perf list' to list all events)

	- a raw PMU event (eventsel+umask) in the form of rNNN where NNN is a
	  hexadecimal event descriptor.

	- a symbolically formed event like 'pmu/param1=0x3,param2/' where
	  param1 and param2 are defined as formats for the PMU in
	  /sys/bus/event_source/devices/<pmu>/format/*

	- a symbolically formed event like 'pmu/config=M,config1=N,config2=K/'
	  where M, N, K are numbers (in decimal, hex, octal format).
	  Acceptable values for each of 'config', 'config1' and 'config2'
	  parameters are defined by corresponding entries in
	  /sys/bus/event_source/devices/<pmu>/format/*

	Note that the last two syntaxes support prefix and glob matching in
	the PMU name to simplify creation of events across multiple instances
	of the same type of PMU in large systems (e.g. memory controller PMUs).
	Multiple PMU instances are typical for uncore PMUs, so the prefix
	'uncore_' is also ignored when performing this match.


-i::
--no-inherit::
	child tasks do not inherit counters
-p::
--pid=<pid>::
	stat events on existing process id (comma separated list)

-t::
--tid=<tid>::
	stat events on existing thread id (comma separated list)


-a::
--all-cpus::
	system-wide collection from all CPUs (default if no target is specified)

-c::
--scale::
	scale/normalize counter values

-d::
--detailed::
	print more detailed statistics, can be specified up to 3 times

	-d:          detailed events, L1 and LLC data cache
	-d -d:       more detailed events, dTLB and iTLB events
	-d -d -d:    very detailed events, adding prefetch events

-r::
--repeat=<n>::
	repeat command and print average + stddev (max: 100). 0 means forever.

-B::
--big-num::
	print large numbers with thousands' separators according to locale

-C::
--cpu=::
Count only on the list of CPUs provided. Multiple CPUs can be provided as a
comma-separated list with no space: 0,1. Ranges of CPUs are specified with -: 0-2.
In per-thread mode, this option is ignored. The -a option is still necessary
to activate system-wide monitoring. Default is to count on all CPUs.

-A::
--no-aggr::
Do not aggregate counts across all monitored CPUs.

-n::
--null::
	null run - don't start any counters

-v::
--verbose::
	be more verbose (show counter open errors, etc)

-x SEP::
--field-separator SEP::
Print counts using a CSV-style output to make it easy to import directly into
spreadsheets. Columns are separated by the string specified in SEP.
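For example, using ';' as the separator (the events and workload below are
only illustrative; ';' is chosen because commas can also appear in the output):

	perf stat -x \; -e cycles,instructions -- sleep 1

See the CSV FORMAT section below for the order of the printed fields.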
--table:: Display time for each run (-r option), in a table format, e.g.:

  $ perf stat --null -r 5 --table perf bench sched pipe

   Performance counter stats for 'perf bench sched pipe' (5 runs):

             # Table of individual measurements:
             5.189 (-0.293) #
             5.189 (-0.294) #
             5.186 (-0.296) #
             5.663 (+0.181) ##
             6.186 (+0.703) ####

             # Final result:
             5.483 +- 0.198 seconds time elapsed  ( +-  3.62% )

-G name::
--cgroup name::
monitor only in the container (cgroup) called "name". This option is available only
in per-cpu mode. The cgroup filesystem must be mounted. All threads belonging to
container "name" are monitored when they run on the monitored CPUs. Multiple cgroups
can be provided. Each cgroup is applied to the corresponding event, i.e., first cgroup
to first event, second cgroup to second event and so on. It is possible to provide
an empty cgroup (monitor all the time) using, e.g., -G foo,,bar. Cgroups must have
corresponding events, i.e., they always refer to events defined earlier on the command
line. If the user wants to track multiple events for a specific cgroup, the user can
use '-e e1 -e e2 -G foo,foo' or just use '-e e1 -e e2 -G foo'.

To monitor, say, 'cycles' for a cgroup and also system wide, this
command line can be used: 'perf stat -e cycles -G cgroup_name -a -e cycles'.

-o file::
--output file::
Print the output into the designated file.

--append::
Append to the output file designated with the -o option. Ignored if -o is not specified.

--log-fd::

Log output to fd, instead of stderr. Complementary to --output, and mutually exclusive
with it. --append may be used here. Examples:
     3>results  perf stat --log-fd 3          -- $cmd
     3>>results perf stat --log-fd 3 --append -- $cmd

--pre::
--post::
	Pre and post measurement hooks, e.g.:

perf stat --repeat 10 --null --sync --pre 'make -s O=defconfig-build/clean' -- make -s -j64 O=defconfig-build/ bzImage

-I msecs::
--interval-print msecs::
Print count deltas every N milliseconds (minimum: 1ms).
The overhead percentage could be high in some cases, for instance with small, sub-100ms intervals. Use with caution.
	example: 'perf stat -I 1000 -e cycles -a sleep 5'

--interval-count times::
Print count deltas for a fixed number of times.
This option should be used together with the "-I" option.
	example: 'perf stat -I 1000 --interval-count 2 -e cycles -a'

--interval-clear::
Clear the screen before the next interval.

--timeout msecs::
Stop the 'perf stat' session and print count deltas after N milliseconds (minimum: 10 ms).
This option is not supported with the "-I" option.
	example: 'perf stat --timeout 2000 -e cycles -a'

--metric-only::
Only print computed metrics. Print them in a single line.
Don't show any raw values. Not supported with --per-thread.

--per-socket::
Aggregate counts per processor socket for system-wide mode measurements. This
is a useful mode to detect imbalance between sockets. To enable this mode,
use --per-socket in addition to -a (system-wide). The output includes the
socket number and the number of online processors on that socket. This is
useful to gauge the amount of aggregation.

--per-core::
Aggregate counts per physical processor for system-wide mode measurements. This
is a useful mode to detect imbalance between physical cores. To enable this mode,
use --per-core in addition to -a (system-wide). The output includes the
core number and the number of online logical processors on that physical processor.
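For example, to see per-core cycle counts system-wide while a workload runs
(the event and the 5-second duration are only illustrative):

	perf stat -a --per-core -e cycles -- sleep 5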
--per-thread::
Aggregate counts per monitored thread, when monitoring threads (-t option)
or processes (-p option).

-D msecs::
--delay msecs::
After starting the program, wait msecs before measuring. This is useful to
filter out the startup phase of the program, which is often very different.

-T::
--transaction::

Print statistics of transactional execution if supported.

STAT RECORD
-----------
Stores stat data into perf data file.

-o file::
--output file::
Output file name.

STAT REPORT
-----------
Reads and reports stat data from perf data file.

-i file::
--input file::
Input file name.

--per-socket::
Aggregate counts per processor socket for system-wide mode measurements.

--per-core::
Aggregate counts per physical processor for system-wide mode measurements.

-M::
--metrics::
Print metrics or metricgroups specified in a comma separated list.
For a group all metrics from the group are added.
The events from the metrics are automatically measured.
See perf list output for the possible metrics and metricgroups.

-A::
--no-aggr::
Do not aggregate counts across all monitored CPUs.

--topdown::
Print top-down level 1 metrics if supported by the CPU. This makes it possible
to determine bottlenecks in the CPU pipeline for CPU-bound workloads,
by breaking down the cycles consumed into frontend bound, backend bound,
bad speculation and retiring.

Frontend bound means that the CPU cannot fetch and decode instructions fast
enough. Backend bound means that computation or memory access is the
bottleneck. Bad speculation means that the CPU wasted cycles due to branch
mispredictions and similar issues. Retiring means that the CPU computed without
an apparent bottleneck. The bottleneck is only the real bottleneck
if the workload is actually bound by the CPU and not by something else.

For best results it is usually a good idea to use it with interval
mode like -I 1000, as the bottleneck of workloads can change often.

The top-down metrics are collected per core instead of per
CPU thread. Per-core mode is automatically enabled
and -a (global monitoring) is needed, requiring root rights or
kernel.perf_event_paranoid=-1.

Topdown uses the full Performance Monitoring Unit, and needs
disabling of the NMI watchdog (as root):
echo 0 > /proc/sys/kernel/nmi_watchdog
for best results. Otherwise the bottlenecks may be inconsistent
on workloads with changing phases.

This enables --metric-only, unless overridden with --no-metric-only.

To interpret the results it is usually necessary to know on which
CPUs the workload runs. If needed, the CPUs can be forced using
taskset.

--no-merge::
Do not merge results from same PMUs.

When multiple events are created from a single event specification,
stat will, by default, aggregate the event counts and show the result
in a single row. This option disables that behavior and shows
the individual events and counts.

Multiple events are created from a single event specification when:
1. Prefix or glob matching is used for the PMU name.
2. Aliases, which are listed immediately after the Kernel PMU events
   by perf list, are used.
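For example, on a system whose memory controller exposes several uncore PMU
instances, an event alias such as 'uncore_imc/data_reads/' (illustrative only;
the alias name and the number of instances are platform dependent) expands to
one event per instance, and --no-merge prints one row per instance instead of
a single merged count:

	perf stat -a --no-merge -e uncore_imc/data_reads/ -- sleep 1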
--smi-cost::
Measure SMI cost if msr/aperf/ and msr/smi/ events are supported.

During the measurement, /sys/devices/cpu/freeze_on_smi will be set to
freeze core counters on SMI.
The aperf counter will not be affected by the setting.
The cost of SMI can be measured as (aperf - unhalted core cycles).

In practice, the percentage of SMI cycles is very useful for
performance-oriented analysis. --metric-only will be applied by default.
The output is SMI cycles%, which equals (aperf - unhalted core cycles) / aperf.

Users who want to get the actual value can apply --no-metric-only.

EXAMPLES
--------

$ perf stat -- make

   Performance counter stats for 'make':

        83723.452481      task-clock:u (msec)       #    1.004 CPUs utilized
                   0      context-switches:u        #    0.000 K/sec
                   0      cpu-migrations:u          #    0.000 K/sec
           3,228,188      page-faults:u             #    0.039 M/sec
     229,570,665,834      cycles:u                  #    2.742 GHz
     313,163,853,778      instructions:u            #    1.36  insn per cycle
      69,704,684,856      branches:u                #  832.559 M/sec
       2,078,861,393      branch-misses:u           #    2.98% of all branches

        83.409183620 seconds time elapsed

        74.684747000 seconds user
         8.739217000 seconds sys

TIMINGS
-------
As displayed in the example above, perf stat can display three types of
timings. It always displays the time the counters were enabled/alive:

        83.409183620 seconds time elapsed

For workload sessions it also displays the time the workload spent in
user and system space:

        74.684747000 seconds user
         8.739217000 seconds sys

Those times are the same as those displayed by the 'time' tool.

CSV FORMAT
----------

With -x, perf stat emits output in a not-quite-CSV format: commas in the
output are not quoted with "". To make the output easy to parse, it is
recommended to use a different separator character, e.g. -x \;

The fields are in this order:

	- optional usec time stamp in fractions of second (with -I xxx)
	- optional CPU, core, or socket identifier
	- optional number of logical CPUs aggregated
	- counter value
	- unit of the counter value or empty
	- event name
	- run time of counter
	- percentage of measurement time the counter was running
	- optional variance if multiple values are collected with -r
	- optional metric value
	- optional unit of metric

Additional metrics may be printed with all earlier fields being empty.

SEE ALSO
--------
linkperf:perf-top[1], linkperf:perf-list[1]