perf-stat(1)
============

NAME
----
perf-stat - Run a command and gather performance counter statistics

SYNOPSIS
--------
[verse]
'perf stat' [-e <EVENT> | --event=EVENT] [-a] <command>
'perf stat' [-e <EVENT> | --event=EVENT] [-a] -- <command> [<options>]
'perf stat' [-e <EVENT> | --event=EVENT] [-a] record [-o file] -- <command> [<options>]
'perf stat' report [-i file]

DESCRIPTION
-----------
This command runs a command and gathers performance counter statistics
from it.

OPTIONS
-------
<command>...::
	Any command you can specify in a shell.

record::
	See STAT RECORD.

report::
	See STAT REPORT.

-e::
--event=::
	Select the PMU event. Selection can be:

	- a symbolic event name (use 'perf list' to list all events)

	- a raw PMU event (eventsel+umask) in the form of rNNN where NNN is a
	  hexadecimal event descriptor.

	- a symbolically formed event like 'pmu/param1=0x3,param2/' where
	  param1 and param2 are defined as formats for the PMU in
	  /sys/bus/event_source/devices/<pmu>/format/*

	- a symbolically formed event like 'pmu/config=M,config1=N,config2=K/'
	  where M, N, K are numbers (in decimal, hex, octal format).
	  Acceptable values for each of 'config', 'config1' and 'config2'
	  parameters are defined by corresponding entries in
	  /sys/bus/event_source/devices/<pmu>/format/*

	Note that the last two syntaxes support prefix and glob matching in
	the PMU name to simplify creation of events across multiple instances
	of the same type of PMU in large systems (e.g. memory controller PMUs).
	Multiple PMU instances are typical for uncore PMUs, so the prefix
	'uncore_' is also ignored when performing this match.

-i::
--no-inherit::
	child tasks do not inherit counters

-p::
--pid=<pid>::
	stat events on existing process id (comma separated list)

-t::
--tid=<tid>::
	stat events on existing thread id (comma separated list)

-a::
--all-cpus::
	system-wide collection from all CPUs (default if no target is specified)

-c::
--scale::
	scale/normalize counter values

-d::
--detailed::
	print more detailed statistics, can be specified up to 3 times

	          -d: detailed events, L1 and LLC data cache
	       -d -d: more detailed events, dTLB and iTLB events
	    -d -d -d: very detailed events, adding prefetch events

-r::
--repeat=<n>::
	repeat command and print average + stddev (max: 100). 0 means forever.

-B::
--big-num::
	print large numbers with thousands' separators according to locale

-C::
--cpu=::
Count only on the list of CPUs provided. Multiple CPUs can be provided as a
comma-separated list with no space: 0,1. Ranges of CPUs are specified with -: 0-2.
In per-thread mode, this option is ignored. The -a option is still necessary
to activate system-wide monitoring. Default is to count on all CPUs.

-A::
--no-aggr::
Do not aggregate counts across all monitored CPUs.

-n::
--null::
	null run - don't start any counters

-v::
--verbose::
	be more verbose (show counter open errors, etc)

-x SEP::
--field-separator SEP::
print counts using a CSV-style output to make it easy to import directly into
spreadsheets. Columns are separated by the string specified in SEP.
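
For example, one possible invocation producing semicolon-separated counts
(a sketch; any events known to 'perf list' could be substituted for the
ones shown):

	$ perf stat -x \; -e task-clock,cycles,instructions -- sleep 1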

--table:: Display time for each run (-r option), in a table format, e.g.:

	$ perf stat --null -r 5 --table perf bench sched pipe

	 Performance counter stats for 'perf bench sched pipe' (5 runs):

	   # Table of individual measurements:
	   5.189 (-0.293) #
	   5.189 (-0.294) #
	   5.186 (-0.296) #
	   5.663 (+0.181) ##
	   6.186 (+0.703) ####

	   # Final result:
	   5.483 +- 0.198 seconds time elapsed ( +- 3.62% )

-G name::
--cgroup name::
monitor only in the container (cgroup) called "name". This option is available only
in per-cpu mode. The cgroup filesystem must be mounted. All threads belonging to
container "name" are monitored when they run on the monitored CPUs. Multiple cgroups
can be provided. Each cgroup is applied to the corresponding event, i.e., first cgroup
to first event, second cgroup to second event and so on. It is possible to provide
an empty cgroup (monitor all the time) using, e.g., -G foo,,bar. Cgroups must have
corresponding events, i.e., they always refer to events defined earlier on the command
line. To track multiple events for a specific cgroup, use '-e e1 -e e2 -G foo,foo'
or just '-e e1 -e e2 -G foo'.

To monitor, say, 'cycles' both for a cgroup and system-wide, this command line
can be used: 'perf stat -e cycles -G cgroup_name -a -e cycles'.

-o file::
--output file::
Print the output into the designated file.

--append::
Append to the output file designated with the -o option. Ignored if -o is not specified.

--log-fd::
Log output to fd, instead of stderr. Complementary to --output, and mutually exclusive
with it. --append may be used here. Examples:

	3>results perf stat --log-fd 3 -- $cmd
	3>>results perf stat --log-fd 3 --append -- $cmd

--pre::
--post::
	Pre and post measurement hooks, e.g.:

	perf stat --repeat 10 --null --sync --pre 'make -s O=defconfig-build/clean' -- make -s -j64 O=defconfig-build/ bzImage

-I msecs::
--interval-print msecs::
Print count deltas every N milliseconds (minimum: 1 ms).
The overhead percentage could be high in some cases, for instance with small,
sub-100ms intervals. Use with caution.
	example: 'perf stat -I 1000 -e cycles -a sleep 5'

--interval-count times::
Print count deltas a fixed number of times.
This option should be used together with the "-I" option.
	example: 'perf stat -I 1000 --interval-count 2 -e cycles -a'

--timeout msecs::
Stop the 'perf stat' session and print count deltas after N milliseconds (minimum: 10 ms).
This option is not supported with the "-I" option.
	example: 'perf stat --timeout 2000 -e cycles -a'

--metric-only::
Only print computed metrics. Print them in a single line.
Don't show any raw values. Not supported with --per-thread.

--per-socket::
Aggregate counts per processor socket for system-wide mode measurements. This
is a useful mode to detect imbalance between sockets. To enable this mode,
use --per-socket in addition to -a (system-wide). The output includes the
socket number and the number of online processors on that socket. This is
useful to gauge the amount of aggregation.

--per-core::
Aggregate counts per physical processor for system-wide mode measurements. This
is a useful mode to detect imbalance between physical cores. To enable this mode,
use --per-core in addition to -a (system-wide). The output includes the
core number and the number of online logical processors on that physical processor.
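
For example, a plausible way to look for imbalance across physical cores for
5 seconds (a sketch; it assumes sufficient privileges for system-wide
monitoring, and any workload could stand in for 'sleep 5'):

	$ perf stat --per-core -a -e cycles -- sleep 5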

--per-thread::
Aggregate counts per monitored thread, when monitoring threads (-t option)
or processes (-p option).

-D msecs::
--delay msecs::
After starting the program, wait msecs before measuring. This is useful to
filter out the startup phase of the program, which is often very different.

-T::
--transaction::
Print statistics of transactional execution if supported.

STAT RECORD
-----------
Stores stat data into perf data file.

-o file::
--output file::
Output file name.

STAT REPORT
-----------
Reads and reports stat data from perf data file.

-i file::
--input file::
Input file name.

--per-socket::
Aggregate counts per processor socket for system-wide mode measurements.

--per-core::
Aggregate counts per physical processor for system-wide mode measurements.

-M::
--metrics::
Print metrics or metricgroups specified in a comma separated list.
For a group all metrics from the group are added.
The events from the metrics are automatically measured.
See perf list output for the possible metrics and metricgroups.

-A::
--no-aggr::
Do not aggregate counts across all monitored CPUs.

--topdown::
Print top-down level 1 metrics if supported by the CPU. This allows
determining bottlenecks in the CPU pipeline for CPU-bound workloads,
by breaking down the cycles consumed into frontend bound, backend bound,
bad speculation and retiring.

Frontend bound means that the CPU cannot fetch and decode instructions fast
enough. Backend bound means that computation or memory access is the
bottleneck. Bad speculation means that the CPU wasted cycles due to branch
mispredictions and similar issues. Retiring means that the CPU computed without
an apparent bottleneck. The bottleneck is only the real bottleneck
if the workload is actually bound by the CPU and not by something else.

For best results it is usually a good idea to use it with interval
mode like -I 1000, as the bottleneck of workloads can change often.

The top-down metrics are collected per core instead of per
CPU thread. Per-core mode is automatically enabled
and -a (global monitoring) is needed, requiring root rights or
kernel.perf_event_paranoid=-1.

Topdown uses the full Performance Monitoring Unit, and needs
disabling of the NMI watchdog (as root):
echo 0 > /proc/sys/kernel/nmi_watchdog
for best results. Otherwise the bottlenecks may be inconsistent
on workloads with changing phases.

This enables --metric-only, unless overridden with --no-metric-only.

To interpret the results it usually helps to know which CPUs the
workload runs on. If needed, the CPUs can be forced using taskset.

--no-merge::
Do not merge results from same PMUs.

When multiple events are created from a single event specification,
stat will, by default, aggregate the event counts and show the result
in a single row. This option disables that behavior and shows
the individual events and counts.

Multiple events are created from a single event specification when:

1. Prefix or glob matching is used for the PMU name.

2. Aliases, which are listed immediately after the Kernel PMU events
   by perf list, are used.

--smi-cost::
Measure SMI cost if msr/aperf/ and msr/smi/ events are supported.

During the measurement, /sys/devices/cpu/freeze_on_smi will be set to
freeze core counters on SMI.
The aperf counter will not be affected by the setting.
The cost of SMI can be measured as (aperf - unhalted core cycles).

In practice, the percentage of SMI cycles is very useful for performance
oriented analysis. --metric-only will be applied by default.
The output is SMI cycles%, which equals (aperf - unhalted core cycles) / aperf.

Users who want to get the actual value can apply --no-metric-only.

EXAMPLES
--------

$ perf stat -- make

   Performance counter stats for 'make':

        83723.452481      task-clock:u (msec)       #    1.004 CPUs utilized
                   0      context-switches:u        #    0.000 K/sec
                   0      cpu-migrations:u          #    0.000 K/sec
           3,228,188      page-faults:u             #    0.039 M/sec
     229,570,665,834      cycles:u                  #    2.742 GHz
     313,163,853,778      instructions:u            #    1.36  insn per cycle
      69,704,684,856      branches:u                #  832.559 M/sec
       2,078,861,393      branch-misses:u           #    2.98% of all branches

        83.409183620 seconds time elapsed

        74.684747000 seconds user
         8.739217000 seconds sys

TIMINGS
-------
As displayed in the example above, perf stat can display 3 types of timings.
We always display the time the counters were enabled/alive:

        83.409183620 seconds time elapsed

For workload sessions we also display the time the workload spent in
user and system land:

        74.684747000 seconds user
         8.739217000 seconds sys

Those times are the very same as displayed by the 'time' tool.

CSV FORMAT
----------

With -x, perf stat emits CSV-style output. Note that commas in the output
are not enclosed in "", so to keep the output easy to parse it is
recommended to use a different separator character, e.g. -x \;

The fields are in this order:

	- optional usec time stamp in fractions of second (with -I xxx)
	- optional CPU, core, or socket identifier
	- optional number of logical CPUs aggregated
	- counter value
	- unit of the counter value or empty
	- event name
	- run time of counter
	- percentage of measurement time the counter was running
	- optional variance if multiple values are collected with -r
	- optional metric value
	- optional unit of metric

Additional metrics may be printed with all earlier fields being empty.
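
As an illustration, the following sketch writes semicolon-separated counts
to a file (the name results.csv is arbitrary) and extracts "event: value"
pairs from it. It assumes no -I and no aggregation options are used, so the
counter value and event name land in the 1st and 3rd fields; the grep step
drops the '#'-prefixed comment header that perf stat writes into output
files:

	$ perf stat -x \; -o results.csv -e cycles,instructions -- sleep 1
	$ grep -v '^#' results.csv | awk -F';' 'NF { print $3 ": " $1 }'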

SEE ALSO
--------
linkperf:perf-top[1], linkperf:perf-list[1]