1.. SPDX-License-Identifier: CC-BY-SA-2.0-UK
2.. highlight:: shell
3
4***************************************************************
5Basic Usage (with examples) for each of the Yocto Tracing Tools
6***************************************************************
7
8|
9
10This chapter presents basic usage examples for each of the tracing
11tools.
12
13perf
14====
15
16The 'perf' tool is the profiling and tracing tool that comes bundled
17with the Linux kernel.
18
19Don't let the fact that it's part of the kernel fool you into thinking
20that it's only for tracing and profiling the kernel --- you can indeed use
21it to trace and profile just the kernel, but you can also use it to
22profile specific applications separately (with or without kernel
23context), and you can also use it to trace and profile the kernel and
24all applications on the system simultaneously to gain a system-wide view
25of what's going on.
26
27In many ways, perf aims to be a superset of all the tracing and
28profiling tools available in Linux today, including all the other tools
29covered in this HOWTO. The past couple of years have seen perf subsume a
30lot of the functionality of those other tools and, at the same time,
31those other tools have removed large portions of their previous
32functionality and replaced it with calls to the equivalent functionality
33now implemented by the perf subsystem. Extrapolation suggests that at
34some point those other tools will simply become completely redundant and
35go away; until then, we'll cover those other tools in these pages and in
36many cases show how the same things can be accomplished in perf and the
37other tools when it seems useful to do so.
38
39The coverage below details some of the most common ways you'll likely
40want to apply the tool; full documentation can be found either within
41the tool itself or in the man pages at
42`perf(1) <https://linux.die.net/man/1/perf>`__.
43
44Perf Setup
45----------
46
47For this section, we'll assume you've already performed the basic setup
48outlined in the ":ref:`profile-manual/intro:General Setup`" section.
49
50In particular, you'll get the most mileage out of perf if you profile an
51image built with the following in your ``local.conf`` file::
52
53   INHIBIT_PACKAGE_STRIP = "1"
54
perf runs on the target system for the most part. You can archive
profile data and copy it to the host for analysis (a sketch of that
workflow appears below), but for the rest of this document we assume
you're logged into the target over ssh and will be running the perf
commands there.
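
If you do later want to do the analysis on the host, a minimal sketch of
that workflow (the host-side user, paths, and filenames here are
hypothetical) is to bundle up the objects referenced by the profile data
using 'perf archive', copy everything over, and unpack the archive where
perf expects to find it before running 'perf report'::

   root@crownbay:~# perf archive
   root@crownbay:~# scp perf.data perf.data.tar.bz2 user@host:

   [user@host ~]$ mkdir -p ~/.debug && tar xf perf.data.tar.bz2 -C ~/.debug
   [user@host ~]$ perf report -i perf.data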
59
60Basic Perf Usage
61----------------
62
63The perf tool is pretty much self-documenting. To remind yourself of the
64available commands, simply type 'perf', which will show you basic usage
65along with the available perf subcommands::
66
67   root@crownbay:~# perf
68
69   usage: perf [--version] [--help] COMMAND [ARGS]
70
71   The most commonly used perf commands are:
72     annotate        Read perf.data (created by perf record) and display annotated code
73     archive         Create archive with object files with build-ids found in perf.data file
74     bench           General framework for benchmark suites
75     buildid-cache   Manage build-id cache.
76     buildid-list    List the buildids in a perf.data file
77     diff            Read two perf.data files and display the differential profile
78     evlist          List the event names in a perf.data file
79     inject          Filter to augment the events stream with additional information
80     kmem            Tool to trace/measure kernel memory(slab) properties
81     kvm             Tool to trace/measure kvm guest os
82     list            List all symbolic event types
83     lock            Analyze lock events
84     probe           Define new dynamic tracepoints
85     record          Run a command and record its profile into perf.data
86     report          Read perf.data (created by perf record) and display the profile
87     sched           Tool to trace/measure scheduler properties (latencies)
88     script          Read perf.data (created by perf record) and display trace output
89     stat            Run a command and gather performance counter statistics
90     test            Runs sanity tests.
91     timechart       Tool to visualize total system behavior during a workload
92     top             System profiling tool.
93
94   See 'perf help COMMAND' for more information on a specific command.
95
96
97Using perf to do Basic Profiling
98~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
99
100As a simple test case, we'll profile the 'wget' of a fairly large file,
101which is a minimally interesting case because it has both file and
102network I/O aspects, and at least in the case of standard Yocto images,
103it's implemented as part of BusyBox, so the methods we use to analyze it
104can be used in a very similar way to the whole host of supported BusyBox
105applets in Yocto. ::
106
107   root@crownbay:~# rm linux-2.6.19.2.tar.bz2; \
108                    wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2
109
110The quickest and easiest way to get some basic overall data about what's
111going on for a particular workload is to profile it using 'perf stat'.
112'perf stat' basically profiles using a few default counters and displays
113the summed counts at the end of the run::
114
115   root@crownbay:~# perf stat wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2
116   Connecting to downloads.yoctoproject.org (140.211.169.59:80)
117   linux-2.6.19.2.tar.b 100% |***************************************************| 41727k  0:00:00 ETA
118
119   Performance counter stats for 'wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2':
120
121         4597.223902 task-clock                #    0.077 CPUs utilized
122               23568 context-switches          #    0.005 M/sec
123                  68 CPU-migrations            #    0.015 K/sec
124                 241 page-faults               #    0.052 K/sec
125          3045817293 cycles                    #    0.663 GHz
126     <not supported> stalled-cycles-frontend
127     <not supported> stalled-cycles-backend
128           858909167 instructions              #    0.28  insns per cycle
129           165441165 branches                  #   35.987 M/sec
130            19550329 branch-misses             #   11.82% of all branches
131
132        59.836627620 seconds time elapsed
133
134Many times such a simple-minded test doesn't yield much of
135interest, but sometimes it does (see Real-world Yocto bug (slow
136loop-mounted write speed)).
137
Also, note that 'perf stat' isn't restricted to a fixed set of counters
--- basically any event listed in the output of 'perf list' can be tallied
140by 'perf stat'. For example, suppose we wanted to see a summary of all
141the events related to kernel memory allocation/freeing along with cache
142hits and misses::
143
144   root@crownbay:~# perf stat -e kmem:* -e cache-references -e cache-misses wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2
145   Connecting to downloads.yoctoproject.org (140.211.169.59:80)
146   linux-2.6.19.2.tar.b 100% |***************************************************| 41727k  0:00:00 ETA
147
148   Performance counter stats for 'wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2':
149
150                5566 kmem:kmalloc
151              125517 kmem:kmem_cache_alloc
152                   0 kmem:kmalloc_node
153                   0 kmem:kmem_cache_alloc_node
154               34401 kmem:kfree
155               69920 kmem:kmem_cache_free
156                 133 kmem:mm_page_free
157                  41 kmem:mm_page_free_batched
158               11502 kmem:mm_page_alloc
159               11375 kmem:mm_page_alloc_zone_locked
160                   0 kmem:mm_page_pcpu_drain
161                   0 kmem:mm_page_alloc_extfrag
162            66848602 cache-references
163             2917740 cache-misses              #    4.365 % of all cache refs
164
165        44.831023415 seconds time elapsed
166
So 'perf stat' gives us a nice easy
way to get a quick overview of what might be happening for a set of
events, but normally we'd need a little more detail in order to
understand what's going on in a way that we can usefully act on.
171
To dive down into the next level of detail, we can use 'perf record' and
'perf report', which will collect profiling data and present it to us
using an interactive text-based UI (or simply as text if we specify
``--stdio`` to 'perf report').
176
As our first attempt at profiling this workload, we'll simply run 'perf
record', handing it the workload we want to profile (everything after
'perf record' and any perf options we hand it --- here none --- will be
executed in a new shell). perf collects samples until the process exits
and records them in a file named 'perf.data' in the current working
directory. ::
183
184   root@crownbay:~# perf record wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2
185
186   Connecting to downloads.yoctoproject.org (140.211.169.59:80)
187   linux-2.6.19.2.tar.b 100% |************************************************| 41727k  0:00:00 ETA
188   [ perf record: Woken up 1 times to write data ]
189   [ perf record: Captured and wrote 0.176 MB perf.data (~7700 samples) ]
190
191To see the results in a
192'text-based UI' (tui), simply run 'perf report', which will read the
193perf.data file in the current working directory and display the results
194in an interactive UI::
195
196   root@crownbay:~# perf report
197
198.. image:: figures/perf-wget-flat-stripped.png
199   :align: center
200   :width: 70%
201
202The above screenshot displays a 'flat' profile, one entry for each
203'bucket' corresponding to the functions that were profiled during the
204profiling run, ordered from the most popular to the least (perf has
205options to sort in various orders and keys as well as display entries
206only above a certain threshold and so on --- see the perf documentation
207for details). Note that this includes both userspace functions (entries
208containing a [.]) and kernel functions accounted to the process (entries
209containing a [k]). (perf has command-line modifiers that can be used to
210restrict the profiling to kernel or userspace, among others).
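
For instance, here are a couple of hedged examples of those options (the
exact output will of course differ on your system): sorting the flat
profile by shared object rather than by symbol, and recording only
userspace samples by adding the 'u' modifier to the sampled event::

   root@crownbay:~# perf report --sort=dso --stdio
   root@crownbay:~# perf record -e cycles:u wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2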
211
Notice also that the above report shows an entry for 'busybox', which is
the executable that implements 'wget' in Yocto, but that instead of a
useful function name in that entry, it displays a not-so-friendly hex
value. The steps below will show how to fix that problem.
216
217Before we do that, however, let's try running a different profile, one
218which shows something a little more interesting. The only difference
219between the new profile and the previous one is that we'll add the -g
220option, which will record not just the address of a sampled function,
221but the entire callchain to the sampled function as well::
222
223   root@crownbay:~# perf record -g wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2
224   Connecting to downloads.yoctoproject.org (140.211.169.59:80)
225   linux-2.6.19.2.tar.b 100% |************************************************| 41727k  0:00:00 ETA
226   [ perf record: Woken up 3 times to write data ]
227   [ perf record: Captured and wrote 0.652 MB perf.data (~28476 samples) ]
228
229
230   root@crownbay:~# perf report
231
232.. image:: figures/perf-wget-g-copy-to-user-expanded-stripped.png
233   :align: center
234   :width: 70%
235
236Using the callgraph view, we can actually see not only which functions
237took the most time, but we can also see a summary of how those functions
238were called and learn something about how the program interacts with the
239kernel in the process.
240
241Notice that each entry in the above screenshot now contains a '+' on the
242left-hand side. This means that we can expand the entry and drill down
243into the callchains that feed into that entry. Pressing 'enter' on any
244one of them will expand the callchain (you can also press 'E' to expand
245them all at the same time or 'C' to collapse them all).
246
247In the screenshot above, we've toggled the ``__copy_to_user_ll()`` entry
248and several subnodes all the way down. This lets us see which callchains
249contributed to the profiled ``__copy_to_user_ll()`` function which
250contributed 1.77% to the total profile.
251
As a bit of background explanation for these callchains, think about
what happens at a high level when you run wget to fetch a file over the
network. Basically what happens is that the data comes into the kernel
via the network connection (socket) and is passed to the userspace
program 'wget' (which is actually a part of BusyBox, but that's not
important for now), which takes the buffers the kernel passes to it and
writes them to a disk file to save them.
259
260The part of this process that we're looking at in the above call stacks
261is the part where the kernel passes the data it has read from the socket
262down to wget i.e. a copy-to-user.
263
Notice also that there's a case here where a hex value is displayed in
the callstack, in the expanded ``sys_clock_gettime()`` function. Later
we'll see it resolve to a userspace function call in BusyBox.
268
269.. image:: figures/perf-wget-g-copy-from-user-expanded-stripped.png
270   :align: center
271   :width: 70%
272
The above screenshot shows the other half of the journey for the data ---
from the wget program's userspace buffers to disk. To get the buffers to
disk, the wget program issues a ``write(2)``, which does a ``copy-from-user`` to
the kernel, which then takes care, via some circuitous path (probably
also present somewhere in the profile data), of getting it safely to disk.
278
279Now that we've seen the basic layout of the profile data and the basics
280of how to extract useful information out of it, let's get back to the
281task at hand and see if we can get some basic idea about where the time
282is spent in the program we're profiling, wget. Remember that wget is
283actually implemented as an applet in BusyBox, so while the process name
284is 'wget', the executable we're actually interested in is BusyBox. So
285let's expand the first entry containing BusyBox:
286
287.. image:: figures/perf-wget-busybox-expanded-stripped.png
288   :align: center
289   :width: 70%
290
Again, before we expanded it we saw that the function was labeled only
with a hex value, rather than with a symbol as is the case for most of
the kernel entries. Expanding the BusyBox entry doesn't make it any
better.
294
295The problem is that perf can't find the symbol information for the
296busybox binary, which is actually stripped out by the Yocto build
297system.
298
299One way around that is to put the following in your ``local.conf`` file
300when you build the image::
301
302   INHIBIT_PACKAGE_STRIP = "1"
303
304However, we already have an image with the binaries stripped, so
305what can we do to get perf to resolve the symbols? Basically we need to
306install the debuginfo for the BusyBox package.
307
308To generate the debug info for the packages in the image, we can add
309``dbg-pkgs`` to :term:`EXTRA_IMAGE_FEATURES` in ``local.conf``. For example::
310
311   EXTRA_IMAGE_FEATURES = "debug-tweaks tools-profile dbg-pkgs"
312
313Additionally, in order to generate the type of debuginfo that perf
314understands, we also need to set
315:term:`PACKAGE_DEBUG_SPLIT_STYLE`
316in the ``local.conf`` file::
317
318   PACKAGE_DEBUG_SPLIT_STYLE = 'debug-file-directory'
319
Once we've done that, we can install the
debuginfo for BusyBox. Once built, the debug packages can be found in
``build/tmp/deploy/rpm/*`` on the host system. Find the busybox-dbg-...rpm
file and copy it to the target. For example::
324
325   [trz@empanada core2]$ scp /home/trz/yocto/crownbay-tracing-dbg/build/tmp/deploy/rpm/core2_32/busybox-dbg-1.20.2-r2.core2_32.rpm root@192.168.1.31:
326   busybox-dbg-1.20.2-r2.core2_32.rpm                     100% 1826KB   1.8MB/s   00:01
327
328Now install the debug rpm on the target::
329
330   root@crownbay:~# rpm -i busybox-dbg-1.20.2-r2.core2_32.rpm
331
With the debuginfo installed, we see that the BusyBox entries now
display their functions symbolically:
334
335.. image:: figures/perf-wget-busybox-debuginfo.png
336   :align: center
337   :width: 70%
338
339If we expand one of the entries and press 'enter' on a leaf node, we're
340presented with a menu of actions we can take to get more information
341related to that entry:
342
343.. image:: figures/perf-wget-busybox-dso-zoom-menu.png
344   :align: center
345   :width: 70%
346
One of these actions displays a busybox-centric view of the profiled
functions (in this case we've also expanded all the nodes using the 'E'
key):
350
351.. image:: figures/perf-wget-busybox-dso-zoom.png
352   :align: center
353   :width: 70%
354
Finally, we can see that with the BusyBox debuginfo installed, the
unresolved symbol in the ``sys_clock_gettime()`` entry mentioned
previously is now resolved, and shows that the ``sys_clock_gettime``
system call that was the source of 6.75% of the copy-to-user overhead
was initiated by the ``handle_input()`` BusyBox function:
361
362.. image:: figures/perf-wget-g-copy-to-user-expanded-debuginfo.png
363   :align: center
364   :width: 70%
365
366At the lowest level of detail, we can dive down to the assembly level
367and see which instructions caused the most overhead in a function.
368Pressing 'enter' on the 'udhcpc_main' function, we're again presented
369with a menu:
370
371.. image:: figures/perf-wget-busybox-annotate-menu.png
372   :align: center
373   :width: 70%
374
Selecting 'Annotate udhcpc_main', we get a detailed listing of
percentages by instruction for the udhcpc_main function. From the
display, we can see that over 50% of the time spent in this function is
taken up by a couple of tests and the move of a constant (1) to a register:
379
380.. image:: figures/perf-wget-busybox-annotate-udhcpc.png
381   :align: center
382   :width: 70%
383
384As a segue into tracing, let's try another profile using a different
385counter, something other than the default 'cycles'.
386
387The tracing and profiling infrastructure in Linux has become unified in
388a way that allows us to use the same tool with a completely different
389set of counters, not just the standard hardware counters that
390traditional tools have had to restrict themselves to (of course the
391traditional tools can also make use of the expanded possibilities now
392available to them, and in some cases have, as mentioned previously).
393
394We can get a list of the available events that can be used to profile a
395workload via 'perf list'::
396
397   root@crownbay:~# perf list
398
399   List of pre-defined events (to be used in -e):
400    cpu-cycles OR cycles                               [Hardware event]
401    stalled-cycles-frontend OR idle-cycles-frontend    [Hardware event]
402    stalled-cycles-backend OR idle-cycles-backend      [Hardware event]
403    instructions                                       [Hardware event]
404    cache-references                                   [Hardware event]
405    cache-misses                                       [Hardware event]
406    branch-instructions OR branches                    [Hardware event]
407    branch-misses                                      [Hardware event]
408    bus-cycles                                         [Hardware event]
409    ref-cycles                                         [Hardware event]
410
411    cpu-clock                                          [Software event]
412    task-clock                                         [Software event]
413    page-faults OR faults                              [Software event]
414    minor-faults                                       [Software event]
415    major-faults                                       [Software event]
416    context-switches OR cs                             [Software event]
417    cpu-migrations OR migrations                       [Software event]
418    alignment-faults                                   [Software event]
419    emulation-faults                                   [Software event]
420
421    L1-dcache-loads                                    [Hardware cache event]
422    L1-dcache-load-misses                              [Hardware cache event]
423    L1-dcache-prefetch-misses                          [Hardware cache event]
424    L1-icache-loads                                    [Hardware cache event]
425    L1-icache-load-misses                              [Hardware cache event]
426    .
427    .
428    .
429    rNNN                                               [Raw hardware event descriptor]
430    cpu/t1=v1[,t2=v2,t3 ...]/modifier                  [Raw hardware event descriptor]
431     (see 'perf list --help' on how to encode it)
432
433    mem:<addr>[:access]                                [Hardware breakpoint]
434
435    sunrpc:rpc_call_status                             [Tracepoint event]
436    sunrpc:rpc_bind_status                             [Tracepoint event]
437    sunrpc:rpc_connect_status                          [Tracepoint event]
438    sunrpc:rpc_task_begin                              [Tracepoint event]
439    skb:kfree_skb                                      [Tracepoint event]
440    skb:consume_skb                                    [Tracepoint event]
441    skb:skb_copy_datagram_iovec                        [Tracepoint event]
442    net:net_dev_xmit                                   [Tracepoint event]
443    net:net_dev_queue                                  [Tracepoint event]
444    net:netif_receive_skb                              [Tracepoint event]
445    net:netif_rx                                       [Tracepoint event]
446    napi:napi_poll                                     [Tracepoint event]
447    sock:sock_rcvqueue_full                            [Tracepoint event]
448    sock:sock_exceed_buf_limit                         [Tracepoint event]
449    udp:udp_fail_queue_rcv_skb                         [Tracepoint event]
450    hda:hda_send_cmd                                   [Tracepoint event]
451    hda:hda_get_response                               [Tracepoint event]
452    hda:hda_bus_reset                                  [Tracepoint event]
453    scsi:scsi_dispatch_cmd_start                       [Tracepoint event]
454    scsi:scsi_dispatch_cmd_error                       [Tracepoint event]
455    scsi:scsi_eh_wakeup                                [Tracepoint event]
456    drm:drm_vblank_event                               [Tracepoint event]
457    drm:drm_vblank_event_queued                        [Tracepoint event]
458    drm:drm_vblank_event_delivered                     [Tracepoint event]
459    random:mix_pool_bytes                              [Tracepoint event]
460    random:mix_pool_bytes_nolock                       [Tracepoint event]
461    random:credit_entropy_bits                         [Tracepoint event]
462    gpio:gpio_direction                                [Tracepoint event]
463    gpio:gpio_value                                    [Tracepoint event]
464    block:block_rq_abort                               [Tracepoint event]
465    block:block_rq_requeue                             [Tracepoint event]
466    block:block_rq_issue                               [Tracepoint event]
467    block:block_bio_bounce                             [Tracepoint event]
468    block:block_bio_complete                           [Tracepoint event]
469    block:block_bio_backmerge                          [Tracepoint event]
470    .
471    .
472    writeback:writeback_wake_thread                    [Tracepoint event]
473    writeback:writeback_wake_forker_thread             [Tracepoint event]
474    writeback:writeback_bdi_register                   [Tracepoint event]
475    .
476    .
477    writeback:writeback_single_inode_requeue           [Tracepoint event]
478    writeback:writeback_single_inode                   [Tracepoint event]
479    kmem:kmalloc                                       [Tracepoint event]
480    kmem:kmem_cache_alloc                              [Tracepoint event]
481    kmem:mm_page_alloc                                 [Tracepoint event]
482    kmem:mm_page_alloc_zone_locked                     [Tracepoint event]
483    kmem:mm_page_pcpu_drain                            [Tracepoint event]
484    kmem:mm_page_alloc_extfrag                         [Tracepoint event]
485    vmscan:mm_vmscan_kswapd_sleep                      [Tracepoint event]
486    vmscan:mm_vmscan_kswapd_wake                       [Tracepoint event]
487    vmscan:mm_vmscan_wakeup_kswapd                     [Tracepoint event]
488    vmscan:mm_vmscan_direct_reclaim_begin              [Tracepoint event]
489    .
490    .
491    module:module_get                                  [Tracepoint event]
492    module:module_put                                  [Tracepoint event]
493    module:module_request                              [Tracepoint event]
494    sched:sched_kthread_stop                           [Tracepoint event]
495    sched:sched_wakeup                                 [Tracepoint event]
496    sched:sched_wakeup_new                             [Tracepoint event]
497    sched:sched_process_fork                           [Tracepoint event]
498    sched:sched_process_exec                           [Tracepoint event]
499    sched:sched_stat_runtime                           [Tracepoint event]
500    rcu:rcu_utilization                                [Tracepoint event]
501    workqueue:workqueue_queue_work                     [Tracepoint event]
502    workqueue:workqueue_execute_end                    [Tracepoint event]
503    signal:signal_generate                             [Tracepoint event]
504    signal:signal_deliver                              [Tracepoint event]
505    timer:timer_init                                   [Tracepoint event]
506    timer:timer_start                                  [Tracepoint event]
507    timer:hrtimer_cancel                               [Tracepoint event]
508    timer:itimer_state                                 [Tracepoint event]
509    timer:itimer_expire                                [Tracepoint event]
510    irq:irq_handler_entry                              [Tracepoint event]
511    irq:irq_handler_exit                               [Tracepoint event]
512    irq:softirq_entry                                  [Tracepoint event]
513    irq:softirq_exit                                   [Tracepoint event]
514    irq:softirq_raise                                  [Tracepoint event]
515    printk:console                                     [Tracepoint event]
516    task:task_newtask                                  [Tracepoint event]
517    task:task_rename                                   [Tracepoint event]
518    syscalls:sys_enter_socketcall                      [Tracepoint event]
519    syscalls:sys_exit_socketcall                       [Tracepoint event]
520    .
521    .
522    .
523    syscalls:sys_enter_unshare                         [Tracepoint event]
524    syscalls:sys_exit_unshare                          [Tracepoint event]
525    raw_syscalls:sys_enter                             [Tracepoint event]
526    raw_syscalls:sys_exit                              [Tracepoint event]
527
528.. admonition:: Tying it Together
529
530   These are exactly the same set of events defined by the trace event
531   subsystem and exposed by ftrace/tracecmd/kernelshark as files in
532   /sys/kernel/debug/tracing/events, by SystemTap as
533   kernel.trace("tracepoint_name") and (partially) accessed by LTTng.
534
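If the full list is a bit overwhelming, note that 'perf list' also
accepts a glob pattern and will show only the matching events. For
example, to see just the scheduler tracepoints (a sketch --- the exact
set shown depends on your kernel configuration and perf version)::

   root@crownbay:~# perf list 'sched:*'

   List of pre-defined events (to be used in -e):
    sched:sched_kthread_stop                           [Tracepoint event]
    sched:sched_wakeup                                 [Tracepoint event]
    sched:sched_wakeup_new                             [Tracepoint event]
    sched:sched_process_fork                           [Tracepoint event]
    sched:sched_process_exec                           [Tracepoint event]
    sched:sched_stat_runtime                           [Tracepoint event]
    .
    .
    .
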
535Only a subset of these would be of interest to us when looking at this
536workload, so let's choose the most likely subsystems (identified by the
537string before the colon in the Tracepoint events) and do a 'perf stat'
538run using only those wildcarded subsystems::
539
540   root@crownbay:~# perf stat -e skb:* -e net:* -e napi:* -e sched:* -e workqueue:* -e irq:* -e syscalls:* wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2
541   Performance counter stats for 'wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2':
542
543               23323 skb:kfree_skb
544                   0 skb:consume_skb
545               49897 skb:skb_copy_datagram_iovec
546                6217 net:net_dev_xmit
547                6217 net:net_dev_queue
548                7962 net:netif_receive_skb
549                   2 net:netif_rx
550                8340 napi:napi_poll
551                   0 sched:sched_kthread_stop
552                   0 sched:sched_kthread_stop_ret
553                3749 sched:sched_wakeup
554                   0 sched:sched_wakeup_new
555                   0 sched:sched_switch
556                  29 sched:sched_migrate_task
557                   0 sched:sched_process_free
558                   1 sched:sched_process_exit
559                   0 sched:sched_wait_task
560                   0 sched:sched_process_wait
561                   0 sched:sched_process_fork
562                   1 sched:sched_process_exec
563                   0 sched:sched_stat_wait
564       2106519415641 sched:sched_stat_sleep
565                   0 sched:sched_stat_iowait
566           147453613 sched:sched_stat_blocked
567         12903026955 sched:sched_stat_runtime
568                   0 sched:sched_pi_setprio
569                3574 workqueue:workqueue_queue_work
570                3574 workqueue:workqueue_activate_work
571                   0 workqueue:workqueue_execute_start
572                   0 workqueue:workqueue_execute_end
573               16631 irq:irq_handler_entry
574               16631 irq:irq_handler_exit
575               28521 irq:softirq_entry
576               28521 irq:softirq_exit
577               28728 irq:softirq_raise
578                   1 syscalls:sys_enter_sendmmsg
579                   1 syscalls:sys_exit_sendmmsg
580                   0 syscalls:sys_enter_recvmmsg
581                   0 syscalls:sys_exit_recvmmsg
582                  14 syscalls:sys_enter_socketcall
583                  14 syscalls:sys_exit_socketcall
584                     .
585                     .
586                     .
587               16965 syscalls:sys_enter_read
588               16965 syscalls:sys_exit_read
589               12854 syscalls:sys_enter_write
590               12854 syscalls:sys_exit_write
591                     .
592                     .
593                     .
594
595        58.029710972 seconds time elapsed
596
597
598
599Let's pick one of these tracepoints
600and tell perf to do a profile using it as the sampling event::
601
602   root@crownbay:~# perf record -g -e sched:sched_wakeup wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2
603
604.. image:: figures/sched-wakeup-profile.png
605   :align: center
606   :width: 70%
607
The screenshot above shows the results of running a profile using the
sched:sched_wakeup tracepoint, which shows the relative costs of various
paths to sched_wakeup (note that sched_wakeup is the name of the
tracepoint --- it's actually defined just inside ttwu_do_wakeup(), which
accounts for the function name actually displayed in the profile):
613
614.. code-block:: c
615
616     /*
617      * Mark the task runnable and perform wakeup-preemption.
618      */
619     static void
620     ttwu_do_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
621     {
622          trace_sched_wakeup(p, true);
623          .
624          .
625          .
626     }
627
628A couple of the more interesting
629callchains are expanded and displayed above, basically some network
630receive paths that presumably end up waking up wget (busybox) when
631network data is ready.
632
Note that because tracepoints are normally used for tracing, the default
sampling period for tracepoints is 1, i.e. for tracepoints perf will
sample on every event occurrence (this can be changed using the -c
option). This is in contrast to hardware counters such as the default
'cycles' hardware counter used for normal profiling, where sampling
periods are much higher (in the thousands) because profiling should have
as low an overhead as possible and sampling on every cycle would be
prohibitively expensive.
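
For example, here are a couple of hedged sketches of using -c to change
the sampling period (the particular values are arbitrary): sampling only
every 100th sched_wakeup event, or lowering the period used for the
'cycles' counter::

   root@crownbay:~# perf record -g -e sched:sched_wakeup -c 100 wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2
   root@crownbay:~# perf record -g -e cycles -c 100000 wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2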
641
642Using perf to do Basic Tracing
643~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
644
645Profiling is a great tool for solving many problems or for getting a
646high-level view of what's going on with a workload or across the system.
647It is however by definition an approximation, as suggested by the most
648prominent word associated with it, 'sampling'. On the one hand, it
649allows a representative picture of what's going on in the system to be
650cheaply taken, but on the other hand, that cheapness limits its utility
651when that data suggests a need to 'dive down' more deeply to discover
652what's really going on. In such cases, the only way to see what's really
653going on is to be able to look at (or summarize more intelligently) the
654individual steps that go into the higher-level behavior exposed by the
655coarse-grained profiling data.
656
657As a concrete example, we can trace all the events we think might be
658applicable to our workload::
659
660   root@crownbay:~# perf record -g -e skb:* -e net:* -e napi:* -e sched:sched_switch -e sched:sched_wakeup -e irq:*
661    -e syscalls:sys_enter_read -e syscalls:sys_exit_read -e syscalls:sys_enter_write -e syscalls:sys_exit_write
662    wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2
663
664We can look at the raw trace output using 'perf script' with no
665arguments::
666
667   root@crownbay:~# perf script
668
669         perf  1262 [000] 11624.857082: sys_exit_read: 0x0
670         perf  1262 [000] 11624.857193: sched_wakeup: comm=migration/0 pid=6 prio=0 success=1 target_cpu=000
671         wget  1262 [001] 11624.858021: softirq_raise: vec=1 [action=TIMER]
672         wget  1262 [001] 11624.858074: softirq_entry: vec=1 [action=TIMER]
673         wget  1262 [001] 11624.858081: softirq_exit: vec=1 [action=TIMER]
674         wget  1262 [001] 11624.858166: sys_enter_read: fd: 0x0003, buf: 0xbf82c940, count: 0x0200
675         wget  1262 [001] 11624.858177: sys_exit_read: 0x200
676         wget  1262 [001] 11624.858878: kfree_skb: skbaddr=0xeb248d80 protocol=0 location=0xc15a5308
677         wget  1262 [001] 11624.858945: kfree_skb: skbaddr=0xeb248000 protocol=0 location=0xc15a5308
678         wget  1262 [001] 11624.859020: softirq_raise: vec=1 [action=TIMER]
679         wget  1262 [001] 11624.859076: softirq_entry: vec=1 [action=TIMER]
680         wget  1262 [001] 11624.859083: softirq_exit: vec=1 [action=TIMER]
681         wget  1262 [001] 11624.859167: sys_enter_read: fd: 0x0003, buf: 0xb7720000, count: 0x0400
682         wget  1262 [001] 11624.859192: sys_exit_read: 0x1d7
683         wget  1262 [001] 11624.859228: sys_enter_read: fd: 0x0003, buf: 0xb7720000, count: 0x0400
684         wget  1262 [001] 11624.859233: sys_exit_read: 0x0
685         wget  1262 [001] 11624.859573: sys_enter_read: fd: 0x0003, buf: 0xbf82c580, count: 0x0200
686         wget  1262 [001] 11624.859584: sys_exit_read: 0x200
687         wget  1262 [001] 11624.859864: sys_enter_read: fd: 0x0003, buf: 0xb7720000, count: 0x0400
688         wget  1262 [001] 11624.859888: sys_exit_read: 0x400
689         wget  1262 [001] 11624.859935: sys_enter_read: fd: 0x0003, buf: 0xb7720000, count: 0x0400
690         wget  1262 [001] 11624.859944: sys_exit_read: 0x400
691
This gives us a detailed, timestamped sequence of the events that
occurred within the workload.
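
Since this is just text, a quick sketch of what you can already do with
it is to run it through standard tools, for example to count how many
occurrences of a given event were recorded::

   root@crownbay:~# perf script | grep sched_wakeup | wc -l

As we'll see below, though, the 'perf script' language bindings give us
a much more powerful way to do this kind of aggregation.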
694
In many ways, profiling can be viewed as a subset of tracing ---
theoretically, if you have a set of trace events that's sufficient to
capture all the important aspects of a workload, you can derive any of
the results or views that a profiling run can.
699
Another aspect of traditional profiling is that, while powerful in many
ways, it's limited by the granularity of the underlying data. Profiling
tools offer various ways of sorting and presenting the sample data,
which make it much more useful and amenable to user experimentation, but
in the end it can't be used in an open-ended way to extract data that
just isn't present --- conceptually, most of that data has already been
thrown away.
707
708Full-blown detailed tracing data does however offer the opportunity to
709manipulate and present the information collected during a tracing run in
710an infinite variety of ways.
711
712Another way to look at it is that there are only so many ways that the
713'primitive' counters can be used on their own to generate interesting
714output; to get anything more complicated than simple counts requires
715some amount of additional logic, which is typically very specific to the
716problem at hand. For example, if we wanted to make use of a 'counter'
717that maps to the value of the time difference between when a process was
718scheduled to run on a processor and the time it actually ran, we
719wouldn't expect such a counter to exist on its own, but we could derive
720one called say 'wakeup_latency' and use it to extract a useful view of
721that metric from trace data. Likewise, we really can't figure out from
722standard profiling tools how much data every process on the system reads
723and writes, along with how many of those reads and writes fail
724completely. If we have sufficient trace data, however, we could with the
725right tools easily extract and present that information, but we'd need
726something other than pre-canned profiling tools to do that.
727
Luckily, there is a general-purpose way to handle such needs, called
'programming languages'. Making a programming language easily available
to apply to such problems, given the specific format of the data, is
called a 'programming language binding' for that data and language. perf
supports two programming language bindings, one for Python and one for
Perl.
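
As an aside, perf also ships with a number of canned analysis scripts
written against those bindings; assuming your image's perf package
includes the scripting support, you can list whatever is available on
the target with::

   root@crownbay:~# perf script -l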
733
734.. admonition:: Tying it Together
735
736   Language bindings for manipulating and aggregating trace data are of
737   course not a new idea. One of the first projects to do this was IBM's
738   DProbes dpcc compiler, an ANSI C compiler which targeted a low-level
739   assembly language running on an in-kernel interpreter on the target
740   system. This is exactly analogous to what Sun's DTrace did, except
741   that DTrace invented its own language for the purpose. Systemtap,
742   heavily inspired by DTrace, also created its own one-off language,
743   but rather than running the product on an in-kernel interpreter,
744   created an elaborate compiler-based machinery to translate its
745   language into kernel modules written in C.
746
747Now that we have the trace data in perf.data, we can use 'perf script
748-g' to generate a skeleton script with handlers for the read/write
749entry/exit events we recorded::
750
751   root@crownbay:~# perf script -g python
752   generated Python script: perf-script.py
753
The skeleton script creates a Python function for each event type in the
perf.data file. The body of each function just prints the event name along
with its parameters. For example:
757
758.. code-block:: python
759
760   def net__netif_rx(event_name, context, common_cpu,
761          common_secs, common_nsecs, common_pid, common_comm,
762          skbaddr, len, name):
763                  print_header(event_name, common_cpu, common_secs, common_nsecs,
764                          common_pid, common_comm)
765
766                  print "skbaddr=%u, len=%u, name=%s\n" % (skbaddr, len, name),
767
768We can run that script directly to print all of the events contained in the
769perf.data file::
770
771   root@crownbay:~# perf script -s perf-script.py
772
773   in trace_begin
774   syscalls__sys_exit_read     0 11624.857082795     1262 perf                  nr=3, ret=0
775   sched__sched_wakeup      0 11624.857193498     1262 perf                  comm=migration/0, pid=6, prio=0,      success=1, target_cpu=0
776   irq__softirq_raise       1 11624.858021635     1262 wget                  vec=TIMER
777   irq__softirq_entry       1 11624.858074075     1262 wget                  vec=TIMER
778   irq__softirq_exit        1 11624.858081389     1262 wget                  vec=TIMER
779   syscalls__sys_enter_read     1 11624.858166434     1262 wget                  nr=3, fd=3, buf=3213019456,      count=512
780   syscalls__sys_exit_read     1 11624.858177924     1262 wget                  nr=3, ret=512
781   skb__kfree_skb           1 11624.858878188     1262 wget                  skbaddr=3945041280,           location=3243922184, protocol=0
782   skb__kfree_skb           1 11624.858945608     1262 wget                  skbaddr=3945037824,      location=3243922184, protocol=0
783   irq__softirq_raise       1 11624.859020942     1262 wget                  vec=TIMER
784   irq__softirq_entry       1 11624.859076935     1262 wget                  vec=TIMER
785   irq__softirq_exit        1 11624.859083469     1262 wget                  vec=TIMER
786   syscalls__sys_enter_read     1 11624.859167565     1262 wget                  nr=3, fd=3, buf=3077701632,      count=1024
787   syscalls__sys_exit_read     1 11624.859192533     1262 wget                  nr=3, ret=471
788   syscalls__sys_enter_read     1 11624.859228072     1262 wget                  nr=3, fd=3, buf=3077701632,      count=1024
789   syscalls__sys_exit_read     1 11624.859233707     1262 wget                  nr=3, ret=0
790   syscalls__sys_enter_read     1 11624.859573008     1262 wget                  nr=3, fd=3, buf=3213018496,      count=512
791   syscalls__sys_exit_read     1 11624.859584818     1262 wget                  nr=3, ret=512
792   syscalls__sys_enter_read     1 11624.859864562     1262 wget                  nr=3, fd=3, buf=3077701632,      count=1024
793   syscalls__sys_exit_read     1 11624.859888770     1262 wget                  nr=3, ret=1024
794   syscalls__sys_enter_read     1 11624.859935140     1262 wget                  nr=3, fd=3, buf=3077701632,      count=1024
795   syscalls__sys_exit_read     1 11624.859944032     1262 wget                  nr=3, ret=1024
796
797That in itself isn't very useful; after all, we can accomplish pretty much the
798same thing by simply running 'perf script' without arguments in the same
799directory as the perf.data file.
800
801We can however replace the print statements in the generated function
802bodies with whatever we want, and thereby make it infinitely more
803useful.
804
805As a simple example, let's just replace the print statements in the
806function bodies with a simple function that does nothing but increment a
807per-event count. When the program is run against a perf.data file, each
808time a particular event is encountered, a tally is incremented for that
809event. For example:
810
811.. code-block:: python
812
813   def net__netif_rx(event_name, context, common_cpu,
814          common_secs, common_nsecs, common_pid, common_comm,
815          skbaddr, len, name):
816              inc_counts(event_name)
817
Each event handler function in the generated code
is modified to do this. For convenience, we define a common function
called inc_counts() that each handler calls; inc_counts() simply tallies
a count for each event using the 'counts' hash, which is a specialized
hash (dictionary) that supports Perl-like autovivification, a capability
that's extremely useful for the kinds of multi-level aggregation commonly
used in processing traces (see perf's documentation on the Python
language binding for details):
826
827.. code-block:: python
828
829     counts = autodict()
830
831     def inc_counts(event_name):
832            try:
833                    counts[event_name] += 1
834            except TypeError:
835                    counts[event_name] = 1
836
837Finally, at the end of the trace processing run, we want to print the
838result of all the per-event tallies. For that, we use the special
839'trace_end()' function:
840
841.. code-block:: python
842
843     def trace_end():
844            for event_name, count in counts.iteritems():
845                    print "%-40s %10s\n" % (event_name, count)
846
847The end result is a summary of all the events recorded in the trace::
848
849   skb__skb_copy_datagram_iovec                  13148
850   irq__softirq_entry                             4796
851   irq__irq_handler_exit                          3805
852   irq__softirq_exit                              4795
853   syscalls__sys_enter_write                      8990
854   net__net_dev_xmit                               652
855   skb__kfree_skb                                 4047
856   sched__sched_wakeup                            1155
857   irq__irq_handler_entry                         3804
858   irq__softirq_raise                             4799
859   net__net_dev_queue                              652
860   syscalls__sys_enter_read                      17599
861   net__netif_receive_skb                         1743
862   syscalls__sys_exit_read                       17598
863   net__netif_rx                                     2
864   napi__napi_poll                                1877
865   syscalls__sys_exit_write                       8990
866
Note that this is
pretty much exactly the same information we get from 'perf stat', which
goes a little way toward supporting the idea mentioned previously that,
given the right kind of trace data, higher-level profiling-type summaries
can be derived from it.
872
For more details, see the documentation on the `'perf script' Python
binding <https://linux.die.net/man/1/perf-script-python>`__.
875
876System-Wide Tracing and Profiling
877~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
878
The examples so far have focused on tracing a particular program or
workload --- in other words, every profiling run has specified the program
to profile on the command line, e.g. 'perf record wget ...'.
882
883It's also possible, and more interesting in many cases, to run a
884system-wide profile or trace while running the workload in a separate
885shell.
886
887To do system-wide profiling or tracing, you typically use the -a flag to
888'perf record'.
889
890To demonstrate this, open up one window and start the profile using the
891-a flag (press Ctrl-C to stop tracing)::
892
893   root@crownbay:~# perf record -g -a
894   ^C[ perf record: Woken up 6 times to write data ]
895   [ perf record: Captured and wrote 1.400 MB perf.data (~61172 samples) ]
896
897In another window, run the wget test::
898
899   root@crownbay:~# wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2
900   Connecting to downloads.yoctoproject.org (140.211.169.59:80)
901   linux-2.6.19.2.tar.b 100% \|*******************************\| 41727k 0:00:00 ETA
902
903Here we see entries not only for our wget load, but for
904other processes running on the system as well:
905
906.. image:: figures/perf-systemwide.png
907   :align: center
908   :width: 70%
909
910In the snapshot above, we can see callchains that originate in libc, and
911a callchain from Xorg that demonstrates that we're using a proprietary X
912driver in userspace (notice the presence of 'PVR' and some other
913unresolvable symbols in the expanded Xorg callchain).
914
Note also that we have both kernel and userspace entries in the above
snapshot. We can also tell perf to focus on userspace by providing a
modifier, in this case 'u', to the 'cycles' hardware counter when we
record a profile::
919
920   root@crownbay:~# perf record -g -a -e cycles:u
921   ^C[ perf record: Woken up 2 times to write data ]
922   [ perf record: Captured and wrote 0.376 MB perf.data (~16443 samples) ]
923
924.. image:: figures/perf-report-cycles-u.png
925   :align: center
926   :width: 70%
927
Notice that in the screenshot above, we see only userspace entries ([.]).
929
Finally, we can press 'enter' on a leaf node and select the 'Zoom into
DSO' menu item to show only entries associated with a specific DSO. In
the screenshot below, we've zoomed into the 'libc' DSO, which shows all
the entries associated with the libc-xxx.so DSO.
934
935.. image:: figures/perf-systemwide-libc.png
936   :align: center
937   :width: 70%
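
The same kind of narrowing can also be done non-interactively from the
command line; for example, a sketch that restricts the report to symbols
in a single DSO (the exact libc filename on your image will differ)::

   root@crownbay:~# perf report --dsos=libc-2.16.so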
938
We can also use the -a switch to do system-wide tracing. Here we'll
trace a couple of scheduler events::
941
942   root@crownbay:~# perf record -a -e sched:sched_switch -e sched:sched_wakeup
943   ^C[ perf record: Woken up 38 times to write data ]
944   [ perf record: Captured and wrote 9.780 MB perf.data (~427299 samples) ]
945
946We can look at the raw output using 'perf script' with no arguments::
947
948   root@crownbay:~# perf script
949
950              perf  1383 [001]  6171.460045: sched_wakeup: comm=kworker/1:1 pid=21 prio=120 success=1 target_cpu=001
951              perf  1383 [001]  6171.460066: sched_switch: prev_comm=perf prev_pid=1383 prev_prio=120 prev_state=R+ ==> next_comm=kworker/1:1 next_pid=21 next_prio=120
952       kworker/1:1    21 [001]  6171.460093: sched_switch: prev_comm=kworker/1:1 prev_pid=21 prev_prio=120 prev_state=S ==> next_comm=perf next_pid=1383 next_prio=120
953           swapper     0 [000]  6171.468063: sched_wakeup: comm=kworker/0:3 pid=1209 prio=120 success=1 target_cpu=000
954           swapper     0 [000]  6171.468107: sched_switch: prev_comm=swapper/0 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=kworker/0:3 next_pid=1209 next_prio=120
955       kworker/0:3  1209 [000]  6171.468143: sched_switch: prev_comm=kworker/0:3 prev_pid=1209 prev_prio=120 prev_state=S ==> next_comm=swapper/0 next_pid=0 next_prio=120
956              perf  1383 [001]  6171.470039: sched_wakeup: comm=kworker/1:1 pid=21 prio=120 success=1 target_cpu=001
957              perf  1383 [001]  6171.470058: sched_switch: prev_comm=perf prev_pid=1383 prev_prio=120 prev_state=R+ ==> next_comm=kworker/1:1 next_pid=21 next_prio=120
958       kworker/1:1    21 [001]  6171.470082: sched_switch: prev_comm=kworker/1:1 prev_pid=21 prev_prio=120 prev_state=S ==> next_comm=perf next_pid=1383 next_prio=120
959              perf  1383 [001]  6171.480035: sched_wakeup: comm=kworker/1:1 pid=21 prio=120 success=1 target_cpu=001
960
961Filtering
962^^^^^^^^^
963
964Notice that there are a lot of events that don't really have anything to
965do with what we're interested in, namely events that schedule 'perf'
966itself in and out or that wake perf up. We can get rid of those by using
967the '--filter' option --- for each event we specify using -e, we can add a
968--filter after that to filter out trace events that contain fields with
969specific values::
970
971   root@crownbay:~# perf record -a -e sched:sched_switch --filter 'next_comm != perf && prev_comm != perf' -e sched:sched_wakeup --filter 'comm != perf'
972   ^C[ perf record: Woken up 38 times to write data ]
973   [ perf record: Captured and wrote 9.688 MB perf.data (~423279 samples) ]
974
975
976   root@crownbay:~# perf script
977
978           swapper     0 [000]  7932.162180: sched_switch: prev_comm=swapper/0 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=kworker/0:3 next_pid=1209 next_prio=120
979       kworker/0:3  1209 [000]  7932.162236: sched_switch: prev_comm=kworker/0:3 prev_pid=1209 prev_prio=120 prev_state=S ==> next_comm=swapper/0 next_pid=0 next_prio=120
980              perf  1407 [001]  7932.170048: sched_wakeup: comm=kworker/1:1 pid=21 prio=120 success=1 target_cpu=001
981              perf  1407 [001]  7932.180044: sched_wakeup: comm=kworker/1:1 pid=21 prio=120 success=1 target_cpu=001
982              perf  1407 [001]  7932.190038: sched_wakeup: comm=kworker/1:1 pid=21 prio=120 success=1 target_cpu=001
983              perf  1407 [001]  7932.200044: sched_wakeup: comm=kworker/1:1 pid=21 prio=120 success=1 target_cpu=001
984              perf  1407 [001]  7932.210044: sched_wakeup: comm=kworker/1:1 pid=21 prio=120 success=1 target_cpu=001
985              perf  1407 [001]  7932.220044: sched_wakeup: comm=kworker/1:1 pid=21 prio=120 success=1 target_cpu=001
986           swapper     0 [001]  7932.230111: sched_wakeup: comm=kworker/1:1 pid=21 prio=120 success=1 target_cpu=001
987           swapper     0 [001]  7932.230146: sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=kworker/1:1 next_pid=21 next_prio=120
988       kworker/1:1    21 [001]  7932.230205: sched_switch: prev_comm=kworker/1:1 prev_pid=21 prev_prio=120 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
989           swapper     0 [000]  7932.326109: sched_wakeup: comm=kworker/0:3 pid=1209 prio=120 success=1 target_cpu=000
990           swapper     0 [000]  7932.326171: sched_switch: prev_comm=swapper/0 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=kworker/0:3 next_pid=1209 next_prio=120
991       kworker/0:3  1209 [000]  7932.326214: sched_switch: prev_comm=kworker/0:3 prev_pid=1209 prev_prio=120 prev_state=S ==> next_comm=swapper/0 next_pid=0 next_prio=120
992
In this case, we've filtered out all events that have
'perf' in their 'comm', 'prev_comm', or 'next_comm' fields. Notice that
there are still events recorded for perf, but those events don't have
values of 'perf' for the filtered fields. Completely filtering out
anything from perf would require a bit more work, but for the purpose of
demonstrating how to use filters, it's close enough.
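
If you're not sure which fields are available for use in a filter for a
given event, that event's 'format' file under
/sys/kernel/debug/tracing/events lists them. For example::

   root@crownbay:~# cat /sys/kernel/debug/tracing/events/sched/sched_wakeup/format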
999
1000.. admonition:: Tying it Together
1001
1002   These are exactly the same set of event filters defined by the trace
1003   event subsystem. See the ftrace/tracecmd/kernelshark section for more
1004   discussion about these event filters.
1005
1006.. admonition:: Tying it Together
1007
1008   These event filters are implemented by a special-purpose
1009   pseudo-interpreter in the kernel and are an integral and
1010   indispensable part of the perf design as it relates to tracing.
   Kernel-based event filters provide a mechanism to precisely throttle
1012   the event stream that appears in user space, where it makes sense to
1013   provide bindings to real programming languages for postprocessing the
1014   event stream. This architecture allows for the intelligent and
1015   flexible partitioning of processing between the kernel and user
1016   space. Contrast this with other tools such as SystemTap, which does
1017   all of its processing in the kernel and as such requires a special
1018   project-defined language in order to accommodate that design, or
1019   LTTng, where everything is sent to userspace and as such requires a
1020   super-efficient kernel-to-userspace transport mechanism in order to
   function properly. While perf can certainly benefit from, for instance,
1022   advances in the design of the transport, it doesn't fundamentally
1023   depend on them. Basically, if you find that your perf tracing
1024   application is causing buffer I/O overruns, it probably means that
1025   you aren't taking enough advantage of the kernel filtering engine.
1026
1027Using Dynamic Tracepoints
1028~~~~~~~~~~~~~~~~~~~~~~~~~
1029
1030perf isn't restricted to the fixed set of static tracepoints listed by
1031'perf list'. Users can also add their own 'dynamic' tracepoints anywhere
1032in the kernel. For instance, suppose we want to define our own
tracepoint on do_fork(). We can do that using the 'perf probe'
subcommand::
1035
1036   root@crownbay:~# perf probe do_fork
1037   Added new event:
1038     probe:do_fork        (on do_fork)
1039
1040   You can now use it in all perf tools, such as:
1041
1042     perf record -e probe:do_fork -aR sleep 1
1043
Adding a new tracepoint via
'perf probe' results in an event with all the expected files and format
in /sys/kernel/debug/tracing/events, just the same as for static
tracepoints (as discussed in more detail in the trace events subsystem
section)::
1049
1050   root@crownbay:/sys/kernel/debug/tracing/events/probe/do_fork# ls -al
1051   drwxr-xr-x    2 root     root             0 Oct 28 11:42 .
1052   drwxr-xr-x    3 root     root             0 Oct 28 11:42 ..
1053   -rw-r--r--    1 root     root             0 Oct 28 11:42 enable
1054   -rw-r--r--    1 root     root             0 Oct 28 11:42 filter
1055   -r--r--r--    1 root     root             0 Oct 28 11:42 format
1056   -r--r--r--    1 root     root             0 Oct 28 11:42 id
1057
1058   root@crownbay:/sys/kernel/debug/tracing/events/probe/do_fork# cat format
1059   name: do_fork
1060   ID: 944
1061   format:
1062           field:unsigned short common_type;	offset:0;	size:2;	signed:0;
1063           field:unsigned char common_flags;	offset:2;	size:1;	signed:0;
1064           field:unsigned char common_preempt_count;	offset:3;	size:1;	signed:0;
1065           field:int common_pid;	offset:4;	size:4;	signed:1;
1066           field:int common_padding;	offset:8;	size:4;	signed:1;
1067
1068           field:unsigned long __probe_ip;	offset:12;	size:4;	signed:0;
1069
1070   print fmt: "(%lx)", REC->__probe_ip
1071
1072We can list all dynamic tracepoints currently in
1073existence::
1074
1075   root@crownbay:~# perf probe -l
1076    probe:do_fork (on do_fork)
1077    probe:schedule (on schedule)
1078
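When you're eventually done with a dynamic tracepoint, it can be removed
again using the '--del' option (shown here for the probe we added above)::

   root@crownbay:~# perf probe --del probe:do_fork
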
Let's record system-wide ('sleep 30' is a trick for recording
system-wide while basically doing nothing, then waking up after 30
seconds)::
1082
1083   root@crownbay:~# perf record -g -a -e probe:do_fork sleep 30
1084   [ perf record: Woken up 1 times to write data ]
1085   [ perf record: Captured and wrote 0.087 MB perf.data (~3812 samples) ]
1086
1087Using 'perf script' we can see each do_fork event that fired::
1088
1089   root@crownbay:~# perf script
1090
1091   # ========
1092   # captured on: Sun Oct 28 11:55:18 2012
1093   # hostname : crownbay
1094   # os release : 3.4.11-yocto-standard
1095   # perf version : 3.4.11
1096   # arch : i686
1097   # nrcpus online : 2
1098   # nrcpus avail : 2
1099   # cpudesc : Intel(R) Atom(TM) CPU E660 @ 1.30GHz
1100   # cpuid : GenuineIntel,6,38,1
1101   # total memory : 1017184 kB
1102   # cmdline : /usr/bin/perf record -g -a -e probe:do_fork sleep 30
1103   # event : name = probe:do_fork, type = 2, config = 0x3b0, config1 = 0x0, config2 = 0x0, excl_usr = 0, excl_kern
1104    = 0, id = { 5, 6 }
1105   # HEADER_CPU_TOPOLOGY info available, use -I to display
1106   # ========
1107   #
1108    matchbox-deskto  1197 [001] 34211.378318: do_fork: (c1028460)
1109    matchbox-deskto  1295 [001] 34211.380388: do_fork: (c1028460)
1110            pcmanfm  1296 [000] 34211.632350: do_fork: (c1028460)
1111            pcmanfm  1296 [000] 34211.639917: do_fork: (c1028460)
1112    matchbox-deskto  1197 [001] 34217.541603: do_fork: (c1028460)
1113    matchbox-deskto  1299 [001] 34217.543584: do_fork: (c1028460)
1114             gthumb  1300 [001] 34217.697451: do_fork: (c1028460)
1115             gthumb  1300 [001] 34219.085734: do_fork: (c1028460)
1116             gthumb  1300 [000] 34219.121351: do_fork: (c1028460)
1117             gthumb  1300 [001] 34219.264551: do_fork: (c1028460)
1118            pcmanfm  1296 [000] 34219.590380: do_fork: (c1028460)
1119    matchbox-deskto  1197 [001] 34224.955965: do_fork: (c1028460)
1120    matchbox-deskto  1306 [001] 34224.957972: do_fork: (c1028460)
1121    matchbox-termin  1307 [000] 34225.038214: do_fork: (c1028460)
1122    matchbox-termin  1307 [001] 34225.044218: do_fork: (c1028460)
1123    matchbox-termin  1307 [000] 34225.046442: do_fork: (c1028460)
1124    matchbox-deskto  1197 [001] 34237.112138: do_fork: (c1028460)
1125    matchbox-deskto  1311 [001] 34237.114106: do_fork: (c1028460)
1126               gaku  1312 [000] 34237.202388: do_fork: (c1028460)
1127
1128And using 'perf report' on the same file, we can see the
1129callgraphs from starting a few programs during those 30 seconds:
1130
1131.. image:: figures/perf-probe-do_fork-profile.png
1132   :align: center
1133   :width: 70%
1134
1135.. admonition:: Tying it Together
1136
   The trace events subsystem accommodates static and dynamic tracepoints
1138   in exactly the same way --- there's no difference as far as the
1139   infrastructure is concerned. See the ftrace section for more details
1140   on the trace event subsystem.
1141
1142.. admonition:: Tying it Together
1143
1144   Dynamic tracepoints are implemented under the covers by kprobes and
1145   uprobes. kprobes and uprobes are also used by and in fact are the
1146   main focus of SystemTap.
1147
1148Perf Documentation
1149------------------
1150
1151Online versions of the man pages for the commands discussed in this
1152section can be found here:
1153
1154-  The `'perf stat' manpage <https://linux.die.net/man/1/perf-stat>`__.
1155
1156-  The `'perf record'
1157   manpage <https://linux.die.net/man/1/perf-record>`__.
1158
1159-  The `'perf report'
1160   manpage <https://linux.die.net/man/1/perf-report>`__.
1161
1162-  The `'perf probe' manpage <https://linux.die.net/man/1/perf-probe>`__.
1163
1164-  The `'perf script'
1165   manpage <https://linux.die.net/man/1/perf-script>`__.
1166
1167-  Documentation on using the `'perf script' Python
1168   binding <https://linux.die.net/man/1/perf-script-python>`__.
1169
1170-  The top-level `perf(1) manpage <https://linux.die.net/man/1/perf>`__.
1171
Normally, you should be able to invoke the man pages via perf itself,
e.g. 'perf help' or 'perf help record'.
1174
1175To have the perf manpages installed on your target, modify your
1176configuration as follows::
1177
1178   IMAGE_INSTALL:append = " perf perf-doc"
1179   DISTRO_FEATURES:append = " api-documentation"
1180
1181The man pages in text form, along with some other files, such as a set
1182of examples, can also be found in the 'perf' directory of the kernel tree::
1183
1184   tools/perf/Documentation
1185
1186There's also a nice perf tutorial on the perf
1187wiki that goes into more detail than we do here in certain areas: `Perf
1188Tutorial <https://perf.wiki.kernel.org/index.php/Tutorial>`__
1189
1190ftrace
1191======
1192
1193'ftrace' literally refers to the 'ftrace function tracer' but in reality
1194this encompasses a number of related tracers along with the
1195infrastructure that they all make use of.
1196
1197ftrace Setup
1198------------
1199
1200For this section, we'll assume you've already performed the basic setup
1201outlined in the ":ref:`profile-manual/intro:General Setup`" section.
1202
ftrace, trace-cmd, and kernelshark run on the target system, and are
ready to go out-of-the-box --- no additional setup is necessary. For the
rest of this section we assume you've ssh'ed from the host to the target
and will be running ftrace there. kernelshark is a GUI application; if
you use the '-X' option to ssh, you can have the kernelshark GUI run on
the target but display remotely on the host if you want.
1209
1210Basic ftrace usage
1211------------------
1212
1213'ftrace' essentially refers to everything included in the /tracing
1214directory of the mounted debugfs filesystem (Yocto follows the standard
1215convention and mounts it at /sys/kernel/debug). Here's a listing of all
1216the files found in /sys/kernel/debug/tracing on a Yocto system::
1217
1218   root@sugarbay:/sys/kernel/debug/tracing# ls
1219   README                      kprobe_events               trace
1220   available_events            kprobe_profile              trace_clock
1221   available_filter_functions  options                     trace_marker
1222   available_tracers           per_cpu                     trace_options
1223   buffer_size_kb              printk_formats              trace_pipe
1224   buffer_total_size_kb        saved_cmdlines              tracing_cpumask
1225   current_tracer              set_event                   tracing_enabled
1226   dyn_ftrace_total_info       set_ftrace_filter           tracing_on
1227   enabled_functions           set_ftrace_notrace          tracing_thresh
1228   events                      set_ftrace_pid
1229   free_buffer                 set_graph_function
1230
The files listed above are used for various purposes --- some relate
directly to the tracers themselves, others are used to set tracing
options, and yet others actually contain the tracing output when a
tracer is in effect. Some of their functions can be guessed from their
names, others need explanation; in any case, we'll cover some of the
files we see here below, but for an explanation of the others, please
see the ftrace documentation.
1238
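As a quick example of one of the option files above, the size of the
per-cpu trace buffer can be changed by writing to 'buffer_size_kb' (a
sketch --- the value here is arbitrary)::

   root@sugarbay:/sys/kernel/debug/tracing# echo 4096 > buffer_size_kb
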
1239We'll start by looking at some of the available built-in tracers.
1240
1241cat'ing the 'available_tracers' file lists the set of available tracers::
1242
1243   root@sugarbay:/sys/kernel/debug/tracing# cat available_tracers
1244   blk function_graph function nop
1245
1246The 'current_tracer' file contains the tracer currently in effect::
1247
1248   root@sugarbay:/sys/kernel/debug/tracing# cat current_tracer
1249   nop
1250
1251The above listing of current_tracer shows that the
1252'nop' tracer is in effect, which is just another way of saying that
1253there's actually no tracer currently in effect.
1254
1255echo'ing one of the available_tracers into current_tracer makes the
1256specified tracer the current tracer::
1257
1258   root@sugarbay:/sys/kernel/debug/tracing# echo function > current_tracer
1259   root@sugarbay:/sys/kernel/debug/tracing# cat current_tracer
1260   function
1261
1262The above sets the current tracer to be the 'function tracer'. This tracer
1263traces every function call in the kernel and makes it available as the
1264contents of the 'trace' file. Reading the 'trace' file lists the
1265currently buffered function calls that have been traced by the function
1266tracer::
1267
1268   root@sugarbay:/sys/kernel/debug/tracing# cat trace | less
1269
1270   # tracer: function
1271   #
1272   # entries-in-buffer/entries-written: 310629/766471   #P:8
1273   #
1274   #                              _-----=> irqs-off
1275   #                             / _----=> need-resched
1276   #                            | / _---=> hardirq/softirq
1277   #                            || / _--=> preempt-depth
1278   #                            ||| /     delay
1279   #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
1280   #              | |       |   ||||       |         |
1281            <idle>-0     [004] d..1   470.867169: ktime_get_real <-intel_idle
1282            <idle>-0     [004] d..1   470.867170: getnstimeofday <-ktime_get_real
1283            <idle>-0     [004] d..1   470.867171: ns_to_timeval <-intel_idle
1284            <idle>-0     [004] d..1   470.867171: ns_to_timespec <-ns_to_timeval
1285            <idle>-0     [004] d..1   470.867172: smp_apic_timer_interrupt <-apic_timer_interrupt
1286            <idle>-0     [004] d..1   470.867172: native_apic_mem_write <-smp_apic_timer_interrupt
1287            <idle>-0     [004] d..1   470.867172: irq_enter <-smp_apic_timer_interrupt
1288            <idle>-0     [004] d..1   470.867172: rcu_irq_enter <-irq_enter
1289            <idle>-0     [004] d..1   470.867173: rcu_idle_exit_common.isra.33 <-rcu_irq_enter
1290            <idle>-0     [004] d..1   470.867173: local_bh_disable <-irq_enter
1291            <idle>-0     [004] d..1   470.867173: add_preempt_count <-local_bh_disable
1292            <idle>-0     [004] d.s1   470.867174: tick_check_idle <-irq_enter
1293            <idle>-0     [004] d.s1   470.867174: tick_check_oneshot_broadcast <-tick_check_idle
1294            <idle>-0     [004] d.s1   470.867174: ktime_get <-tick_check_idle
1295            <idle>-0     [004] d.s1   470.867174: tick_nohz_stop_idle <-tick_check_idle
1296            <idle>-0     [004] d.s1   470.867175: update_ts_time_stats <-tick_nohz_stop_idle
1297            <idle>-0     [004] d.s1   470.867175: nr_iowait_cpu <-update_ts_time_stats
1298            <idle>-0     [004] d.s1   470.867175: tick_do_update_jiffies64 <-tick_check_idle
1299            <idle>-0     [004] d.s1   470.867175: _raw_spin_lock <-tick_do_update_jiffies64
1300            <idle>-0     [004] d.s1   470.867176: add_preempt_count <-_raw_spin_lock
1301            <idle>-0     [004] d.s2   470.867176: do_timer <-tick_do_update_jiffies64
1302            <idle>-0     [004] d.s2   470.867176: _raw_spin_lock <-do_timer
1303            <idle>-0     [004] d.s2   470.867176: add_preempt_count <-_raw_spin_lock
1304            <idle>-0     [004] d.s3   470.867177: ntp_tick_length <-do_timer
1305            <idle>-0     [004] d.s3   470.867177: _raw_spin_lock_irqsave <-ntp_tick_length
1306            .
1307            .
1308            .
1309
1310Each line in the trace above shows what was happening in the kernel on a given
1311cpu, to the level of detail of function calls. Each entry shows the function
1312called, followed by its caller (after the arrow).
1313
1314The function tracer gives you an extremely detailed idea of what the
1315kernel was doing at the point in time the trace was taken, and is a
1316great way to learn about how the kernel code works in a dynamic sense.
1317
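By default the function tracer records every function call in the
kernel, which can be overwhelming. If you only care about a subset of
functions, you can restrict the tracer to functions matching a pattern
by writing to the 'set_ftrace_filter' file shown in the earlier listing
(a sketch --- the pattern is just an example; an empty echo clears the
filter again)::

   root@sugarbay:/sys/kernel/debug/tracing# echo 'tcp_*' > set_ftrace_filter
   root@sugarbay:/sys/kernel/debug/tracing# echo > set_ftrace_filter
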
1318.. admonition:: Tying it Together
1319
1320   The ftrace function tracer is also available from within perf, as the
1321   ftrace:function tracepoint.
1322
1323It is a little more difficult to follow the call chains than it needs to
1324be --- luckily there's a variant of the function tracer that displays the
1325callchains explicitly, called the 'function_graph' tracer::
1326
1327   root@sugarbay:/sys/kernel/debug/tracing# echo function_graph > current_tracer
1328   root@sugarbay:/sys/kernel/debug/tracing# cat trace | less
1329
1330    tracer: function_graph
1331
1332    CPU  DURATION                  FUNCTION CALLS
1333    |     |   |                     |   |   |   |
1334   7)   0.046 us    |      pick_next_task_fair();
1335   7)   0.043 us    |      pick_next_task_stop();
1336   7)   0.042 us    |      pick_next_task_rt();
1337   7)   0.032 us    |      pick_next_task_fair();
1338   7)   0.030 us    |      pick_next_task_idle();
1339   7)               |      _raw_spin_unlock_irq() {
1340   7)   0.033 us    |        sub_preempt_count();
1341   7)   0.258 us    |      }
1342   7)   0.032 us    |      sub_preempt_count();
1343   7) + 13.341 us   |    } /* __schedule */
1344   7)   0.095 us    |  } /* sub_preempt_count */
1345   7)               |  schedule() {
1346   7)               |    __schedule() {
1347   7)   0.060 us    |      add_preempt_count();
1348   7)   0.044 us    |      rcu_note_context_switch();
1349   7)               |      _raw_spin_lock_irq() {
1350   7)   0.033 us    |        add_preempt_count();
1351   7)   0.247 us    |      }
1352   7)               |      idle_balance() {
1353   7)               |        _raw_spin_unlock() {
1354   7)   0.031 us    |          sub_preempt_count();
1355   7)   0.246 us    |        }
1356   7)               |        update_shares() {
1357   7)   0.030 us    |          __rcu_read_lock();
1358   7)   0.029 us    |          __rcu_read_unlock();
1359   7)   0.484 us    |        }
1360   7)   0.030 us    |        __rcu_read_lock();
1361   7)               |        load_balance() {
1362   7)               |          find_busiest_group() {
1363   7)   0.031 us    |            idle_cpu();
1364   7)   0.029 us    |            idle_cpu();
1365   7)   0.035 us    |            idle_cpu();
1366   7)   0.906 us    |          }
1367   7)   1.141 us    |        }
1368   7)   0.022 us    |        msecs_to_jiffies();
1369   7)               |        load_balance() {
1370   7)               |          find_busiest_group() {
1371   7)   0.031 us    |            idle_cpu();
1372   .
1373   .
1374   .
1375   4)   0.062 us    |        msecs_to_jiffies();
1376   4)   0.062 us    |        __rcu_read_unlock();
1377   4)               |        _raw_spin_lock() {
1378   4)   0.073 us    |          add_preempt_count();
1379   4)   0.562 us    |        }
1380   4) + 17.452 us   |      }
1381   4)   0.108 us    |      put_prev_task_fair();
1382   4)   0.102 us    |      pick_next_task_fair();
1383   4)   0.084 us    |      pick_next_task_stop();
1384   4)   0.075 us    |      pick_next_task_rt();
1385   4)   0.062 us    |      pick_next_task_fair();
1386   4)   0.066 us    |      pick_next_task_idle();
1387   ------------------------------------------
1388   4)   kworker-74   =>    <idle>-0
1389   ------------------------------------------
1390
1391   4)               |      finish_task_switch() {
1392   4)               |        _raw_spin_unlock_irq() {
1393   4)   0.100 us    |          sub_preempt_count();
1394   4)   0.582 us    |        }
1395   4)   1.105 us    |      }
1396   4)   0.088 us    |      sub_preempt_count();
1397   4) ! 100.066 us  |    }
1398   .
1399   .
1400   .
1401   3)               |  sys_ioctl() {
1402   3)   0.083 us    |    fget_light();
1403   3)               |    security_file_ioctl() {
1404   3)   0.066 us    |      cap_file_ioctl();
1405   3)   0.562 us    |    }
1406   3)               |    do_vfs_ioctl() {
1407   3)               |      drm_ioctl() {
1408   3)   0.075 us    |        drm_ut_debug_printk();
1409   3)               |        i915_gem_pwrite_ioctl() {
1410   3)               |          i915_mutex_lock_interruptible() {
1411   3)   0.070 us    |            mutex_lock_interruptible();
1412   3)   0.570 us    |          }
1413   3)               |          drm_gem_object_lookup() {
1414   3)               |            _raw_spin_lock() {
1415   3)   0.080 us    |              add_preempt_count();
1416   3)   0.620 us    |            }
1417   3)               |            _raw_spin_unlock() {
1418   3)   0.085 us    |              sub_preempt_count();
1419   3)   0.562 us    |            }
1420   3)   2.149 us    |          }
1421   3)   0.133 us    |          i915_gem_object_pin();
1422   3)               |          i915_gem_object_set_to_gtt_domain() {
1423   3)   0.065 us    |            i915_gem_object_flush_gpu_write_domain();
1424   3)   0.065 us    |            i915_gem_object_wait_rendering();
1425   3)   0.062 us    |            i915_gem_object_flush_cpu_write_domain();
1426   3)   1.612 us    |          }
1427   3)               |          i915_gem_object_put_fence() {
1428   3)   0.097 us    |            i915_gem_object_flush_fence.constprop.36();
1429   3)   0.645 us    |          }
1430   3)   0.070 us    |          add_preempt_count();
1431   3)   0.070 us    |          sub_preempt_count();
1432   3)   0.073 us    |          i915_gem_object_unpin();
1433   3)   0.068 us    |          mutex_unlock();
1434   3)   9.924 us    |        }
1435   3) + 11.236 us   |      }
1436   3) + 11.770 us   |    }
1437   3) + 13.784 us   |  }
1438   3)               |  sys_ioctl() {
1439
1440As you can see, the function_graph display is much easier
1441to follow. Also note that in addition to the function calls and
1442associated braces, other events such as scheduler events are displayed
1443in context. In fact, you can freely include any tracepoint available in
1444the trace events subsystem described in the next section by simply
1445enabling those events, and they'll appear in context in the function
1446graph display. Quite a powerful tool for understanding kernel dynamics.
1447
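For example, to see scheduler wakeups interleaved with the function
graph output, you could enable that single tracepoint while the
function_graph tracer is active (a sketch; the 'events' directory is
described in the next section)::

   root@sugarbay:/sys/kernel/debug/tracing# echo 1 > events/sched/sched_wakeup/enable
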
Also notice that there are various annotations on the left hand side of
the display. For example, if the total time it took for a given function
1450to execute is above a certain threshold, an exclamation point or plus
1451sign appears on the left hand side. Please see the ftrace documentation
1452for details on all these fields.
1453
1454The 'trace events' Subsystem
1455----------------------------
1456
1457One especially important directory contained within the
1458/sys/kernel/debug/tracing directory is the 'events' subdirectory, which
1459contains representations of every tracepoint in the system. Listing out
1460the contents of the 'events' subdirectory, we see mainly another set of
1461subdirectories::
1462
1463   root@sugarbay:/sys/kernel/debug/tracing# cd events
1464   root@sugarbay:/sys/kernel/debug/tracing/events# ls -al
1465   drwxr-xr-x   38 root     root             0 Nov 14 23:19 .
1466   drwxr-xr-x    5 root     root             0 Nov 14 23:19 ..
1467   drwxr-xr-x   19 root     root             0 Nov 14 23:19 block
1468   drwxr-xr-x   32 root     root             0 Nov 14 23:19 btrfs
1469   drwxr-xr-x    5 root     root             0 Nov 14 23:19 drm
1470   -rw-r--r--    1 root     root             0 Nov 14 23:19 enable
1471   drwxr-xr-x   40 root     root             0 Nov 14 23:19 ext3
1472   drwxr-xr-x   79 root     root             0 Nov 14 23:19 ext4
1473   drwxr-xr-x   14 root     root             0 Nov 14 23:19 ftrace
1474   drwxr-xr-x    8 root     root             0 Nov 14 23:19 hda
1475   -r--r--r--    1 root     root             0 Nov 14 23:19 header_event
1476   -r--r--r--    1 root     root             0 Nov 14 23:19 header_page
1477   drwxr-xr-x   25 root     root             0 Nov 14 23:19 i915
1478   drwxr-xr-x    7 root     root             0 Nov 14 23:19 irq
1479   drwxr-xr-x   12 root     root             0 Nov 14 23:19 jbd
1480   drwxr-xr-x   14 root     root             0 Nov 14 23:19 jbd2
1481   drwxr-xr-x   14 root     root             0 Nov 14 23:19 kmem
1482   drwxr-xr-x    7 root     root             0 Nov 14 23:19 module
1483   drwxr-xr-x    3 root     root             0 Nov 14 23:19 napi
1484   drwxr-xr-x    6 root     root             0 Nov 14 23:19 net
1485   drwxr-xr-x    3 root     root             0 Nov 14 23:19 oom
1486   drwxr-xr-x   12 root     root             0 Nov 14 23:19 power
1487   drwxr-xr-x    3 root     root             0 Nov 14 23:19 printk
1488   drwxr-xr-x    8 root     root             0 Nov 14 23:19 random
1489   drwxr-xr-x    4 root     root             0 Nov 14 23:19 raw_syscalls
1490   drwxr-xr-x    3 root     root             0 Nov 14 23:19 rcu
1491   drwxr-xr-x    6 root     root             0 Nov 14 23:19 rpm
1492   drwxr-xr-x   20 root     root             0 Nov 14 23:19 sched
1493   drwxr-xr-x    7 root     root             0 Nov 14 23:19 scsi
1494   drwxr-xr-x    4 root     root             0 Nov 14 23:19 signal
1495   drwxr-xr-x    5 root     root             0 Nov 14 23:19 skb
1496   drwxr-xr-x    4 root     root             0 Nov 14 23:19 sock
1497   drwxr-xr-x   10 root     root             0 Nov 14 23:19 sunrpc
1498   drwxr-xr-x  538 root     root             0 Nov 14 23:19 syscalls
1499   drwxr-xr-x    4 root     root             0 Nov 14 23:19 task
1500   drwxr-xr-x   14 root     root             0 Nov 14 23:19 timer
1501   drwxr-xr-x    3 root     root             0 Nov 14 23:19 udp
1502   drwxr-xr-x   21 root     root             0 Nov 14 23:19 vmscan
1503   drwxr-xr-x    3 root     root             0 Nov 14 23:19 vsyscall
1504   drwxr-xr-x    6 root     root             0 Nov 14 23:19 workqueue
1505   drwxr-xr-x   26 root     root             0 Nov 14 23:19 writeback
1506
1507Each one of these subdirectories
1508corresponds to a 'subsystem' and contains yet again more subdirectories,
1509each one of those finally corresponding to a tracepoint. For example,
1510here are the contents of the 'kmem' subsystem::
1511
1512   root@sugarbay:/sys/kernel/debug/tracing/events# cd kmem
1513   root@sugarbay:/sys/kernel/debug/tracing/events/kmem# ls -al
1514   drwxr-xr-x   14 root     root             0 Nov 14 23:19 .
1515   drwxr-xr-x   38 root     root             0 Nov 14 23:19 ..
1516   -rw-r--r--    1 root     root             0 Nov 14 23:19 enable
1517   -rw-r--r--    1 root     root             0 Nov 14 23:19 filter
1518   drwxr-xr-x    2 root     root             0 Nov 14 23:19 kfree
1519   drwxr-xr-x    2 root     root             0 Nov 14 23:19 kmalloc
1520   drwxr-xr-x    2 root     root             0 Nov 14 23:19 kmalloc_node
1521   drwxr-xr-x    2 root     root             0 Nov 14 23:19 kmem_cache_alloc
1522   drwxr-xr-x    2 root     root             0 Nov 14 23:19 kmem_cache_alloc_node
1523   drwxr-xr-x    2 root     root             0 Nov 14 23:19 kmem_cache_free
1524   drwxr-xr-x    2 root     root             0 Nov 14 23:19 mm_page_alloc
1525   drwxr-xr-x    2 root     root             0 Nov 14 23:19 mm_page_alloc_extfrag
1526   drwxr-xr-x    2 root     root             0 Nov 14 23:19 mm_page_alloc_zone_locked
1527   drwxr-xr-x    2 root     root             0 Nov 14 23:19 mm_page_free
1528   drwxr-xr-x    2 root     root             0 Nov 14 23:19 mm_page_free_batched
1529   drwxr-xr-x    2 root     root             0 Nov 14 23:19 mm_page_pcpu_drain
1530
1531Let's see what's inside the subdirectory for a
1532specific tracepoint, in this case the one for kmalloc::
1533
1534   root@sugarbay:/sys/kernel/debug/tracing/events/kmem# cd kmalloc
1535   root@sugarbay:/sys/kernel/debug/tracing/events/kmem/kmalloc# ls -al
1536   drwxr-xr-x    2 root     root             0 Nov 14 23:19 .
1537   drwxr-xr-x   14 root     root             0 Nov 14 23:19 ..
1538   -rw-r--r--    1 root     root             0 Nov 14 23:19 enable
1539   -rw-r--r--    1 root     root             0 Nov 14 23:19 filter
1540   -r--r--r--    1 root     root             0 Nov 14 23:19 format
1541   -r--r--r--    1 root     root             0 Nov 14 23:19 id
1542
The 'format' file for the
tracepoint describes the layout of the event in memory, which is used
by the various tracing tools that now make use of these tracepoints to
parse the event and make sense of it, along with a 'print fmt' field
that allows tools like ftrace to display the event as text. Here's what
the format of the kmalloc event looks like::
1549
1550   root@sugarbay:/sys/kernel/debug/tracing/events/kmem/kmalloc# cat format
1551   name: kmalloc
1552   ID: 313
1553   format:
1554           field:unsigned short common_type;	offset:0;	size:2;	signed:0;
1555           field:unsigned char common_flags;	offset:2;	size:1;	signed:0;
1556           field:unsigned char common_preempt_count;	offset:3;	size:1;	signed:0;
1557           field:int common_pid;	offset:4;	size:4;	signed:1;
1558           field:int common_padding;	offset:8;	size:4;	signed:1;
1559
1560           field:unsigned long call_site;	offset:16;	size:8;	signed:0;
1561           field:const void * ptr;	offset:24;	size:8;	signed:0;
1562           field:size_t bytes_req;	offset:32;	size:8;	signed:0;
1563           field:size_t bytes_alloc;	offset:40;	size:8;	signed:0;
1564           field:gfp_t gfp_flags;	offset:48;	size:4;	signed:0;
1565
1566   print fmt: "call_site=%lx ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s", REC->call_site, REC->ptr, REC->bytes_req, REC->bytes_alloc,
1567   (REC->gfp_flags) ? __print_flags(REC->gfp_flags, "|", {(unsigned long)(((( gfp_t)0x10u) | (( gfp_t)0x40u) | (( gfp_t)0x80u) | ((
1568   gfp_t)0x20000u) | (( gfp_t)0x02u) | (( gfp_t)0x08u)) | (( gfp_t)0x4000u) | (( gfp_t)0x10000u) | (( gfp_t)0x1000u) | (( gfp_t)0x200u) | ((
1569   gfp_t)0x400000u)), "GFP_TRANSHUGE"}, {(unsigned long)((( gfp_t)0x10u) | (( gfp_t)0x40u) | (( gfp_t)0x80u) | (( gfp_t)0x20000u) | ((
1570   gfp_t)0x02u) | (( gfp_t)0x08u)), "GFP_HIGHUSER_MOVABLE"}, {(unsigned long)((( gfp_t)0x10u) | (( gfp_t)0x40u) | (( gfp_t)0x80u) | ((
1571   gfp_t)0x20000u) | (( gfp_t)0x02u)), "GFP_HIGHUSER"}, {(unsigned long)((( gfp_t)0x10u) | (( gfp_t)0x40u) | (( gfp_t)0x80u) | ((
1572   gfp_t)0x20000u)), "GFP_USER"}, {(unsigned long)((( gfp_t)0x10u) | (( gfp_t)0x40u) | (( gfp_t)0x80u) | (( gfp_t)0x80000u)), GFP_TEMPORARY"},
1573   {(unsigned long)((( gfp_t)0x10u) | (( gfp_t)0x40u) | (( gfp_t)0x80u)), "GFP_KERNEL"}, {(unsigned long)((( gfp_t)0x10u) | (( gfp_t)0x40u)),
1574   "GFP_NOFS"}, {(unsigned long)((( gfp_t)0x20u)), "GFP_ATOMIC"}, {(unsigned long)((( gfp_t)0x10u)), "GFP_NOIO"}, {(unsigned long)((
1575   gfp_t)0x20u), "GFP_HIGH"}, {(unsigned long)(( gfp_t)0x10u), "GFP_WAIT"}, {(unsigned long)(( gfp_t)0x40u), "GFP_IO"}, {(unsigned long)((
1576   gfp_t)0x100u), "GFP_COLD"}, {(unsigned long)(( gfp_t)0x200u), "GFP_NOWARN"}, {(unsigned long)(( gfp_t)0x400u), "GFP_REPEAT"}, {(unsigned
1577   long)(( gfp_t)0x800u), "GFP_NOFAIL"}, {(unsigned long)(( gfp_t)0x1000u), "GFP_NORETRY"},      {(unsigned long)(( gfp_t)0x4000u), "GFP_COMP"},
1578   {(unsigned long)(( gfp_t)0x8000u), "GFP_ZERO"}, {(unsigned long)(( gfp_t)0x10000u), "GFP_NOMEMALLOC"}, {(unsigned long)(( gfp_t)0x20000u),
1579   "GFP_HARDWALL"}, {(unsigned long)(( gfp_t)0x40000u), "GFP_THISNODE"}, {(unsigned long)(( gfp_t)0x80000u), "GFP_RECLAIMABLE"}, {(unsigned
1580   long)(( gfp_t)0x08u), "GFP_MOVABLE"}, {(unsigned long)(( gfp_t)0), "GFP_NOTRACK"}, {(unsigned long)(( gfp_t)0x400000u), "GFP_NO_KSWAPD"},
1581   {(unsigned long)(( gfp_t)0x800000u), "GFP_OTHER_NODE"} ) : "GFP_NOWAIT"
1582
1583The 'enable' file
1584in the tracepoint directory is what allows the user (or tools such as
1585trace-cmd) to actually turn the tracepoint on and off. When enabled, the
1586corresponding tracepoint will start appearing in the ftrace 'trace' file
1587described previously. For example, this turns on the kmalloc tracepoint::
1588
1589   root@sugarbay:/sys/kernel/debug/tracing/events/kmem/kmalloc# echo 1 > enable
1590
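The 'filter' file sitting alongside 'enable' accepts the same filter
expressions we used with perf earlier. For example, to record only
kmalloc events for requests larger than 256 bytes, you could do
something like the following (a sketch --- the threshold is arbitrary,
and writing '0' to the file clears the filter again)::

   root@sugarbay:/sys/kernel/debug/tracing/events/kmem/kmalloc# echo 'bytes_req > 256' > filter
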
At the moment, we're not interested in the function tracer or any
other tracer that might be in effect, so we first turn it off by
selecting the 'nop' tracer; having done that, we still need to turn
tracing on in order to see the events in the output buffer::
1595
1596   root@sugarbay:/sys/kernel/debug/tracing# echo nop > current_tracer
1597   root@sugarbay:/sys/kernel/debug/tracing# echo 1 > tracing_on
1598
1599Now, if we look at the 'trace' file, we see nothing
1600but the kmalloc events we just turned on::
1601
1602   root@sugarbay:/sys/kernel/debug/tracing# cat trace | less
1603   # tracer: nop
1604   #
1605   # entries-in-buffer/entries-written: 1897/1897   #P:8
1606   #
1607   #                              _-----=> irqs-off
1608   #                             / _----=> need-resched
1609   #                            | / _---=> hardirq/softirq
1610   #                            || / _--=> preempt-depth
1611   #                            ||| /     delay
1612   #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
1613   #              | |       |   ||||       |         |
1614          dropbear-1465  [000] ...1 18154.620753: kmalloc: call_site=ffffffff816650d4 ptr=ffff8800729c3000 bytes_req=2048 bytes_alloc=2048 gfp_flags=GFP_KERNEL
1615            <idle>-0     [000] ..s3 18154.621640: kmalloc: call_site=ffffffff81619b36 ptr=ffff88006d555800 bytes_req=512 bytes_alloc=512 gfp_flags=GFP_ATOMIC
1616            <idle>-0     [000] ..s3 18154.621656: kmalloc: call_site=ffffffff81619b36 ptr=ffff88006d555800 bytes_req=512 bytes_alloc=512 gfp_flags=GFP_ATOMIC
1617   matchbox-termin-1361  [001] ...1 18154.755472: kmalloc: call_site=ffffffff81614050 ptr=ffff88006d5f0e00 bytes_req=512 bytes_alloc=512 gfp_flags=GFP_KERNEL|GFP_REPEAT
1618              Xorg-1264  [002] ...1 18154.755581: kmalloc: call_site=ffffffff8141abe8 ptr=ffff8800734f4cc0 bytes_req=168 bytes_alloc=192 gfp_flags=GFP_KERNEL|GFP_NOWARN|GFP_NORETRY
1619              Xorg-1264  [002] ...1 18154.755583: kmalloc: call_site=ffffffff814192a3 ptr=ffff88001f822520 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL|GFP_ZERO
1620              Xorg-1264  [002] ...1 18154.755589: kmalloc: call_site=ffffffff81419edb ptr=ffff8800721a2f00 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_KERNEL|GFP_ZERO
1621   matchbox-termin-1361  [001] ...1 18155.354594: kmalloc: call_site=ffffffff81614050 ptr=ffff88006db35400 bytes_req=576 bytes_alloc=1024 gfp_flags=GFP_KERNEL|GFP_REPEAT
1622              Xorg-1264  [002] ...1 18155.354703: kmalloc: call_site=ffffffff8141abe8 ptr=ffff8800734f4cc0 bytes_req=168 bytes_alloc=192 gfp_flags=GFP_KERNEL|GFP_NOWARN|GFP_NORETRY
1623              Xorg-1264  [002] ...1 18155.354705: kmalloc: call_site=ffffffff814192a3 ptr=ffff88001f822520 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL|GFP_ZERO
1624              Xorg-1264  [002] ...1 18155.354711: kmalloc: call_site=ffffffff81419edb ptr=ffff8800721a2f00 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_KERNEL|GFP_ZERO
1625            <idle>-0     [000] ..s3 18155.673319: kmalloc: call_site=ffffffff81619b36 ptr=ffff88006d555800 bytes_req=512 bytes_alloc=512 gfp_flags=GFP_ATOMIC
1626          dropbear-1465  [000] ...1 18155.673525: kmalloc: call_site=ffffffff816650d4 ptr=ffff8800729c3000 bytes_req=2048 bytes_alloc=2048 gfp_flags=GFP_KERNEL
1627            <idle>-0     [000] ..s3 18155.674821: kmalloc: call_site=ffffffff81619b36 ptr=ffff88006d554800 bytes_req=512 bytes_alloc=512 gfp_flags=GFP_ATOMIC
1628            <idle>-0     [000] ..s3 18155.793014: kmalloc: call_site=ffffffff81619b36 ptr=ffff88006d554800 bytes_req=512 bytes_alloc=512 gfp_flags=GFP_ATOMIC
1629          dropbear-1465  [000] ...1 18155.793219: kmalloc: call_site=ffffffff816650d4 ptr=ffff8800729c3000 bytes_req=2048 bytes_alloc=2048 gfp_flags=GFP_KERNEL
1630            <idle>-0     [000] ..s3 18155.794147: kmalloc: call_site=ffffffff81619b36 ptr=ffff88006d555800 bytes_req=512 bytes_alloc=512 gfp_flags=GFP_ATOMIC
1631            <idle>-0     [000] ..s3 18155.936705: kmalloc: call_site=ffffffff81619b36 ptr=ffff88006d555800 bytes_req=512 bytes_alloc=512 gfp_flags=GFP_ATOMIC
1632          dropbear-1465  [000] ...1 18155.936910: kmalloc: call_site=ffffffff816650d4 ptr=ffff8800729c3000 bytes_req=2048 bytes_alloc=2048 gfp_flags=GFP_KERNEL
1633            <idle>-0     [000] ..s3 18155.937869: kmalloc: call_site=ffffffff81619b36 ptr=ffff88006d554800 bytes_req=512 bytes_alloc=512 gfp_flags=GFP_ATOMIC
1634   matchbox-termin-1361  [001] ...1 18155.953667: kmalloc: call_site=ffffffff81614050 ptr=ffff88006d5f2000 bytes_req=512 bytes_alloc=512 gfp_flags=GFP_KERNEL|GFP_REPEAT
1635              Xorg-1264  [002] ...1 18155.953775: kmalloc: call_site=ffffffff8141abe8 ptr=ffff8800734f4cc0 bytes_req=168 bytes_alloc=192 gfp_flags=GFP_KERNEL|GFP_NOWARN|GFP_NORETRY
1636              Xorg-1264  [002] ...1 18155.953777: kmalloc: call_site=ffffffff814192a3 ptr=ffff88001f822520 bytes_req=24 bytes_alloc=32 gfp_flags=GFP_KERNEL|GFP_ZERO
1637              Xorg-1264  [002] ...1 18155.953783: kmalloc: call_site=ffffffff81419edb ptr=ffff8800721a2f00 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_KERNEL|GFP_ZERO
1638            <idle>-0     [000] ..s3 18156.176053: kmalloc: call_site=ffffffff81619b36 ptr=ffff88006d554800 bytes_req=512 bytes_alloc=512 gfp_flags=GFP_ATOMIC
1639          dropbear-1465  [000] ...1 18156.176257: kmalloc: call_site=ffffffff816650d4 ptr=ffff8800729c3000 bytes_req=2048 bytes_alloc=2048 gfp_flags=GFP_KERNEL
1640            <idle>-0     [000] ..s3 18156.177717: kmalloc: call_site=ffffffff81619b36 ptr=ffff88006d555800 bytes_req=512 bytes_alloc=512 gfp_flags=GFP_ATOMIC
1641            <idle>-0     [000] ..s3 18156.399229: kmalloc: call_site=ffffffff81619b36 ptr=ffff88006d555800 bytes_req=512 bytes_alloc=512 gfp_flags=GFP_ATOMIC
          dropbear-1465  [000] ...1 18156.399434: kmalloc: call_site=ffffffff816650d4 ptr=ffff8800729c3000 bytes_req=2048 bytes_alloc=2048 gfp_flags=GFP_KERNEL
1643            <idle>-0     [000] ..s3 18156.400660: kmalloc: call_site=ffffffff81619b36 ptr=ffff88006d554800 bytes_req=512 bytes_alloc=512 gfp_flags=GFP_ATOMIC
1644   matchbox-termin-1361  [001] ...1 18156.552800: kmalloc: call_site=ffffffff81614050 ptr=ffff88006db34800 bytes_req=576 bytes_alloc=1024 gfp_flags=GFP_KERNEL|GFP_REPEAT
1645
To disable the kmalloc event again, we need to send 0 to the enable file::
1647
1648   root@sugarbay:/sys/kernel/debug/tracing/events/kmem/kmalloc# echo 0 > enable
1649
1650You can enable any number of events or complete subsystems (by
1651using the 'enable' file in the subsystem directory) and get an
1652arbitrarily fine-grained idea of what's going on in the system by
1653enabling as many of the appropriate tracepoints as applicable.
1654
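For example, to enable every event in the 'sched' subsystem at once,
you could write to the subsystem-level 'enable' file (a sketch)::

   root@sugarbay:/sys/kernel/debug/tracing# echo 1 > events/sched/enable
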
1655A number of the tools described in this HOWTO do just that, including
1656trace-cmd and kernelshark in the next section.
1657
1658.. admonition:: Tying it Together
1659
   These tracepoints and their representation are used not only by
   ftrace, but by many of the other tools covered in this document, and
   they form a central point of integration for the various tracers
   available in Linux. They form a core part of the instrumentation for
   the following tools: perf, lttng, ftrace, blktrace and SystemTap.
1665
1666.. admonition:: Tying it Together
1667
1668   Eventually all the special-purpose tracers currently available in
1669   /sys/kernel/debug/tracing will be removed and replaced with
1670   equivalent tracers based on the 'trace events' subsystem.
1671
1672trace-cmd/kernelshark
1673---------------------
1674
trace-cmd is essentially an extensive command-line 'wrapper' interface
that hides the details of all the individual files in
/sys/kernel/debug/tracing, allowing users to specify particular events
within the /sys/kernel/debug/tracing/events/ subdirectory and to
collect traces without having to deal with those details directly.
1680
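For example, a simple command-line session that records a couple of
events system-wide for ten seconds and then displays the result might
look something like this (a sketch --- the event names are
illustrative)::

   root@sugarbay:~# trace-cmd record -e sched:sched_switch -e kmem:kmalloc sleep 10
   root@sugarbay:~# trace-cmd report | less
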
1681As yet another layer on top of that, kernelshark provides a GUI that
1682allows users to start and stop traces and specify sets of events using
1683an intuitive interface, and view the output as both trace events and as
1684a per-CPU graphical display. It directly uses 'trace-cmd' as the
1685plumbing that accomplishes all that underneath the covers (and actually
1686displays the trace-cmd command it uses, as we'll see).
1687
1688To start a trace using kernelshark, first start kernelshark::
1689
1690   root@sugarbay:~# kernelshark
1691
1692Then bring up the 'Capture' dialog by
1693choosing from the kernelshark menu::
1694
1695   Capture | Record
1696
1697That will display the following dialog, which allows you to choose one or more
1698events (or even one or more complete subsystems) to trace:
1699
1700.. image:: figures/kernelshark-choose-events.png
1701   :align: center
1702   :width: 70%
1703
Note that these are exactly the same sets of events described in the
previous trace events subsystem section, and in fact that's where
trace-cmd gets them for kernelshark.
1707
1708In the above screenshot, we've decided to explore the graphics subsystem
1709a bit and so have chosen to trace all the tracepoints contained within
1710the 'i915' and 'drm' subsystems.
1711
After doing that, we can start and stop the trace using the 'Run' and
'Stop' buttons in the lower right corner of the dialog (the 'Run' button
turns into the 'Stop' button after the trace has started):
1715
1716.. image:: figures/kernelshark-output-display.png
1717   :align: center
1718   :width: 70%
1719
1720Notice that the right-hand pane shows the exact trace-cmd command-line
1721that's used to run the trace, along with the results of the trace-cmd
1722run.
1723
1724Once the 'Stop' button is pressed, the graphical view magically fills up
1725with a colorful per-cpu display of the trace data, along with the
1726detailed event listing below that:
1727
1728.. image:: figures/kernelshark-i915-display.png
1729   :align: center
1730   :width: 70%
1731
1732Here's another example, this time a display resulting from tracing 'all
1733events':
1734
1735.. image:: figures/kernelshark-all.png
1736   :align: center
1737   :width: 70%
1738
1739The tool is pretty self-explanatory, but for more detailed information
1740on navigating through the data, see the `kernelshark
1741website <https://rostedt.homelinux.com/kernelshark/>`__.
1742
1743ftrace Documentation
1744--------------------
1745
1746The documentation for ftrace can be found in the kernel Documentation
1747directory::
1748
1749   Documentation/trace/ftrace.txt
1750
1751The documentation for the trace event subsystem can also be found in the kernel
1752Documentation directory::
1753
1754   Documentation/trace/events.txt
1755
1756There is a nice series of articles on using ftrace and trace-cmd at LWN:
1757
1758-  `Debugging the kernel using Ftrace - part
1759   1 <https://lwn.net/Articles/365835/>`__
1760
1761-  `Debugging the kernel using Ftrace - part
1762   2 <https://lwn.net/Articles/366796/>`__
1763
1764-  `Secrets of the Ftrace function
1765   tracer <https://lwn.net/Articles/370423/>`__
1766
1767-  `trace-cmd: A front-end for
1768   Ftrace <https://lwn.net/Articles/410200/>`__
1769
There's more detailed documentation on kernelshark usage here:
1771`KernelShark <https://rostedt.homelinux.com/kernelshark/>`__
1772
1773An amusing yet useful README (a tracing mini-HOWTO) can be found in
1774``/sys/kernel/debug/tracing/README``.
1775
1776systemtap
1777=========
1778
1779SystemTap is a system-wide script-based tracing and profiling tool.
1780
1781SystemTap scripts are C-like programs that are executed in the kernel to
1782gather/print/aggregate data extracted from the context they end up being
1783invoked under.
1784
1785For example, this probe from the `SystemTap
1786tutorial <https://sourceware.org/systemtap/tutorial/>`__ simply prints a
1787line every time any process on the system open()s a file. For each line,
1788it prints the executable name of the program that opened the file, along
1789with its PID, and the name of the file it opened (or tried to open),
1790which it extracts from the open syscall's argstr.
1791
1792.. code-block:: none
1793
1794   probe syscall.open
1795   {
1796           printf ("%s(%d) open (%s)\n", execname(), pid(), argstr)
1797   }
1798
1799   probe timer.ms(4000) # after 4 seconds
1800   {
1801           exit ()
1802   }
1803
1804Normally, to execute this
1805probe, you'd simply install systemtap on the system you want to probe,
1806and directly run the probe on that system e.g. assuming the name of the
1807file containing the above text is trace_open.stp::
1808
1809   # stap trace_open.stp
1810
1811What systemtap does under the covers to run this probe is 1) parse and
1812convert the probe to an equivalent 'C' form, 2) compile the 'C' form
1813into a kernel module, 3) insert the module into the kernel, which arms
1814it, and 4) collect the data generated by the probe and display it to the
1815user.
1816
1817In order to accomplish steps 1 and 2, the 'stap' program needs access to
1818the kernel build system that produced the kernel that the probed system
1819is running. In the case of a typical embedded system (the 'target'), the
1820kernel build system unfortunately isn't typically part of the image
1821running on the target. It is normally available on the 'host' system
1822that produced the target image however; in such cases, steps 1 and 2 are
1823executed on the host system, and steps 3 and 4 are executed on the
1824target system, using only the systemtap 'runtime'.
1825
1826The systemtap support in Yocto assumes that only steps 3 and 4 are run
1827on the target; it is possible to do everything on the target, but this
1828section assumes only the typical embedded use-case.
1829
So basically what you need to do in order to run a systemtap script on
the target is to 1) on the host system, compile the probe into a kernel
module that makes sense to the target, 2) copy the module onto the
target system, 3) insert the module into the target kernel, which arms
it, and 4) collect the data generated by the probe and display it to
the user.
1836
1837systemtap Setup
1838---------------
1839
1840Those are a lot of steps and a lot of details, but fortunately Yocto
1841includes a script called 'crosstap' that will take care of those
1842details, allowing you to simply execute a systemtap script on the remote
1843target, with arguments if necessary.
1844
1845In order to do this from a remote host, however, you need to have access
1846to the build for the image you booted. The 'crosstap' script provides
1847details on how to do this if you run the script on the host without
1848having done a build::
1849
1850   $ crosstap root@192.168.1.88 trace_open.stp
1851
1852   Error: No target kernel build found.
1853   Did you forget to create a local build of your image?
1854
1855   'crosstap' requires a local sdk build of the target system
1856   (or a build that includes 'tools-profile') in order to build
1857   kernel modules that can probe the target system.
1858
1859   Practically speaking, that means you need to do the following:
1860    - If you're running a pre-built image, download the release
1861      and/or BSP tarballs used to build the image.
1862    - If you're working from git sources, just clone the metadata
1863      and BSP layers needed to build the image you'll be booting.
1864    - Make sure you're properly set up to build a new image (see
1865      the BSP README and/or the widely available basic documentation
1866      that discusses how to build images).
1867    - Build an -sdk version of the image e.g.:
1868        $ bitbake core-image-sato-sdk
1869    OR
1870    - Build a non-sdk image but include the profiling tools:
1871        [ edit local.conf and add 'tools-profile' to the end of
1872          the EXTRA_IMAGE_FEATURES variable ]
1873        $ bitbake core-image-sato
1874
1875   Once you've build the image on the host system, you're ready to
1876   boot it (or the equivalent pre-built image) and use 'crosstap'
1877   to probe it (you need to source the environment as usual first):
1878
1879      $ source oe-init-build-env
1880      $ cd ~/my/systemtap/scripts
1881      $ crosstap root@192.168.1.xxx myscript.stp
1882
1883.. note::
1884
   SystemTap, which uses 'crosstap', assumes you can establish an ssh
   connection to the remote target. Please refer to the crosstap wiki
   page for details on verifying ssh connections. Also, the ability to
   ssh into the target system is not enabled by default in \*-minimal
   images.
1890
1891So essentially what you need to
1892do is build an SDK image or image with 'tools-profile' as detailed in
1893the ":ref:`profile-manual/intro:General Setup`" section of this
1894manual, and boot the resulting target image.
1895
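For the 'tools-profile' route, that amounts to a single line in your
``local.conf`` (a sketch)::

   EXTRA_IMAGE_FEATURES += "tools-profile"
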
1896.. note::
1897
1898   If you have a build directory containing multiple machines, you need
1899   to have the MACHINE you're connecting to selected in local.conf, and
1900   the kernel in that machine's build directory must match the kernel on
1901   the booted system exactly, or you'll get the above 'crosstap' message
1902   when you try to invoke a script.
1903
1904Running a Script on a Target
1905----------------------------
1906
1907Once you've done that, you should be able to run a systemtap script on
1908the target::
1909
1910   $ cd /path/to/yocto
1911   $ source oe-init-build-env
1912
1913   ### Shell environment set up for builds. ###
1914
1915   You can now run 'bitbake <target>'
1916
1917   Common targets are:
1918            core-image-minimal
1919            core-image-sato
1920            meta-toolchain
1921            meta-ide-support
1922
1923   You can also run generated QEMU images with a command like 'runqemu qemux86-64'
1924
Once the environment is set up, you can cd to whatever directory
contains your scripts and use 'crosstap' to run the script::
1927
1928   $ cd /path/to/my/systemap/script
1929   $ crosstap root@192.168.7.2 trace_open.stp
1930
1931If you get an error connecting to the target e.g.::
1932
1933   $ crosstap root@192.168.7.2 trace_open.stp
1934   error establishing ssh connection on remote 'root@192.168.7.2'
1935
1936Try ssh'ing to the target and see what happens::
1937
1938   $ ssh root@192.168.7.2
1939
A lot of the time, connection
problems are due to specifying a wrong IP address or having a 'host key
verification error'.
1943
1944If everything worked as planned, you should see something like this
1945(enter the password when prompted, or press enter if it's set up to use
1946no password):
1947
1948.. code-block:: none
1949
1950   $ crosstap root@192.168.7.2 trace_open.stp
1951   root@192.168.7.2's password:
1952   matchbox-termin(1036) open ("/tmp/vte3FS2LW", O_RDWR|O_CREAT|O_EXCL|O_LARGEFILE, 0600)
1953   matchbox-termin(1036) open ("/tmp/vteJMC7LW", O_RDWR|O_CREAT|O_EXCL|O_LARGEFILE, 0600)
1954
1955systemtap Documentation
1956-----------------------
1957
1958The SystemTap language reference can be found here: `SystemTap Language
1959Reference <https://sourceware.org/systemtap/langref/>`__
1960
1961Links to other SystemTap documents, tutorials, and examples can be found
1962here: `SystemTap documentation
1963page <https://sourceware.org/systemtap/documentation.html>`__
1964
1965Sysprof
1966=======
1967
Sysprof is a very easy-to-use system-wide profiler that consists of a
single window with three panes and a few buttons which allow you to
start, stop, and view the profile from one place.
1971
1972Sysprof Setup
1973-------------
1974
1975For this section, we'll assume you've already performed the basic setup
1976outlined in the ":ref:`profile-manual/intro:General Setup`" section.
1977
Sysprof is a GUI-based application that runs on the target system. For
the rest of this document we assume you've ssh'ed from the host to the
target and will be running Sysprof there (you can use the '-X' option
to ssh and have the Sysprof GUI run on the target but display remotely
on the host if you want).
1983
1984Basic Sysprof Usage
1985-------------------
1986
1987To start profiling the system, you simply press the 'Start' button. To
1988stop profiling and to start viewing the profile data in one easy step,
1989press the 'Profile' button.
1990
1991Once you've pressed the profile button, the three panes will fill up
1992with profiling data:
1993
1994.. image:: figures/sysprof-copy-to-user.png
1995   :align: center
1996   :width: 70%
1997
1998The left pane shows a list of functions and processes. Selecting one of
1999those expands that function in the right pane, showing all its callees.
2000Note that this caller-oriented display is essentially the inverse of
2001perf's default callee-oriented callchain display.
2002
In the screenshot above, we're focusing on ``__copy_to_user_ll()``;
looking up the callchain, we can see that one of the callers of
``__copy_to_user_ll()`` is sys_read(), along with the complete callpath
between them. Notice that this is essentially a portion of the same
information we saw in the perf display shown in the perf section of
this page.
2008
2009.. image:: figures/sysprof-copy-from-user.png
2010   :align: center
2011   :width: 70%
2012
2013Similarly, the above is a snapshot of the Sysprof display of a
2014copy-from-user callchain.
2015
2016Finally, looking at the third Sysprof pane in the lower left, we can see
2017a list of all the callers of a particular function selected in the top
2018left pane. In this case, the lower pane is showing all the callers of
2019``__mark_inode_dirty``:
2020
2021.. image:: figures/sysprof-callers.png
2022   :align: center
2023   :width: 70%
2024
2025Double-clicking on one of those functions will in turn change the focus
2026to the selected function, and so on.
2027
2028.. admonition:: Tying it Together
2029
2030   If you like sysprof's 'caller-oriented' display, you may be able to
2031   approximate it in other tools as well. For example, 'perf report' has
2032   the -g (--call-graph) option that you can experiment with; one of the
2033   options is 'caller' for an inverted caller-based callgraph display.
2034
2035Sysprof Documentation
2036---------------------
2037
2038There doesn't seem to be any documentation for Sysprof, but maybe that's
2039because it's pretty self-explanatory. The Sysprof website, however, is
2040here: `Sysprof, System-wide Performance Profiler for
2041Linux <http://sysprof.com/>`__
2042
2043LTTng (Linux Trace Toolkit, next generation)
2044============================================
2045
2046LTTng Setup
2047-----------
2048
2049For this section, we'll assume you've already performed the basic setup
2050outlined in the ":ref:`profile-manual/intro:General Setup`" section.
2051LTTng is run on the target system by ssh'ing to it.
2052
2053Collecting and Viewing Traces
2054-----------------------------
2055
Once you've built and booted your image (you need to build the
core-image-sato-sdk image or use one of the other methods described in
the ":ref:`profile-manual/intro:General Setup`" section), you're ready
to start tracing.
2060
2061Collecting and viewing a trace on the target (inside a shell)
2062~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2063
2064First, from the host, ssh to the target::
2065
2066   $ ssh -l root 192.168.1.47
2067   The authenticity of host '192.168.1.47 (192.168.1.47)' can't be established.
2068   RSA key fingerprint is 23:bd:c8:b1:a8:71:52:00:ee:00:4f:64:9e:10:b9:7e.
2069   Are you sure you want to continue connecting (yes/no)? yes
2070   Warning: Permanently added '192.168.1.47' (RSA) to the list of known hosts.
2071   root@192.168.1.47's password:
2072
2073Once on the target, use these steps to create a trace::
2074
2075   root@crownbay:~# lttng create
2076   Spawning a session daemon
2077   Session auto-20121015-232120 created.
2078   Traces will be written in /home/root/lttng-traces/auto-20121015-232120
2079
2080Enable the events you want to trace (in this case all kernel events)::
2081
2082   root@crownbay:~# lttng enable-event --kernel --all
2083   All kernel events are enabled in channel channel0
2084
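If you don't want everything, you can instead enable just the events
you're interested in by name (a sketch --- the event names here are
only illustrative)::

   root@crownbay:~# lttng enable-event --kernel sched_switch,sched_wakeup
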
2085Start the trace::
2086
2087   root@crownbay:~# lttng start
2088   Tracing started for session auto-20121015-232120
2089
And then stop the trace after a while, or after running a particular
workload that you want to trace::
2092
2093   root@crownbay:~# lttng stop
2094   Tracing stopped for session auto-20121015-232120
2095
2096You can now view the trace in text form on the target::
2097
2098   root@crownbay:~# lttng view
2099   [23:21:56.989270399] (+?.?????????) sys_geteuid: { 1 }, { }
2100   [23:21:56.989278081] (+0.000007682) exit_syscall: { 1 }, { ret = 0 }
2101   [23:21:56.989286043] (+0.000007962) sys_pipe: { 1 }, { fildes = 0xB77B9E8C }
2102   [23:21:56.989321802] (+0.000035759) exit_syscall: { 1 }, { ret = 0 }
2103   [23:21:56.989329345] (+0.000007543) sys_mmap_pgoff: { 1 }, { addr = 0x0, len = 10485760, prot = 3, flags = 131362, fd = 4294967295, pgoff = 0 }
2104   [23:21:56.989351694] (+0.000022349) exit_syscall: { 1 }, { ret = -1247805440 }
2105   [23:21:56.989432989] (+0.000081295) sys_clone: { 1 }, { clone_flags = 0x411, newsp = 0xB5EFFFE4, parent_tid = 0xFFFFFFFF, child_tid = 0x0 }
2106   [23:21:56.989477129] (+0.000044140) sched_stat_runtime: { 1 }, { comm = "lttng-consumerd", tid = 1193, runtime = 681660, vruntime = 43367983388 }
2107   [23:21:56.989486697] (+0.000009568) sched_migrate_task: { 1 }, { comm = "lttng-consumerd", tid = 1193, prio = 20, orig_cpu = 1, dest_cpu = 1 }
2108   [23:21:56.989508418] (+0.000021721) hrtimer_init: { 1 }, { hrtimer = 3970832076, clockid = 1, mode = 1 }
2109   [23:21:56.989770462] (+0.000262044) hrtimer_cancel: { 1 }, { hrtimer = 3993865440 }
2110   [23:21:56.989771580] (+0.000001118) hrtimer_cancel: { 0 }, { hrtimer = 3993812192 }
2111   [23:21:56.989776957] (+0.000005377) hrtimer_expire_entry: { 1 }, { hrtimer = 3993865440, now = 79815980007057, function = 3238465232 }
2112   [23:21:56.989778145] (+0.000001188) hrtimer_expire_entry: { 0 }, { hrtimer = 3993812192, now = 79815980008174, function = 3238465232 }
2113   [23:21:56.989791695] (+0.000013550) softirq_raise: { 1 }, { vec = 1 }
2114   [23:21:56.989795396] (+0.000003701) softirq_raise: { 0 }, { vec = 1 }
2115   [23:21:56.989800635] (+0.000005239) softirq_raise: { 0 }, { vec = 9 }
2116   [23:21:56.989807130] (+0.000006495) sched_stat_runtime: { 1 }, { comm = "lttng-consumerd", tid = 1193, runtime = 330710, vruntime = 43368314098 }
2117   [23:21:56.989809993] (+0.000002863) sched_stat_runtime: { 0 }, { comm = "lttng-sessiond", tid = 1181, runtime = 1015313, vruntime = 36976733240 }
2118   [23:21:56.989818514] (+0.000008521) hrtimer_expire_exit: { 0 }, { hrtimer = 3993812192 }
2119   [23:21:56.989819631] (+0.000001117) hrtimer_expire_exit: { 1 }, { hrtimer = 3993865440 }
2120   [23:21:56.989821866] (+0.000002235) hrtimer_start: { 0 }, { hrtimer = 3993812192, function = 3238465232, expires = 79815981000000, softexpires = 79815981000000 }
2121   [23:21:56.989822984] (+0.000001118) hrtimer_start: { 1 }, { hrtimer = 3993865440, function = 3238465232, expires = 79815981000000, softexpires = 79815981000000 }
2122   [23:21:56.989832762] (+0.000009778) softirq_entry: { 1 }, { vec = 1 }
2123   [23:21:56.989833879] (+0.000001117) softirq_entry: { 0 }, { vec = 1 }
2124   [23:21:56.989838069] (+0.000004190) timer_cancel: { 1 }, { timer = 3993871956 }
2125   [23:21:56.989839187] (+0.000001118) timer_cancel: { 0 }, { timer = 3993818708 }
2126   [23:21:56.989841492] (+0.000002305) timer_expire_entry: { 1 }, { timer = 3993871956, now = 79515980, function = 3238277552 }
2127   [23:21:56.989842819] (+0.000001327) timer_expire_entry: { 0 }, { timer = 3993818708, now = 79515980, function = 3238277552 }
2128   [23:21:56.989854831] (+0.000012012) sched_stat_runtime: { 1 }, { comm = "lttng-consumerd", tid = 1193, runtime = 49237, vruntime = 43368363335 }
2129   [23:21:56.989855949] (+0.000001118) sched_stat_runtime: { 0 }, { comm = "lttng-sessiond", tid = 1181, runtime = 45121, vruntime = 36976778361 }
2130   [23:21:56.989861257] (+0.000005308) sched_stat_sleep: { 1 }, { comm = "kworker/1:1", tid = 21, delay = 9451318 }
2131   [23:21:56.989862374] (+0.000001117) sched_stat_sleep: { 0 }, { comm = "kworker/0:0", tid = 4, delay = 9958820 }
2132   [23:21:56.989868241] (+0.000005867) sched_wakeup: { 0 }, { comm = "kworker/0:0", tid = 4, prio = 120, success = 1, target_cpu = 0 }
2133   [23:21:56.989869358] (+0.000001117) sched_wakeup: { 1 }, { comm = "kworker/1:1", tid = 21, prio = 120, success = 1, target_cpu = 1 }
2134   [23:21:56.989877460] (+0.000008102) timer_expire_exit: { 1 }, { timer = 3993871956 }
2135   [23:21:56.989878577] (+0.000001117) timer_expire_exit: { 0 }, { timer = 3993818708 }
2136   .
2137   .
2138   .
2139
2140You can now safely destroy the trace
2141session (note that this doesn't delete the trace --- it's still there in
2142~/lttng-traces)::
2143
2144   root@crownbay:~# lttng destroy
2145   Session auto-20121015-232120 destroyed at /home/root
2146
Note that the trace is saved under the ~/lttng-traces directory, in a
subdirectory with the same name as the session reported by 'lttng create'
(you can change this by supplying your own session name to 'lttng create',
as shown below)::
2150
2151   root@crownbay:~# ls -al ~/lttng-traces
2152   drwxrwx---    3 root     root          1024 Oct 15 23:21 .
2153   drwxr-xr-x    5 root     root          1024 Oct 15 23:57 ..
2154   drwxrwx---    3 root     root          1024 Oct 15 23:21 auto-20121015-232120
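
If you'd rather not use an automatically generated session name, you can
supply your own name to 'lttng create' (a minimal sketch; 'mysession' is
just an arbitrary example name)::

   root@crownbay:~# lttng create mysession

The trace directory under ~/lttng-traces should then be named after
'mysession' rather than 'auto'.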
2155
2156Collecting and viewing a userspace trace on the target (inside a shell)
2157~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2158
2159For LTTng userspace tracing, you need to have a properly instrumented
2160userspace program. For this example, we'll use the 'hello' test program
2161generated by the lttng-ust build.
2162
2163The 'hello' test program isn't installed on the root filesystem by the lttng-ust
2164build, so we need to copy it over manually. First cd into the build
2165directory that contains the hello executable::
2166
2167   $ cd build/tmp/work/core2_32-poky-linux/lttng-ust/2.0.5-r0/git/tests/hello/.libs
2168
2169Copy that over to the target machine::
2170
   $ scp hello root@192.168.1.47:
2172
2173You now have the instrumented lttng 'hello world' test program on the
2174target, ready to test.
2175
2176First, from the host, ssh to the target::
2177
2178   $ ssh -l root 192.168.1.47
2179   The authenticity of host '192.168.1.47 (192.168.1.47)' can't be established.
2180   RSA key fingerprint is 23:bd:c8:b1:a8:71:52:00:ee:00:4f:64:9e:10:b9:7e.
2181   Are you sure you want to continue connecting (yes/no)? yes
2182   Warning: Permanently added '192.168.1.47' (RSA) to the list of known hosts.
2183   root@192.168.1.47's password:
2184
2185Once on the target, use these steps to create a trace::
2186
2187   root@crownbay:~# lttng create
2188   Session auto-20190303-021943 created.
2189   Traces will be written in /home/root/lttng-traces/auto-20190303-021943
2190
2191Enable the events you want to trace (in this case all userspace events)::
2192
2193   root@crownbay:~# lttng enable-event --userspace --all
2194   All UST events are enabled in channel channel0
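
If you don't want to enable every userspace event, you can instead list the
tracepoints registered by instrumented applications that are currently
running and enable only the ones you care about, including by wildcard (a
sketch; the 'ust_tests_hello:*' pattern matches the tracepoints used by the
'hello' program below)::

   root@crownbay:~# lttng list --userspace
   root@crownbay:~# lttng enable-event --userspace 'ust_tests_hello:*'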
2195
2196Start the trace::
2197
2198   root@crownbay:~# lttng start
2199   Tracing started for session auto-20190303-021943
2200
2201Run the instrumented hello world program::
2202
2203   root@crownbay:~# ./hello
2204   Hello, World!
2205   Tracing... done.
2206
And then stop the trace after a while, or after running a particular workload
that you want to trace::
2209
2210   root@crownbay:~# lttng stop
2211   Tracing stopped for session auto-20190303-021943
2212
2213You can now view the trace in text form on the target::
2214
2215   root@crownbay:~# lttng view
2216   [02:31:14.906146544] (+?.?????????) hello:1424 ust_tests_hello:tptest: { cpu_id = 1 }, { intfield = 0, intfield2 = 0x0, longfield = 0, netintfield = 0, netintfieldhex = 0x0, arrfield1 = [ [0] = 1, [1] = 2, [2] = 3 ], arrfield2 = "test", _seqfield1_length = 4, seqfield1 = [ [0] = 116, [1] = 101, [2] = 115, [3] = 116 ], _seqfield2_length = 4,  seqfield2 = "test", stringfield = "test", floatfield = 2222, doublefield = 2, boolfield = 1 }
2217   [02:31:14.906170360] (+0.000023816) hello:1424 ust_tests_hello:tptest: { cpu_id = 1 }, { intfield = 1, intfield2 = 0x1, longfield = 1, netintfield = 1, netintfieldhex = 0x1, arrfield1 = [ [0] = 1, [1] = 2, [2] = 3 ], arrfield2 = "test", _seqfield1_length = 4, seqfield1 = [ [0] = 116, [1] = 101, [2] = 115, [3] = 116 ], _seqfield2_length = 4, seqfield2 = "test", stringfield = "test", floatfield = 2222, doublefield = 2, boolfield = 1 }
2218   [02:31:14.906183140] (+0.000012780) hello:1424 ust_tests_hello:tptest: { cpu_id = 1 }, { intfield = 2, intfield2 = 0x2, longfield = 2, netintfield = 2, netintfieldhex = 0x2, arrfield1 = [ [0] = 1, [1] = 2, [2] = 3 ], arrfield2 = "test", _seqfield1_length = 4, seqfield1 = [ [0] = 116, [1] = 101, [2] = 115, [3] = 116 ], _seqfield2_length = 4, seqfield2 = "test", stringfield = "test", floatfield = 2222, doublefield = 2, boolfield = 1 }
2219   [02:31:14.906194385] (+0.000011245) hello:1424 ust_tests_hello:tptest: { cpu_id = 1 }, { intfield = 3, intfield2 = 0x3, longfield = 3, netintfield = 3, netintfieldhex = 0x3, arrfield1 = [ [0] = 1, [1] = 2, [2] = 3 ], arrfield2 = "test", _seqfield1_length = 4, seqfield1 = [ [0] = 116, [1] = 101, [2] = 115, [3] = 116 ], _seqfield2_length = 4, seqfield2 = "test", stringfield = "test", floatfield = 2222, doublefield = 2, boolfield = 1 }
2220   .
2221   .
2222   .
2223
2224You can now safely destroy the trace session (note that this doesn't delete the
2225trace --- it's still there in ~/lttng-traces)::
2226
2227   root@crownbay:~# lttng destroy
2228   Session auto-20190303-021943 destroyed at /home/root
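
Note that even after the session has been destroyed, you can still dump the
saved trace as text by pointing 'babeltrace' (the viewer that 'lttng view'
invokes by default) at the trace directory. A minimal sketch, assuming
babeltrace is installed on the image::

   root@crownbay:~# babeltrace ~/lttng-traces/auto-20190303-021943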
2229
2230LTTng Documentation
2231-------------------
2232
2233You can find the primary LTTng Documentation on the `LTTng
2234Documentation <https://lttng.org/docs/>`__ site. The documentation on
2235this site is appropriate for intermediate to advanced software
2236developers who are working in a Linux environment and are interested in
2237efficient software tracing.
2238
2239For information on LTTng in general, visit the `LTTng
2240Project <https://lttng.org/lttng2.0>`__ site. You can find a "Getting
2241Started" link on this site that takes you to an LTTng Quick Start.
2242
2243blktrace
2244========
2245
blktrace is a tool for tracing and reporting low-level disk I/O.
blktrace provides the tracing half of the equation; its output can be
piped into the blkparse program, which renders the data in a
human-readable form and does some basic analysis.
2250
2251blktrace Setup
2252--------------
2253
2254For this section, we'll assume you've already performed the basic setup
2255outlined in the ":ref:`profile-manual/intro:General Setup`"
2256section.
2257
2258blktrace is an application that runs on the target system. You can run
2259the entire blktrace and blkparse pipeline on the target, or you can run
2260blktrace in 'listen' mode on the target and have blktrace and blkparse
2261collect and analyze the data on the host (see the
2262":ref:`profile-manual/usage:Using blktrace Remotely`" section
below). For the rest of this section we assume you've ssh'ed to the target
and will be running blktrace there.
2265
2266Basic blktrace Usage
2267--------------------
2268
2269To record a trace, simply run the 'blktrace' command, giving it the name
2270of the block device you want to trace activity on::
2271
2272   root@crownbay:~# blktrace /dev/sdc
2273
2274In another shell, execute a workload you want to trace. ::
2275
2276   root@crownbay:/media/sdc# rm linux-2.6.19.2.tar.bz2; wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2; sync
2277   Connecting to downloads.yoctoproject.org (140.211.169.59:80)
2278   linux-2.6.19.2.tar.b 100% \|*******************************\| 41727k 0:00:00 ETA
2279
2280Press Ctrl-C in the blktrace shell to stop the trace. It
2281will display how many events were logged, along with the per-cpu file
2282sizes (blktrace records traces in per-cpu kernel buffers and simply
2283dumps them to userspace for blkparse to merge and sort later). ::
2284
2285   ^C=== sdc ===
2286    CPU  0:                 7082 events,      332 KiB data
2287    CPU  1:                 1578 events,       74 KiB data
2288    Total:                  8660 events (dropped 0),      406 KiB data
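
Note that instead of stopping the trace by hand, you can have blktrace stop
by itself after a fixed number of seconds using the -w option (a sketch,
using an arbitrary 30-second run)::

   root@crownbay:~# blktrace -w 30 /dev/sdc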
2289
If you examine the files saved to disk, you'll see multiple files, one per
CPU, each with the device name as the first part of the filename::
2292
2293   root@crownbay:~# ls -al
2294   drwxr-xr-x    6 root     root          1024 Oct 27 22:39 .
2295   drwxr-sr-x    4 root     root          1024 Oct 26 18:24 ..
2296   -rw-r--r--    1 root     root        339938 Oct 27 22:40 sdc.blktrace.0
2297   -rw-r--r--    1 root     root         75753 Oct 27 22:40 sdc.blktrace.1
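
blktrace also accepts more than one device on the command line if you want
to trace several devices at once, in which case you get a set of per-cpu
files for each device (a sketch; /dev/sdd here is just a hypothetical second
device)::

   root@crownbay:~# blktrace /dev/sdc /dev/sdd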
2298
2299To view the trace events, simply invoke 'blkparse' in the directory
2300containing the trace files, giving it the device name that forms the
2301first part of the filenames::
2302
2303   root@crownbay:~# blkparse sdc
2304
2305    8,32   1        1     0.000000000  1225  Q  WS 3417048 + 8 [jbd2/sdc-8]
2306    8,32   1        2     0.000025213  1225  G  WS 3417048 + 8 [jbd2/sdc-8]
2307    8,32   1        3     0.000033384  1225  P   N [jbd2/sdc-8]
2308    8,32   1        4     0.000043301  1225  I  WS 3417048 + 8 [jbd2/sdc-8]
2309    8,32   1        0     0.000057270     0  m   N cfq1225 insert_request
2310    8,32   1        0     0.000064813     0  m   N cfq1225 add_to_rr
2311    8,32   1        5     0.000076336  1225  U   N [jbd2/sdc-8] 1
2312    8,32   1        0     0.000088559     0  m   N cfq workload slice:150
2313    8,32   1        0     0.000097359     0  m   N cfq1225 set_active wl_prio:0 wl_type:1
2314    8,32   1        0     0.000104063     0  m   N cfq1225 Not idling. st->count:1
2315    8,32   1        0     0.000112584     0  m   N cfq1225 fifo=  (null)
2316    8,32   1        0     0.000118730     0  m   N cfq1225 dispatch_insert
2317    8,32   1        0     0.000127390     0  m   N cfq1225 dispatched a request
2318    8,32   1        0     0.000133536     0  m   N cfq1225 activate rq, drv=1
2319    8,32   1        6     0.000136889  1225  D  WS 3417048 + 8 [jbd2/sdc-8]
2320    8,32   1        7     0.000360381  1225  Q  WS 3417056 + 8 [jbd2/sdc-8]
2321    8,32   1        8     0.000377422  1225  G  WS 3417056 + 8 [jbd2/sdc-8]
2322    8,32   1        9     0.000388876  1225  P   N [jbd2/sdc-8]
2323    8,32   1       10     0.000397886  1225  Q  WS 3417064 + 8 [jbd2/sdc-8]
2324    8,32   1       11     0.000404800  1225  M  WS 3417064 + 8 [jbd2/sdc-8]
2325    8,32   1       12     0.000412343  1225  Q  WS 3417072 + 8 [jbd2/sdc-8]
2326    8,32   1       13     0.000416533  1225  M  WS 3417072 + 8 [jbd2/sdc-8]
2327    8,32   1       14     0.000422121  1225  Q  WS 3417080 + 8 [jbd2/sdc-8]
2328    8,32   1       15     0.000425194  1225  M  WS 3417080 + 8 [jbd2/sdc-8]
2329    8,32   1       16     0.000431968  1225  Q  WS 3417088 + 8 [jbd2/sdc-8]
2330    8,32   1       17     0.000435251  1225  M  WS 3417088 + 8 [jbd2/sdc-8]
2331    8,32   1       18     0.000440279  1225  Q  WS 3417096 + 8 [jbd2/sdc-8]
2332    8,32   1       19     0.000443911  1225  M  WS 3417096 + 8 [jbd2/sdc-8]
2333    8,32   1       20     0.000450336  1225  Q  WS 3417104 + 8 [jbd2/sdc-8]
2334    8,32   1       21     0.000454038  1225  M  WS 3417104 + 8 [jbd2/sdc-8]
2335    8,32   1       22     0.000462070  1225  Q  WS 3417112 + 8 [jbd2/sdc-8]
2336    8,32   1       23     0.000465422  1225  M  WS 3417112 + 8 [jbd2/sdc-8]
2337    8,32   1       24     0.000474222  1225  I  WS 3417056 + 64 [jbd2/sdc-8]
2338    8,32   1        0     0.000483022     0  m   N cfq1225 insert_request
2339    8,32   1       25     0.000489727  1225  U   N [jbd2/sdc-8] 1
2340    8,32   1        0     0.000498457     0  m   N cfq1225 Not idling. st->count:1
2341    8,32   1        0     0.000503765     0  m   N cfq1225 dispatch_insert
2342    8,32   1        0     0.000512914     0  m   N cfq1225 dispatched a request
2343    8,32   1        0     0.000518851     0  m   N cfq1225 activate rq, drv=2
2344    .
2345    .
2346    .
2347    8,32   0        0    58.515006138     0  m   N cfq3551 complete rqnoidle 1
2348    8,32   0     2024    58.516603269     3  C  WS 3156992 + 16 [0]
2349    8,32   0        0    58.516626736     0  m   N cfq3551 complete rqnoidle 1
2350    8,32   0        0    58.516634558     0  m   N cfq3551 arm_idle: 8 group_idle: 0
2351    8,32   0        0    58.516636933     0  m   N cfq schedule dispatch
2352    8,32   1        0    58.516971613     0  m   N cfq3551 slice expired t=0
2353    8,32   1        0    58.516982089     0  m   N cfq3551 sl_used=13 disp=6 charge=13 iops=0 sect=80
2354    8,32   1        0    58.516985511     0  m   N cfq3551 del_from_rr
2355    8,32   1        0    58.516990819     0  m   N cfq3551 put_queue
2356
2357   CPU0 (sdc):
2358    Reads Queued:           0,        0KiB	 Writes Queued:         331,   26,284KiB
2359    Read Dispatches:        0,        0KiB	 Write Dispatches:      485,   40,484KiB
2360    Reads Requeued:         0		 Writes Requeued:         0
2361    Reads Completed:        0,        0KiB	 Writes Completed:      511,   41,000KiB
2362    Read Merges:            0,        0KiB	 Write Merges:           13,      160KiB
2363    Read depth:             0        	 Write depth:             2
2364    IO unplugs:            23        	 Timer unplugs:           0
2365   CPU1 (sdc):
2366    Reads Queued:           0,        0KiB	 Writes Queued:         249,   15,800KiB
2367    Read Dispatches:        0,        0KiB	 Write Dispatches:       42,    1,600KiB
2368    Reads Requeued:         0		 Writes Requeued:         0
2369    Reads Completed:        0,        0KiB	 Writes Completed:       16,    1,084KiB
2370    Read Merges:            0,        0KiB	 Write Merges:           40,      276KiB
2371    Read depth:             0        	 Write depth:             2
2372    IO unplugs:            30        	 Timer unplugs:           1
2373
2374   Total (sdc):
2375    Reads Queued:           0,        0KiB	 Writes Queued:         580,   42,084KiB
2376    Read Dispatches:        0,        0KiB	 Write Dispatches:      527,   42,084KiB
2377    Reads Requeued:         0		 Writes Requeued:         0
2378    Reads Completed:        0,        0KiB	 Writes Completed:      527,   42,084KiB
2379    Read Merges:            0,        0KiB	 Write Merges:           53,      436KiB
2380    IO unplugs:            53        	 Timer unplugs:           1
2381
2382   Throughput (R/W): 0KiB/s / 719KiB/s
2383   Events (sdc): 6,592 entries
2384   Skips: 0 forward (0 -   0.0%)
2385   Input file sdc.blktrace.0 added
2386   Input file sdc.blktrace.1 added
2387
2388The report shows each event that was
2389found in the blktrace data, along with a summary of the overall block
2390I/O traffic during the run. You can look at the
2391`blkparse <https://linux.die.net/man/1/blkparse>`__ manpage to learn the
2392meaning of each field displayed in the trace listing.
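
For larger traces, you may not want all of that scrolling past on the
console; blkparse can write its text output to a file instead (a sketch
using the -o option and an arbitrary output filename)::

   root@crownbay:~# blkparse -i sdc -o sdc.txt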
2393
2394Live Mode
2395~~~~~~~~~
2396
2397blktrace and blkparse are designed from the ground up to be able to
2398operate together in a 'pipe mode' where the stdout of blktrace can be
2399fed directly into the stdin of blkparse::
2400
2401   root@crownbay:~# blktrace /dev/sdc -o - | blkparse -i -
2402
This enables long-lived tracing sessions to run without writing anything to
disk, and allows you to look for certain conditions in the trace data in
'real time' by viewing the trace output as it scrolls by on the screen, or
by passing it along to yet another program in the pipeline, such as grep, to
identify and capture conditions of interest.
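
For example, this (just a sketch) watches requests being issued to the
driver in real time by filtering the live blkparse output for 'D' (dispatch)
events::

   root@crownbay:~# blktrace /dev/sdc -o - | blkparse -i - | grep ' D '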
2409
There's actually another blktrace utility, btrace, that implements the above
pipeline as a single command, so you don't have to type out the pipeline
yourself::
2413
2414   root@crownbay:~# btrace /dev/sdc
2415
2416Using blktrace Remotely
2417~~~~~~~~~~~~~~~~~~~~~~~
2418
Because blktrace traces block I/O, and because it normally writes its trace
data to a block device, it's generally not a good idea to make the device
being traced the same as the device the tracer writes to. To let you trace
without perturbing the traced device at all, blktrace provides native
support for sending all trace data over the network.
2425
2426To have blktrace operate in this mode, start blktrace on the target
2427system being traced with the -l option, along with the device to trace::
2428
2429   root@crownbay:~# blktrace -l /dev/sdc
2430   server: waiting for connections...
2431
2432On the host system, use the -h option to connect to the target system,
2433also passing it the device to trace::
2434
2435   $ blktrace -d /dev/sdc -h 192.168.1.43
2436   blktrace: connecting to 192.168.1.43
2437   blktrace: connected!
2438
2439On the target system, you should see this::
2440
2441   server: connection from 192.168.1.43
2442
2443In another shell, execute a workload you want to trace. ::
2444
2445   root@crownbay:/media/sdc# rm linux-2.6.19.2.tar.bz2; wget &YOCTO_DL_URL;/mirror/sources/linux-2.6.19.2.tar.bz2; sync
2446   Connecting to downloads.yoctoproject.org (140.211.169.59:80)
2447   linux-2.6.19.2.tar.b 100% \|*******************************\| 41727k 0:00:00 ETA
2448
When it's done, press Ctrl-C on the host system to stop the
trace::
2451
2452   ^C=== sdc ===
2453    CPU  0:                 7691 events,      361 KiB data
2454    CPU  1:                 4109 events,      193 KiB data
2455    Total:                 11800 events (dropped 0),      554 KiB data
2456
2457On the target system, you should also see a trace summary for the trace
2458just ended::
2459
2460   server: end of run for 192.168.1.43:sdc
2461   === sdc ===
2462    CPU  0:                 7691 events,      361 KiB data
2463    CPU  1:                 4109 events,      193 KiB data
2464    Total:                 11800 events (dropped 0),      554 KiB data
2465
2466The blktrace instance on the host will
2467save the target output inside a hostname-timestamp directory::
2468
2469   $ ls -al
2470   drwxr-xr-x   10 root     root          1024 Oct 28 02:40 .
2471   drwxr-sr-x    4 root     root          1024 Oct 26 18:24 ..
2472   drwxr-xr-x    2 root     root          1024 Oct 28 02:40 192.168.1.43-2012-10-28-02:40:56
2473
2474cd into that directory to see the output files::
2475
2476   $ ls -l
2477   -rw-r--r--    1 root     root        369193 Oct 28 02:44 sdc.blktrace.0
2478   -rw-r--r--    1 root     root        197278 Oct 28 02:44 sdc.blktrace.1
2479
2480And run blkparse on the host system using the device name::
2481
2482   $ blkparse sdc
2483
2484    8,32   1        1     0.000000000  1263  Q  RM 6016 + 8 [ls]
2485    8,32   1        0     0.000036038     0  m   N cfq1263 alloced
2486    8,32   1        2     0.000039390  1263  G  RM 6016 + 8 [ls]
2487    8,32   1        3     0.000049168  1263  I  RM 6016 + 8 [ls]
2488    8,32   1        0     0.000056152     0  m   N cfq1263 insert_request
2489    8,32   1        0     0.000061600     0  m   N cfq1263 add_to_rr
2490    8,32   1        0     0.000075498     0  m   N cfq workload slice:300
2491    .
2492    .
2493    .
2494    8,32   0        0   177.266385696     0  m   N cfq1267 arm_idle: 8 group_idle: 0
2495    8,32   0        0   177.266388140     0  m   N cfq schedule dispatch
2496    8,32   1        0   177.266679239     0  m   N cfq1267 slice expired t=0
2497    8,32   1        0   177.266689297     0  m   N cfq1267 sl_used=9 disp=6 charge=9 iops=0 sect=56
2498    8,32   1        0   177.266692649     0  m   N cfq1267 del_from_rr
2499    8,32   1        0   177.266696560     0  m   N cfq1267 put_queue
2500
2501   CPU0 (sdc):
2502    Reads Queued:           0,        0KiB	 Writes Queued:         270,   21,708KiB
2503    Read Dispatches:       59,    2,628KiB	 Write Dispatches:      495,   39,964KiB
2504    Reads Requeued:         0		 Writes Requeued:         0
2505    Reads Completed:       90,    2,752KiB	 Writes Completed:      543,   41,596KiB
2506    Read Merges:            0,        0KiB	 Write Merges:            9,      344KiB
2507    Read depth:             2        	 Write depth:             2
2508    IO unplugs:            20        	 Timer unplugs:           1
2509   CPU1 (sdc):
2510    Reads Queued:         688,    2,752KiB	 Writes Queued:         381,   20,652KiB
2511    Read Dispatches:       31,      124KiB	 Write Dispatches:       59,    2,396KiB
2512    Reads Requeued:         0		 Writes Requeued:         0
2513    Reads Completed:        0,        0KiB	 Writes Completed:       11,      764KiB
2514    Read Merges:          598,    2,392KiB	 Write Merges:           88,      448KiB
2515    Read depth:             2        	 Write depth:             2
2516    IO unplugs:            52        	 Timer unplugs:           0
2517
2518   Total (sdc):
2519    Reads Queued:         688,    2,752KiB	 Writes Queued:         651,   42,360KiB
2520    Read Dispatches:       90,    2,752KiB	 Write Dispatches:      554,   42,360KiB
2521    Reads Requeued:         0		 Writes Requeued:         0
2522    Reads Completed:       90,    2,752KiB	 Writes Completed:      554,   42,360KiB
2523    Read Merges:          598,    2,392KiB	 Write Merges:           97,      792KiB
2524    IO unplugs:            72        	 Timer unplugs:           1
2525
2526   Throughput (R/W): 15KiB/s / 238KiB/s
2527   Events (sdc): 9,301 entries
2528   Skips: 0 forward (0 -   0.0%)
2529
2530You should see the trace events and summary just as you would have if you'd run
2531the same command on the target.
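
If you want more detailed statistics than the summary blkparse prints, the
btt utility from the same blktrace package can post-process the trace. A
minimal sketch, assuming btt is installed on the host: first have blkparse
emit a binary dump of the merged trace, then hand that dump to btt::

   $ blkparse -i sdc -d sdc.blkparse.bin
   $ btt -i sdc.blkparse.bin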
2532
2533Tracing Block I/O via 'ftrace'
2534~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2535
2536It's also possible to trace block I/O using only
2537:ref:`profile-manual/usage:The 'trace events' Subsystem`, which
2538can be useful for casual tracing if you don't want to bother dealing with the
2539userspace tools.
2540
To enable tracing for a given device, use /sys/block/xxx/trace/enable,
where xxx is the device name. For example, this enables tracing for
/dev/sdc::
2544
2545   root@crownbay:/sys/kernel/debug/tracing# echo 1 > /sys/block/sdc/trace/enable
2546
Once you've selected the device(s) you want to trace, turn the block tracer
on by writing 'blk' to current_tracer (you'll see it listed among the
available tracers)::
2549
2550   root@crownbay:/sys/kernel/debug/tracing# cat available_tracers
2551   blk function_graph function nop
2552
2553   root@crownbay:/sys/kernel/debug/tracing# echo blk > current_tracer
2554
2555Execute the workload you're interested in::
2556
2557   root@crownbay:/sys/kernel/debug/tracing# cat /media/sdc/testfile.txt
2558
And look at the output (note here that we're using 'trace_pipe' instead of
'trace' to capture this trace --- this allows us to wait around on the pipe
for data to appear)::
2562
2563   root@crownbay:/sys/kernel/debug/tracing# cat trace_pipe
2564               cat-3587  [001] d..1  3023.276361:   8,32   Q   R 1699848 + 8 [cat]
2565               cat-3587  [001] d..1  3023.276410:   8,32   m   N cfq3587 alloced
2566               cat-3587  [001] d..1  3023.276415:   8,32   G   R 1699848 + 8 [cat]
2567               cat-3587  [001] d..1  3023.276424:   8,32   P   N [cat]
2568               cat-3587  [001] d..2  3023.276432:   8,32   I   R 1699848 + 8 [cat]
2569               cat-3587  [001] d..1  3023.276439:   8,32   m   N cfq3587 insert_request
2570               cat-3587  [001] d..1  3023.276445:   8,32   m   N cfq3587 add_to_rr
2571               cat-3587  [001] d..2  3023.276454:   8,32   U   N [cat] 1
2572               cat-3587  [001] d..1  3023.276464:   8,32   m   N cfq workload slice:150
2573               cat-3587  [001] d..1  3023.276471:   8,32   m   N cfq3587 set_active wl_prio:0 wl_type:2
2574               cat-3587  [001] d..1  3023.276478:   8,32   m   N cfq3587 fifo=  (null)
2575               cat-3587  [001] d..1  3023.276483:   8,32   m   N cfq3587 dispatch_insert
2576               cat-3587  [001] d..1  3023.276490:   8,32   m   N cfq3587 dispatched a request
2577               cat-3587  [001] d..1  3023.276497:   8,32   m   N cfq3587 activate rq, drv=1
2578               cat-3587  [001] d..2  3023.276500:   8,32   D   R 1699848 + 8 [cat]
2579
2580And this turns off tracing for the specified device::
2581
2582   root@crownbay:/sys/kernel/debug/tracing# echo 0 > /sys/block/sdc/trace/enable
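
When you're completely done, you'll probably also want to put the tracer
back to 'nop' and clear out the trace buffer. A typical cleanup sequence for
the ftrace interface::

   root@crownbay:/sys/kernel/debug/tracing# echo nop > current_tracer
   root@crownbay:/sys/kernel/debug/tracing# echo > trace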
2583
2584blktrace Documentation
2585----------------------
2586
2587Online versions of the man pages for the commands discussed in this
2588section can be found here:
2589
2590-  https://linux.die.net/man/8/blktrace
2591
2592-  https://linux.die.net/man/1/blkparse
2593
2594-  https://linux.die.net/man/8/btrace
2595
The above manpages, along with manpages for the other blktrace utilities
(btt, blkiomon, etc.), can be found in the /doc directory of the blktrace
tools git repo::
2599
2600   $ git clone git://git.kernel.dk/blktrace.git
2601