========================
ftrace - Function Tracer
========================

Copyright 2008 Red Hat Inc.

:Author:   Steven Rostedt <srostedt@redhat.com>
:License:  The GNU Free Documentation License, Version 1.2
           (dual licensed under the GPL v2)
:Original Reviewers:  Elias Oltmanns, Randy Dunlap, Andrew Morton,
                      John Kacur, and David Teigland.

- Written for: 2.6.28-rc2
- Updated for: 3.10
- Updated for: 4.13 - Copyright 2017 VMware Inc. Steven Rostedt
- Converted to rst format - Changbin Du <changbin.du@intel.com>

Introduction
------------

Ftrace is an internal tracer designed to help developers and
designers of systems find what is going on inside the kernel.
It can be used for debugging or analyzing latencies and
performance issues that take place outside of user-space.

Although ftrace is typically considered the function tracer, it
is really a framework of several assorted tracing utilities.
There's latency tracing to examine what occurs between interrupts
being disabled and enabled, as well as for preemption and from the
time a task is woken to the time it is actually scheduled in.

One of the most common uses of ftrace is event tracing.
Throughout the kernel are hundreds of static event points that
can be enabled via the tracefs file system to see what is
going on in certain parts of the kernel.

See events.txt for more information.


Implementation Details
----------------------

See :doc:`ftrace-design` for details for arch porters and such.


The File System
---------------

Ftrace uses the tracefs file system to hold the control files as
well as the files to display output.

When tracefs is configured into the kernel (which selecting any ftrace
option will do) the directory /sys/kernel/tracing will be created.
To mount this directory, you can add to your /etc/fstab file::

  tracefs    /sys/kernel/tracing    tracefs    defaults    0    0

Or you can mount it at run time with::

  mount -t tracefs nodev /sys/kernel/tracing

For quicker access to that directory you may want to make a soft link to
it::

  ln -s /sys/kernel/tracing /tracing

.. attention::

  Before 4.1, all ftrace tracing control files were within the debugfs
  file system, which is typically located at /sys/kernel/debug/tracing.
  For backward compatibility, when mounting the debugfs file system,
  the tracefs file system will be automatically mounted at:

  /sys/kernel/debug/tracing

  All files located in the tracefs file system will be located in that
  debugfs file system directory as well.

.. attention::

  Any selected ftrace option will also create the tracefs file system.
  The rest of the document will assume that you are in the ftrace directory
  (cd /sys/kernel/tracing) and will only concentrate on the files within that
  directory and not distract from the content with the extended
  "/sys/kernel/tracing" path name.

That's it! (assuming that you have ftrace configured into your kernel)

After mounting tracefs you will have access to the control and output files
of ftrace. Here is a list of some of the key files:


 Note: all time values are in microseconds.

  current_tracer:

        This is used to set or display the current tracer
        that is configured.

  available_tracers:

        This holds the different types of tracers that
        have been compiled into the kernel. The
        tracers listed here can be configured by
        echoing their name into current_tracer.

  tracing_on:

        This sets or displays whether writing to the trace
        ring buffer is enabled. Echo 0 into this file to disable
        the tracer or 1 to enable it. Note, this only disables
        writing to the ring buffer, the tracing overhead may
        still be occurring.
        The kernel function tracing_off() can be used within the
        kernel to disable writing to the ring buffer, which will
        set this file to "0". User space can re-enable tracing by
        echoing "1" into the file.

        Note, the function and event trigger "traceoff" will also
        set this file to zero and stop tracing, which can also
        be re-enabled by user space using this file.

  trace:

        This file holds the output of the trace in a human
        readable format (described below). Note, tracing is temporarily
        disabled while this file is being read (opened).

  trace_pipe:

        The output is the same as the "trace" file but this
        file is meant to be streamed with live tracing.
        Reads from this file will block until new data is
        retrieved. Unlike the "trace" file, this file is a
        consumer. This means reading from this file causes
        sequential reads to display more current data. Once
        data is read from this file, it is consumed, and
        will not be read again with a sequential read. The
        "trace" file is static, and if the tracer is not
        adding more data, it will display the same
        information every time it is read. This file will not
        disable tracing while being read.

  trace_options:

        This file lets the user control the amount of data
        that is displayed in one of the above output
        files. Options also exist to modify how a tracer
        or events work (stack traces, timestamps, etc).

  options:

        This is a directory that has a file for every available
        trace option (also in trace_options). Options may also be set
        or cleared by writing a "1" or "0" respectively into the
        corresponding file with the option name.

  tracing_max_latency:

        Some of the tracers record the max latency.
        For example, the maximum time that interrupts are disabled.
        The maximum time is saved in this file. The max trace will also be
        stored, and displayed by "trace".
        A new max trace will only be
        recorded if the latency is greater than the value in this file
        (in microseconds).

        By echoing a time into this file, no latency will be recorded
        unless it is greater than the time in this file.

  tracing_thresh:

        Some latency tracers will record a trace whenever the
        latency is greater than the number in this file.
        Only active when the file contains a number greater than 0.
        (in microseconds)

  buffer_size_kb:

        This sets or displays the number of kilobytes each CPU
        buffer holds. By default, the trace buffers are the same size
        for each CPU. The displayed number is the size of the
        CPU buffer and not total size of all buffers. The
        trace buffers are allocated in pages (blocks of memory
        that the kernel uses for allocation, usually 4 KB in size).
        If the last page allocated has room for more bytes
        than requested, the rest of the page will be used,
        making the actual allocation bigger than requested or shown.
        ( Note, the size may not be a multiple of the page size
        due to buffer management meta-data. )

        Buffer sizes for individual CPUs may vary
        (see "per_cpu/cpu0/buffer_size_kb" below), and if they do
        this file will show "X".

  buffer_total_size_kb:

        This displays the total combined size of all the trace buffers.

  free_buffer:

        If a process is performing tracing, and the ring buffer should be
        shrunk "freed" when the process is finished, even if it were to be
        killed by a signal, this file can be used for that purpose. On close
        of this file, the ring buffer will be resized to its minimum size.
        Having a process that is tracing also open this file, when the process
        exits its file descriptor for this file will be closed, and in doing so,
        the ring buffer will be "freed".

        It may also stop tracing if the disable_on_free option is set.
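The relationship between the sizing files above can be sketched in shell. This is a hedged illustration, not output from any particular machine: the 4096 kB per-CPU size and the 4-CPU count are assumed values, and the TRACEFS variable is an assumption standing in for the mount point described earlier (writes are guarded so the snippet is a no-op without a writable tracefs):

```shell
# Sketch: buffer_total_size_kb is roughly the per-CPU size times the
# number of online CPUs.  4096 kB and 4 CPUs are illustrative assumptions.
per_cpu_kb=4096
cpus=4
total_kb=$((per_cpu_kb * cpus))   # roughly what buffer_total_size_kb reports
echo "$total_kb"

# On a real system (as root, with tracefs mounted), the same idea:
TRACEFS=${TRACEFS:-/sys/kernel/tracing}
if [ -w "$TRACEFS/buffer_size_kb" ]; then
        echo "$per_cpu_kb" > "$TRACEFS/buffer_size_kb"
        cat "$TRACEFS/buffer_total_size_kb"
fi
```

Note the actual total may be slightly larger than this product, since buffers are allocated in whole pages plus management meta-data, as described above.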
  tracing_cpumask:

        This is a mask that lets the user only trace on specified CPUs.
        The format is a hex string representing the CPUs.

  set_ftrace_filter:

        When dynamic ftrace is configured in (see the
        section below "dynamic ftrace"), the code is dynamically
        modified (code text rewrite) to disable calling of the
        function profiler (mcount). This lets tracing be configured
        in with practically no overhead in performance. This also
        has a side effect of enabling or disabling specific functions
        to be traced. Echoing names of functions into this file
        will limit the trace to only those functions.

        The functions listed in "available_filter_functions" are what
        can be written into this file.

        This interface also allows for commands to be used. See the
        "Filter commands" section for more details.

  set_ftrace_notrace:

        This has an effect opposite to that of
        set_ftrace_filter. Any function that is added here will not
        be traced. If a function exists in both set_ftrace_filter
        and set_ftrace_notrace, the function will _not_ be traced.

  set_ftrace_pid:

        Have the function tracer only trace the threads whose PIDs are
        listed in this file.

        If the "function-fork" option is set, then when a task whose
        PID is listed in this file forks, the child's PID will
        automatically be added to this file, and the child will be
        traced by the function tracer as well. This option will also
        cause PIDs of tasks that exit to be removed from the file.

  set_event_pid:

        Have the events only trace a task with a PID listed in this file.
        Note, sched_switch and sched_wake_up will also trace events
        listed in this file.

        To have the PIDs of children of tasks with their PID in this file
        added on fork, enable the "event-fork" option. That option will also
        cause the PIDs of tasks to be removed from this file when the task
        exits.
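The cpumask format described above is plain hex: bit N set means CPU N is traced. As a hedged sketch (the TRACEFS variable is an assumed mount-point name, and the `schedule*` glob is only an illustration of a filter pattern; writes are guarded so the snippet is harmless without a writable tracefs):

```shell
# Sketch: build a hex cpumask selecting CPUs 0 and 2.
mask=$(printf '%x' $(( (1 << 0) | (1 << 2) )))
echo "$mask"    # binary 101: CPUs 0 and 2

# Apply the mask and narrow function tracing, only if tracefs is writable:
TRACEFS=${TRACEFS:-/sys/kernel/tracing}
if [ -w "$TRACEFS/tracing_cpumask" ]; then
        echo "$mask" > "$TRACEFS/tracing_cpumask"
        echo 'schedule*' > "$TRACEFS/set_ftrace_filter"
fi
```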
  set_graph_function:

        Functions listed in this file will cause the function graph
        tracer to only trace these functions and the functions that
        they call. (See the section "dynamic ftrace" for more details.)

  set_graph_notrace:

        Similar to set_graph_function, but will disable function graph
        tracing when the function is hit until it exits the function.
        This makes it possible to ignore tracing functions that are called
        by a specific function.

  available_filter_functions:

        This lists the functions that ftrace has processed and can trace.
        These are the function names that you can pass to
        "set_ftrace_filter" or "set_ftrace_notrace".
        (See the section "dynamic ftrace" below for more details.)

  dyn_ftrace_total_info:

        This file is for debugging purposes. It shows the number of
        functions that have been converted to nops and are available
        to be traced.

  enabled_functions:

        This file is more for debugging ftrace, but can also be useful
        in seeing if any function has a callback attached to it.
        Not only does the trace infrastructure use the ftrace function
        trace utility, but other subsystems might too. This file
        displays all functions that have a callback attached to them
        as well as the number of callbacks that have been attached.
        Note, a callback may also call multiple functions which will
        not be listed in this count.

        If the callback registered to be traced by a function with
        the "save regs" attribute (thus even more overhead), an 'R'
        will be displayed on the same line as the function that
        is returning registers.

        If the callback registered to be traced by a function with
        the "ip modify" attribute (thus the regs->ip can be changed),
        an 'I' will be displayed on the same line as the function that
        can be overridden.
        If the architecture supports it, it will also show what callback
        is being directly called by the function. If the count is greater
        than 1 it most likely will be ftrace_ops_list_func().

        If the callback of the function jumps to a trampoline that is
        specific to the callback and not the standard trampoline,
        its address will be printed as well as the function that the
        trampoline calls.

  function_profile_enabled:

        When set it will enable all functions with either the function
        tracer, or if configured, the function graph tracer. It will
        keep a histogram of the number of functions that were called
        and if the function graph tracer was configured, it will also keep
        track of the time spent in those functions. The histogram
        content can be displayed in the files:

        trace_stats/function<cpu> ( function0, function1, etc).

  trace_stats:

        A directory that holds different tracing stats.

  kprobe_events:

        Enable dynamic trace points. See kprobetrace.txt.

  kprobe_profile:

        Dynamic trace points stats. See kprobetrace.txt.

  max_graph_depth:

        Used with the function graph tracer. This is the max depth
        it will trace into a function. Setting this to a value of
        one will show only the first kernel function that is called
        from user space.

  printk_formats:

        This is for tools that read the raw format files. If an event in
        the ring buffer references a string, only a pointer to the string
        is recorded into the buffer and not the string itself. This prevents
        tools from knowing what that string was. This file displays the string
        and address for the string allowing tools to map the pointers to what
        the strings were.

  saved_cmdlines:

        Only the pid of the task is recorded in a trace event unless
        the event specifically saves the task comm as well.
        Ftrace makes a cache of pid mappings to comms to try to display
        comms for events. If a pid for a comm is not listed, then
        "<...>" is displayed in the output.

        If the option "record-cmd" is set to "0", then comms of tasks
        will not be saved during recording. By default, it is enabled.

  saved_cmdlines_size:

        By default, 128 comms are saved (see "saved_cmdlines" above). To
        increase or decrease the amount of comms that are cached, echo
        the number of comms to cache into this file.

  saved_tgids:

        If the option "record-tgid" is set, on each scheduling context switch
        the Task Group ID of a task is saved in a table mapping the PID of
        the thread to its TGID. By default, the "record-tgid" option is
        disabled.

  snapshot:

        This displays the "snapshot" buffer and also lets the user
        take a snapshot of the current running trace.
        See the "Snapshot" section below for more details.

  stack_max_size:

        When the stack tracer is activated, this will display the
        maximum stack size it has encountered.
        See the "Stack Trace" section below.

  stack_trace:

        This displays the stack back trace of the largest stack
        that was encountered when the stack tracer is activated.
        See the "Stack Trace" section below.

  stack_trace_filter:

        This is similar to "set_ftrace_filter" but it limits what
        functions the stack tracer will check.

  trace_clock:

        Whenever an event is recorded into the ring buffer, a
        "timestamp" is added. This stamp comes from a specified
        clock. By default, ftrace uses the "local" clock. This
        clock is very fast and strictly per cpu, but on some
        systems it may not be monotonic with respect to other
        CPUs. In other words, the local clocks may not be in sync
        with local clocks on other CPUs.
        Usual clocks for tracing::

          # cat trace_clock
          [local] global counter x86-tsc

        The clock with the square brackets around it is the one in effect.

        local:
                Default clock, but may not be in sync across CPUs

        global:
                This clock is in sync with all CPUs but may
                be a bit slower than the local clock.

        counter:
                This is not a clock at all, but literally an atomic
                counter. It counts up one by one, but is in sync
                with all CPUs. This is useful when you need to
                know exactly the order events occurred with respect to
                each other on different CPUs.

        uptime:
                This uses the jiffies counter and the time stamp
                is relative to the time since boot up.

        perf:
                This makes ftrace use the same clock that perf uses.
                Eventually perf will be able to read ftrace buffers
                and this will help out in interleaving the data.

        x86-tsc:
                Architectures may define their own clocks. For
                example, x86 uses its own TSC cycle clock here.

        ppc-tb:
                This uses the powerpc timebase register value.
                This is in sync across CPUs and can also be used
                to correlate events across hypervisor/guest if
                tb_offset is known.

        mono:
                This uses the fast monotonic clock (CLOCK_MONOTONIC)
                which is monotonic and is subject to NTP rate adjustments.

        mono_raw:
                This is the raw monotonic clock (CLOCK_MONOTONIC_RAW)
                which is monotonic but is not subject to any rate adjustments
                and ticks at the same rate as the hardware clocksource.

        boot:
                This is the boot clock (CLOCK_BOOTTIME) and is based on the
                fast monotonic clock, but also accounts for time spent in
                suspend. Since the clock access is designed for use in
                tracing in the suspend path, some side effects are possible
                if the clock is accessed after the suspend time is accounted
                for but before the fast mono clock is updated.
                In this case, the clock update
                appears to happen slightly sooner than it normally would have.
                Also on 32-bit systems, it's possible that the 64-bit boot offset
                sees a partial update. These effects are rare and post
                processing should be able to handle them. See comments in the
                ktime_get_boot_fast_ns() function for more information.

        To set a clock, simply echo the clock name into this file::

          # echo global > trace_clock

  trace_marker:

        This is a very useful file for synchronizing user space
        with events happening in the kernel. Strings written into
        this file will be written into the ftrace buffer.

        It is useful in applications to open this file at the start
        of the application and just reference the file descriptor
        for the file::

                void trace_write(const char *fmt, ...)
                {
                        va_list ap;
                        char buf[256];
                        int n;

                        if (trace_fd < 0)
                                return;

                        va_start(ap, fmt);
                        n = vsnprintf(buf, 256, fmt, ap);
                        va_end(ap);

                        write(trace_fd, buf, n);
                }

        start::

                trace_fd = open("trace_marker", O_WRONLY);

  trace_marker_raw:

        This is similar to trace_marker above, but is meant for binary data
        to be written to it, where a tool can be used to parse the data
        from trace_pipe_raw.

  uprobe_events:

        Add dynamic tracepoints in programs.
        See uprobetracer.txt

  uprobe_profile:

        Uprobe statistics. See uprobetracer.txt

  instances:

        This is a way to make multiple trace buffers where different
        events can be recorded in different buffers.
        See "Instances" section below.

  events:

        This is the trace event directory. It holds event tracepoints
        (also known as static tracepoints) that have been compiled
        into the kernel. It shows what event tracepoints exist
        and how they are grouped by system.
        There are "enable"
        files at various levels that can enable the tracepoints
        when a "1" is written to them.

        See events.txt for more information.

  set_event:

        Echoing an event name into this file will enable that event.

        See events.txt for more information.

  available_events:

        A list of events that can be enabled in tracing.

        See events.txt for more information.

  timestamp_mode:

        Certain tracers may change the timestamp mode used when
        logging trace events into the event buffer. Events with
        different modes can coexist within a buffer but the mode in
        effect when an event is logged determines which timestamp mode
        is used for that event. The default timestamp mode is
        'delta'.

        Usual timestamp modes for tracing::

          # cat timestamp_mode
          [delta] absolute

        The timestamp mode with the square brackets around it is the
        one in effect.

        delta:
                Default timestamp mode - timestamp is a delta against
                a per-buffer timestamp.

        absolute:
                The timestamp is a full timestamp, not a delta
                against some other value. As such it takes up more
                space and is less efficient.

  hwlat_detector:

        Directory for the Hardware Latency Detector.
        See "Hardware Latency Detector" section below.

  per_cpu:

        This is a directory that contains the trace per_cpu information.

  per_cpu/cpu0/buffer_size_kb:

        The ftrace buffer is defined per_cpu. That is, there's a separate
        buffer for each CPU to allow writes to be done atomically,
        and free from cache bouncing. These buffers may have different
        sizes. This file is similar to the buffer_size_kb
        file, but it only displays or sets the buffer size for the
        specific CPU (here cpu0).

  per_cpu/cpu0/trace:

        This is similar to the "trace" file, but it will only display
        the data specific to the CPU.
        If written to, it only clears
        the specific CPU buffer.

  per_cpu/cpu0/trace_pipe:

        This is similar to the "trace_pipe" file, and is a consuming
        read, but it will only display (and consume) the data specific
        to the CPU.

  per_cpu/cpu0/trace_pipe_raw:

        For tools that can parse the ftrace ring buffer binary format,
        the trace_pipe_raw file can be used to extract the data
        from the ring buffer directly. With the use of the splice()
        system call, the buffer data can be quickly transferred to
        a file or to the network where a server is collecting the
        data.

        Like trace_pipe, this is a consuming reader, where multiple
        reads will always produce different data.

  per_cpu/cpu0/snapshot:

        This is similar to the main "snapshot" file, but will only
        snapshot the current CPU (if supported). It only displays
        the content of the snapshot for a given CPU, and if
        written to, only clears this CPU buffer.

  per_cpu/cpu0/snapshot_raw:

        Similar to the trace_pipe_raw, but will read the binary format
        from the snapshot buffer for the given CPU.

  per_cpu/cpu0/stats:

        This displays certain stats about the ring buffer:

        entries:
                The number of events that are still in the buffer.

        overrun:
                The number of lost events due to overwriting when
                the buffer was full.

        commit overrun:
                Should always be zero.
                This gets set if so many events happened within a nested
                event (ring buffer is re-entrant), that it fills the
                buffer and starts dropping events.

        bytes:
                Bytes actually read (not overwritten).

        oldest event ts:
                The oldest timestamp in the buffer.

        now ts:
                The current timestamp.

        dropped events:
                Events lost due to the overwrite option being off.

        read events:
                The number of events read.

The Tracers
-----------

Here is the list of current tracers that may be configured.
  "function"

        Function call tracer to trace all kernel functions.

  "function_graph"

        Similar to the function tracer except that the
        function tracer probes the functions on their entry
        whereas the function graph tracer traces on both entry
        and exit of the functions. It then provides the ability
        to draw a graph of function calls similar to C code
        source.

  "blk"

        The block tracer. The tracer used by the blktrace user
        application.

  "hwlat"

        The Hardware Latency tracer is used to detect if the hardware
        produces any latency. See "Hardware Latency Detector" section
        below.

  "irqsoff"

        Traces the areas that disable interrupts and saves
        the trace with the longest max latency.
        See tracing_max_latency. When a new max is recorded,
        it replaces the old trace. It is best to view this
        trace with the latency-format option enabled, which
        happens automatically when the tracer is selected.

  "preemptoff"

        Similar to irqsoff but traces and records the amount of
        time for which preemption is disabled.

  "preemptirqsoff"

        Similar to irqsoff and preemptoff, but traces and
        records the largest time for which irqs and/or preemption
        is disabled.

  "wakeup"

        Traces and records the max latency that it takes for
        the highest priority task to get scheduled after
        it has been woken up.
        Traces all tasks as an average developer would expect.

  "wakeup_rt"

        Traces and records the max latency that it takes for just
        RT tasks (as the current "wakeup" does). This is useful
        for those interested in wake up timings of RT tasks.

  "wakeup_dl"

        Traces and records the max latency that it takes for
        a SCHED_DEADLINE task to be woken (as the "wakeup" and
        "wakeup_rt" does).

  "mmiotrace"

        A special tracer that is used to trace binary modules.
        It will trace all the calls that a module makes to the
        hardware, everything it writes to and reads from the I/O
        as well.

  "branch"

        This tracer can be configured when tracing likely/unlikely
        calls within the kernel. It will trace when a likely and
        unlikely branch is hit and whether it was correct in its
        prediction.

  "nop"

        This is the "trace nothing" tracer. To remove all
        tracers from tracing simply echo "nop" into
        current_tracer.


Examples of using the tracer
----------------------------

Here are typical examples of using the tracers when controlling
them only with the tracefs interface (without using any
user-land utilities).

Output format:
--------------

Here is an example of the output format of the file "trace"::

  # tracer: function
  #
  # entries-in-buffer/entries-written: 140080/250280   #P:4
  #
  #                              _-----=> irqs-off
  #                             / _----=> need-resched
  #                            | / _---=> hardirq/softirq
  #                            || / _--=> preempt-depth
  #                            ||| /     delay
  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
  #              | |       |   ||||       |         |
            bash-1977  [000] .... 17284.993652: sys_close <-system_call_fastpath
            bash-1977  [000] .... 17284.993653: __close_fd <-sys_close
            bash-1977  [000] .... 17284.993653: _raw_spin_lock <-__close_fd
            sshd-1974  [003] .... 17284.993653: __srcu_read_unlock <-fsnotify
            bash-1977  [000] .... 17284.993654: add_preempt_count <-_raw_spin_lock
            bash-1977  [000] ...1 17284.993655: _raw_spin_unlock <-__close_fd
            bash-1977  [000] ...1 17284.993656: sub_preempt_count <-_raw_spin_unlock
            bash-1977  [000] .... 17284.993657: filp_close <-__close_fd
            bash-1977  [000] .... 17284.993657: dnotify_flush <-filp_close
            sshd-1974  [003] .... 17284.993658: sys_select <-system_call_fastpath
            ....

A header is printed with the tracer name that is represented by
the trace. In this case the tracer is "function".
Then it shows the
number of events in the buffer as well as the total number of entries
that were written. The difference is the number of entries that were
lost due to the buffer filling up (250280 - 140080 = 110200 events
lost).

The header explains the content of the events. Task name "bash", the task
PID "1977", the CPU that it was running on "000", the latency format
(explained below), the timestamp in <secs>.<usecs> format, the
function name that was traced "sys_close" and the parent function that
called this function "system_call_fastpath". The timestamp is the time
at which the function was entered.

Latency trace format
--------------------

When the latency-format option is enabled or when one of the latency
tracers is set, the trace file gives somewhat more information to see
why a latency happened. Here is a typical trace::

  # tracer: irqsoff
  #
  # irqsoff latency trace v1.1.5 on 3.8.0-test+
  # --------------------------------------------------------------------
  # latency: 259 us, #4/4, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
  #    -----------------
  #    | task: ps-6143 (uid:0 nice:0 policy:0 rt_prio:0)
  #    -----------------
  #  => started at: __lock_task_sighand
  #  => ended at:   _raw_spin_unlock_irqrestore
  #
  #
  #                  _------=> CPU#
  #                 / _-----=> irqs-off
  #                | / _----=> need-resched
  #                || / _---=> hardirq/softirq
  #                ||| / _--=> preempt-depth
  #                |||| /     delay
  #  cmd     pid   ||||| time  |   caller
  #     \   /      |||||  \    |   /
        ps-6143    2d...    0us!: trace_hardirqs_off <-__lock_task_sighand
        ps-6143    2d..1  259us+: trace_hardirqs_on <-_raw_spin_unlock_irqrestore
        ps-6143    2d..1  263us+: time_hardirqs_on <-_raw_spin_unlock_irqrestore
        ps-6143    2d..1  306us : <stack trace>
   => trace_hardirqs_on_caller
   => trace_hardirqs_on
   => _raw_spin_unlock_irqrestore
   => do_task_stat
   => proc_tgid_stat
   => proc_single_show
   => seq_read
   => vfs_read
   => sys_read
   => system_call_fastpath


This shows that the current tracer is "irqsoff" tracing the time
for which interrupts were disabled. It gives the trace version (which
never changes) and the version of the kernel upon which this was executed
(3.8). Then it displays the max latency in microseconds (259 us). The number
of trace entries displayed and the total number (both are four: #4/4).
VP, KP, SP, and HP are always zero and are reserved for later use.
#P is the number of online CPUs (#P:4).

The task is the process that was running when the latency
occurred. (ps pid: 6143).

The start and stop (the functions in which the interrupts were
disabled and enabled respectively) that caused the latencies:

  - __lock_task_sighand is where the interrupts were disabled.
  - _raw_spin_unlock_irqrestore is where they were enabled again.

The next lines after the header are the trace itself. The header
explains which is which.

  cmd: The name of the process in the trace.

  pid: The PID of that process.

  CPU#: The CPU which the process was running on.

  irqs-off: 'd' interrupts are disabled. '.' otherwise.

        .. caution:: If the architecture does not support a way to
                read the irq flags variable, an 'X' will always
                be printed here.

  need-resched:
        - 'N' both TIF_NEED_RESCHED and PREEMPT_NEED_RESCHED is set,
        - 'n' only TIF_NEED_RESCHED is set,
        - 'p' only PREEMPT_NEED_RESCHED is set,
        - '.' otherwise.
  hardirq/softirq:
        - 'Z' - NMI occurred inside a hardirq
        - 'z' - NMI is running
        - 'H' - hard irq occurred inside a softirq.
        - 'h' - hard irq is running
        - 's' - soft irq is running
        - '.' - normal context.

  preempt-depth: The level of preempt_disabled

The above is mostly meaningful for kernel developers.

  time:
        When the latency-format option is enabled, the trace file
        output includes a timestamp relative to the start of the
        trace. This differs from the output when latency-format
        is disabled, which includes an absolute timestamp.

  delay:
        This is just to help catch your eye a bit better. And
        needs to be fixed to be only relative to the same CPU.
        The marks are determined by the difference between this
        current trace and the next trace.

          - '$' - greater than 1 second
          - '@' - greater than 100 millisecond
          - '*' - greater than 10 millisecond
          - '#' - greater than 1000 microsecond
          - '!' - greater than 100 microsecond
          - '+' - greater than 10 microsecond
          - ' ' - less than or equal to 10 microsecond.

  The rest is the same as the 'trace' file.

  Note, the latency tracers will usually end with a back trace
  to easily find where the latency occurred.

trace_options
-------------

The trace_options file (or the options directory) is used to control
what gets printed in the trace output, or to manipulate the tracers.
To see what is available, simply cat the file::

  cat trace_options
	print-parent
	nosym-offset
	nosym-addr
	noverbose
	noraw
	nohex
	nobin
	noblock
	trace_printk
	annotate
	nouserstacktrace
	nosym-userobj
	noprintk-msg-only
	context-info
	nolatency-format
	record-cmd
	norecord-tgid
	overwrite
	nodisable_on_free
	irq-info
	markers
	noevent-fork
	function-trace
	nofunction-fork
	nodisplay-graph
	nostacktrace
	nobranch

To disable one of the options, echo in the option prepended with
"no"::

  echo noprint-parent > trace_options

To enable an option, leave off the "no"::

  echo sym-offset > trace_options

Here are the available options:

  print-parent
	On function traces, display the calling (parent)
	function as well as the function being traced.
	::

	  print-parent:
	   bash-4000  [01]  1477.606694: simple_strtoul <-kstrtoul

	  noprint-parent:
	   bash-4000  [01]  1477.606694: simple_strtoul


  sym-offset
	Display not only the function name, but also the
	offset within the function. For example, instead of
	seeing just "ktime_get", you will see
	"ktime_get+0xb/0x20".
	::

	  sym-offset:
	   bash-4000  [01]  1477.606694: simple_strtoul+0x6/0xa0

  sym-addr
	Display the function address in addition to the
	function name.
	::

	  sym-addr:
	   bash-4000  [01]  1477.606694: simple_strtoul <c0339346>

  verbose
	This deals with the trace file when the
	latency-format option is enabled.
	::

	  bash  4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \
	  (+0.000ms): simple_strtoul (kstrtoul)

  raw
	This will display raw numbers. This option is best for
	use with user applications that can translate the raw
	numbers better than having it done in the kernel.

  hex
	Similar to raw, but the numbers will be in a hexadecimal format.
  bin
	This will print out the formats in raw binary.

  block
	When set, reading trace_pipe will not block when polled.

  trace_printk
	Can disable trace_printk() from writing into the buffer.

  annotate
	It is sometimes confusing when the CPU buffers are full
	and one CPU buffer had a lot of events recently, thus
	a shorter time frame, where another CPU may have only had
	a few events, which lets it have older events. When
	the trace is reported, it shows the oldest events first,
	and it may look like only one CPU ran (the one with the
	oldest events). When the annotate option is set, it will
	display when a new CPU buffer started::

	  <idle>-0     [001] dNs4 21169.031481: wake_up_idle_cpu <-add_timer_on
	  <idle>-0     [001] dNs4 21169.031482: _raw_spin_unlock_irqrestore <-add_timer_on
	  <idle>-0     [001] .Ns4 21169.031484: sub_preempt_count <-_raw_spin_unlock_irqrestore
	  ##### CPU 2 buffer started ####
	  <idle>-0     [002] .N.1 21169.031484: rcu_idle_exit <-cpu_idle
	  <idle>-0     [001] .Ns3 21169.031484: _raw_spin_unlock <-clocksource_watchdog
	  <idle>-0     [001] .Ns3 21169.031485: sub_preempt_count <-_raw_spin_unlock

  userstacktrace
	This option changes the trace. It records a
	stacktrace of the current user space thread after
	each trace event.

  sym-userobj
	When user stacktraces are enabled, look up which
	object the address belongs to, and print a
	relative address. This is especially useful when
	ASLR is on, as otherwise you don't get a chance to
	resolve the address to object/file/line after
	the app is no longer running.

	The lookup is performed when you read
	trace, trace_pipe.
Example::

	  a.out-1623  [000] 40874.465068: /root/a.out[+0x480] <-/root/a.out[+0
	  x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]


  printk-msg-only
	When set, trace_printk()s will only show the format
	and not their parameters (if trace_bprintk() or
	trace_bputs() was used to save the trace_printk()).

  context-info
	Show only the event data. Hides the comm, PID,
	timestamp, CPU, and other useful data.

  latency-format
	This option changes the trace output. When it is enabled,
	the trace displays additional information about the
	latency, as described in "Latency trace format".

  record-cmd
	When any event or tracer is enabled, a hook is enabled
	in the sched_switch trace point to fill comm cache
	with mapped pids and comms. But this may cause some
	overhead, and if you only care about pids, and not the
	name of the task, disabling this option can lower the
	impact of tracing. See "saved_cmdlines".

  record-tgid
	When any event or tracer is enabled, a hook is enabled
	in the sched_switch trace point to fill the cache of
	mapped Thread Group IDs (TGID) mapping to pids. See
	"saved_tgids".

  overwrite
	This controls what happens when the trace buffer is
	full. If "1" (default), the oldest events are
	discarded and overwritten. If "0", then the newest
	events are discarded.
	(see per_cpu/cpu0/stats for overrun and dropped)

  disable_on_free
	When the free_buffer is closed, tracing will
	stop (tracing_on set to 0).

  irq-info
	Shows the interrupt, preempt count, need resched data.
	When disabled, the trace looks like::

	  # tracer: function
	  #
	  # entries-in-buffer/entries-written: 144405/9452052   #P:4
	  #
	  #           TASK-PID   CPU#      TIMESTAMP  FUNCTION
	  #              | |       |          |         |
	         <idle>-0     [002]  23636.756054: ttwu_do_activate.constprop.89 <-try_to_wake_up
	         <idle>-0     [002]  23636.756054: activate_task <-ttwu_do_activate.constprop.89
	         <idle>-0     [002]  23636.756055: enqueue_task <-activate_task


  markers
	When set, the trace_marker is writable (only by root).
	When disabled, the trace_marker will error with EINVAL
	on write.

  event-fork
	When set, tasks with PIDs listed in set_event_pid will have
	the PIDs of their children added to set_event_pid when those
	tasks fork. Also, when tasks with PIDs in set_event_pid exit,
	their PIDs will be removed from the file.

  function-trace
	The latency tracers will enable function tracing
	if this option is enabled (which it is by default). When
	it is disabled, the latency tracers do not trace
	functions. This keeps the overhead of the tracer down
	when performing latency tests.

  function-fork
	When set, tasks with PIDs listed in set_ftrace_pid will
	have the PIDs of their children added to set_ftrace_pid
	when those tasks fork. Also, when tasks with PIDs in
	set_ftrace_pid exit, their PIDs will be removed from the
	file.

  display-graph
	When set, the latency tracers (irqsoff, wakeup, etc) will
	use function graph tracing instead of function tracing.

  stacktrace
	When set, a stack trace is recorded after any trace event
	is recorded.

  branch
	Enable branch tracing with the tracer. This enables the
	branch tracer along with the currently set tracer. Enabling
	this with the "nop" tracer is the same as just enabling the
	"branch" tracer.

.. tip:: Some tracers have their own options. They only appear in this
	 file when the tracer is active. They always appear in the
	 options directory.


Here are the per tracer options:

Options for function tracer:

  func_stack_trace
	When set, a stack trace is recorded after every
	function that is recorded. NOTE! Limit the functions
	that are recorded before enabling this, with
	"set_ftrace_filter", otherwise the system performance
	will be critically degraded. Remember to disable
	this option before clearing the function filter.

Options for function_graph tracer:

 Since the function_graph tracer has a slightly different output
 it has its own options to control what is displayed.

  funcgraph-overrun
	When set, the "overrun" of the graph stack is
	displayed after each function traced. The
	overrun is when the stack depth of the calls
	is greater than what is reserved for each task.
	Each task has a fixed array of functions to
	trace in the call graph. If the depth of the
	calls exceeds that, the function is not traced.
	The overrun is the number of functions missed
	due to exceeding this array.

  funcgraph-cpu
	When set, the CPU number of the CPU where the trace
	occurred is displayed.

  funcgraph-overhead
	When set, if the function takes longer than
	a certain amount, then a delay marker is
	displayed. See "delay" above, under the
	header description.

  funcgraph-proc
	Unlike other tracers, the process' command line
	is not displayed by default, but instead only
	when a task is traced in and out during a context
	switch. Enabling this option displays the command
	of each process at every line.

  funcgraph-duration
	At the end of each function (the return)
	the duration of the amount of time in the
	function is displayed in microseconds.
  funcgraph-abstime
	When set, the timestamp is displayed at each line.

  funcgraph-irqs
	When disabled, functions that happen inside an
	interrupt will not be traced.

  funcgraph-tail
	When set, the return event will include the function
	that it represents. By default this is off, and
	only a closing curly bracket "}" is displayed for
	the return of a function.

  sleep-time
	When running the function graph tracer, include
	the time a task spends scheduled out in its function.
	When enabled, it will account time the task has been
	scheduled out as part of the function call.

  graph-time
	When running the function profiler with the function
	graph tracer, include the time to call nested functions.
	When this is not set, the time reported for the function
	will only include the time the function itself executed
	for, not the time for functions that it called.

Options for blk tracer:

  blk_classic
	Shows a more minimalistic output.


irqsoff
-------

When interrupts are disabled, the CPU can not react to any other
external event (besides NMIs and SMIs). This prevents the timer
interrupt from triggering or the mouse interrupt from letting
the kernel know of a new mouse event. The result is a latency
with the reaction time.

The irqsoff tracer tracks the time for which interrupts are
disabled. When a new maximum latency is hit, the tracer saves
the trace leading up to that latency point so that every time a
new maximum is reached, the old saved trace is discarded and the
new trace is saved.

To reset the maximum, echo 0 into tracing_max_latency. Here is
an example::

  # echo 0 > options/function-trace
  # echo irqsoff > current_tracer
  # echo 1 > tracing_on
  # echo 0 > tracing_max_latency
  # ls -ltr
  [...]
  # echo 0 > tracing_on
  # cat trace
  # tracer: irqsoff
  #
  # irqsoff latency trace v1.1.5 on 3.8.0-test+
  # --------------------------------------------------------------------
  # latency: 16 us, #4/4, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
  #    -----------------
  #    | task: swapper/0-0 (uid:0 nice:0 policy:0 rt_prio:0)
  #    -----------------
  #  => started at: run_timer_softirq
  #  => ended at:   run_timer_softirq
  #
  #
  #                  _------=> CPU#
  #                 / _-----=> irqs-off
  #                | / _----=> need-resched
  #                || / _---=> hardirq/softirq
  #                ||| / _--=> preempt-depth
  #                |||| /     delay
  #  cmd     pid   ||||| time  |   caller
  #     \   /      |||||  \    |   /
    <idle>-0       0d.s2    0us+: _raw_spin_lock_irq <-run_timer_softirq
    <idle>-0       0dNs3   17us : _raw_spin_unlock_irq <-run_timer_softirq
    <idle>-0       0dNs3   17us+: trace_hardirqs_on <-run_timer_softirq
    <idle>-0       0dNs3   25us : <stack trace>
   => _raw_spin_unlock_irq
   => run_timer_softirq
   => __do_softirq
   => call_softirq
   => do_softirq
   => irq_exit
   => smp_apic_timer_interrupt
   => apic_timer_interrupt
   => rcu_idle_exit
   => cpu_idle
   => rest_init
   => start_kernel
   => x86_64_start_reservations
   => x86_64_start_kernel

Here we see that we had a latency of 16 microseconds (which is
very good). The _raw_spin_lock_irq in run_timer_softirq disabled
interrupts. The difference between the 16 and the displayed
timestamp 25us occurred because the clock was incremented
between the time of recording the max latency and the time of
recording the function that had that latency.

Note the above example had function-trace not set.
If we set function-trace, we get a much larger output::

 with echo 1 > options/function-trace

  # tracer: irqsoff
  #
  # irqsoff latency trace v1.1.5 on 3.8.0-test+
  # --------------------------------------------------------------------
  # latency: 71 us, #168/168, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
  #    -----------------
  #    | task: bash-2042 (uid:0 nice:0 policy:0 rt_prio:0)
  #    -----------------
  #  => started at: ata_scsi_queuecmd
  #  => ended at:   ata_scsi_queuecmd
  #
  #
  #                  _------=> CPU#
  #                 / _-----=> irqs-off
  #                | / _----=> need-resched
  #                || / _---=> hardirq/softirq
  #                ||| / _--=> preempt-depth
  #                |||| /     delay
  #  cmd     pid   ||||| time  |   caller
  #     \   /      |||||  \    |   /
      bash-2042    3d...    0us : _raw_spin_lock_irqsave <-ata_scsi_queuecmd
      bash-2042    3d...    0us : add_preempt_count <-_raw_spin_lock_irqsave
      bash-2042    3d..1    1us : ata_scsi_find_dev <-ata_scsi_queuecmd
      bash-2042    3d..1    1us : __ata_scsi_find_dev <-ata_scsi_find_dev
      bash-2042    3d..1    2us : ata_find_dev.part.14 <-__ata_scsi_find_dev
      bash-2042    3d..1    2us : ata_qc_new_init <-__ata_scsi_queuecmd
      bash-2042    3d..1    3us : ata_sg_init <-__ata_scsi_queuecmd
      bash-2042    3d..1    4us : ata_scsi_rw_xlat <-__ata_scsi_queuecmd
      bash-2042    3d..1    4us : ata_build_rw_tf <-ata_scsi_rw_xlat
  [...]
      bash-2042    3d..1   67us : delay_tsc <-__delay
      bash-2042    3d..1   67us : add_preempt_count <-delay_tsc
      bash-2042    3d..2   67us : sub_preempt_count <-delay_tsc
      bash-2042    3d..1   67us : add_preempt_count <-delay_tsc
      bash-2042    3d..2   68us : sub_preempt_count <-delay_tsc
      bash-2042    3d..1   68us+: ata_bmdma_start <-ata_bmdma_qc_issue
      bash-2042    3d..1   71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
      bash-2042    3d..1   71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
      bash-2042    3d..1   72us+: trace_hardirqs_on <-ata_scsi_queuecmd
      bash-2042    3d..1  120us : <stack trace>
   => _raw_spin_unlock_irqrestore
   => ata_scsi_queuecmd
   => scsi_dispatch_cmd
   => scsi_request_fn
   => __blk_run_queue_uncond
   => __blk_run_queue
   => blk_queue_bio
   => generic_make_request
   => submit_bio
   => submit_bh
   => __ext3_get_inode_loc
   => ext3_iget
   => ext3_lookup
   => lookup_real
   => __lookup_hash
   => walk_component
   => lookup_last
   => path_lookupat
   => filename_lookup
   => user_path_at_empty
   => user_path_at
   => vfs_fstatat
   => vfs_stat
   => sys_newstat
   => system_call_fastpath


Here we traced a 71 microsecond latency. But we also see all the
functions that were called during that time. Note that by
enabling function tracing, we incur an added overhead. This
overhead may extend the latency times. But nevertheless, this
trace has provided some very helpful debugging information.


preemptoff
----------

When preemption is disabled, we may be able to receive
interrupts but the task cannot be preempted and a higher
priority task must wait for preemption to be enabled again
before it can preempt a lower priority task.

The preemptoff tracer traces the places that disable preemption.
Like the irqsoff tracer, it records the maximum latency for
which preemption was disabled.
The control of the preemptoff tracer is much like the irqsoff
tracer.
::

  # echo 0 > options/function-trace
  # echo preemptoff > current_tracer
  # echo 1 > tracing_on
  # echo 0 > tracing_max_latency
  # ls -ltr
  [...]
  # echo 0 > tracing_on
  # cat trace
  # tracer: preemptoff
  #
  # preemptoff latency trace v1.1.5 on 3.8.0-test+
  # --------------------------------------------------------------------
  # latency: 46 us, #4/4, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
  #    -----------------
  #    | task: sshd-1991 (uid:0 nice:0 policy:0 rt_prio:0)
  #    -----------------
  #  => started at: do_IRQ
  #  => ended at:   do_IRQ
  #
  #
  #                  _------=> CPU#
  #                 / _-----=> irqs-off
  #                | / _----=> need-resched
  #                || / _---=> hardirq/softirq
  #                ||| / _--=> preempt-depth
  #                |||| /     delay
  #  cmd     pid   ||||| time  |   caller
  #     \   /      |||||  \    |   /
      sshd-1991    1d.h.    0us+: irq_enter <-do_IRQ
      sshd-1991    1d..1   46us : irq_exit <-do_IRQ
      sshd-1991    1d..1   47us+: trace_preempt_on <-do_IRQ
      sshd-1991    1d..1   52us : <stack trace>
   => sub_preempt_count
   => irq_exit
   => do_IRQ
   => ret_from_intr


This has some more changes. Preemption was disabled when an
interrupt came in (notice the 'h'), and was enabled on exit.
But we also see that interrupts have been disabled when entering
the preempt off section and leaving it (the 'd'). We do not know if
interrupts were enabled in the meantime or shortly after this
was over.
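The next trace was taken with function tracing switched back on. A
sketch of the commands that would produce such a run, mirroring the
irqsoff example above (the exact invocation is not shown in the
original)::

  # echo 1 > options/function-trace
  # echo preemptoff > current_tracer
  # echo 1 > tracing_on
  # echo 0 > tracing_max_latency
  # ls -ltr
  [...]
  # echo 0 > tracing_on
  # cat trace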
1448:: 1449 1450 # tracer: preemptoff 1451 # 1452 # preemptoff latency trace v1.1.5 on 3.8.0-test+ 1453 # -------------------------------------------------------------------- 1454 # latency: 83 us, #241/241, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4) 1455 # ----------------- 1456 # | task: bash-1994 (uid:0 nice:0 policy:0 rt_prio:0) 1457 # ----------------- 1458 # => started at: wake_up_new_task 1459 # => ended at: task_rq_unlock 1460 # 1461 # 1462 # _------=> CPU# 1463 # / _-----=> irqs-off 1464 # | / _----=> need-resched 1465 # || / _---=> hardirq/softirq 1466 # ||| / _--=> preempt-depth 1467 # |||| / delay 1468 # cmd pid ||||| time | caller 1469 # \ / ||||| \ | / 1470 bash-1994 1d..1 0us : _raw_spin_lock_irqsave <-wake_up_new_task 1471 bash-1994 1d..1 0us : select_task_rq_fair <-select_task_rq 1472 bash-1994 1d..1 1us : __rcu_read_lock <-select_task_rq_fair 1473 bash-1994 1d..1 1us : source_load <-select_task_rq_fair 1474 bash-1994 1d..1 1us : source_load <-select_task_rq_fair 1475 [...] 1476 bash-1994 1d..1 12us : irq_enter <-smp_apic_timer_interrupt 1477 bash-1994 1d..1 12us : rcu_irq_enter <-irq_enter 1478 bash-1994 1d..1 13us : add_preempt_count <-irq_enter 1479 bash-1994 1d.h1 13us : exit_idle <-smp_apic_timer_interrupt 1480 bash-1994 1d.h1 13us : hrtimer_interrupt <-smp_apic_timer_interrupt 1481 bash-1994 1d.h1 13us : _raw_spin_lock <-hrtimer_interrupt 1482 bash-1994 1d.h1 14us : add_preempt_count <-_raw_spin_lock 1483 bash-1994 1d.h2 14us : ktime_get_update_offsets <-hrtimer_interrupt 1484 [...] 
      bash-1994    1d.h1   35us : lapic_next_event <-clockevents_program_event
      bash-1994    1d.h1   35us : irq_exit <-smp_apic_timer_interrupt
      bash-1994    1d.h1   36us : sub_preempt_count <-irq_exit
      bash-1994    1d..2   36us : do_softirq <-irq_exit
      bash-1994    1d..2   36us : __do_softirq <-call_softirq
      bash-1994    1d..2   36us : __local_bh_disable <-__do_softirq
      bash-1994    1d.s2   37us : add_preempt_count <-_raw_spin_lock_irq
      bash-1994    1d.s3   38us : _raw_spin_unlock <-run_timer_softirq
      bash-1994    1d.s3   39us : sub_preempt_count <-_raw_spin_unlock
      bash-1994    1d.s2   39us : call_timer_fn <-run_timer_softirq
  [...]
      bash-1994    1dNs2   81us : cpu_needs_another_gp <-rcu_process_callbacks
      bash-1994    1dNs2   82us : __local_bh_enable <-__do_softirq
      bash-1994    1dNs2   82us : sub_preempt_count <-__local_bh_enable
      bash-1994    1dN.2   82us : idle_cpu <-irq_exit
      bash-1994    1dN.2   83us : rcu_irq_exit <-irq_exit
      bash-1994    1dN.2   83us : sub_preempt_count <-irq_exit
      bash-1994    1.N.1   84us : _raw_spin_unlock_irqrestore <-task_rq_unlock
      bash-1994    1.N.1   84us+: trace_preempt_on <-task_rq_unlock
      bash-1994    1.N.1  104us : <stack trace>
   => sub_preempt_count
   => _raw_spin_unlock_irqrestore
   => task_rq_unlock
   => wake_up_new_task
   => do_fork
   => sys_clone
   => stub_clone


The above is an example of the preemptoff trace with
function-trace set. Here we see that interrupts were not disabled
the entire time. The irq_enter code lets us know that we entered
an interrupt 'h'. Before that, the functions being traced still
show that it is not in an interrupt, but we can see from the
functions themselves that this is not the case.

preemptirqsoff
--------------

Knowing the locations that have interrupts disabled or
preemption disabled for the longest times is helpful. But
sometimes we would like to know when either preemption and/or
interrupts are disabled.
Consider the following code::

    local_irq_disable();
    call_function_with_irqs_off();
    preempt_disable();
    call_function_with_irqs_and_preemption_off();
    local_irq_enable();
    call_function_with_preemption_off();
    preempt_enable();

The irqsoff tracer will record the total length of
call_function_with_irqs_off() and
call_function_with_irqs_and_preemption_off().

The preemptoff tracer will record the total length of
call_function_with_irqs_and_preemption_off() and
call_function_with_preemption_off().

But neither will trace the time that interrupts and/or
preemption is disabled. This total time is the time that we can
not schedule. To record this time, use the preemptirqsoff
tracer.

Again, using this trace is much like the irqsoff and preemptoff
tracers.
::

  # echo 0 > options/function-trace
  # echo preemptirqsoff > current_tracer
  # echo 1 > tracing_on
  # echo 0 > tracing_max_latency
  # ls -ltr
  [...]
  # echo 0 > tracing_on
  # cat trace
  # tracer: preemptirqsoff
  #
  # preemptirqsoff latency trace v1.1.5 on 3.8.0-test+
  # --------------------------------------------------------------------
  # latency: 100 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
  #    -----------------
  #    | task: ls-2230 (uid:0 nice:0 policy:0 rt_prio:0)
  #    -----------------
  #  => started at: ata_scsi_queuecmd
  #  => ended at:   ata_scsi_queuecmd
  #
  #
  #                  _------=> CPU#
  #                 / _-----=> irqs-off
  #                | / _----=> need-resched
  #                || / _---=> hardirq/softirq
  #                ||| / _--=> preempt-depth
  #                |||| /     delay
  #  cmd     pid   ||||| time  |   caller
  #     \   /      |||||  \    |   /
        ls-2230    3d...    0us+: _raw_spin_lock_irqsave <-ata_scsi_queuecmd
        ls-2230    3...1  100us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
        ls-2230    3...1  101us+: trace_preempt_on <-ata_scsi_queuecmd
        ls-2230    3...1  111us : <stack trace>
   => sub_preempt_count
   => _raw_spin_unlock_irqrestore
   => ata_scsi_queuecmd
   => scsi_dispatch_cmd
   => scsi_request_fn
   => __blk_run_queue_uncond
   => __blk_run_queue
   => blk_queue_bio
   => generic_make_request
   => submit_bio
   => submit_bh
   => ext3_bread
   => ext3_dir_bread
   => htree_dirblock_to_tree
   => ext3_htree_fill_tree
   => ext3_readdir
   => vfs_readdir
   => sys_getdents
   => system_call_fastpath


The trace_hardirqs_off_thunk is called from assembly on x86 when
interrupts are disabled in the assembly code. Without the
function tracing, we do not know if interrupts were enabled
within the preemption points. We do see that it started with
preemption enabled.

Here is a trace with function-trace set::

  # tracer: preemptirqsoff
  #
  # preemptirqsoff latency trace v1.1.5 on 3.8.0-test+
  # --------------------------------------------------------------------
  # latency: 161 us, #339/339, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
  #    -----------------
  #    | task: ls-2269 (uid:0 nice:0 policy:0 rt_prio:0)
  #    -----------------
  #  => started at: schedule
  #  => ended at:   mutex_unlock
  #
  #
  #                  _------=> CPU#
  #                 / _-----=> irqs-off
  #                | / _----=> need-resched
  #                || / _---=> hardirq/softirq
  #                ||| / _--=> preempt-depth
  #                |||| /     delay
  #  cmd     pid   ||||| time  |   caller
  #     \   /      |||||  \    |   /
    kworker/-59    3...1    0us : __schedule <-schedule
    kworker/-59    3d..1    0us : rcu_preempt_qs <-rcu_note_context_switch
    kworker/-59    3d..1    1us : add_preempt_count <-_raw_spin_lock_irq
    kworker/-59    3d..2    1us : deactivate_task <-__schedule
    kworker/-59    3d..2    1us : dequeue_task <-deactivate_task
    kworker/-59    3d..2    2us : update_rq_clock <-dequeue_task
    kworker/-59    3d..2    2us : dequeue_task_fair <-dequeue_task
    kworker/-59    3d..2    2us : update_curr <-dequeue_task_fair
    kworker/-59    3d..2    2us : update_min_vruntime <-update_curr
    kworker/-59    3d..2    3us : cpuacct_charge <-update_curr
    kworker/-59    3d..2    3us : __rcu_read_lock <-cpuacct_charge
    kworker/-59    3d..2    3us : __rcu_read_unlock <-cpuacct_charge
    kworker/-59    3d..2    3us : update_cfs_rq_blocked_load <-dequeue_task_fair
    kworker/-59    3d..2    4us : clear_buddies <-dequeue_task_fair
    kworker/-59    3d..2    4us : account_entity_dequeue <-dequeue_task_fair
    kworker/-59    3d..2    4us : update_min_vruntime <-dequeue_task_fair
    kworker/-59    3d..2    4us : update_cfs_shares <-dequeue_task_fair
    kworker/-59    3d..2    5us : hrtick_update <-dequeue_task_fair
    kworker/-59    3d..2    5us : wq_worker_sleeping <-__schedule
    kworker/-59    3d..2    5us : kthread_data <-wq_worker_sleeping
    kworker/-59    3d..2    5us : put_prev_task_fair <-__schedule
    kworker/-59    3d..2    6us : pick_next_task_fair <-pick_next_task
    kworker/-59    3d..2    6us : clear_buddies <-pick_next_task_fair
    kworker/-59    3d..2    6us : set_next_entity <-pick_next_task_fair
    kworker/-59    3d..2    6us : update_stats_wait_end <-set_next_entity
        ls-2269    3d..2    7us : finish_task_switch <-__schedule
        ls-2269    3d..2    7us : _raw_spin_unlock_irq <-finish_task_switch
        ls-2269    3d..2    8us : do_IRQ <-ret_from_intr
        ls-2269    3d..2    8us : irq_enter <-do_IRQ
        ls-2269    3d..2    8us : rcu_irq_enter <-irq_enter
        ls-2269    3d..2    9us : add_preempt_count <-irq_enter
        ls-2269    3d.h2    9us : exit_idle <-do_IRQ
  [...]
        ls-2269    3d.h3   20us : sub_preempt_count <-_raw_spin_unlock
        ls-2269    3d.h2   20us : irq_exit <-do_IRQ
        ls-2269    3d.h2   21us : sub_preempt_count <-irq_exit
        ls-2269    3d..3   21us : do_softirq <-irq_exit
        ls-2269    3d..3   21us : __do_softirq <-call_softirq
        ls-2269    3d..3   21us+: __local_bh_disable <-__do_softirq
        ls-2269    3d.s4   29us : sub_preempt_count <-_local_bh_enable_ip
        ls-2269    3d.s5   29us : sub_preempt_count <-_local_bh_enable_ip
        ls-2269    3d.s5   31us : do_IRQ <-ret_from_intr
        ls-2269    3d.s5   31us : irq_enter <-do_IRQ
        ls-2269    3d.s5   31us : rcu_irq_enter <-irq_enter
  [...]
        ls-2269    3d.s5   31us : rcu_irq_enter <-irq_enter
        ls-2269    3d.s5   32us : add_preempt_count <-irq_enter
        ls-2269    3d.H5   32us : exit_idle <-do_IRQ
        ls-2269    3d.H5   32us : handle_irq <-do_IRQ
        ls-2269    3d.H5   32us : irq_to_desc <-handle_irq
        ls-2269    3d.H5   33us : handle_fasteoi_irq <-handle_irq
  [...]
        ls-2269    3d.s5  158us : _raw_spin_unlock_irqrestore <-rtl8139_poll
        ls-2269    3d.s3  158us : net_rps_action_and_irq_enable.isra.65 <-net_rx_action
        ls-2269    3d.s3  159us : __local_bh_enable <-__do_softirq
        ls-2269    3d.s3  159us : sub_preempt_count <-__local_bh_enable
        ls-2269    3d..3  159us : idle_cpu <-irq_exit
        ls-2269    3d..3  159us : rcu_irq_exit <-irq_exit
        ls-2269    3d..3  160us : sub_preempt_count <-irq_exit
        ls-2269    3d...  161us : __mutex_unlock_slowpath <-mutex_unlock
        ls-2269    3d...  162us+: trace_hardirqs_on <-mutex_unlock
        ls-2269    3d...  186us : <stack trace>
   => __mutex_unlock_slowpath
   => mutex_unlock
   => process_output
   => n_tty_write
   => tty_write
   => vfs_write
   => sys_write
   => system_call_fastpath

This is an interesting trace. It started with kworker running and
scheduling out and ls taking over. But as soon as ls released the
rq lock and enabled interrupts (but not preemption) an interrupt
triggered. When the interrupt finished, it started running softirqs.
But while the softirq was running, another interrupt triggered.
When an interrupt is running inside a softirq, the annotation is 'H'.


wakeup
------

One common case that people are interested in tracing is the
time it takes for a task that is woken to actually wake up.
Now for non Real-Time tasks, this can be arbitrary. But tracing
it nonetheless can be interesting.

Without function tracing::

  # echo 0 > options/function-trace
  # echo wakeup > current_tracer
  # echo 1 > tracing_on
  # echo 0 > tracing_max_latency
  # chrt -f 5 sleep 1
  # echo 0 > tracing_on
  # cat trace
  # tracer: wakeup
  #
  # wakeup latency trace v1.1.5 on 3.8.0-test+
  # --------------------------------------------------------------------
  # latency: 15 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
  #    -----------------
  #    | task: kworker/3:1H-312 (uid:0 nice:-20 policy:0 rt_prio:0)
  #    -----------------
  #
  #                  _------=> CPU#
  #                 / _-----=> irqs-off
  #                | / _----=> need-resched
  #                || / _---=> hardirq/softirq
  #                ||| / _--=> preempt-depth
  #                |||| /     delay
  #  cmd     pid   ||||| time  |   caller
  #     \   /      |||||  \    |   /
    <idle>-0       3dNs7    0us :      0:120:R   + [003]   312:100:R kworker/3:1H
    <idle>-0       3dNs7    1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
    <idle>-0       3d..3   15us : __schedule <-schedule
    <idle>-0       3d..3   15us :      0:120:R ==> [003]   312:100:R kworker/3:1H

The tracer only traces the highest priority task in the system
to avoid tracing the normal circumstances. Here we see that
the kworker with a nice priority of -20 (not very nice), took
just 15 microseconds from the time it woke up, to the time it
ran.

Non Real-Time tasks are not that interesting. A more interesting
trace is to concentrate only on Real-Time tasks.
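The recorded worst case can also be read back directly, without
parsing the trace header; as noted at the start of this document, the
value is in microseconds. A quick sketch, run from the tracefs
directory after a measurement::

  # cat tracing_max_latency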
wakeup_rt
---------

In a Real-Time environment it is very important to know the
wakeup time of the highest priority task, from when it is woken
to when it executes. This is also known as "schedule
latency". I stress the point that this is about RT tasks. It is
also important to know the scheduling latency of non-RT tasks,
but the average schedule latency is better for non-RT tasks.
Tools like LatencyTop are more appropriate for such
measurements.

Real-Time environments are interested in the worst case latency.
That is the longest latency it takes for something to happen,
and not the average. We can have a very fast scheduler that may
only have a large latency once in a while, but that would not
work well with Real-Time tasks. The wakeup_rt tracer was designed
to record the worst case wakeups of RT tasks. Non-RT tasks are
not recorded because the tracer only records one worst case and
tracing non-RT tasks that are unpredictable will overwrite the
worst case latency of RT tasks (just run the normal wakeup
tracer for a while to see that effect).

Since this tracer only deals with RT tasks, we will run this
slightly differently than we did with the previous tracers.
Instead of performing an 'ls', we will run 'sleep 1' under
'chrt' which changes the priority of the task.
::

  # echo 0 > options/function-trace
  # echo wakeup_rt > current_tracer
  # echo 1 > tracing_on
  # echo 0 > tracing_max_latency
  # chrt -f 5 sleep 1
  # echo 0 > tracing_on
  # cat trace
  # tracer: wakeup_rt
  #
  # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
  # --------------------------------------------------------------------
  # latency: 5 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
  #    -----------------
  #    | task: sleep-2389 (uid:0 nice:0 policy:1 rt_prio:5)
  #    -----------------
  #
  #                  _------=> CPU#
  #                 / _-----=> irqs-off
  #                | / _----=> need-resched
  #                || / _---=> hardirq/softirq
  #                ||| / _--=> preempt-depth
  #                |||| /     delay
  #  cmd     pid   ||||| time  |   caller
  #     \   /      |||||  \    |   /
    <idle>-0       3d.h4    0us :      0:120:R   + [003]  2389: 94:R sleep
    <idle>-0       3d.h4    1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
    <idle>-0       3d..3    5us : __schedule <-schedule
    <idle>-0       3d..3    5us :      0:120:R ==> [003]  2389: 94:R sleep


Running this on an idle system, we see that it only took 5 microseconds
to perform the task switch. Note, since the trace point in the scheduler
is before the actual "switch", we stop the tracing when the recorded task
is about to schedule in. This may change if we add a new marker at the
end of the scheduler.

Notice that the recorded task is 'sleep' with the PID of 2389
and it has an rt_prio of 5. This priority is the user-space priority
and not the internal kernel priority. The policy is 1 for
SCHED_FIFO and 2 for SCHED_RR.

Note that the trace data shows the internal priority (99 - rtprio).
::

  <idle>-0       3d..3    5us :      0:120:R ==> [003]  2389: 94:R sleep

The 0:120:R means idle was running with a nice priority of 0 (120 - 120)
and in the running state 'R'. The sleep task was scheduled in with
2389: 94:R. 
That is, the priority shown is the kernel rtprio (99 - 5 = 94),
and it too is in the running state.

Doing the same with chrt -r 5 and function-trace set::

  echo 1 > options/function-trace

  # tracer: wakeup_rt
  #
  # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
  # --------------------------------------------------------------------
  # latency: 29 us, #85/85, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
  #    -----------------
  #    | task: sleep-2448 (uid:0 nice:0 policy:1 rt_prio:5)
  #    -----------------
  #
  #                  _------=> CPU#
  #                 / _-----=> irqs-off
  #                | / _----=> need-resched
  #                || / _---=> hardirq/softirq
  #                ||| / _--=> preempt-depth
  #                |||| /     delay
  #  cmd     pid   ||||| time  |   caller
  #     \   /      |||||  \    |   /
    <idle>-0       3d.h4    1us+:      0:120:R   + [003]  2448: 94:R sleep
    <idle>-0       3d.h4    2us : ttwu_do_activate.constprop.87 <-try_to_wake_up
    <idle>-0       3d.h3    3us : check_preempt_curr <-ttwu_do_wakeup
    <idle>-0       3d.h3    3us : resched_curr <-check_preempt_curr
    <idle>-0       3dNh3    4us : task_woken_rt <-ttwu_do_wakeup
    <idle>-0       3dNh3    4us : _raw_spin_unlock <-try_to_wake_up
    <idle>-0       3dNh3    4us : sub_preempt_count <-_raw_spin_unlock
    <idle>-0       3dNh2    5us : ttwu_stat <-try_to_wake_up
    <idle>-0       3dNh2    5us : _raw_spin_unlock_irqrestore <-try_to_wake_up
    <idle>-0       3dNh2    6us : sub_preempt_count <-_raw_spin_unlock_irqrestore
    <idle>-0       3dNh1    6us : _raw_spin_lock <-__run_hrtimer
    <idle>-0       3dNh1    6us : add_preempt_count <-_raw_spin_lock
    <idle>-0       3dNh2    7us : _raw_spin_unlock <-hrtimer_interrupt
    <idle>-0       3dNh2    7us : sub_preempt_count <-_raw_spin_unlock
    <idle>-0       3dNh1    7us : tick_program_event <-hrtimer_interrupt
    <idle>-0       3dNh1    7us : clockevents_program_event <-tick_program_event
    <idle>-0       3dNh1    8us : ktime_get <-clockevents_program_event
    <idle>-0       3dNh1    8us : lapic_next_event <-clockevents_program_event
    <idle>-0       3dNh1    8us : irq_exit 
<-smp_apic_timer_interrupt 1887 <idle>-0 3dNh1 9us : sub_preempt_count <-irq_exit 1888 <idle>-0 3dN.2 9us : idle_cpu <-irq_exit 1889 <idle>-0 3dN.2 9us : rcu_irq_exit <-irq_exit 1890 <idle>-0 3dN.2 10us : rcu_eqs_enter_common.isra.45 <-rcu_irq_exit 1891 <idle>-0 3dN.2 10us : sub_preempt_count <-irq_exit 1892 <idle>-0 3.N.1 11us : rcu_idle_exit <-cpu_idle 1893 <idle>-0 3dN.1 11us : rcu_eqs_exit_common.isra.43 <-rcu_idle_exit 1894 <idle>-0 3.N.1 11us : tick_nohz_idle_exit <-cpu_idle 1895 <idle>-0 3dN.1 12us : menu_hrtimer_cancel <-tick_nohz_idle_exit 1896 <idle>-0 3dN.1 12us : ktime_get <-tick_nohz_idle_exit 1897 <idle>-0 3dN.1 12us : tick_do_update_jiffies64 <-tick_nohz_idle_exit 1898 <idle>-0 3dN.1 13us : cpu_load_update_nohz <-tick_nohz_idle_exit 1899 <idle>-0 3dN.1 13us : _raw_spin_lock <-cpu_load_update_nohz 1900 <idle>-0 3dN.1 13us : add_preempt_count <-_raw_spin_lock 1901 <idle>-0 3dN.2 13us : __cpu_load_update <-cpu_load_update_nohz 1902 <idle>-0 3dN.2 14us : sched_avg_update <-__cpu_load_update 1903 <idle>-0 3dN.2 14us : _raw_spin_unlock <-cpu_load_update_nohz 1904 <idle>-0 3dN.2 14us : sub_preempt_count <-_raw_spin_unlock 1905 <idle>-0 3dN.1 15us : calc_load_nohz_stop <-tick_nohz_idle_exit 1906 <idle>-0 3dN.1 15us : touch_softlockup_watchdog <-tick_nohz_idle_exit 1907 <idle>-0 3dN.1 15us : hrtimer_cancel <-tick_nohz_idle_exit 1908 <idle>-0 3dN.1 15us : hrtimer_try_to_cancel <-hrtimer_cancel 1909 <idle>-0 3dN.1 16us : lock_hrtimer_base.isra.18 <-hrtimer_try_to_cancel 1910 <idle>-0 3dN.1 16us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18 1911 <idle>-0 3dN.1 16us : add_preempt_count <-_raw_spin_lock_irqsave 1912 <idle>-0 3dN.2 17us : __remove_hrtimer <-remove_hrtimer.part.16 1913 <idle>-0 3dN.2 17us : hrtimer_force_reprogram <-__remove_hrtimer 1914 <idle>-0 3dN.2 17us : tick_program_event <-hrtimer_force_reprogram 1915 <idle>-0 3dN.2 18us : clockevents_program_event <-tick_program_event 1916 <idle>-0 3dN.2 18us : ktime_get <-clockevents_program_event 
1917 <idle>-0 3dN.2 18us : lapic_next_event <-clockevents_program_event 1918 <idle>-0 3dN.2 19us : _raw_spin_unlock_irqrestore <-hrtimer_try_to_cancel 1919 <idle>-0 3dN.2 19us : sub_preempt_count <-_raw_spin_unlock_irqrestore 1920 <idle>-0 3dN.1 19us : hrtimer_forward <-tick_nohz_idle_exit 1921 <idle>-0 3dN.1 20us : ktime_add_safe <-hrtimer_forward 1922 <idle>-0 3dN.1 20us : ktime_add_safe <-hrtimer_forward 1923 <idle>-0 3dN.1 20us : hrtimer_start_range_ns <-hrtimer_start_expires.constprop.11 1924 <idle>-0 3dN.1 20us : __hrtimer_start_range_ns <-hrtimer_start_range_ns 1925 <idle>-0 3dN.1 21us : lock_hrtimer_base.isra.18 <-__hrtimer_start_range_ns 1926 <idle>-0 3dN.1 21us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18 1927 <idle>-0 3dN.1 21us : add_preempt_count <-_raw_spin_lock_irqsave 1928 <idle>-0 3dN.2 22us : ktime_add_safe <-__hrtimer_start_range_ns 1929 <idle>-0 3dN.2 22us : enqueue_hrtimer <-__hrtimer_start_range_ns 1930 <idle>-0 3dN.2 22us : tick_program_event <-__hrtimer_start_range_ns 1931 <idle>-0 3dN.2 23us : clockevents_program_event <-tick_program_event 1932 <idle>-0 3dN.2 23us : ktime_get <-clockevents_program_event 1933 <idle>-0 3dN.2 23us : lapic_next_event <-clockevents_program_event 1934 <idle>-0 3dN.2 24us : _raw_spin_unlock_irqrestore <-__hrtimer_start_range_ns 1935 <idle>-0 3dN.2 24us : sub_preempt_count <-_raw_spin_unlock_irqrestore 1936 <idle>-0 3dN.1 24us : account_idle_ticks <-tick_nohz_idle_exit 1937 <idle>-0 3dN.1 24us : account_idle_time <-account_idle_ticks 1938 <idle>-0 3.N.1 25us : sub_preempt_count <-cpu_idle 1939 <idle>-0 3.N.. 25us : schedule <-cpu_idle 1940 <idle>-0 3.N.. 25us : __schedule <-preempt_schedule 1941 <idle>-0 3.N.. 
26us : add_preempt_count <-__schedule
    <idle>-0       3.N.1   26us : rcu_note_context_switch <-__schedule
    <idle>-0       3.N.1   26us : rcu_sched_qs <-rcu_note_context_switch
    <idle>-0       3dN.1   27us : rcu_preempt_qs <-rcu_note_context_switch
    <idle>-0       3.N.1   27us : _raw_spin_lock_irq <-__schedule
    <idle>-0       3dN.1   27us : add_preempt_count <-_raw_spin_lock_irq
    <idle>-0       3dN.2   28us : put_prev_task_idle <-__schedule
    <idle>-0       3dN.2   28us : pick_next_task_stop <-pick_next_task
    <idle>-0       3dN.2   28us : pick_next_task_rt <-pick_next_task
    <idle>-0       3dN.2   29us : dequeue_pushable_task <-pick_next_task_rt
    <idle>-0       3d..3   29us : __schedule <-preempt_schedule
    <idle>-0       3d..3   30us :      0:120:R ==> [003]  2448: 94:R sleep

This isn't that big of a trace, even with function tracing enabled,
so I included the entire trace.

The interrupt went off while the system was idle. Somewhere
before task_woken_rt() was called, the NEED_RESCHED flag was set;
this is indicated by the first occurrence of the 'N' flag.

Latency tracing and events
--------------------------

Function tracing can induce a much larger latency, but without
seeing what happens within the latency it is hard to know what
caused it. There is a middle ground: enabling events.
1967:: 1968 1969 # echo 0 > options/function-trace 1970 # echo wakeup_rt > current_tracer 1971 # echo 1 > events/enable 1972 # echo 1 > tracing_on 1973 # echo 0 > tracing_max_latency 1974 # chrt -f 5 sleep 1 1975 # echo 0 > tracing_on 1976 # cat trace 1977 # tracer: wakeup_rt 1978 # 1979 # wakeup_rt latency trace v1.1.5 on 3.8.0-test+ 1980 # -------------------------------------------------------------------- 1981 # latency: 6 us, #12/12, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4) 1982 # ----------------- 1983 # | task: sleep-5882 (uid:0 nice:0 policy:1 rt_prio:5) 1984 # ----------------- 1985 # 1986 # _------=> CPU# 1987 # / _-----=> irqs-off 1988 # | / _----=> need-resched 1989 # || / _---=> hardirq/softirq 1990 # ||| / _--=> preempt-depth 1991 # |||| / delay 1992 # cmd pid ||||| time | caller 1993 # \ / ||||| \ | / 1994 <idle>-0 2d.h4 0us : 0:120:R + [002] 5882: 94:R sleep 1995 <idle>-0 2d.h4 0us : ttwu_do_activate.constprop.87 <-try_to_wake_up 1996 <idle>-0 2d.h4 1us : sched_wakeup: comm=sleep pid=5882 prio=94 success=1 target_cpu=002 1997 <idle>-0 2dNh2 1us : hrtimer_expire_exit: hrtimer=ffff88007796feb8 1998 <idle>-0 2.N.2 2us : power_end: cpu_id=2 1999 <idle>-0 2.N.2 3us : cpu_idle: state=4294967295 cpu_id=2 2000 <idle>-0 2dN.3 4us : hrtimer_cancel: hrtimer=ffff88007d50d5e0 2001 <idle>-0 2dN.3 4us : hrtimer_start: hrtimer=ffff88007d50d5e0 function=tick_sched_timer expires=34311211000000 softexpires=34311211000000 2002 <idle>-0 2.N.2 5us : rcu_utilization: Start context switch 2003 <idle>-0 2.N.2 5us : rcu_utilization: End context switch 2004 <idle>-0 2d..3 6us : __schedule <-schedule 2005 <idle>-0 2d..3 6us : 0:120:R ==> [002] 5882: 94:R sleep 2006 2007 2008Hardware Latency Detector 2009------------------------- 2010 2011The hardware latency detector is executed by enabling the "hwlat" tracer. 2012 2013NOTE, this tracer will affect the performance of the system as it will 2014periodically make a CPU constantly busy with interrupts disabled. 
::

  # echo hwlat > current_tracer
  # sleep 100
  # cat trace
  # tracer: hwlat
  #
  #                              _-----=> irqs-off
  #                             / _----=> need-resched
  #                            | / _---=> hardirq/softirq
  #                            || / _--=> preempt-depth
  #                            ||| /     delay
  #           TASK-PID   CPU#  ||||   TIMESTAMP  FUNCTION
  #              | |       |   ||||      |         |
             <...>-3638  [001] d... 19452.055471: #1     inner/outer(us):   12/14    ts:1499801089.066141940
             <...>-3638  [003] d... 19454.071354: #2     inner/outer(us):   11/9     ts:1499801091.082164365
             <...>-3638  [002] dn.. 19461.126852: #3     inner/outer(us):   12/9     ts:1499801098.138150062
             <...>-3638  [001] d... 19488.340960: #4     inner/outer(us):    8/12    ts:1499801125.354139633
             <...>-3638  [003] d... 19494.388553: #5     inner/outer(us):    8/12    ts:1499801131.402150961
             <...>-3638  [003] d... 19501.283419: #6     inner/outer(us):    0/12    ts:1499801138.297435289 nmi-total:4 nmi-count:1


The header of the above output is similar to that of the other
tracers. All events have interrupts disabled, 'd'. Under the
FUNCTION title there is:

  #1

      This is the count of events recorded that were greater than the
      tracing_threshold (see below).

  inner/outer(us): 12/14

      This shows two numbers as "inner latency" and "outer latency". The test
      runs in a loop checking a timestamp twice. The latency detected within
      the two timestamps is the "inner latency", and the latency detected
      between the second timestamp of one loop iteration and the first
      timestamp of the next iteration is the "outer latency".

  ts:1499801089.066141940

      The absolute timestamp that the event happened.

  nmi-total:4 nmi-count:1

      On architectures that support it, if an NMI comes in during the
      test, the time spent in NMI is reported in "nmi-total" (in
      microseconds).

      All architectures that have NMIs will show the "nmi-count" if an
      NMI comes in during the test.
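
The inner/outer distinction can be pictured with a user-space sketch of
the sampling loop (an illustration of the idea only; the real detector
runs in the kernel with interrupts disabled, and this sketch assumes
GNU date with the ``%N`` nanosecond specifier)::

  # Take two timestamps per iteration: the gap between them is the
  # "inner" latency; the gap between one iteration's second timestamp
  # and the next iteration's first timestamp is the "outer" latency.
  last_t2=0
  for i in 1 2 3; do
      t1=$(date +%s%N)
      t2=$(date +%s%N)
      inner=$((t2 - t1))
      if [ "$last_t2" -ne 0 ]; then
          outer=$((t1 - last_t2))
      else
          outer=0
      fi
      last_t2=$t2
      echo "#$i inner/outer(ns): $inner/$outer"
  done

The in-kernel loop additionally compares these gaps against
tracing_threshold and only records samples that exceed it.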

hwlat files:

  tracing_threshold
      This gets automatically set to "10" to represent 10
      microseconds. This is the threshold of latency that
      needs to be detected before the trace will be recorded.

      Note, when the hwlat tracer is finished (another tracer is
      written into "current_tracer"), the original value for
      tracing_threshold is placed back into this file.

  hwlat_detector/width
      The length of time the test runs with interrupts disabled.

  hwlat_detector/window
      The length of time of the window in which the test
      runs. That is, the test will run for "width"
      microseconds per "window" microseconds.

  tracing_cpumask
      When the test is started, a kernel thread is created to
      run the test. This thread will alternate between CPUs
      listed in the tracing_cpumask between each period
      (one "window"). To limit the test to specific CPUs,
      set the mask in this file to only the CPUs that the test
      should run on.

function
--------

This tracer is the function tracer. Enabling the function tracer
can be done from the tracefs file system. Make sure
ftrace_enabled is set; otherwise this tracer is a nop.
See the "ftrace_enabled" section below.
::

  # sysctl kernel.ftrace_enabled=1
  # echo function > current_tracer
  # echo 1 > tracing_on
  # usleep 1
  # echo 0 > tracing_on
  # cat trace
  # tracer: function
  #
  # entries-in-buffer/entries-written: 24799/24799   #P:4
  #
  #                              _-----=> irqs-off
  #                             / _----=> need-resched
  #                            | / _---=> hardirq/softirq
  #                            || / _--=> preempt-depth
  #                            ||| /     delay
  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
  #              | |       |   ||||       |         |
              bash-1994  [002] ....  3082.063030: mutex_unlock <-rb_simple_write
              bash-1994  [002] ....  3082.063031: __mutex_unlock_slowpath <-mutex_unlock
              bash-1994  [002] ....  3082.063031: __fsnotify_parent <-fsnotify_modify
              bash-1994  [002] ....  3082.063032: fsnotify <-fsnotify_modify
              bash-1994  [002] ....  3082.063032: __srcu_read_lock <-fsnotify
              bash-1994  [002] ....  3082.063032: add_preempt_count <-__srcu_read_lock
              bash-1994  [002] ...1  3082.063032: sub_preempt_count <-__srcu_read_lock
              bash-1994  [002] ....  3082.063033: __srcu_read_unlock <-fsnotify
              [...]


Note: the function tracer uses ring buffers to store the above
entries. The newest data may overwrite the oldest data.
Sometimes using echo to stop the trace is not sufficient because
the tracing could have overwritten the data that you wanted to
record. For this reason, it is sometimes better to disable
tracing directly from a program. This allows you to stop the
tracing at the point that you hit the part that you are
interested in. To disable the tracing directly from a C program,
something like the following code snippet can be used::

	int trace_fd;
	[...]
	int main(int argc, char *argv[]) {
		[...]
		trace_fd = open(tracing_file("tracing_on"), O_WRONLY);
		[...]
		if (condition_hit()) {
			write(trace_fd, "0", 1);
		}
		[...]
	}


Single thread tracing
---------------------

By writing into set_ftrace_pid you can trace a
single thread. For example::

  # cat set_ftrace_pid
  no pid
  # echo 3111 > set_ftrace_pid
  # cat set_ftrace_pid
  3111
  # echo function > current_tracer
  # cat trace | head
  # tracer: function
  #
  #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
  #              | |       |          |         |
      yum-updatesd-3111  [003]  1637.254676: finish_task_switch <-thread_return
      yum-updatesd-3111  [003]  1637.254681: hrtimer_cancel <-schedule_hrtimeout_range
      yum-updatesd-3111  [003]  1637.254682: hrtimer_try_to_cancel <-hrtimer_cancel
      yum-updatesd-3111  [003]  1637.254683: lock_hrtimer_base <-hrtimer_try_to_cancel
      yum-updatesd-3111  [003]  1637.254685: fget_light <-do_sys_poll
      yum-updatesd-3111  [003]  1637.254686: pipe_poll <-do_sys_poll
  # echo > set_ftrace_pid
  # cat trace | head
  # tracer: function
  #
  #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
  #              | |       |          |         |
  ##### CPU 3 buffer started ####
      yum-updatesd-3111  [003]  1701.957688: free_poll_entry <-poll_freewait
      yum-updatesd-3111  [003]  1701.957689: remove_wait_queue <-free_poll_entry
      yum-updatesd-3111  [003]  1701.957691: fput <-free_poll_entry
      yum-updatesd-3111  [003]  1701.957692: audit_syscall_exit <-sysret_audit
      yum-updatesd-3111  [003]  1701.957693: path_put <-audit_syscall_exit

If you want to trace a function while it executes, you could use
something like this simple program.
::

	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/types.h>
	#include <sys/stat.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <string.h>

	#define _STR(x) #x
	#define STR(x) _STR(x)
	#define MAX_PATH 256

	const char *find_tracefs(void)
	{
		static char tracefs[MAX_PATH+1];
		static int tracefs_found;
		char type[100];
		FILE *fp;

		if (tracefs_found)
			return tracefs;

		if ((fp = fopen("/proc/mounts","r")) == NULL) {
			perror("/proc/mounts");
			return NULL;
		}

		while (fscanf(fp, "%*s %"
			      STR(MAX_PATH)
			      "s %99s %*s %*d %*d\n",
			      tracefs, type) == 2) {
			if (strcmp(type, "tracefs") == 0)
				break;
		}
		fclose(fp);

		if (strcmp(type, "tracefs") != 0) {
			fprintf(stderr, "tracefs not mounted");
			return NULL;
		}

		/* tracefs holds the control files at its mount point */
		tracefs_found = 1;

		return tracefs;
	}

	const char *tracing_file(const char *file_name)
	{
		static char trace_file[MAX_PATH+1];
		snprintf(trace_file, MAX_PATH, "%s/%s", find_tracefs(), file_name);
		return trace_file;
	}

	int main (int argc, char **argv)
	{
		if (argc < 2)	/* need a command to exec */
			exit(-1);

		if (fork() > 0) {
			int fd, ffd;
			char line[64];
			int s;

			ffd = open(tracing_file("current_tracer"), O_WRONLY);
			if (ffd < 0)
				exit(-1);
			write(ffd, "nop", 3);

			fd = open(tracing_file("set_ftrace_pid"), O_WRONLY);
			s = sprintf(line, "%d\n", getpid());
			write(fd, line, s);

			write(ffd, "function", 8);

			close(fd);
			close(ffd);

			execvp(argv[1], argv+1);
		}

		return 0;
	}

Or this simple script!
::

  #!/bin/bash

  tracefs=`sed -ne 's/^tracefs \(.*\) tracefs.*/\1/p' /proc/mounts`
  echo nop > $tracefs/current_tracer
  echo 0 > $tracefs/tracing_on
  echo $$ > $tracefs/set_ftrace_pid
  echo function > $tracefs/current_tracer
  echo 1 > $tracefs/tracing_on
  exec "$@"


function graph tracer
---------------------

This tracer is similar to the function tracer except that it
probes a function on its entry and its exit. This is done by
using a dynamically allocated stack of return addresses in each
task_struct. On function entry the tracer overwrites the return
address of each function traced to set a custom probe. Thus the
original return address is stored on the stack of return
addresses in the task_struct.

Probing on both ends of a function leads to special features
such as:

- measuring a function's execution time

- having a reliable call stack from which to draw a graph of
  function calls

This tracer is useful in several situations:

- you want to find the reason for strange kernel behavior and
  need to see what happens in detail in any area (or in specific
  ones).

- you are experiencing weird latencies but it is difficult to
  find their origin.

- you want to quickly find which path is taken by a specific
  function.

- you just want to peek inside a working kernel and see
  what happens there.

::

  # tracer: function_graph
  #
  # CPU  DURATION                  FUNCTION CALLS
  # |     |   |                     |   |   |   |

   0)               |  sys_open() {
   0)               |    do_sys_open() {
   0)               |      getname() {
   0)               |        kmem_cache_alloc() {
   0)   1.382 us    |          __might_sleep();
   0)   2.478 us    |        }
   0)               |        strncpy_from_user() {
   0)               |          might_fault() {
   0)   1.389 us    |            __might_sleep();
   0)   2.553 us    |          }
   0)   3.807 us    |        }
   0)   7.876 us    |      }
   0)               |      alloc_fd() {
   0)   0.668 us    |        _spin_lock();
   0)   0.570 us    |        expand_files();
   0)   0.586 us    |        _spin_unlock();


There are several columns that can be dynamically
enabled/disabled. You can use every combination of options you
want, depending on your needs.

- The cpu number on which the function executed is enabled by
  default. It is sometimes better to only trace one cpu (see the
  tracing_cpumask file), or you might see unordered function calls
  while the trace switches between CPUs.

  - hide: echo nofuncgraph-cpu > trace_options
  - show: echo funcgraph-cpu > trace_options

- The duration (the function's execution time) is displayed on
  the closing bracket line of a function, or on the same line as
  the current function if it is a leaf function. It is enabled by
  default.

  - hide: echo nofuncgraph-duration > trace_options
  - show: echo funcgraph-duration > trace_options

- The overhead field precedes the duration field when the
  duration exceeds certain thresholds.

  - hide: echo nofuncgraph-overhead > trace_options
  - show: echo funcgraph-overhead > trace_options
  - depends on: funcgraph-duration

  ie::

    3) # 1837.709 us |          } /* __switch_to */
    3)               |          finish_task_switch() {
    3)   0.313 us    |            _raw_spin_unlock_irq();
    3)   3.177 us    |          }
    3) # 1889.063 us |        } /* __schedule */
    3) ! 
140.417 us | } /* __schedule */ 2382 3) # 2034.948 us | } /* schedule */ 2383 3) * 33998.59 us | } /* schedule_preempt_disabled */ 2384 2385 [...] 2386 2387 1) 0.260 us | msecs_to_jiffies(); 2388 1) 0.313 us | __rcu_read_unlock(); 2389 1) + 61.770 us | } 2390 1) + 64.479 us | } 2391 1) 0.313 us | rcu_bh_qs(); 2392 1) 0.313 us | __local_bh_enable(); 2393 1) ! 217.240 us | } 2394 1) 0.365 us | idle_cpu(); 2395 1) | rcu_irq_exit() { 2396 1) 0.417 us | rcu_eqs_enter_common.isra.47(); 2397 1) 3.125 us | } 2398 1) ! 227.812 us | } 2399 1) ! 457.395 us | } 2400 1) @ 119760.2 us | } 2401 2402 [...] 2403 2404 2) | handle_IPI() { 2405 1) 6.979 us | } 2406 2) 0.417 us | scheduler_ipi(); 2407 1) 9.791 us | } 2408 1) + 12.917 us | } 2409 2) 3.490 us | } 2410 1) + 15.729 us | } 2411 1) + 18.542 us | } 2412 2) $ 3594274 us | } 2413 2414Flags:: 2415 2416 + means that the function exceeded 10 usecs. 2417 ! means that the function exceeded 100 usecs. 2418 # means that the function exceeded 1000 usecs. 2419 * means that the function exceeded 10 msecs. 2420 @ means that the function exceeded 100 msecs. 2421 $ means that the function exceeded 1 sec. 2422 2423 2424- The task/pid field displays the thread cmdline and pid which 2425 executed the function. It is default disabled. 2426 2427 - hide: echo nofuncgraph-proc > trace_options 2428 - show: echo funcgraph-proc > trace_options 2429 2430 ie:: 2431 2432 # tracer: function_graph 2433 # 2434 # CPU TASK/PID DURATION FUNCTION CALLS 2435 # | | | | | | | | | 2436 0) sh-4802 | | d_free() { 2437 0) sh-4802 | | call_rcu() { 2438 0) sh-4802 | | __call_rcu() { 2439 0) sh-4802 | 0.616 us | rcu_process_gp_end(); 2440 0) sh-4802 | 0.586 us | check_for_new_grace_period(); 2441 0) sh-4802 | 2.899 us | } 2442 0) sh-4802 | 4.040 us | } 2443 0) sh-4802 | 5.151 us | } 2444 0) sh-4802 | + 49.370 us | } 2445 2446 2447- The absolute time field is an absolute timestamp given by the 2448 system clock since it started. 
A snapshot of this time is 2449 given on each entry/exit of functions 2450 2451 - hide: echo nofuncgraph-abstime > trace_options 2452 - show: echo funcgraph-abstime > trace_options 2453 2454 ie:: 2455 2456 # 2457 # TIME CPU DURATION FUNCTION CALLS 2458 # | | | | | | | | 2459 360.774522 | 1) 0.541 us | } 2460 360.774522 | 1) 4.663 us | } 2461 360.774523 | 1) 0.541 us | __wake_up_bit(); 2462 360.774524 | 1) 6.796 us | } 2463 360.774524 | 1) 7.952 us | } 2464 360.774525 | 1) 9.063 us | } 2465 360.774525 | 1) 0.615 us | journal_mark_dirty(); 2466 360.774527 | 1) 0.578 us | __brelse(); 2467 360.774528 | 1) | reiserfs_prepare_for_journal() { 2468 360.774528 | 1) | unlock_buffer() { 2469 360.774529 | 1) | wake_up_bit() { 2470 360.774529 | 1) | bit_waitqueue() { 2471 360.774530 | 1) 0.594 us | __phys_addr(); 2472 2473 2474The function name is always displayed after the closing bracket 2475for a function if the start of that function is not in the 2476trace buffer. 2477 2478Display of the function name after the closing bracket may be 2479enabled for functions whose start is in the trace buffer, 2480allowing easier searching with grep for function durations. 2481It is default disabled. 

  - hide: echo nofuncgraph-tail > trace_options
  - show: echo funcgraph-tail > trace_options

  Example with nofuncgraph-tail (default)::

    0)               |      putname() {
    0)               |        kmem_cache_free() {
    0)   0.518 us    |          __phys_addr();
    0)   1.757 us    |        }
    0)   2.861 us    |      }

  Example with funcgraph-tail::

    0)               |      putname() {
    0)               |        kmem_cache_free() {
    0)   0.518 us    |          __phys_addr();
    0)   1.757 us    |        } /* kmem_cache_free() */
    0)   2.861 us    |      } /* putname() */

You can put some comments on specific functions by using
trace_printk(). For example, if you want to put a comment inside
the __might_sleep() function, you just have to include
<linux/ftrace.h> and call trace_printk() inside __might_sleep()::

  trace_printk("I'm a comment!\n");

will produce::

   1)               |             __might_sleep() {
   1)               |                /* I'm a comment! */
   1)   1.449 us    |             }


You might find other useful features for this tracer in the
following "dynamic ftrace" section, such as tracing only specific
functions or tasks.

dynamic ftrace
--------------

If CONFIG_DYNAMIC_FTRACE is set, the system will run with
virtually no overhead when function tracing is disabled. The way
this works is that the mcount function call (placed at the start of
every kernel function, produced by the -pg switch in gcc)
starts off pointing to a simple return. (Enabling FTRACE will
include the -pg switch in the compiling of the kernel.)

At compile time every C file object is run through the
recordmcount program (located in the scripts directory). This
program will parse the ELF headers in the C object to find all
the locations in the .text section that call mcount. Starting
with gcc version 4.6, the -mfentry switch has been added for x86,
which calls "__fentry__" instead of "mcount", and does so before
the creation of the stack frame.

Note, not all functions are traced. 
They may be prevented by a notrace
annotation, or blocked in some other way; in addition, inline
functions are never traced. Check the "available_filter_functions"
file to see what functions can be traced.

A section called "__mcount_loc" is created that holds
references to all the mcount/fentry call sites in the .text section.
The recordmcount program re-links this section back into the
original object. The final linking stage of the kernel will add all these
references into a single table.

On boot up, before SMP is initialized, the dynamic ftrace code
scans this table and updates all the locations into nops. It
also records the locations, which are added to the
available_filter_functions list. Modules are processed as they
are loaded and before they are executed. When a module is
unloaded, it also removes its functions from the ftrace function
list. This is automatic in the module unload code, and the
module author does not need to worry about it.

When tracing is enabled, the process of modifying the function
tracepoints is dependent on architecture. The old method is to use
kstop_machine to prevent races with the CPUs executing code being
modified (which can cause the CPU to do undesirable things, especially
if the modified code crosses cache (or page) boundaries), and the nops are
patched back to calls. But this time, they do not call mcount
(which is just a function stub). They now call into the ftrace
infrastructure.

The new method of modifying the function tracepoints is to place
a breakpoint at the location to be modified, sync all CPUs, modify
the rest of the instruction not covered by the breakpoint, sync
all CPUs again, and then remove the breakpoint, leaving the
finished version of the ftrace call site.
2572 2573Some archs do not even need to monkey around with the synchronization, 2574and can just slap the new code on top of the old without any 2575problems with other CPUs executing it at the same time. 2576 2577One special side-effect to the recording of the functions being 2578traced is that we can now selectively choose which functions we 2579wish to trace and which ones we want the mcount calls to remain 2580as nops. 2581 2582Two files are used, one for enabling and one for disabling the 2583tracing of specified functions. They are: 2584 2585 set_ftrace_filter 2586 2587and 2588 2589 set_ftrace_notrace 2590 2591A list of available functions that you can add to these files is 2592listed in: 2593 2594 available_filter_functions 2595 2596:: 2597 2598 # cat available_filter_functions 2599 put_prev_task_idle 2600 kmem_cache_create 2601 pick_next_task_rt 2602 get_online_cpus 2603 pick_next_task_fair 2604 mutex_lock 2605 [...] 2606 2607If I am only interested in sys_nanosleep and hrtimer_interrupt:: 2608 2609 # echo sys_nanosleep hrtimer_interrupt > set_ftrace_filter 2610 # echo function > current_tracer 2611 # echo 1 > tracing_on 2612 # usleep 1 2613 # echo 0 > tracing_on 2614 # cat trace 2615 # tracer: function 2616 # 2617 # entries-in-buffer/entries-written: 5/5 #P:4 2618 # 2619 # _-----=> irqs-off 2620 # / _----=> need-resched 2621 # | / _---=> hardirq/softirq 2622 # || / _--=> preempt-depth 2623 # ||| / delay 2624 # TASK-PID CPU# |||| TIMESTAMP FUNCTION 2625 # | | | |||| | | 2626 usleep-2665 [001] .... 
4186.475355: sys_nanosleep <-system_call_fastpath 2627 <idle>-0 [001] d.h1 4186.475409: hrtimer_interrupt <-smp_apic_timer_interrupt 2628 usleep-2665 [001] d.h1 4186.475426: hrtimer_interrupt <-smp_apic_timer_interrupt 2629 <idle>-0 [003] d.h1 4186.475426: hrtimer_interrupt <-smp_apic_timer_interrupt 2630 <idle>-0 [002] d.h1 4186.475427: hrtimer_interrupt <-smp_apic_timer_interrupt 2631 2632To see which functions are being traced, you can cat the file: 2633:: 2634 2635 # cat set_ftrace_filter 2636 hrtimer_interrupt 2637 sys_nanosleep 2638 2639 2640Perhaps this is not enough. The filters also allow glob(7) matching. 2641 2642 ``<match>*`` 2643 will match functions that begin with <match> 2644 ``*<match>`` 2645 will match functions that end with <match> 2646 ``*<match>*`` 2647 will match functions that have <match> in it 2648 ``<match1>*<match2>`` 2649 will match functions that begin with <match1> and end with <match2> 2650 2651.. note:: 2652 It is better to use quotes to enclose the wild cards, 2653 otherwise the shell may expand the parameters into names 2654 of files in the local directory. 
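
The note above can be demonstrated without touching tracefs at all
(a plain shell illustration; the file name is made up)::

  # In a directory containing a file that happens to match the
  # pattern, the unquoted glob is expanded by the shell before
  # echo ever runs.
  demo=$(mktemp -d)
  cd "$demo"
  touch hrtimer_notes.txt       # hypothetical file matching hrtimer_*
  echo hrtimer_*                # expanded: prints hrtimer_notes.txt
  echo 'hrtimer_*'              # quoted: prints the literal hrtimer_*
  cd / && rm -r "$demo"

The same expansion would happen in front of set_ftrace_filter, which is
why the examples below quote the pattern.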
::

    # echo 'hrtimer_*' > set_ftrace_filter

Produces::

    # tracer: function
    #
    # entries-in-buffer/entries-written: 897/897   #P:4
    #
    #                              _-----=> irqs-off
    #                             / _----=> need-resched
    #                            | / _---=> hardirq/softirq
    #                            || / _--=> preempt-depth
    #                            ||| /     delay
    #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
    #              | |       |   ||||       |         |
              <idle>-0     [003] dN.1  4228.547803: hrtimer_cancel <-tick_nohz_idle_exit
              <idle>-0     [003] dN.1  4228.547804: hrtimer_try_to_cancel <-hrtimer_cancel
              <idle>-0     [003] dN.2  4228.547805: hrtimer_force_reprogram <-__remove_hrtimer
              <idle>-0     [003] dN.1  4228.547805: hrtimer_forward <-tick_nohz_idle_exit
              <idle>-0     [003] dN.1  4228.547805: hrtimer_start_range_ns <-hrtimer_start_expires.constprop.11
              <idle>-0     [003] d..1  4228.547858: hrtimer_get_next_event <-get_next_timer_interrupt
              <idle>-0     [003] d..1  4228.547859: hrtimer_start <-__tick_nohz_idle_enter
              <idle>-0     [003] d..2  4228.547860: hrtimer_force_reprogram <-__rem

Notice that we lost the sys_nanosleep.
::

    # cat set_ftrace_filter
    hrtimer_run_queues
    hrtimer_run_pending
    hrtimer_init
    hrtimer_cancel
    hrtimer_try_to_cancel
    hrtimer_forward
    hrtimer_start
    hrtimer_reprogram
    hrtimer_force_reprogram
    hrtimer_get_next_event
    hrtimer_interrupt
    hrtimer_nanosleep
    hrtimer_wakeup
    hrtimer_get_remaining
    hrtimer_get_res
    hrtimer_init_sleeper


This is because the '>' and '>>' act just like they do in bash.
To rewrite the filters, use '>'.
To append to the filters, use '>>'.

To clear out a filter so that all functions will be recorded
again::

    # echo > set_ftrace_filter
    # cat set_ftrace_filter
    #

Again, now we want to append.
::

    # echo sys_nanosleep > set_ftrace_filter
    # cat set_ftrace_filter
    sys_nanosleep
    # echo 'hrtimer_*' >> set_ftrace_filter
    # cat set_ftrace_filter
    hrtimer_run_queues
    hrtimer_run_pending
    hrtimer_init
    hrtimer_cancel
    hrtimer_try_to_cancel
    hrtimer_forward
    hrtimer_start
    hrtimer_reprogram
    hrtimer_force_reprogram
    hrtimer_get_next_event
    hrtimer_interrupt
    sys_nanosleep
    hrtimer_nanosleep
    hrtimer_wakeup
    hrtimer_get_remaining
    hrtimer_get_res
    hrtimer_init_sleeper


The set_ftrace_notrace prevents those functions from being
traced.
::

    # echo '*preempt*' '*lock*' > set_ftrace_notrace

Produces::

    # tracer: function
    #
    # entries-in-buffer/entries-written: 39608/39608   #P:4
    #
    #                              _-----=> irqs-off
    #                             / _----=> need-resched
    #                            | / _---=> hardirq/softirq
    #                            || / _--=> preempt-depth
    #                            ||| /     delay
    #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
    #              | |       |   ||||       |         |
                bash-1994  [000] ....  4342.324896: file_ra_state_init <-do_dentry_open
                bash-1994  [000] ....  4342.324897: open_check_o_direct <-do_last
                bash-1994  [000] ....  4342.324897: ima_file_check <-do_last
                bash-1994  [000] ....  4342.324898: process_measurement <-ima_file_check
                bash-1994  [000] ....  4342.324898: ima_get_action <-process_measurement
                bash-1994  [000] ....  4342.324898: ima_match_policy <-ima_get_action
                bash-1994  [000] ....  4342.324899: do_truncate <-do_last
                bash-1994  [000] ....  4342.324899: should_remove_suid <-do_truncate
                bash-1994  [000] ....  4342.324899: notify_change <-do_truncate
                bash-1994  [000] ....  4342.324900: current_fs_time <-notify_change
                bash-1994  [000] ....  4342.324900: current_kernel_time <-current_fs_time
                bash-1994  [000] ....  4342.324900: timespec_trunc <-current_fs_time

We can see that there's no more lock or preempt tracing.
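Each line of the function tracer output has a fixed shape: TASK-PID, CPU#, the four latency flags, TIMESTAMP, and the function followed by its caller after the "<-" marker. A sketch of splitting one such line with awk (the sample line is copied from the output above; comms that themselves contain dashes, such as kworker/0:1-59, would need the split done on the last dash instead):

```shell
# Split one function-tracer line into its columns.
line='bash-1994  [000] ....  4342.324896: file_ra_state_init <-do_dentry_open'
parsed=$(echo "$line" | awk '{
    split($1, tp, "-")       # TASK-PID -> comm and pid
    gsub(/[][]/, "", $2)     # strip the brackets around CPU#
    sub(/:$/, "", $4)        # drop the trailing colon on TIMESTAMP
    sub(/^<-/, "", $6)       # the caller follows the "<-" marker
    printf "task=%s pid=%s cpu=%s flags=%s ts=%s func=%s caller=%s\n", tp[1], tp[2], $2, $3, $4, $5, $6
}')
echo "$parsed"
```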


Dynamic ftrace with the function graph tracer
---------------------------------------------

Although what has been explained above concerns both the
function tracer and the function-graph-tracer, there are some
special features only available in the function-graph tracer.

If you want to trace only one function and all of its children,
you just have to echo its name into set_graph_function::

    echo __do_fault > set_graph_function

will produce the following "expanded" trace of the __do_fault()
function::

    0)               |  __do_fault() {
    0)               |    filemap_fault() {
    0)               |      find_lock_page() {
    0)   0.804 us    |        find_get_page();
    0)               |        __might_sleep() {
    0)   1.329 us    |        }
    0)   3.904 us    |      }
    0)   4.979 us    |    }
    0)   0.653 us    |    _spin_lock();
    0)   0.578 us    |    page_add_file_rmap();
    0)   0.525 us    |    native_set_pte_at();
    0)   0.585 us    |    _spin_unlock();
    0)               |    unlock_page() {
    0)   0.541 us    |      page_waitqueue();
    0)   0.639 us    |      __wake_up_bit();
    0)   2.786 us    |    }
    0) + 14.237 us   |  }
    0)               |  __do_fault() {
    0)               |    filemap_fault() {
    0)               |      find_lock_page() {
    0)   0.698 us    |        find_get_page();
    0)               |        __might_sleep() {
    0)   1.412 us    |        }
    0)   3.950 us    |      }
    0)   5.098 us    |    }
    0)   0.631 us    |    _spin_lock();
    0)   0.571 us    |    page_add_file_rmap();
    0)   0.526 us    |    native_set_pte_at();
    0)   0.586 us    |    _spin_unlock();
    0)               |    unlock_page() {
    0)   0.533 us    |      page_waitqueue();
    0)   0.638 us    |      __wake_up_bit();
    0)   2.793 us    |    }
    0) + 14.012 us   |  }

You can also expand several functions at once::

    echo sys_open > set_graph_function
    echo sys_close >> set_graph_function

Now if you want to go back to trace all functions you can clear
this special filter via::

    echo > set_graph_function


ftrace_enabled
--------------

Note, the proc sysctl ftrace_enabled is a big on/off switch for the
function tracer. By default it is enabled (when function tracing is
enabled in the kernel). If it is disabled, all function tracing is
disabled. This includes not only the function tracers for ftrace, but
also for any other uses (perf, kprobes, stack tracing, profiling, etc.).

Please disable this with care.

This can be disabled (and enabled) with::

    sysctl kernel.ftrace_enabled=0
    sysctl kernel.ftrace_enabled=1

or::

    echo 0 > /proc/sys/kernel/ftrace_enabled
    echo 1 > /proc/sys/kernel/ftrace_enabled


Filter commands
---------------

A few commands are supported by the set_ftrace_filter interface.
Trace commands have the following format::

    <function>:<command>:<parameter>

The following commands are supported:

- mod:
  This command enables function filtering per module. The
  parameter defines the module. For example, if only the write*
  functions in the ext3 module are desired, run::

      echo 'write*:mod:ext3' > set_ftrace_filter

  This command interacts with the filter in the same way as
  filtering based on function names. Thus, adding more functions
  in a different module is accomplished by appending (>>) to the
  filter file. Remove specific module functions by prepending
  '!'::

      echo '!writeback*:mod:ext3' >> set_ftrace_filter

  The mod command supports module globbing.
  Disable tracing for all
  functions except a specific module::

      echo '!*:mod:!ext3' >> set_ftrace_filter

  Disable tracing for all modules, but still trace the kernel::

      echo '!*:mod:*' >> set_ftrace_filter

  Enable the filter only for the kernel::

      echo '*write*:mod:!*' >> set_ftrace_filter

  Enable the filter for module globbing::

      echo '*write*:mod:*snd*' >> set_ftrace_filter

- traceon/traceoff:
  These commands turn tracing on and off when the specified
  functions are hit. The parameter determines how many times the
  tracing system is turned on and off. If unspecified, there is
  no limit. For example, to disable tracing when a schedule bug
  is hit the first 5 times, run::

      echo '__schedule_bug:traceoff:5' > set_ftrace_filter

  To always disable tracing when __schedule_bug is hit::

      echo '__schedule_bug:traceoff' > set_ftrace_filter

  These commands are cumulative whether or not they are appended
  to set_ftrace_filter. To remove a command, prepend it by '!'
  and drop the parameter::

      echo '!__schedule_bug:traceoff:0' > set_ftrace_filter

  The above removes the traceoff command for __schedule_bug
  that has a counter. To remove commands without counters::

      echo '!__schedule_bug:traceoff' > set_ftrace_filter

- snapshot:
  Will cause a snapshot to be triggered when the function is hit.
  ::

      echo 'native_flush_tlb_others:snapshot' > set_ftrace_filter

  To only snapshot once:
  ::

      echo 'native_flush_tlb_others:snapshot:1' > set_ftrace_filter

  To remove the above commands::

      echo '!native_flush_tlb_others:snapshot' > set_ftrace_filter
      echo '!native_flush_tlb_others:snapshot:0' > set_ftrace_filter

- enable_event/disable_event:
  These commands can enable or disable a trace event.
  Note, because
  function tracing callbacks are very sensitive, when these commands
  are registered, the trace point is activated, but disabled in
  a "soft" mode. That is, the tracepoint will be called, but
  just will not be traced. The event tracepoint stays in this mode
  as long as there's a command that triggers it.
  ::

      echo 'try_to_wake_up:enable_event:sched:sched_switch:2' > \
           set_ftrace_filter

  The format is::

      <function>:enable_event:<system>:<event>[:count]
      <function>:disable_event:<system>:<event>[:count]

  To remove the events commands::

      echo '!try_to_wake_up:enable_event:sched:sched_switch:0' > \
           set_ftrace_filter
      echo '!schedule:disable_event:sched:sched_switch' > \
           set_ftrace_filter

- dump:
  When the function is hit, it will dump the contents of the ftrace
  ring buffer to the console. This is useful if you need to debug
  something, and want to dump the trace when a certain function
  is hit. Perhaps it's a function that is called before a triple
  fault happens and does not allow you to get a regular dump.

- cpudump:
  When the function is hit, it will dump the contents of the ftrace
  ring buffer for the current CPU to the console. Unlike the "dump"
  command, it only prints out the contents of the ring buffer for the
  CPU that executed the function that triggered the dump.


trace_pipe
----------

The trace_pipe outputs the same content as the trace file, but
the effect on the tracing is different. Every read from
trace_pipe is consumed. This means that subsequent reads will be
different. The trace is live.
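Because every read consumes data, trace_pipe behaves more like a pipe than a file. A named FIFO shows the same consuming-read semantics without touching tracefs (the /tmp path is arbitrary):

```shell
# Reads from trace_pipe consume the data, just like reads from a FIFO.
fifo=/tmp/demo_trace_pipe.$$
mkfifo "$fifo"
{ echo "event one"; echo "event two"; } > "$fifo" &
first=$(cat "$fifo")     # drains everything the writer produced
wait
echo "$first"
rm "$fifo"
```

As with trace_pipe, a subsequent read on the FIFO would block waiting for new data rather than replaying what was already consumed.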
::

    # echo function > current_tracer
    # cat trace_pipe > /tmp/trace.out &
    [1] 4153
    # echo 1 > tracing_on
    # usleep 1
    # echo 0 > tracing_on
    # cat trace
    # tracer: function
    #
    # entries-in-buffer/entries-written: 0/0   #P:4
    #
    #                              _-----=> irqs-off
    #                             / _----=> need-resched
    #                            | / _---=> hardirq/softirq
    #                            || / _--=> preempt-depth
    #                            ||| /     delay
    #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
    #              | |       |   ||||       |         |

    #
    # cat /tmp/trace.out
                bash-1994  [000] ....  5281.568961: mutex_unlock <-rb_simple_write
                bash-1994  [000] ....  5281.568963: __mutex_unlock_slowpath <-mutex_unlock
                bash-1994  [000] ....  5281.568963: __fsnotify_parent <-fsnotify_modify
                bash-1994  [000] ....  5281.568964: fsnotify <-fsnotify_modify
                bash-1994  [000] ....  5281.568964: __srcu_read_lock <-fsnotify
                bash-1994  [000] ....  5281.568964: add_preempt_count <-__srcu_read_lock
                bash-1994  [000] ...1  5281.568965: sub_preempt_count <-__srcu_read_lock
                bash-1994  [000] ....  5281.568965: __srcu_read_unlock <-fsnotify
                bash-1994  [000] ....  5281.568967: sys_dup2 <-system_call_fastpath


Note, reading the trace_pipe file will block until more input is
added.

trace entries
-------------

Having too much or not enough data can be troublesome in
diagnosing an issue in the kernel. The file buffer_size_kb is
used to modify the size of the internal trace buffers. The
number listed is the number of entries that can be recorded per
CPU. To know the full size, multiply the number of possible CPUs
by the number of entries.
::

    # cat buffer_size_kb
    1408 (units kilobytes)

Or simply read buffer_total_size_kb
::

    # cat buffer_total_size_kb
    5632

To modify the buffer, simply echo in a number (in 1024-byte segments).
::

    # echo 10000 > buffer_size_kb
    # cat buffer_size_kb
    10000 (units kilobytes)

It will try to allocate as much as possible. If you allocate too
much, it can trigger the Out-Of-Memory killer.
::

    # echo 1000000000000 > buffer_size_kb
    -bash: echo: write error: Cannot allocate memory
    # cat buffer_size_kb
    85

The per_cpu buffers can be changed individually as well:
::

    # echo 10000 > per_cpu/cpu0/buffer_size_kb
    # echo 100 > per_cpu/cpu1/buffer_size_kb

When the per_cpu buffers are not the same, the buffer_size_kb
at the top level will just show an X
::

    # cat buffer_size_kb
    X

This is where the buffer_total_size_kb is useful:
::

    # cat buffer_total_size_kb
    12916

Writing to the top level buffer_size_kb will reset all the buffers
to be the same again.

Snapshot
--------
CONFIG_TRACER_SNAPSHOT makes a generic snapshot feature
available to all non latency tracers. (Latency tracers which
record max latency, such as "irqsoff" or "wakeup", can't use
this feature, since those are already using the snapshot
mechanism internally.)

Snapshot preserves a current trace buffer at a particular point
in time without stopping tracing. Ftrace swaps the current
buffer with a spare buffer, and tracing continues in the new
current (=previous spare) buffer.

The following tracefs files in "tracing" are related to this
feature:

  snapshot:

    This is used to take a snapshot and to read the output
    of the snapshot. Echo 1 into this file to allocate a
    spare buffer and to take a snapshot (swap), then read
    the snapshot from this file in the same format as
    "trace" (described above in the section "The File
    System"). Both reads of the snapshot and tracing are executable
    in parallel.
    When the spare buffer is allocated, echoing
    0 frees it, and echoing other (positive) values clears the
    snapshot contents.
    More details are shown in the table below.

    +--------------+------------+------------+------------+
    |status\\input |     0      |     1      |    else    |
    +==============+============+============+============+
    |not allocated |(do nothing)| alloc+swap |(do nothing)|
    +--------------+------------+------------+------------+
    |allocated     |    free    |    swap    |   clear    |
    +--------------+------------+------------+------------+

Here is an example of using the snapshot feature.
::

    # echo 1 > events/sched/enable
    # echo 1 > snapshot
    # cat snapshot
    # tracer: nop
    #
    # entries-in-buffer/entries-written: 71/71   #P:8
    #
    #                              _-----=> irqs-off
    #                             / _----=> need-resched
    #                            | / _---=> hardirq/softirq
    #                            || / _--=> preempt-depth
    #                            ||| /     delay
    #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
    #              | |       |   ||||       |         |
              <idle>-0     [005] d...  2440.603828: sched_switch: prev_comm=swapper/5 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2242 next_prio=120
               sleep-2242  [005] d...  2440.603846: sched_switch: prev_comm=snapshot-test-2 prev_pid=2242 prev_prio=120 prev_state=R ==> next_comm=kworker/5:1 next_pid=60 next_prio=120
    [...]
              <idle>-0     [002] d...  2440.707230: sched_switch: prev_comm=swapper/2 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2229 next_prio=120

    # cat trace
    # tracer: nop
    #
    # entries-in-buffer/entries-written: 77/77   #P:8
    #
    #                              _-----=> irqs-off
    #                             / _----=> need-resched
    #                            | / _---=> hardirq/softirq
    #                            || / _--=> preempt-depth
    #                            ||| /     delay
    #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
    #              | |       |   ||||       |         |
              <idle>-0     [007] d...  2440.707395: sched_switch: prev_comm=swapper/7 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2243 next_prio=120
     snapshot-test-2-2229  [002] d...  2440.707438: sched_switch: prev_comm=snapshot-test-2 prev_pid=2229 prev_prio=120 prev_state=S ==> next_comm=swapper/2 next_pid=0 next_prio=120
    [...]


If you try to use this snapshot feature when current tracer is
one of the latency tracers, you will get the following results.
::

    # echo wakeup > current_tracer
    # echo 1 > snapshot
    bash: echo: write error: Device or resource busy
    # cat snapshot
    cat: snapshot: Device or resource busy


Instances
---------
In the tracefs tracing directory is a directory called "instances".
New directories can be created inside of it using mkdir, and those
directories can be removed with rmdir. A directory created
with mkdir in this directory will already contain files and other
directories after it is created.
::

    # mkdir instances/foo
    # ls instances/foo
    buffer_size_kb  buffer_total_size_kb  events  free_buffer  per_cpu
    set_event  snapshot  trace  trace_clock  trace_marker  trace_options
    trace_pipe  tracing_on

As you can see, the new directory looks similar to the tracing directory
itself. In fact, it is very similar, except that the buffer and
events are independent of the main directory, and of any other
instances that are created.

The files in the new directory work just like the files with the
same name in the tracing directory except the buffer that is used
is a separate and new buffer. The files affect that buffer but do not
affect the main buffer with the exception of trace_options. Currently,
the trace_options affect all instances and the top level buffer
the same, but this may change in future releases. That is, options
may become specific to the instance they reside in.
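The per-instance files can be driven from a script. A sketch, assuming tracefs is mounted at /sys/kernel/tracing and the caller has permission to write there; the instance name "demo" is arbitrary, and the whole body is skipped where those assumptions do not hold:

```shell
# Create a scratch instance, enable one event in it, show what it
# captured, then tear it down. Guarded so it is a no-op on systems
# where tracefs is unavailable or not writable.
TRACEFS=/sys/kernel/tracing
if [ -w "$TRACEFS/instances" ]; then
    mkdir "$TRACEFS/instances/demo"
    echo 1 > "$TRACEFS/instances/demo/events/sched/sched_wakeup/enable"
    sleep 1
    head -20 "$TRACEFS/instances/demo/trace"
    echo 0 > "$TRACEFS/instances/demo/events/sched/sched_wakeup/enable"
    rmdir "$TRACEFS/instances/demo"
fi
```

The rmdir at the end is the cleanup step; as noted below, it fails with EBUSY while any process still holds a trace file in the instance open.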
Notice that none of the function tracer files are there, nor are
current_tracer and available_tracers. This is because the buffers
can currently only have events enabled for them.
::

    # mkdir instances/foo
    # mkdir instances/bar
    # mkdir instances/zoot
    # echo 100000 > buffer_size_kb
    # echo 1000 > instances/foo/buffer_size_kb
    # echo 5000 > instances/bar/per_cpu/cpu1/buffer_size_kb
    # echo function > current_tracer
    # echo 1 > instances/foo/events/sched/sched_wakeup/enable
    # echo 1 > instances/foo/events/sched/sched_wakeup_new/enable
    # echo 1 > instances/foo/events/sched/sched_switch/enable
    # echo 1 > instances/bar/events/irq/enable
    # echo 1 > instances/zoot/events/syscalls/enable
    # cat trace_pipe
    CPU:2 [LOST 11745 EVENTS]
                bash-2044  [002] ....  10594.481032: _raw_spin_lock_irqsave <-get_page_from_freelist
                bash-2044  [002] d...  10594.481032: add_preempt_count <-_raw_spin_lock_irqsave
                bash-2044  [002] d..1  10594.481032: __rmqueue <-get_page_from_freelist
                bash-2044  [002] d..1  10594.481033: _raw_spin_unlock <-get_page_from_freelist
                bash-2044  [002] d..1  10594.481033: sub_preempt_count <-_raw_spin_unlock
                bash-2044  [002] d...  10594.481033: get_pageblock_flags_group <-get_pageblock_migratetype
                bash-2044  [002] d...  10594.481034: __mod_zone_page_state <-get_page_from_freelist
                bash-2044  [002] d...  10594.481034: zone_statistics <-get_page_from_freelist
                bash-2044  [002] d...  10594.481034: __inc_zone_state <-zone_statistics
                bash-2044  [002] d...  10594.481034: __inc_zone_state <-zone_statistics
                bash-2044  [002] ....  10594.481035: arch_dup_task_struct <-copy_process
    [...]
    # cat instances/foo/trace_pipe
                bash-1998  [000] d..4   136.676759: sched_wakeup: comm=kworker/0:1 pid=59 prio=120 success=1 target_cpu=000
                bash-1998  [000] dN.4   136.676760: sched_wakeup: comm=bash pid=1998 prio=120 success=1 target_cpu=000
              <idle>-0     [003] d.h3   136.676906: sched_wakeup: comm=rcu_preempt pid=9 prio=120 success=1 target_cpu=003
              <idle>-0     [003] d..3   136.676909: sched_switch: prev_comm=swapper/3 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_preempt next_pid=9 next_prio=120
         rcu_preempt-9     [003] d..3   136.676916: sched_switch: prev_comm=rcu_preempt prev_pid=9 prev_prio=120 prev_state=S ==> next_comm=swapper/3 next_pid=0 next_prio=120
                bash-1998  [000] d..4   136.677014: sched_wakeup: comm=kworker/0:1 pid=59 prio=120 success=1 target_cpu=000
                bash-1998  [000] dN.4   136.677016: sched_wakeup: comm=bash pid=1998 prio=120 success=1 target_cpu=000
                bash-1998  [000] d..3   136.677018: sched_switch: prev_comm=bash prev_pid=1998 prev_prio=120 prev_state=R+ ==> next_comm=kworker/0:1 next_pid=59 next_prio=120
         kworker/0:1-59    [000] d..4   136.677022: sched_wakeup: comm=sshd pid=1995 prio=120 success=1 target_cpu=001
         kworker/0:1-59    [000] d..3   136.677025: sched_switch: prev_comm=kworker/0:1 prev_pid=59 prev_prio=120 prev_state=S ==> next_comm=bash next_pid=1998 next_prio=120
    [...]
    # cat instances/bar/trace_pipe
         migration/1-14    [001] d.h3   138.732674: softirq_raise: vec=3 [action=NET_RX]
              <idle>-0     [001] dNh3   138.732725: softirq_raise: vec=3 [action=NET_RX]
                bash-1998  [000] d.h1   138.733101: softirq_raise: vec=1 [action=TIMER]
                bash-1998  [000] d.h1   138.733102: softirq_raise: vec=9 [action=RCU]
                bash-1998  [000] ..s2   138.733105: softirq_entry: vec=1 [action=TIMER]
                bash-1998  [000] ..s2   138.733106: softirq_exit: vec=1 [action=TIMER]
                bash-1998  [000] ..s2   138.733106: softirq_entry: vec=9 [action=RCU]
                bash-1998  [000] ..s2   138.733109: softirq_exit: vec=9 [action=RCU]
                sshd-1995  [001] d.h1   138.733278: irq_handler_entry: irq=21 name=uhci_hcd:usb4
                sshd-1995  [001] d.h1   138.733280: irq_handler_exit: irq=21 ret=unhandled
                sshd-1995  [001] d.h1   138.733281: irq_handler_entry: irq=21 name=eth0
                sshd-1995  [001] d.h1   138.733283: irq_handler_exit: irq=21 ret=handled
    [...]

    # cat instances/zoot/trace
    # tracer: nop
    #
    # entries-in-buffer/entries-written: 18996/18996   #P:4
    #
    #                              _-----=> irqs-off
    #                             / _----=> need-resched
    #                            | / _---=> hardirq/softirq
    #                            || / _--=> preempt-depth
    #                            ||| /     delay
    #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
    #              | |       |   ||||       |         |
                bash-1998  [000] d...   140.733501: sys_write -> 0x2
                bash-1998  [000] d...   140.733504: sys_dup2(oldfd: a, newfd: 1)
                bash-1998  [000] d...   140.733506: sys_dup2 -> 0x1
                bash-1998  [000] d...   140.733508: sys_fcntl(fd: a, cmd: 1, arg: 0)
                bash-1998  [000] d...   140.733509: sys_fcntl -> 0x1
                bash-1998  [000] d...   140.733510: sys_close(fd: a)
                bash-1998  [000] d...   140.733510: sys_close -> 0x0
                bash-1998  [000] d...   140.733514: sys_rt_sigprocmask(how: 0, nset: 0, oset: 6e2768, sigsetsize: 8)
                bash-1998  [000] d...   140.733515: sys_rt_sigprocmask -> 0x0
                bash-1998  [000] d...   140.733516: sys_rt_sigaction(sig: 2, act: 7fff718846f0, oact: 7fff71884650, sigsetsize: 8)
                bash-1998  [000] d...   140.733516: sys_rt_sigaction -> 0x0

You can see that the trace of the top most trace buffer shows only
the function tracing. The foo instance displays wakeups and task
switches.

To remove the instances, simply delete their directories:
::

    # rmdir instances/foo
    # rmdir instances/bar
    # rmdir instances/zoot

Note, if a process has a trace file open in one of the instance
directories, the rmdir will fail with EBUSY.


Stack trace
-----------
Since the kernel has a fixed sized stack, it is important not to
waste it in functions. A kernel developer must be conscious of
what they allocate on the stack. If they add too much, the system
can be in danger of a stack overflow, and corruption will occur,
usually leading to a system panic.

There are some tools that check this, usually with interrupts
periodically checking usage. But if you can perform a check
at every function call, that will become very useful. As ftrace provides
a function tracer, it makes it convenient to check the stack size
at every function call. This is enabled via the stack tracer.

CONFIG_STACK_TRACER enables the ftrace stack tracing functionality.
To enable it, write a '1' into /proc/sys/kernel/stack_tracer_enabled.
::

    # echo 1 > /proc/sys/kernel/stack_tracer_enabled

You can also enable it from the kernel command line to trace
the stack size of the kernel during boot up, by adding "stacktrace"
to the kernel command line parameter.
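Each entry in the stack_trace output reports both the cumulative depth and the size of the individual frame, so the frames that consume the most stack can be ranked mechanically. A sketch run over a captured sample in the stack_trace layout (the three frames here are illustrative):

```shell
# Print the largest single frame (Size column) from stack_trace-style
# output. On a live system: awk over /sys/kernel/tracing/stack_trace.
sample='  0)     2928     224   update_sd_lb_stats+0xbc/0x4ac
 14)     1504     880   do_select+0x31e/0x511
 15)      624     400   core_sys_select+0x177/0x216'
biggest=$(echo "$sample" | awk '$3 > max { max = $3; loc = $4 } END { print max, loc }')
echo "$biggest"
```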
After running it for a few minutes, the output looks like:
::

    # cat stack_max_size
    2928

    # cat stack_trace
            Depth    Size   Location    (18 entries)
            -----    ----   --------
      0)     2928     224   update_sd_lb_stats+0xbc/0x4ac
      1)     2704     160   find_busiest_group+0x31/0x1f1
      2)     2544     256   load_balance+0xd9/0x662
      3)     2288      80   idle_balance+0xbb/0x130
      4)     2208     128   __schedule+0x26e/0x5b9
      5)     2080      16   schedule+0x64/0x66
      6)     2064     128   schedule_timeout+0x34/0xe0
      7)     1936     112   wait_for_common+0x97/0xf1
      8)     1824      16   wait_for_completion+0x1d/0x1f
      9)     1808     128   flush_work+0xfe/0x119
     10)     1680      16   tty_flush_to_ldisc+0x1e/0x20
     11)     1664      48   input_available_p+0x1d/0x5c
     12)     1616      48   n_tty_poll+0x6d/0x134
     13)     1568      64   tty_poll+0x64/0x7f
     14)     1504     880   do_select+0x31e/0x511
     15)      624     400   core_sys_select+0x177/0x216
     16)      224      96   sys_select+0x91/0xb9
     17)      128     128   system_call_fastpath+0x16/0x1b

Note, if -mfentry is being used by gcc, functions get traced before
they set up the stack frame. This means that leaf level functions
are not tested by the stack tracer when -mfentry is used.

Currently, -mfentry is used by gcc 4.6.0 and above on x86 only.

More
----
More details can be found in the source code, in the
`kernel/trace/*.c` files.