# SPDX-License-Identifier: GPL-2.0-only
#
# Architectures that offer a FUNCTION_TRACER implementation should
#  select HAVE_FUNCTION_TRACER:
#

config USER_STACKTRACE_SUPPORT
	bool

config NOP_TRACER
	bool

config HAVE_RETHOOK
	bool

config RETHOOK
	bool
	depends on HAVE_RETHOOK
	help
	  Enable the generic return hooking feature. This is an internal
	  API, which will be used by other function-entry hooking
	  features like fprobe and kprobes.

config HAVE_FUNCTION_TRACER
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_FUNCTION_GRAPH_TRACER
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_DYNAMIC_FTRACE
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_DYNAMIC_FTRACE_WITH_REGS
	bool

config HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
	bool

config HAVE_DYNAMIC_FTRACE_WITH_ARGS
	bool
	help
	  If this is set, then arguments and the stack can be found from
	  the pt_regs passed into the function callback's regs parameter
	  by default, even without setting the REGS flag in the ftrace_ops.
	  This allows for use of regs_get_kernel_argument() and
	  kernel_stack_pointer().

config HAVE_FTRACE_MCOUNT_RECORD
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_SYSCALL_TRACEPOINTS
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_FENTRY
	bool
	help
	  Arch supports the gcc options -pg with -mfentry

config HAVE_NOP_MCOUNT
	bool
	help
	  Arch supports the gcc options -pg with -mrecord-mcount and -nop-mcount

config HAVE_OBJTOOL_MCOUNT
	bool
	help
	  Arch supports objtool --mcount

config HAVE_C_RECORDMCOUNT
	bool
	help
	  An architecture selects this if the C version of recordmcount
	  is available for it.

config HAVE_BUILDTIME_MCOUNT_SORT
	bool
	help
	  An architecture selects this if it sorts the mcount_loc section
	  at build time.

config BUILDTIME_MCOUNT_SORT
	bool
	default y
	depends on HAVE_BUILDTIME_MCOUNT_SORT && DYNAMIC_FTRACE
	help
	  Sort the mcount_loc section at build time.

config TRACER_MAX_TRACE
	bool

config TRACE_CLOCK
	bool

config RING_BUFFER
	bool
	select TRACE_CLOCK
	select IRQ_WORK

config EVENT_TRACING
	select CONTEXT_SWITCH_TRACER
	select GLOB
	bool

config CONTEXT_SWITCH_TRACER
	bool

config RING_BUFFER_ALLOW_SWAP
	bool
	help
	  Allow the use of ring_buffer_swap_cpu.
	  Adds a very slight overhead to tracing when enabled.

config PREEMPTIRQ_TRACEPOINTS
	bool
	depends on TRACE_PREEMPT_TOGGLE || TRACE_IRQFLAGS
	select TRACING
	default y
	help
	  Create preempt/irq toggle tracepoints if needed, so that other parts
	  of the kernel can use them to generate or add hooks to them.

# All tracer options should select GENERIC_TRACER. For those options that are
# enabled by all tracers (context switch and event tracer) they select TRACING.
# This allows those options to appear when no other tracer is selected. But the
# options do not appear when something else selects them. The two options
# GENERIC_TRACER and TRACING are needed to avoid circular dependencies while
# still hiding the automatic options.
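#
# For example (an illustrative sketch only; FOO_TRACER is not a real entry in
# this file), a tracer typically hooks in as
#
#	config FOO_TRACER
#		bool "Foo tracer"
#		select GENERIC_TRACER
#
# while an option that every tracer needs selects TRACING directly.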

config TRACING
	bool
	select RING_BUFFER
	select STACKTRACE if STACKTRACE_SUPPORT
	select TRACEPOINTS
	select NOP_TRACER
	select BINARY_PRINTF
	select EVENT_TRACING
	select TRACE_CLOCK
	select TASKS_RCU if PREEMPTION

config GENERIC_TRACER
	bool
	select TRACING

#
# Minimum requirements an architecture has to meet for us to
# be able to offer generic tracing facilities:
#
config TRACING_SUPPORT
	bool
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on STACKTRACE_SUPPORT
	default y

menuconfig FTRACE
	bool "Tracers"
	depends on TRACING_SUPPORT
	default y if DEBUG_KERNEL
	help
	  Enable the kernel tracing infrastructure.

if FTRACE

config BOOTTIME_TRACING
	bool "Boot-time Tracing support"
	depends on TRACING
	select BOOT_CONFIG
	help
	  Enable developers to set up the ftrace subsystem via a supplemental
	  kernel command line at boot time, for debugging (tracing) driver
	  initialization and the boot process.

config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	select KALLSYMS
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	select GLOB
	select TASKS_RCU if PREEMPTION
	select TASKS_RUDE_RCU
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function. This NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it is disabled at
	  runtime (the bootup default), then the overhead of the
	  instructions is very small and not measurable even in
	  micro-benchmarks.

config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	depends on !X86_32 || !CC_OPTIMIZE_FOR_SIZE
	default y
	help
	  Enable the kernel to trace a function at both its entry
	  and its return.
	  Its first purpose is to trace the duration of functions and
	  draw a call graph for each thread, with some information like
	  the return value. This is done by storing the return address
	  of the traced function on a stack of calls in the current
	  task structure.

config DYNAMIC_FTRACE
	bool "enable/disable function tracing dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	default y
	help
	  This option will modify all the calls to function tracing
	  dynamically (will patch them out of the binary image and
	  replace them with a No-Op instruction) on boot up. During
	  compile time, a table is made of all the locations that ftrace
	  can function trace, and this table is linked into the kernel
	  image. When this is enabled, functions can be individually
	  enabled, and the functions not enabled will not affect
	  performance of the system.

	  See the files in /sys/kernel/debug/tracing:
	    available_filter_functions
	    set_ftrace_filter
	    set_ftrace_notrace

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.
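
	  For example (a sketch, assuming tracefs is mounted at
	  /sys/kernel/debug/tracing as in the paths above), tracing can be
	  limited to the scheduler functions with:

	    echo 'sched*' > /sys/kernel/debug/tracing/set_ftrace_filter
	    echo function > /sys/kernel/debug/tracing/current_tracer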

config DYNAMIC_FTRACE_WITH_REGS
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_DYNAMIC_FTRACE_WITH_REGS

config DYNAMIC_FTRACE_WITH_DIRECT_CALLS
	def_bool y
	depends on DYNAMIC_FTRACE_WITH_REGS
	depends on HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS

config DYNAMIC_FTRACE_WITH_ARGS
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_DYNAMIC_FTRACE_WITH_ARGS

config FPROBE
	bool "Kernel Function Probe (fprobe)"
	depends on FUNCTION_TRACER
	depends on DYNAMIC_FTRACE_WITH_REGS
	depends on HAVE_RETHOOK
	select RETHOOK
	default n
	help
	  This option enables the kernel function probe (fprobe) based on
	  ftrace. An fprobe is similar to a kprobe, but it probes only
	  kernel function entries and exits. A single fprobe can also
	  probe multiple functions.

	  If unsure, say N.

config FUNCTION_PROFILER
	bool "Kernel function profiler"
	depends on FUNCTION_TRACER
	default n
	help
	  This option enables the kernel function profiler. A file is created
	  in debugfs called function_profile_enabled which defaults to zero.
	  When a 1 is echoed into this file, profiling begins, and when a
	  zero is entered, profiling stops. A "functions" file is created in
	  the trace_stat directory; this file shows the list of functions that
	  have been hit and their counters.

	  If in doubt, say N.

config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	select FUNCTION_TRACER
	select STACKTRACE
	select KALLSYMS
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in /sys/kernel/debug/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, keeping track of the maximum stack depth and
	  saving the corresponding stack trace. If this is configured with
	  DYNAMIC_FTRACE then it will not have any overhead while the stack
	  tracer is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled

	  Say N if unsure.

config TRACE_PREEMPT_TOGGLE
	bool
	help
	  Enables hooks which will be called when preemption is first disabled
	  and last enabled.

config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	default n
	depends on TRACE_IRQFLAGS_SUPPORT
	select TRACE_IRQFLAGS
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	select RING_BUFFER_ALLOW_SWAP
	select TRACER_SNAPSHOT
	select TRACER_SNAPSHOT_PER_CPU_SWAP
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
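
	  For example (assuming tracefs is mounted at
	  /sys/kernel/debug/tracing), the tracer itself can be enabled and
	  the worst observed latency read back with:

	      echo irqsoff > /sys/kernel/debug/tracing/current_tracer
	      cat /sys/kernel/debug/tracing/tracing_max_latency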

config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	default n
	depends on PREEMPTION
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	select RING_BUFFER_ALLOW_SWAP
	select TRACER_SNAPSHOT
	select TRACER_SNAPSHOT_PER_CPU_SWAP
	select TRACE_PREEMPT_TOGGLE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)

config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	select TRACER_SNAPSHOT
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.

config HWLAT_TRACER
	bool "Tracer to detect hardware latencies (like SMIs)"
	select GENERIC_TRACER
	help
	  This tracer, when enabled, will create one or more kernel threads,
	  depending on what the cpumask file is set to, with each thread
	  spinning in a loop looking for interruptions caused by
	  something other than the kernel. For example, if a
	  System Management Interrupt (SMI) takes a noticeable amount of
	  time, this tracer will detect it. This is useful for testing
	  whether a system is reliable for Real Time tasks.

	  Some files are created in the tracing directory when this
	  is enabled:

	    hwlat_detector/width - time in usecs to spin for
	    hwlat_detector/window - time in usecs between the start of each
				    iteration

	  A kernel thread is created that will spin with interrupts disabled
	  for "width" microseconds in every "window" cycle. It will not spin
	  for the remaining "window - width" microseconds, where the system
	  can continue to operate.

	  The output will appear in the trace and trace_pipe files.

	  When the tracer is not running, it has no effect on the system,
	  but when it is running, it can cause the system to be
	  periodically non-responsive. Do not run this tracer on a
	  production system.

	  To enable this tracer, echo "hwlat" into the current_tracer
	  file. Every time a latency is greater than tracing_thresh, it will
	  be recorded into the ring buffer.

config OSNOISE_TRACER
	bool "OS Noise tracer"
	select GENERIC_TRACER
	help
	  In the context of high-performance computing (HPC), Operating
	  System Noise (osnoise) refers to the interference experienced by an
	  application due to activities inside the operating system. In the
	  context of Linux, NMIs, IRQs, SoftIRQs, and any other system thread
	  can cause noise to the system. Moreover, hardware-related jobs can
	  also cause noise, for example, via SMIs.

	  The osnoise tracer leverages the hwlat_detector by running a similar
	  loop with preemption, SoftIRQs and IRQs enabled, thus allowing all
	  the sources of osnoise during its execution. The osnoise tracer takes
	  note of the entry and exit point of any source of interference,
	  increasing a per-cpu interference counter. It saves an interference
	  counter for each source of interference.
	  The interference counter for NMI, IRQs, SoftIRQs, and threads is
	  increased any time the tool observes one of these interferences'
	  entry events. When noise happens without any interference from
	  the operating system level, the hardware noise counter increases,
	  pointing to hardware-related noise. In this way, osnoise can
	  account for any source of interference. At the end of the period,
	  the osnoise tracer prints the sum of all noise, the max single
	  noise, the percentage of CPU available for the thread, and the
	  counters for the noise sources.

	  In addition to the tracer, a set of tracepoints were added to
	  facilitate the identification of the osnoise source.

	  The output will appear in the trace and trace_pipe files.

	  To enable this tracer, echo "osnoise" into the current_tracer
	  file.

config TIMERLAT_TRACER
	bool "Timerlat tracer"
	select OSNOISE_TRACER
	select GENERIC_TRACER
	help
	  The timerlat tracer aims to help preemptive kernel developers
	  find sources of wakeup latency for real-time threads.

	  The tracer creates a per-cpu kernel thread with real-time priority.
	  The tracer thread sets a periodic timer to wake itself up, and goes
	  to sleep waiting for the timer to fire. At the wakeup, the thread
	  then computes a wakeup latency value as the difference between
	  the current time and the absolute time that the timer was set
	  to expire.

	  The tracer prints two lines at every activation. The first is the
	  timer latency observed at the hardirq context before the
	  activation of the thread. The second is the timer latency observed
	  by the thread, which is the same level that cyclictest reports. The
	  ACTIVATION ID field serves to relate the irq execution to its
	  respective thread execution.

	  The tracer is built on top of the osnoise tracer, and the osnoise:
	  events can be used to trace the source of interference from NMI,
	  IRQs and other threads. It also enables the capture of the
	  stacktrace at the IRQ context, which helps to identify the code
	  path that can cause thread delay.

config MMIOTRACE
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && PCI
	select GENERIC_TRACER
	help
	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.

	  See Documentation/trace/mmiotrace.rst.
	  If you are not helping to develop drivers, say N.

config ENABLE_DEFAULT_TRACERS
	bool "Trace process context switches and events"
	depends on !GENERIC_TRACER
	select TRACING
	help
	  This tracer hooks into various trace points in the kernel,
	  allowing the user to pick and choose which trace point they
	  want to trace. It also includes the sched_switch tracer plugin.

config FTRACE_SYSCALLS
	bool "Trace syscalls"
	depends on HAVE_SYSCALL_TRACEPOINTS
	select GENERIC_TRACER
	select KALLSYMS
	help
	  Basic tracer to catch the syscall entry and exit events.
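
	  For example (a sketch, assuming tracefs is mounted at
	  /sys/kernel/debug/tracing), all syscall events can be enabled with:

	    echo 1 > /sys/kernel/debug/tracing/events/syscalls/enable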

config TRACER_SNAPSHOT
	bool "Create a snapshot trace buffer"
	select TRACER_MAX_TRACE
	help
	  Allow tracing users to take a snapshot of the current buffer using
	  the ftrace interface, e.g.:

	      echo 1 > /sys/kernel/debug/tracing/snapshot
	      cat snapshot

config TRACER_SNAPSHOT_PER_CPU_SWAP
	bool "Allow snapshot to swap per CPU"
	depends on TRACER_SNAPSHOT
	select RING_BUFFER_ALLOW_SWAP
	help
	  Allow doing a snapshot of a single CPU buffer instead of a
	  full swap (all buffers). If this is set, then the following is
	  allowed:

	      echo 1 > /sys/kernel/debug/tracing/per_cpu/cpu2/snapshot

	  After which, only the tracing buffer for CPU 2 is swapped with
	  the main tracing buffer, and the other CPU buffers remain the same.

	  When this is enabled, it adds a little more overhead to trace
	  recording, as some checks are needed to synchronize recording
	  with swaps. But this does not affect the performance of the
	  overall system. This is enabled by default when the preempt
	  or irq latency tracers are enabled, as those need to swap as well
	  and already add the overhead (plus a lot more).

config TRACE_BRANCH_PROFILING
	bool
	select GENERIC_TRACER

choice
	prompt "Branch Profiling"
	default BRANCH_PROFILE_NONE
	help
	  Branch profiling is a software profiler. It will add hooks
	  into the C conditionals to test which path a branch takes.

	  The likely/unlikely profiler only looks at the conditions that
	  are annotated with a likely or unlikely macro.

	  The "all branch" profiler will profile every if-statement in the
	  kernel. This profiler will also enable the likely/unlikely
	  profiler.

	  Either of the above profilers adds a bit of overhead to the system.
	  If unsure, choose "No branch profiling".

config BRANCH_PROFILE_NONE
	bool "No branch profiling"
	help
	  No branch profiling. Branch profiling adds a bit of overhead.
	  Only enable it if you want to analyse the branching behavior.
	  Otherwise keep it disabled.

config PROFILE_ANNOTATED_BRANCHES
	bool "Trace likely/unlikely profiler"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all likely and unlikely macros
	  in the kernel. It will display the results in:

	  /sys/kernel/debug/tracing/trace_stat/branch_annotated

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals" if !FORTIFY_SOURCE
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  taken in the kernel is recorded whether it hit or missed.
	  The results will be displayed in:

	  /sys/kernel/debug/tracing/trace_stat/branch_all

	  This option also enables the likely/unlikely profiler.

	  This configuration, when enabled, will impose a significant
	  overhead on the system. It should only be enabled when the
	  system is to be analyzed in much detail.
endchoice

config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likely and unlikely branches are not being traced.

config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.

	  Say N if unsure.

config BLK_DEV_IO_TRACE
	bool "Support for tracing block IO actions"
	depends on SYSFS
	depends on BLOCK
	select RELAY
	select DEBUG_FS
	select TRACEPOINTS
	select GENERIC_TRACER
	select STACKTRACE
	help
	  Say Y here if you want to be able to trace the block layer actions
	  on a given queue. Tracing allows you to see any traffic happening
	  on a block device queue. For more information (and the userspace
	  support tools needed), fetch the blktrace tools from:

	  git://git.kernel.dk/blktrace.git

	  Tracing is also possible using the ftrace interface, e.g.:

	    echo 1 > /sys/block/sda/sda1/trace/enable
	    echo blk > /sys/kernel/debug/tracing/current_tracer
	    cat /sys/kernel/debug/tracing/trace_pipe

	  If unsure, say N.

config KPROBE_EVENTS
	depends on KPROBES
	depends on HAVE_REGS_AND_STACK_ACCESS_API
	bool "Enable kprobes-based dynamic events"
	select TRACING
	select PROBE_EVENTS
	select DYNAMIC_EVENTS
	default y
	help
	  This allows the user to add tracing events (similar to tracepoints)
	  on the fly via the ftrace interface. See
	  Documentation/trace/kprobetrace.rst for more details.

	  Those events can be inserted wherever kprobes can probe, and record
	  various register and memory values.

	  This option is also required by the perf-probe subcommand of perf
	  tools. If you want to use perf tools, this option is strongly
	  recommended.

config KPROBE_EVENTS_ON_NOTRACE
	bool "Do NOT protect notrace functions from kprobe events"
	depends on KPROBE_EVENTS
	depends on DYNAMIC_FTRACE
	default n
	help
	  This is only for developers who want to debug ftrace itself
	  using kprobe events.

	  If kprobes can use ftrace instead of a breakpoint, ftrace-related
	  functions are protected from kprobe events to prevent infinite
	  recursion or any unexpected execution path which could lead to a
	  kernel crash.

	  This option disables such protection and allows you to put kprobe
	  events on ftrace functions for debugging ftrace itself.
	  Note that this might let you shoot yourself in the foot.

	  If unsure, say N.

config UPROBE_EVENTS
	bool "Enable uprobes-based dynamic events"
	depends on ARCH_SUPPORTS_UPROBES
	depends on MMU
	depends on PERF_EVENTS
	select UPROBES
	select PROBE_EVENTS
	select DYNAMIC_EVENTS
	select TRACING
	default y
	help
	  This allows the user to add tracing events on top of userspace
	  dynamic events (similar to tracepoints) on the fly via the trace
	  events interface. Those events can be inserted wherever uprobes
	  can probe, and record various registers.

	  This option is required if you plan to use the perf-probe
	  subcommand of perf tools on user space applications.
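
	  As a sketch (the binary path and the 0x4710 offset below are only
	  placeholders), a probe can be added and enabled via the dynamic
	  events interface with something like:

	    echo 'p:myprobe /bin/bash:0x4710' > /sys/kernel/debug/tracing/uprobe_events
	    echo 1 > /sys/kernel/debug/tracing/events/uprobes/myprobe/enable

	  See Documentation/trace/uprobetracer.rst for the full syntax.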

config BPF_EVENTS
	depends on BPF_SYSCALL
	depends on (KPROBE_EVENTS || UPROBE_EVENTS) && PERF_EVENTS
	bool
	default y
	help
	  This allows the user to attach BPF programs to kprobe, uprobe, and
	  tracepoint events.

config DYNAMIC_EVENTS
	def_bool n

config PROBE_EVENTS
	def_bool n

config BPF_KPROBE_OVERRIDE
	bool "Enable BPF programs to override a kprobed function"
	depends on BPF_EVENTS
	depends on FUNCTION_ERROR_INJECTION
	default n
	help
	  Allows BPF to override the execution of a probed function and
	  set a different return value. This is used for error injection.

config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY
	bool
	depends on FTRACE_MCOUNT_RECORD

config FTRACE_MCOUNT_USE_CC
	def_bool y
	depends on $(cc-option,-mrecord-mcount)
	depends on !FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY
	depends on FTRACE_MCOUNT_RECORD

config FTRACE_MCOUNT_USE_OBJTOOL
	def_bool y
	depends on HAVE_OBJTOOL_MCOUNT
	depends on !FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY
	depends on !FTRACE_MCOUNT_USE_CC
	depends on FTRACE_MCOUNT_RECORD

config FTRACE_MCOUNT_USE_RECORDMCOUNT
	def_bool y
	depends on !FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY
	depends on !FTRACE_MCOUNT_USE_CC
	depends on !FTRACE_MCOUNT_USE_OBJTOOL
	depends on FTRACE_MCOUNT_RECORD

config TRACING_MAP
	bool
	depends on ARCH_HAVE_NMI_SAFE_CMPXCHG
	help
	  tracing_map is a special-purpose lock-free map for tracing,
	  separated out as a stand-alone facility in order to allow it
	  to be shared between multiple tracers. It isn't meant to be
	  generally used outside of that context, and is normally
	  selected by tracers that use it.

config SYNTH_EVENTS
	bool "Synthetic trace events"
	select TRACING
	select DYNAMIC_EVENTS
	default n
	help
	  Synthetic events are user-defined trace events that can be
	  used to combine data from other trace events or in fact any
	  data source. Synthetic events can be generated indirectly
	  via the trace() action of histogram triggers or directly
	  by way of an in-kernel API.

	  See Documentation/trace/events.rst or
	  Documentation/trace/histogram.rst for details and examples.

	  If in doubt, say N.

config USER_EVENTS
	bool "User trace events"
	select TRACING
	select DYNAMIC_EVENTS
	depends on BROKEN || COMPILE_TEST # API needs to be straightened out
	help
	  User trace events are user-defined trace events that
	  can be used like an existing kernel trace event. User trace
	  events are generated by writing to a tracefs file. User
	  processes can determine if their tracing events should be
	  generated by memory mapping a tracefs file and checking for
	  an associated byte being non-zero.

	  If in doubt, say N.

config HIST_TRIGGERS
	bool "Histogram triggers"
	depends on ARCH_HAVE_NMI_SAFE_CMPXCHG
	select TRACING_MAP
	select TRACING
	select DYNAMIC_EVENTS
	select SYNTH_EVENTS
	default n
	help
	  Hist triggers allow one or more arbitrary trace event fields
	  to be aggregated into hash tables and dumped to stdout by
	  reading a debugfs/tracefs file. They're useful for
	  gathering quick and dirty (though precise) summaries of
	  event activity as an initial guide for further investigation
	  using more advanced tools.

	  Inter-event tracing of quantities such as latencies is also
	  supported using hist triggers under this option.

	  See Documentation/trace/histogram.rst.
	  If in doubt, say N.

config TRACE_EVENT_INJECT
	bool "Trace event injection"
	depends on TRACING
	help
	  Allow user-space to inject a specific trace event into the ring
	  buffer. This is mainly used for testing purposes.

	  If unsure, say N.

config TRACEPOINT_BENCHMARK
	bool "Add tracepoint that benchmarks tracepoints"
	help
	  This option creates the tracepoint "benchmark:benchmark_event".
	  When the tracepoint is enabled, it kicks off a kernel thread that
	  goes into an infinite loop (calling cond_resched() to let other tasks
	  run), and calls the tracepoint. Each iteration will record the time
	  it took to write to the tracepoint, and on the next iteration that
	  data will be passed to the tracepoint itself. That is, the tracepoint
	  will report the time it took to do the previous tracepoint.
	  The string written to the tracepoint is a static string of 128 bytes
	  to keep the time the same. The initial string is simply a write of
	  "START". The second string records the cold cache time of the first
	  write, which is not added to the rest of the calculations.

	  As it is a tight loop, it benchmarks as hot cache. That's fine because
	  we care most about hot paths that are probably in cache already.

	  An example of the output:

	     START
	     first=3672 [COLD CACHED]
	     last=632 first=3672 max=632 min=632 avg=316 std=446 std^2=199712
	     last=278 first=3672 max=632 min=278 avg=303 std=316 std^2=100337
	     last=277 first=3672 max=632 min=277 avg=296 std=258 std^2=67064
	     last=273 first=3672 max=632 min=273 avg=292 std=224 std^2=50411
	     last=273 first=3672 max=632 min=273 avg=288 std=200 std^2=40389
	     last=281 first=3672 max=632 min=273 avg=287 std=183 std^2=33666

config RING_BUFFER_BENCHMARK
	tristate "Ring buffer benchmark stress tester"
	depends on RING_BUFFER
	help
	  This option creates a test to stress the ring buffer and benchmark it.
	  It creates its own ring buffer such that it will not interfere with
	  any other users of the ring buffer (such as ftrace). It then creates
	  a producer and consumer that will run for 10 seconds and sleep for
	  10 seconds. Each interval it will print out the number of events
	  it recorded and give a rough estimate of how long each iteration took.

	  It does not disable interrupts or raise its priority, so it may be
	  affected by processes that are running.

	  If unsure, say N.

config TRACE_EVAL_MAP_FILE
	bool "Show eval mappings for trace events"
	depends on TRACING
	help
	  The "print fmt" of the trace events will show the enum/sizeof names
	  instead of their values. This can cause problems for user space tools
	  that use this string to parse the raw data, as user space does not
	  know how to convert the string to its value.

	  To fix this, there's a special macro in the kernel that can be used
	  to convert an enum/sizeof into its value. If this macro is used, then
	  the print fmt strings will be converted to their values.

	  If something does not get converted properly, this option can be
	  used to show what enums/sizeof the kernel tried to convert.

	  This option is for debugging the conversions.
	  A file is created
	  in the tracing directory called "eval_map" that will show the
	  names matched with their values and what trace event system they
	  belong to.

	  Normally, the mapping of the strings to values will be freed after
	  boot up or module load. With this option, they will not be freed, as
	  they are needed for the "eval_map" file. Enabling this option will
	  increase the memory footprint of the running kernel.

	  If unsure, say N.

config FTRACE_RECORD_RECURSION
	bool "Record functions that recurse in function tracing"
	depends on FUNCTION_TRACER
	help
	  All callbacks that attach to function tracing have some sort
	  of protection against recursion. Even though the protection exists,
	  it adds overhead. This option will create a file in the tracefs
	  file system called "recursed_functions" that will list the functions
	  that triggered a recursion.

	  This will add more overhead to cases that have recursion.

	  If unsure, say N.

config FTRACE_RECORD_RECURSION_SIZE
	int "Max number of recursed functions to record"
	default 128
	depends on FTRACE_RECORD_RECURSION
	help
	  This defines the limit on the number of functions that can be
	  listed in the "recursed_functions" file, which lists all
	  the functions that caused a recursion to happen.
	  This file can be reset, but the limit cannot be changed
	  at runtime.

config RING_BUFFER_RECORD_RECURSION
	bool "Record functions that recurse in the ring buffer"
	depends on FTRACE_RECORD_RECURSION
	# default y, because it is coupled with FTRACE_RECORD_RECURSION
	default y
	help
	  The ring buffer has its own internal recursion protection.
	  Although recursion does no harm because of this protection,
	  it does cause unwanted overhead. Enabling this option will
	  record the locations where ring buffer recursion was detected
	  in the ftrace "recursed_functions" file.

	  This will add more overhead to cases that have recursion.

config GCOV_PROFILE_FTRACE
	bool "Enable GCOV profiling on ftrace subsystem"
	depends on GCOV_KERNEL
	help
	  Enable GCOV profiling on the ftrace subsystem for checking
	  which functions/lines are tested.

	  If unsure, say N.

	  Note that on a kernel compiled with this config, ftrace will
	  run significantly slower.

config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on GENERIC_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.

config EVENT_TRACE_STARTUP_TEST
	bool "Run selftest on trace events"
	depends on FTRACE_STARTUP_TEST
	default y
	help
	  This option performs a test on all trace events in the system.
	  It basically just enables each event and runs some code that
	  will trigger events (not necessarily the event it enables).
	  This may take some time to run, as there are a lot of events.

config EVENT_TRACE_TEST_SYSCALLS
	bool "Run selftest on syscall events"
	depends on EVENT_TRACE_STARTUP_TEST
	help
	  This option will also enable testing every syscall event.
	  It only enables each event, runs various loads with the event
	  enabled, and then disables it. This adds a bit more time to
	  kernel boot-up since it does this for every system call defined.

	  TBD - enable a way to actually call the syscalls as we test their
	  events

config FTRACE_SORT_STARTUP_TEST
	bool "Verify compile time sorting of ftrace functions"
	depends on DYNAMIC_FTRACE
	depends on BUILDTIME_MCOUNT_SORT
	help
	  Sorting of the mcount_loc section, which ftrace uses to know
	  where to patch functions for tracing and other callbacks, is
	  done at compile time. But if the sort is not done correctly, it
	  will cause non-deterministic failures. When this is set, the
	  sorted sections will be verified to be indeed sorted, and a
	  warning will be issued if they are not.

	  If unsure, say N.

config RING_BUFFER_STARTUP_TEST
	bool "Ring buffer startup self test"
	depends on RING_BUFFER
	help
	  Run a simple self test on the ring buffer on boot up. Late in the
	  kernel boot sequence, the test will start and kick off
	  a thread per cpu. Each thread will write various size events
	  into the ring buffer. Another thread is created to send IPIs
	  to each of the threads, where the IPI handler will also write
	  to the ring buffer, to test/stress the nesting ability.
	  If any anomalies are discovered, a warning will be displayed
	  and all ring buffers will be disabled.

	  The test runs for 10 seconds. This will slow your boot time
	  by at least 10 more seconds.

	  At the end of the test, statistics and more checks are done.
	  It will output the stats of each per-cpu buffer: what
	  was written, the sizes, what was read, what was lost, and
	  other similar details.

	  If unsure, say N.

config RING_BUFFER_VALIDATE_TIME_DELTAS
	bool "Verify ring buffer time stamp deltas"
	depends on RING_BUFFER
	help
	  This will audit the time stamps on the ring buffer sub
	  buffers to make sure that all the time deltas for the
	  events on a sub buffer match the current time stamp.
	  This audit is performed for every event that is not
	  interrupted, or interrupting another event. A check
	  is also made when traversing sub buffers to make sure
	  that all the deltas on the previous sub buffer do not
	  add up to be greater than the current time stamp.

	  NOTE: This adds significant overhead to recording of events,
	  and should only be used to test the logic of the ring buffer.
	  Do not use it on production systems.

	  Only say Y if you understand what this does, and you
	  still want it enabled. Otherwise say N.

config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	help
	  This is a dumb module for testing mmiotrace. It is very dangerous
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on e.g. an unused portion of VRAM.

	  Say N, unless you absolutely know what you are doing.

config PREEMPTIRQ_DELAY_TEST
	tristate "Test module to create a preempt / IRQ disable delay thread to test latency tracers"
	depends on m
	help
	  Select this option to build a test module that can help test latency
	  tracers by executing a preempt or irq disable section with a user
	  configurable delay. The module busy waits for the duration of the
	  critical section.

	  For example, the following invocation generates a burst of three
	  irq-disabled critical sections for 500us:
	    modprobe preemptirq_delay_test test_mode=irq delay=500 burst_size=3

	  In addition, if you want to attach the test to the cpu which the
	  latency tracer is running on, specify cpu_affinity=cpu_num at the
	  end of the command.

	  If unsure, say N.

config SYNTH_EVENT_GEN_TEST
	tristate "Test module for in-kernel synthetic event generation"
	depends on SYNTH_EVENTS
	help
	  This option creates a test module to check the base
	  functionality of in-kernel synthetic event definition and
	  generation.

	  To test, insert the module, and then check the trace buffer
	  for the generated sample events.

	  If unsure, say N.

config KPROBE_EVENT_GEN_TEST
	tristate "Test module for in-kernel kprobe event generation"
	depends on KPROBE_EVENTS
	help
	  This option creates a test module to check the base
	  functionality of in-kernel kprobe event definition.

	  To test, insert the module, and then check the trace buffer
	  for the generated kprobe events.

	  If unsure, say N.

config HIST_TRIGGERS_DEBUG
	bool "Hist trigger debug support"
	depends on HIST_TRIGGERS
	help
	  Add a "hist_debug" file for each event, which when read will
	  dump out a bunch of internal details about the hist triggers
	  defined on that event.

	  The hist_debug file serves a couple of purposes:

	    - Helps developers verify that nothing is broken.

	    - Provides educational information to support the details
	      of the hist trigger internals as described by
	      Documentation/trace/histogram-design.rst.

	  The hist_debug output only covers the data structures
	  related to the histogram definitions themselves and doesn't
	  display the internals of map buckets or variable values of
	  running histograms.

	  If unsure, say N.

endif # FTRACE