#
# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
#

config USER_STACKTRACE_SUPPORT
	bool

config NOP_TRACER
	bool

config HAVE_FTRACE_NMI_ENTER
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_FUNCTION_TRACER
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_FUNCTION_GRAPH_TRACER
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_DYNAMIC_FTRACE
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_DYNAMIC_FTRACE_WITH_REGS
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_SYSCALL_TRACEPOINTS
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_FENTRY
	bool
	help
	  Arch supports the gcc options -pg with -mfentry

config HAVE_C_RECORDMCOUNT
	bool
	help
	  C version of recordmcount available?

config TRACER_MAX_TRACE
	bool

config TRACE_CLOCK
	bool

config RING_BUFFER
	bool
	select TRACE_CLOCK
	select IRQ_WORK

config FTRACE_NMI_ENTER
	bool
	depends on HAVE_FTRACE_NMI_ENTER
	default y

config EVENT_TRACING
	select CONTEXT_SWITCH_TRACER
	select GLOB
	bool

config CONTEXT_SWITCH_TRACER
	bool

config RING_BUFFER_ALLOW_SWAP
	bool
	help
	  Allow the use of ring_buffer_swap_cpu.
	  Adds a very slight overhead to tracing when enabled.

config PREEMPTIRQ_TRACEPOINTS
	bool
	depends on TRACE_PREEMPT_TOGGLE || TRACE_IRQFLAGS
	select TRACING
	default y
	help
	  Create preempt/irq toggle tracepoints if needed, so that other parts
	  of the kernel can use them to generate or add hooks to them.

# All tracer options should select GENERIC_TRACER. The options that are
# enabled by all tracers (context switch and event tracer) select TRACING.
# This allows those options to appear when no other tracer is selected.
# But the options do not appear when something else selects them. We need the
# two options GENERIC_TRACER and TRACING so that the automatic options can be
# hidden without creating circular dependencies.

config TRACING
	bool
	select DEBUG_FS
	select RING_BUFFER
	select STACKTRACE if STACKTRACE_SUPPORT
	select TRACEPOINTS
	select NOP_TRACER
	select BINARY_PRINTF
	select EVENT_TRACING
	select TRACE_CLOCK

config GENERIC_TRACER
	bool
	select TRACING

#
# Minimum requirements an architecture has to meet for us to
# be able to offer generic tracing facilities:
#
config TRACING_SUPPORT
	bool
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on STACKTRACE_SUPPORT
	default y

if TRACING_SUPPORT

menuconfig FTRACE
	bool "Tracers"
	default y if DEBUG_KERNEL
	help
	  Enable the kernel tracing infrastructure.

if FTRACE

config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	select KALLSYMS
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	select GLOB
	select TASKS_RCU if PREEMPT
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; this NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it is disabled at
	  runtime (the bootup default), the overhead of the instructions is
	  very small and not measurable even in micro-benchmarks.

config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	depends on !X86_32 || !CC_OPTIMIZE_FOR_SIZE
	default y
	help
	  Enable the kernel to trace a function at both its return
	  and its entry.
	  Its first purpose is to trace the duration of functions and
	  draw a call graph for each thread, with some information such as
	  the return value. This is done by saving the current return
	  address in a stack of calls on the current task structure.

config TRACE_PREEMPT_TOGGLE
	bool
	help
	  Enables hooks which will be called when preemption is first disabled,
	  and last enabled.

config PREEMPTIRQ_EVENTS
	bool "Enable trace events for preempt and irq disable/enable"
	select TRACE_IRQFLAGS
	select TRACE_PREEMPT_TOGGLE if PREEMPT
	select GENERIC_TRACER
	default n
	help
	  Enable tracing of disable and enable events for preemption and irqs.

config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	default n
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on !ARCH_USES_GETTIMEOFFSET
	select TRACE_IRQFLAGS
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	select RING_BUFFER_ALLOW_SWAP
	select TRACER_SNAPSHOT
	select TRACER_SNAPSHOT_PER_CPU_SWAP
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)

config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	default n
	depends on !ARCH_USES_GETTIMEOFFSET
	depends on PREEMPT
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	select RING_BUFFER_ALLOW_SWAP
	select TRACER_SNAPSHOT
	select TRACER_SNAPSHOT_PER_CPU_SWAP
	select TRACE_PREEMPT_TOGGLE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.
	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)

config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	select TRACER_SNAPSHOT
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.

config HWLAT_TRACER
	bool "Tracer to detect hardware latencies (like SMIs)"
	select GENERIC_TRACER
	help
	  This tracer, when enabled, will create one or more kernel threads,
	  depending on what the cpumask file is set to, with each thread
	  spinning in a loop looking for interruptions caused by
	  something other than the kernel. For example, if a
	  System Management Interrupt (SMI) takes a noticeable amount of
	  time, this tracer will detect it. This is useful for testing
	  whether a system is reliable for Real Time tasks.

	  Some files are created in the tracing directory when this
	  is enabled:

	      hwlat_detector/width  - time in usecs for how long to spin for
	      hwlat_detector/window - time in usecs between the start of each
				      iteration

	  A kernel thread is created that will spin with interrupts disabled
	  for "width" microseconds in every "window" cycle. It will not spin
	  for "window - width" microseconds, where the system can
	  continue to operate.

	  The output will appear in the trace and trace_pipe files.

	  When the tracer is not running, it has no effect on the system,
	  but when it is running, it can cause the system to be
	  periodically non-responsive. Do not run this tracer on a
	  production system.
	  To enable this tracer, echo "hwlat" into the current_tracer
	  file. Every time a latency greater than tracing_thresh is seen,
	  it will be recorded into the ring buffer.

config ENABLE_DEFAULT_TRACERS
	bool "Trace process context switches and events"
	depends on !GENERIC_TRACER
	select TRACING
	help
	  This tracer hooks to various trace points in the kernel,
	  allowing the user to pick and choose which trace point they
	  want to trace. It also includes the sched_switch tracer plugin.

config FTRACE_SYSCALLS
	bool "Trace syscalls"
	depends on HAVE_SYSCALL_TRACEPOINTS
	select GENERIC_TRACER
	select KALLSYMS
	help
	  Basic tracer to catch the syscall entry and exit events.

config TRACER_SNAPSHOT
	bool "Create a snapshot trace buffer"
	select TRACER_MAX_TRACE
	help
	  Allow tracing users to take a snapshot of the current buffer using
	  the ftrace interface, e.g.:

	      echo 1 > /sys/kernel/debug/tracing/snapshot
	      cat snapshot

config TRACER_SNAPSHOT_PER_CPU_SWAP
	bool "Allow snapshot to swap per CPU"
	depends on TRACER_SNAPSHOT
	select RING_BUFFER_ALLOW_SWAP
	help
	  Allow doing a snapshot of a single CPU buffer instead of a
	  full swap (all buffers). If this is set, then the following is
	  allowed:

	      echo 1 > /sys/kernel/debug/tracing/per_cpu/cpu2/snapshot

	  After which, only the tracing buffer for CPU 2 is swapped with
	  the main tracing buffer, and the other CPU buffers remain the same.

	  When this is enabled, it adds a little more overhead to the
	  trace recording, as it needs some checks to synchronize
	  recording with swaps. But this does not affect the performance
	  of the overall system. This is enabled by default when the preempt
	  or irq latency tracers are enabled, as those need to swap as well
	  and already add the overhead (plus a lot more).
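# As a hedged usage sketch of the per-CPU swap described above (assumes this
# option is enabled, debugfs is mounted at /sys/kernel/debug, and root):

```shell
# Sketch: snapshot only CPU 2's buffer, leaving the other CPU buffers live.
T=/sys/kernel/debug/tracing
echo 1 > "$T/per_cpu/cpu2/snapshot"   # swap CPU 2's live buffer with its snapshot buffer
cat "$T/per_cpu/cpu2/snapshot"        # read the frozen CPU 2 events
```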
config TRACE_BRANCH_PROFILING
	bool
	select GENERIC_TRACER

choice
	prompt "Branch Profiling"
	default BRANCH_PROFILE_NONE
	help
	  The branch profiling is a software profiler. It will add hooks
	  into the C conditionals to test which path a branch takes.

	  The likely/unlikely profiler only looks at the conditions that
	  are annotated with a likely or unlikely macro.

	  The "all branch" profiler will profile every if-statement in the
	  kernel. This profiler will also enable the likely/unlikely
	  profiler.

	  Either of the above profilers adds a bit of overhead to the system.
	  If unsure, choose "No branch profiling".

config BRANCH_PROFILE_NONE
	bool "No branch profiling"
	help
	  No branch profiling. Branch profiling adds a bit of overhead.
	  Only enable it if you want to analyse the branching behavior.
	  Otherwise keep it disabled.

config PROFILE_ANNOTATED_BRANCHES
	bool "Trace likely/unlikely profiler"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all likely and unlikely macros
	  in the kernel. It will display the results in:

	      /sys/kernel/debug/tracing/trace_stat/branch_annotated

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals" if !FORTIFY_SOURCE
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  taken in the kernel is recorded whether it hit or missed.
	  The results will be displayed in:

	      /sys/kernel/debug/tracing/trace_stat/branch_all

	  This option also enables the likely/unlikely profiler.

	  This configuration, when enabled, will impose a great overhead
	  on the system. This should only be enabled when the system
	  is to be analyzed in much detail.
endchoice

config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likelys and unlikelys are not being traced.

config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.

	  Say N if unsure.

config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	select FUNCTION_TRACER
	select STACKTRACE
	select KALLSYMS
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in /sys/kernel/debug/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. If this is configured with DYNAMIC_FTRACE
	  then it will not have any overhead while the stack tracer
	  is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled.

	  Say N if unsure.

config BLK_DEV_IO_TRACE
	bool "Support for tracing block IO actions"
	depends on SYSFS
	depends on BLOCK
	select RELAY
	select DEBUG_FS
	select TRACEPOINTS
	select GENERIC_TRACER
	select STACKTRACE
	help
	  Say Y here if you want to be able to trace the block layer actions
	  on a given queue.
	  Tracing allows you to see any traffic happening
	  on a block device queue. For more information (and the userspace
	  support tools needed), fetch the blktrace tools from:

	      git://git.kernel.dk/blktrace.git

	  Tracing is also possible using the ftrace interface, e.g.:

	      echo 1 > /sys/block/sda/sda1/trace/enable
	      echo blk > /sys/kernel/debug/tracing/current_tracer
	      cat /sys/kernel/debug/tracing/trace_pipe

	  If unsure, say N.

config KPROBE_EVENTS
	depends on KPROBES
	depends on HAVE_REGS_AND_STACK_ACCESS_API
	bool "Enable kprobes-based dynamic events"
	select TRACING
	select PROBE_EVENTS
	default y
	help
	  This allows the user to add tracing events (similar to tracepoints)
	  on the fly via the ftrace interface. See
	  Documentation/trace/kprobetrace.rst for more details.

	  Those events can be inserted wherever kprobes can probe, and record
	  various register and memory values.

	  This option is also required by the perf-probe subcommand of perf
	  tools. If you want to use perf tools, this option is strongly
	  recommended.

config KPROBE_EVENTS_ON_NOTRACE
	bool "Do NOT protect notrace functions from kprobe events"
	depends on KPROBE_EVENTS
	depends on KPROBES_ON_FTRACE
	default n
	help
	  This is only for developers who want to debug ftrace itself
	  using kprobe events.

	  If kprobes can use ftrace instead of breakpoints, ftrace-related
	  functions are protected from kprobe events to prevent infinite
	  recursion or any unexpected execution path which leads to a kernel
	  crash.

	  This option disables such protection and allows you to put kprobe
	  events on ftrace functions for debugging ftrace by itself.
	  Note that this might let you shoot yourself in the foot.

	  If unsure, say N.
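# A hedged sketch of the kprobe events interface mentioned above (the probe
# name "myprobe" and the do_sys_open target are illustrative; assumes
# KPROBE_EVENTS=y, debugfs mounted at /sys/kernel/debug, and root; syntax per
# Documentation/trace/kprobetrace.rst):

```shell
# Sketch: create, enable, observe, and remove a kprobe event on the fly.
T=/sys/kernel/debug/tracing
echo 'p:myprobe do_sys_open' >> "$T/kprobe_events"   # probe the entry of do_sys_open
echo 1 > "$T/events/kprobes/myprobe/enable"          # turn the new event on
head "$T/trace"                                      # hits appear in the trace file
echo 0 > "$T/events/kprobes/myprobe/enable"          # disable before removing
echo '-:myprobe' >> "$T/kprobe_events"               # remove the probe again
```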
config UPROBE_EVENTS
	bool "Enable uprobes-based dynamic events"
	depends on ARCH_SUPPORTS_UPROBES
	depends on MMU
	depends on PERF_EVENTS
	select UPROBES
	select PROBE_EVENTS
	select TRACING
	default y
	help
	  This allows the user to add tracing events on top of userspace
	  dynamic events (similar to tracepoints) on the fly via the trace
	  events interface. Those events can be inserted wherever uprobes
	  can probe, and record various registers.

	  This option is required if you plan to use the perf-probe
	  subcommand of perf tools on user space applications.

config BPF_EVENTS
	depends on BPF_SYSCALL
	depends on (KPROBE_EVENTS || UPROBE_EVENTS) && PERF_EVENTS
	bool
	default y
	help
	  This allows the user to attach BPF programs to kprobe events.

config PROBE_EVENTS
	def_bool n

config DYNAMIC_FTRACE
	bool "enable/disable function tracing dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	default y
	help
	  This option will modify all the calls to function tracing
	  dynamically (will patch them out of the binary image and
	  replace them with a No-Op instruction) on boot up. During
	  compile time, a table is made of all the locations that ftrace
	  can function trace, and this table is linked into the kernel
	  image. When this is enabled, functions can be individually
	  enabled, and the functions not enabled will not affect
	  performance of the system.

	  See the files in /sys/kernel/debug/tracing:

	      available_filter_functions
	      set_ftrace_filter
	      set_ftrace_notrace

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.
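# A hedged sketch of using the filter files listed above to trace only a
# subset of functions (assumes DYNAMIC_FTRACE=y and FUNCTION_TRACER=y, debugfs
# mounted at /sys/kernel/debug, and root; the wake_up* pattern is illustrative):

```shell
# Sketch: restrict the function tracer to matching functions only.
T=/sys/kernel/debug/tracing
grep wake_up "$T/available_filter_functions" | head  # list candidate functions
echo 'wake_up*' > "$T/set_ftrace_filter"             # limit tracing to these
echo function > "$T/current_tracer"                  # start the function tracer
head "$T/trace"                                      # only filtered functions appear
echo nop > "$T/current_tracer"                       # stop tracing again
```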
config DYNAMIC_FTRACE_WITH_REGS
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_DYNAMIC_FTRACE_WITH_REGS

config FUNCTION_PROFILER
	bool "Kernel function profiler"
	depends on FUNCTION_TRACER
	default n
	help
	  This option enables the kernel function profiler. A file is created
	  in debugfs called function_profile_enabled which defaults to zero.
	  When a 1 is echoed into this file, profiling begins, and when a
	  zero is entered, profiling stops. A "functions" file is created in
	  the trace_stats directory; this file shows the list of functions that
	  have been hit and their counters.

	  If in doubt, say N.

config BPF_KPROBE_OVERRIDE
	bool "Enable BPF programs to override a kprobed function"
	depends on BPF_EVENTS
	depends on FUNCTION_ERROR_INJECTION
	default n
	help
	  Allows BPF to override the execution of a probed function and
	  set a different return value. This is used for error injection.

config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on GENERIC_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup,
	  a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.

config EVENT_TRACE_TEST_SYSCALLS
	bool "Run selftest on syscall events"
	depends on FTRACE_STARTUP_TEST
	help
	  This option will also enable testing every syscall event.
	  It only enables the event, disables it, and runs various loads
	  with the event enabled. This adds a bit more time to kernel boot
	  since it runs this on every system call defined.
	  TBD - enable a way to actually call the syscalls as we test their
	  events

config MMIOTRACE
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && PCI
	select GENERIC_TRACER
	help
	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.

	  See Documentation/trace/mmiotrace.rst.
	  If you are not helping to develop drivers, say N.

config TRACING_MAP
	bool
	depends on ARCH_HAVE_NMI_SAFE_CMPXCHG
	help
	  tracing_map is a special-purpose lock-free map for tracing,
	  separated out as a stand-alone facility in order to allow it
	  to be shared between multiple tracers. It isn't meant to be
	  generally used outside of that context, and is normally
	  selected by tracers that use it.

config HIST_TRIGGERS
	bool "Histogram triggers"
	depends on ARCH_HAVE_NMI_SAFE_CMPXCHG
	select TRACING_MAP
	select TRACING
	default n
	help
	  Hist triggers allow one or more arbitrary trace event fields
	  to be aggregated into hash tables and dumped to stdout by
	  reading a debugfs/tracefs file. They're useful for
	  gathering quick and dirty (though precise) summaries of
	  event activity as an initial guide for further investigation
	  using more advanced tools.

	  Inter-event tracing of quantities such as latencies is also
	  supported using hist triggers under this option.

	  See Documentation/trace/histogram.txt.
	  If in doubt, say N.

config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	help
	  This is a dumb module for testing mmiotrace. It is very dangerous
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on e.g. an unused portion of VRAM.
	  Say N, unless you absolutely know what you are doing.

config TRACEPOINT_BENCHMARK
	bool "Add tracepoint that benchmarks tracepoints"
	help
	  This option creates the tracepoint "benchmark:benchmark_event".
	  When the tracepoint is enabled, it kicks off a kernel thread that
	  goes into an infinite loop (calling cond_resched() to let other tasks
	  run), and calls the tracepoint. Each iteration will record the time
	  it took to write to the tracepoint, and in the next iteration that
	  data will be passed to the tracepoint itself. That is, the tracepoint
	  will report the time it took to do the previous tracepoint.
	  The string written to the tracepoint is a static string of 128 bytes
	  to keep the time the same. The initial string is simply a write of
	  "START". The second string records the cold cache time of the first
	  write, which is not added to the rest of the calculations.

	  As it is a tight loop, it benchmarks as hot cache. That's fine because
	  we care most about hot paths that are probably in cache already.

	  An example of the output:

	      START
	      first=3672 [COLD CACHED]
	      last=632 first=3672 max=632 min=632 avg=316 std=446 std^2=199712
	      last=278 first=3672 max=632 min=278 avg=303 std=316 std^2=100337
	      last=277 first=3672 max=632 min=277 avg=296 std=258 std^2=67064
	      last=273 first=3672 max=632 min=273 avg=292 std=224 std^2=50411
	      last=273 first=3672 max=632 min=273 avg=288 std=200 std^2=40389
	      last=281 first=3672 max=632 min=273 avg=287 std=183 std^2=33666

config RING_BUFFER_BENCHMARK
	tristate "Ring buffer benchmark stress tester"
	depends on RING_BUFFER
	help
	  This option creates a test to stress the ring buffer and benchmark it.
	  It creates its own ring buffer such that it will not interfere with
	  any other users of the ring buffer (such as ftrace). It then creates
	  a producer and consumer that will run for 10 seconds and sleep for
	  10 seconds.
	  Each interval it will print out the number of events
	  it recorded and give a rough estimate of how long each iteration took.

	  It does not disable interrupts or raise its priority, so it may be
	  affected by processes that are running.

	  If unsure, say N.

config RING_BUFFER_STARTUP_TEST
	bool "Ring buffer startup self test"
	depends on RING_BUFFER
	help
	  Run a simple self test on the ring buffer on boot up. Late in the
	  kernel boot sequence, a test is started that kicks off
	  a thread per cpu. Each thread will write various size events
	  into the ring buffer. Another thread is created to send IPIs
	  to each of the threads, where the IPI handler will also write
	  to the ring buffer, to test/stress the nesting ability.
	  If any anomalies are discovered, a warning will be displayed
	  and all ring buffers will be disabled.

	  The test runs for 10 seconds. This will slow your boot time
	  by at least 10 more seconds.

	  At the end of the test, statistics and more checks are done.
	  It will output the stats of each per-cpu buffer: what
	  was written, the sizes, what was read, what was lost, and
	  other similar details.

	  If unsure, say N.

config PREEMPTIRQ_DELAY_TEST
	tristate "Preempt / IRQ disable delay thread to test latency tracers"
	depends on m
	help
	  Select this option to build a test module that can help test latency
	  tracers by executing a preempt or irq disable section with a
	  user-configurable delay. The module busy waits for the duration of
	  the critical section.
	  For example, the following invocation forces a one-time irq-disabled
	  critical section for 500us:

	      modprobe preemptirq_delay_test test_mode=irq delay=500000

	  If unsure, say N.

config TRACE_EVAL_MAP_FILE
	bool "Show eval mappings for trace events"
	depends on TRACING
	help
	  The "print fmt" of the trace events will show the enum/sizeof names
	  instead of their values. This can cause problems for user space tools
	  that use this string to parse the raw data, as user space does not
	  know how to convert the string to its value.

	  To fix this, there's a special macro in the kernel that can be used
	  to convert an enum/sizeof into its value. If this macro is used, then
	  the print fmt strings will be converted to their values.

	  If something does not get converted properly, this option can be
	  used to show what enums/sizeof the kernel tried to convert.

	  This option is for debugging the conversions. A file is created
	  in the tracing directory called "eval_map" that will show the
	  names matched with their values and what trace event system they
	  belong to.

	  Normally, the mapping of the strings to values will be freed after
	  boot up or module load. With this option, they will not be freed, as
	  they are needed for the "eval_map" file. Enabling this option will
	  increase the memory footprint of the running kernel.

	  If unsure, say N.

config TRACING_EVENTS_GPIO
	bool "Trace gpio events"
	depends on GPIOLIB
	default y
	help
	  Enable tracing events for the gpio subsystem.

endif # FTRACE

endif # TRACING_SUPPORT