1========================
2ftrace - Function Tracer
3========================
4
5Copyright 2008 Red Hat Inc.
6
7:Author:   Steven Rostedt <srostedt@redhat.com>
8:License:  The GNU Free Documentation License, Version 1.2
9          (dual licensed under the GPL v2)
10:Original Reviewers:  Elias Oltmanns, Randy Dunlap, Andrew Morton,
11		      John Kacur, and David Teigland.
12
13- Written for: 2.6.28-rc2
14- Updated for: 3.10
15- Updated for: 4.13 - Copyright 2017 VMware Inc. Steven Rostedt
16- Converted to rst format - Changbin Du <changbin.du@intel.com>
17
18Introduction
19------------
20
21Ftrace is an internal tracer designed to help out developers and
22designers of systems to find what is going on inside the kernel.
23It can be used for debugging or analyzing latencies and
24performance issues that take place outside of user-space.
25
26Although ftrace is typically considered the function tracer, it
27is really a framework of several assorted tracing utilities.
28There's latency tracing to examine what occurs between interrupts
disabled and enabled, as well as for preemption, and from the time
a task is woken to the time the task is actually scheduled in.
31
One of the most common uses of ftrace is event tracing.
Throughout the kernel are hundreds of static event points that
34can be enabled via the tracefs file system to see what is
35going on in certain parts of the kernel.
36
37See events.txt for more information.
38
39
40Implementation Details
41----------------------
42
43See :doc:`ftrace-design` for details for arch porters and such.
44
45
46The File System
47---------------
48
49Ftrace uses the tracefs file system to hold the control files as
50well as the files to display output.
51
52When tracefs is configured into the kernel (which selecting any ftrace
53option will do) the directory /sys/kernel/tracing will be created. To mount
54this directory, you can add to your /etc/fstab file::
55
56 tracefs       /sys/kernel/tracing       tracefs defaults        0       0
57
58Or you can mount it at run time with::
59
60 mount -t tracefs nodev /sys/kernel/tracing
61
62For quicker access to that directory you may want to make a soft link to
63it::
64
65 ln -s /sys/kernel/tracing /tracing
66
67.. attention::
68
69  Before 4.1, all ftrace tracing control files were within the debugfs
70  file system, which is typically located at /sys/kernel/debug/tracing.
71  For backward compatibility, when mounting the debugfs file system,
72  the tracefs file system will be automatically mounted at:
73
74  /sys/kernel/debug/tracing
75
76  All files located in the tracefs file system will be located in that
77  debugfs file system directory as well.
78
79.. attention::
80
81  Any selected ftrace option will also create the tracefs file system.
82  The rest of the document will assume that you are in the ftrace directory
83  (cd /sys/kernel/tracing) and will only concentrate on the files within that
84  directory and not distract from the content with the extended
85  "/sys/kernel/tracing" path name.
86
87That's it! (assuming that you have ftrace configured into your kernel)
88
89After mounting tracefs you will have access to the control and output files
90of ftrace. Here is a list of some of the key files:
91
92
93 Note: all time values are in microseconds.
94
95  current_tracer:
96
97	This is used to set or display the current tracer
98	that is configured.
99
100  available_tracers:
101
102	This holds the different types of tracers that
103	have been compiled into the kernel. The
104	tracers listed here can be configured by
105	echoing their name into current_tracer.
106
107  tracing_on:
108
109	This sets or displays whether writing to the trace
110	ring buffer is enabled. Echo 0 into this file to disable
	the tracer or 1 to enable it. Note, this only disables
	writing to the ring buffer; the tracing overhead may
	still be occurring.
114
115	The kernel function tracing_off() can be used within the
116	kernel to disable writing to the ring buffer, which will
117	set this file to "0". User space can re-enable tracing by
118	echoing "1" into the file.
119
	Note, the function and event trigger "traceoff" will also
	set this file to zero and stop tracing. Tracing can then
	be re-enabled by user space using this file.
123
124  trace:
125
126	This file holds the output of the trace in a human
127	readable format (described below). Note, tracing is temporarily
128	disabled while this file is being read (opened).
129
130  trace_pipe:
131
132	The output is the same as the "trace" file but this
133	file is meant to be streamed with live tracing.
134	Reads from this file will block until new data is
135	retrieved.  Unlike the "trace" file, this file is a
136	consumer. This means reading from this file causes
137	sequential reads to display more current data. Once
138	data is read from this file, it is consumed, and
139	will not be read again with a sequential read. The
140	"trace" file is static, and if the tracer is not
141	adding more data, it will display the same
142	information every time it is read. This file will not
143	disable tracing while being read.
144
145  trace_options:
146
147	This file lets the user control the amount of data
148	that is displayed in one of the above output
149	files. Options also exist to modify how a tracer
150	or events work (stack traces, timestamps, etc).
151
152  options:
153
154	This is a directory that has a file for every available
155	trace option (also in trace_options). Options may also be set
156	or cleared by writing a "1" or "0" respectively into the
157	corresponding file with the option name.
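
	For example, the following two commands are equivalent::

	  # echo sym-offset > trace_options
	  # echo 1 > options/sym-offset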
158
159  tracing_max_latency:
160
161	Some of the tracers record the max latency.
162	For example, the maximum time that interrupts are disabled.
163	The maximum time is saved in this file. The max trace will also be
164	stored,	and displayed by "trace". A new max trace will only be
165	recorded if the latency is greater than the value in this file
166	(in microseconds).
167
	By echoing a time into this file, no latency will be recorded
	unless it is greater than that time.
170
171  tracing_thresh:
172
173	Some latency tracers will record a trace whenever the
174	latency is greater than the number in this file.
175	Only active when the file contains a number greater than 0.
176	(in microseconds)
177
178  buffer_size_kb:
179
180	This sets or displays the number of kilobytes each CPU
181	buffer holds. By default, the trace buffers are the same size
182	for each CPU. The displayed number is the size of the
183	CPU buffer and not total size of all buffers. The
184	trace buffers are allocated in pages (blocks of memory
185	that the kernel uses for allocation, usually 4 KB in size).
186	If the last page allocated has room for more bytes
187	than requested, the rest of the page will be used,
188	making the actual allocation bigger than requested or shown.
189	( Note, the size may not be a multiple of the page size
190	due to buffer management meta-data. )
191
192	Buffer sizes for individual CPUs may vary
193	(see "per_cpu/cpu0/buffer_size_kb" below), and if they do
194	this file will show "X".
195
196  buffer_total_size_kb:
197
198	This displays the total combined size of all the trace buffers.
199
200  free_buffer:
201
	If a process is performing tracing, and the ring buffer should be
	shrunk "freed" when the process is finished, even if it were to be
	killed by a signal, this file can be used for that purpose. On close
	of this file, the ring buffer will be resized to its minimum size.
	If a process that is tracing also has this file open, then when the
	process exits, its file descriptor for this file will be closed,
	and in doing so, the ring buffer will be "freed".
209
210	It may also stop tracing if disable_on_free option is set.
211
212  tracing_cpumask:
213
214	This is a mask that lets the user only trace on specified CPUs.
215	The format is a hex string representing the CPUs.
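
	For example, to trace only on CPUs 0 and 2 (the mask is read
	as a hex value)::

	  # echo 5 > tracing_cpumask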
216
217  set_ftrace_filter:
218
219	When dynamic ftrace is configured in (see the
220	section below "dynamic ftrace"), the code is dynamically
221	modified (code text rewrite) to disable calling of the
222	function profiler (mcount). This lets tracing be configured
223	in with practically no overhead in performance.  This also
224	has a side effect of enabling or disabling specific functions
225	to be traced. Echoing names of functions into this file
226	will limit the trace to only those functions.
227	This influences the tracers "function" and "function_graph"
228	and thus also function profiling (see "function_profile_enabled").
229
230	The functions listed in "available_filter_functions" are what
231	can be written into this file.
232
233	This interface also allows for commands to be used. See the
234	"Filter commands" section for more details.
235
	As a speed up, since processing strings can be quite expensive
	and requires a check of all functions registered to tracing, an
	index can instead be written into this file. A number (starting
	with "1") written will select the function at the corresponding
	line position of the "available_filter_functions" file.
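
	For example (a hypothetical session)::

	  # echo 3 > set_ftrace_filter

	will select the function on the third line of
	"available_filter_functions".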
241
242  set_ftrace_notrace:
243
244	This has an effect opposite to that of
245	set_ftrace_filter. Any function that is added here will not
246	be traced. If a function exists in both set_ftrace_filter
247	and set_ftrace_notrace,	the function will _not_ be traced.
248
249  set_ftrace_pid:
250
	Have the function tracer only trace the threads whose PIDs are
252	listed in this file.
253
254	If the "function-fork" option is set, then when a task whose
255	PID is listed in this file forks, the child's PID will
256	automatically be added to this file, and the child will be
257	traced by the function tracer as well. This option will also
258	cause PIDs of tasks that exit to be removed from the file.
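
	For example, to trace only the current shell and, because
	"function-fork" is set, any children it spawns::

	  # echo $$ > set_ftrace_pid
	  # echo 1 > options/function-fork
	  # echo function > current_tracer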
259
260  set_event_pid:
261
262	Have the events only trace a task with a PID listed in this file.
	Note, the sched_switch and sched_wakeup events will also trace
	tasks listed in this file.
265
266	To have the PIDs of children of tasks with their PID in this file
267	added on fork, enable the "event-fork" option. That option will also
268	cause the PIDs of tasks to be removed from this file when the task
269	exits.
270
271  set_graph_function:
272
273	Functions listed in this file will cause the function graph
274	tracer to only trace these functions and the functions that
275	they call. (See the section "dynamic ftrace" for more details).
276	Note, set_ftrace_filter and set_ftrace_notrace still affects
277	what functions are being traced.
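
	For example (assuming "kfree" is listed in
	"available_filter_functions")::

	  # echo kfree > set_graph_function
	  # echo function_graph > current_tracer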
278
279  set_graph_notrace:
280
281	Similar to set_graph_function, but will disable function graph
282	tracing when the function is hit until it exits the function.
283	This makes it possible to ignore tracing functions that are called
284	by a specific function.
285
286  available_filter_functions:
287
288	This lists the functions that ftrace has processed and can trace.
289	These are the function names that you can pass to
290	"set_ftrace_filter", "set_ftrace_notrace",
291	"set_graph_function", or "set_graph_notrace".
292	(See the section "dynamic ftrace" below for more details.)
293
294  dyn_ftrace_total_info:
295
	This file is for debugging purposes. It shows the number of
	functions that have been converted to nops and are available
	to be traced.
298
299  enabled_functions:
300
301	This file is more for debugging ftrace, but can also be useful
302	in seeing if any function has a callback attached to it.
	Not only does the trace infrastructure use the ftrace function
	tracing utility, but other subsystems might too. This file
305	displays all functions that have a callback attached to them
306	as well as the number of callbacks that have been attached.
307	Note, a callback may also call multiple functions which will
308	not be listed in this count.
309
310	If the callback registered to be traced by a function with
	the "save regs" attribute (thus even more overhead), an 'R'
312	will be displayed on the same line as the function that
313	is returning registers.
314
315	If the callback registered to be traced by a function with
316	the "ip modify" attribute (thus the regs->ip can be changed),
317	an 'I' will be displayed on the same line as the function that
318	can be overridden.
319
320	If the architecture supports it, it will also show what callback
321	is being directly called by the function. If the count is greater
322	than 1 it most likely will be ftrace_ops_list_func().
323
324	If the callback of the function jumps to a trampoline that is
	specific to the callback and not the standard trampoline,
326	its address will be printed as well as the function that the
327	trampoline calls.
328
329  function_profile_enabled:
330
331	When set it will enable all functions with either the function
332	tracer, or if configured, the function graph tracer. It will
	keep a histogram of the number of times each function was called
334	and if the function graph tracer was configured, it will also keep
335	track of the time spent in those functions. The histogram
336	content can be displayed in the files:
337
338	trace_stat/function<cpu> ( function0, function1, etc).
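
	For example, a short profiling session::

	  # echo 1 > function_profile_enabled
	  # sleep 1
	  # echo 0 > function_profile_enabled
	  # head trace_stat/function0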
339
340  trace_stat:
341
342	A directory that holds different tracing stats.
343
344  kprobe_events:
345
346	Enable dynamic trace points. See kprobetrace.txt.
347
348  kprobe_profile:
349
350	Dynamic trace points stats. See kprobetrace.txt.
351
352  max_graph_depth:
353
354	Used with the function graph tracer. This is the max depth
355	it will trace into a function. Setting this to a value of
356	one will show only the first kernel function that is called
357	from user space.
358
359  printk_formats:
360
361	This is for tools that read the raw format files. If an event in
362	the ring buffer references a string, only a pointer to the string
363	is recorded into the buffer and not the string itself. This prevents
364	tools from knowing what that string was. This file displays the string
	and address for the string, allowing tools to map the pointers to what
366	the strings were.
367
368  saved_cmdlines:
369
370	Only the pid of the task is recorded in a trace event unless
371	the event specifically saves the task comm as well. Ftrace
372	makes a cache of pid mappings to comms to try to display
373	comms for events. If a pid for a comm is not listed, then
374	"<...>" is displayed in the output.
375
376	If the option "record-cmd" is set to "0", then comms of tasks
377	will not be saved during recording. By default, it is enabled.
378
379  saved_cmdlines_size:
380
381	By default, 128 comms are saved (see "saved_cmdlines" above). To
	increase or decrease the number of comms that are cached, echo
	the number of comms to cache into this file.
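
	For example, to cache 1024 comms instead of the default 128::

	  # echo 1024 > saved_cmdlines_size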
384
385  saved_tgids:
386
387	If the option "record-tgid" is set, on each scheduling context switch
388	the Task Group ID of a task is saved in a table mapping the PID of
389	the thread to its TGID. By default, the "record-tgid" option is
390	disabled.
391
392  snapshot:
393
394	This displays the "snapshot" buffer and also lets the user
395	take a snapshot of the current running trace.
396	See the "Snapshot" section below for more details.
397
398  stack_max_size:
399
400	When the stack tracer is activated, this will display the
401	maximum stack size it has encountered.
402	See the "Stack Trace" section below.
403
404  stack_trace:
405
406	This displays the stack back trace of the largest stack
407	that was encountered when the stack tracer is activated.
408	See the "Stack Trace" section below.
409
410  stack_trace_filter:
411
412	This is similar to "set_ftrace_filter" but it limits what
413	functions the stack tracer will check.
414
415  trace_clock:
416
417	Whenever an event is recorded into the ring buffer, a
418	"timestamp" is added. This stamp comes from a specified
419	clock. By default, ftrace uses the "local" clock. This
420	clock is very fast and strictly per cpu, but on some
421	systems it may not be monotonic with respect to other
422	CPUs. In other words, the local clocks may not be in sync
423	with local clocks on other CPUs.
424
425	Usual clocks for tracing::
426
427	  # cat trace_clock
428	  [local] global counter x86-tsc
429
430	The clock with the square brackets around it is the one in effect.
431
432	local:
433		Default clock, but may not be in sync across CPUs
434
435	global:
436		This clock is in sync with all CPUs but may
437		be a bit slower than the local clock.
438
439	counter:
440		This is not a clock at all, but literally an atomic
441		counter. It counts up one by one, but is in sync
442		with all CPUs. This is useful when you need to
443		know exactly the order events occurred with respect to
444		each other on different CPUs.
445
446	uptime:
447		This uses the jiffies counter and the time stamp
448		is relative to the time since boot up.
449
450	perf:
451		This makes ftrace use the same clock that perf uses.
452		Eventually perf will be able to read ftrace buffers
453		and this will help out in interleaving the data.
454
455	x86-tsc:
456		Architectures may define their own clocks. For
457		example, x86 uses its own TSC cycle clock here.
458
459	ppc-tb:
460		This uses the powerpc timebase register value.
461		This is in sync across CPUs and can also be used
462		to correlate events across hypervisor/guest if
463		tb_offset is known.
464
465	mono:
466		This uses the fast monotonic clock (CLOCK_MONOTONIC)
467		which is monotonic and is subject to NTP rate adjustments.
468
469	mono_raw:
470		This is the raw monotonic clock (CLOCK_MONOTONIC_RAW)
471		which is monotonic but is not subject to any rate adjustments
472		and ticks at the same rate as the hardware clocksource.
473
474	boot:
475		This is the boot clock (CLOCK_BOOTTIME) and is based on the
476		fast monotonic clock, but also accounts for time spent in
477		suspend. Since the clock access is designed for use in
478		tracing in the suspend path, some side effects are possible
479		if clock is accessed after the suspend time is accounted before
480		the fast mono clock is updated. In this case, the clock update
481		appears to happen slightly sooner than it normally would have.
482		Also on 32-bit systems, it's possible that the 64-bit boot offset
483		sees a partial update. These effects are rare and post
484		processing should be able to handle them. See comments in the
485		ktime_get_boot_fast_ns() function for more information.
486
487	To set a clock, simply echo the clock name into this file::
488
489	  # echo global > trace_clock
490
491  trace_marker:
492
493	This is a very useful file for synchronizing user space
	with events happening in the kernel. Strings written into
	this file will be recorded in the ftrace buffer.
496
497	It is useful in applications to open this file at the start
498	of the application and just reference the file descriptor
499	for the file::
500
		#include <stdarg.h>
		#include <stdio.h>
		#include <unistd.h>

		/* Opened at application start, see below */
		int trace_fd = -1;

		void trace_write(const char *fmt, ...)
		{
			va_list ap;
			char buf[256];
			int n;

			if (trace_fd < 0)
				return;

			va_start(ap, fmt);
			n = vsnprintf(buf, 256, fmt, ap);
			va_end(ap);

			/* vsnprintf() returns the full length; cap it at what fits in buf */
			if (n >= 256)
				n = 255;

			write(trace_fd, buf, n);
		}
516
517	start::
518
		trace_fd = open("trace_marker", O_WRONLY);
520
521	Note: Writing into the trace_marker file can also initiate triggers
522	      that are written into /sys/kernel/tracing/events/ftrace/print/trigger
523	      See "Event triggers" in Documentation/trace/events.rst and an
524              example in Documentation/trace/histogram.rst (Section 3.)
525
526  trace_marker_raw:
527
	This is similar to trace_marker above, but is meant for binary data
529	to be written to it, where a tool can be used to parse the data
530	from trace_pipe_raw.
531
532  uprobe_events:
533
534	Add dynamic tracepoints in programs.
535	See uprobetracer.txt
536
537  uprobe_profile:
538
	Uprobe statistics. See uprobetracer.txt
540
541  instances:
542
543	This is a way to make multiple trace buffers where different
544	events can be recorded in different buffers.
545	See "Instances" section below.
546
547  events:
548
549	This is the trace event directory. It holds event tracepoints
550	(also known as static tracepoints) that have been compiled
551	into the kernel. It shows what event tracepoints exist
552	and how they are grouped by system. There are "enable"
553	files at various levels that can enable the tracepoints
554	when a "1" is written to them.
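
	For example, a single event, a whole subsystem of events, or
	all events can be enabled with the "enable" files::

	  # echo 1 > events/sched/sched_switch/enable
	  # echo 1 > events/sched/enable
	  # echo 1 > events/enable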
555
556	See events.txt for more information.
557
558  set_event:
559
	Echoing an event name into this file will enable that event.
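
	For example::

	  # echo sched:sched_switch > set_event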
561
562	See events.txt for more information.
563
564  available_events:
565
566	A list of events that can be enabled in tracing.
567
568	See events.txt for more information.
569
570  timestamp_mode:
571
572	Certain tracers may change the timestamp mode used when
573	logging trace events into the event buffer.  Events with
574	different modes can coexist within a buffer but the mode in
575	effect when an event is logged determines which timestamp mode
576	is used for that event.  The default timestamp mode is
577	'delta'.
578
579	Usual timestamp modes for tracing:
580
581	  # cat timestamp_mode
582	  [delta] absolute
583
584	  The timestamp mode with the square brackets around it is the
585	  one in effect.
586
587	  delta: Default timestamp mode - timestamp is a delta against
588	         a per-buffer timestamp.
589
590	  absolute: The timestamp is a full timestamp, not a delta
591                 against some other value.  As such it takes up more
592                 space and is less efficient.
593
594  hwlat_detector:
595
596	Directory for the Hardware Latency Detector.
597	See "Hardware Latency Detector" section below.
598
599  per_cpu:
600
601	This is a directory that contains the trace per_cpu information.
602
603  per_cpu/cpu0/buffer_size_kb:
604
605	The ftrace buffer is defined per_cpu. That is, there's a separate
606	buffer for each CPU to allow writes to be done atomically,
	and free from cache bouncing. These buffers may be different
	sizes. This file is similar to the buffer_size_kb file, but it
	only displays or sets the buffer size for the specific CPU
	(here cpu0).
611
612  per_cpu/cpu0/trace:
613
614	This is similar to the "trace" file, but it will only display
615	the data specific for the CPU. If written to, it only clears
616	the specific CPU buffer.
617
618  per_cpu/cpu0/trace_pipe
619
620	This is similar to the "trace_pipe" file, and is a consuming
621	read, but it will only display (and consume) the data specific
622	for the CPU.
623
624  per_cpu/cpu0/trace_pipe_raw
625
626	For tools that can parse the ftrace ring buffer binary format,
627	the trace_pipe_raw file can be used to extract the data
628	from the ring buffer directly. With the use of the splice()
629	system call, the buffer data can be quickly transferred to
630	a file or to the network where a server is collecting the
631	data.
632
633	Like trace_pipe, this is a consuming reader, where multiple
634	reads will always produce different data.
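
	For example, a simple way to save the raw data of one CPU to a
	file for later parsing (the file name is just illustrative)::

	  # cat per_cpu/cpu0/trace_pipe_raw > /tmp/cpu0.raw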
635
636  per_cpu/cpu0/snapshot:
637
638	This is similar to the main "snapshot" file, but will only
639	snapshot the current CPU (if supported). It only displays
640	the content of the snapshot for a given CPU, and if
641	written to, only clears this CPU buffer.
642
643  per_cpu/cpu0/snapshot_raw:
644
645	Similar to the trace_pipe_raw, but will read the binary format
646	from the snapshot buffer for the given CPU.
647
648  per_cpu/cpu0/stats:
649
650	This displays certain stats about the ring buffer:
651
652	entries:
653		The number of events that are still in the buffer.
654
655	overrun:
656		The number of lost events due to overwriting when
657		the buffer was full.
658
659	commit overrun:
660		Should always be zero.
661		This gets set if so many events happened within a nested
662		event (ring buffer is re-entrant), that it fills the
663		buffer and starts dropping events.
664
665	bytes:
666		Bytes actually read (not overwritten).
667
668	oldest event ts:
669		The oldest timestamp in the buffer
670
671	now ts:
672		The current timestamp
673
674	dropped events:
675		Events lost due to overwrite option being off.
676
677	read events:
678		The number of events read.
679
680The Tracers
681-----------
682
683Here is the list of current tracers that may be configured.
684
685  "function"
686
687	Function call tracer to trace all kernel functions.
688
689  "function_graph"
690
691	Similar to the function tracer except that the
692	function tracer probes the functions on their entry
693	whereas the function graph tracer traces on both entry
694	and exit of the functions. It then provides the ability
695	to draw a graph of function calls similar to C code
696	source.
697
698  "blk"
699
700	The block tracer. The tracer used by the blktrace user
701	application.
702
703  "hwlat"
704
705	The Hardware Latency tracer is used to detect if the hardware
706	produces any latency. See "Hardware Latency Detector" section
707	below.
708
709  "irqsoff"
710
711	Traces the areas that disable interrupts and saves
712	the trace with the longest max latency.
713	See tracing_max_latency. When a new max is recorded,
714	it replaces the old trace. It is best to view this
715	trace with the latency-format option enabled, which
716	happens automatically when the tracer is selected.
717
718  "preemptoff"
719
720	Similar to irqsoff but traces and records the amount of
721	time for which preemption is disabled.
722
723  "preemptirqsoff"
724
725	Similar to irqsoff and preemptoff, but traces and
726	records the largest time for which irqs and/or preemption
727	is disabled.
728
729  "wakeup"
730
731	Traces and records the max latency that it takes for
732	the highest priority task to get scheduled after
733	it has been woken up.
734        Traces all tasks as an average developer would expect.
735
736  "wakeup_rt"
737
738        Traces and records the max latency that it takes for just
739        RT tasks (as the current "wakeup" does). This is useful
740        for those interested in wake up timings of RT tasks.
741
742  "wakeup_dl"
743
744	Traces and records the max latency that it takes for
745	a SCHED_DEADLINE task to be woken (as the "wakeup" and
746	"wakeup_rt" does).
747
748  "mmiotrace"
749
	A special tracer that is used to trace binary modules.
	It will trace all the calls that a module makes to the
	hardware, everything it writes to and reads from the I/O
	as well.
754
755  "branch"
756
	This tracer can be configured when tracing likely/unlikely
	calls within the kernel. It will trace when a likely or
	unlikely branch is hit and whether its prediction
	was correct.
761
762  "nop"
763
764	This is the "trace nothing" tracer. To remove all
765	tracers from tracing simply echo "nop" into
766	current_tracer.
767
768
769Examples of using the tracer
770----------------------------
771
772Here are typical examples of using the tracers when controlling
773them only with the tracefs interface (without using any
774user-land utilities).
775
776Output format:
777--------------
778
779Here is an example of the output format of the file "trace"::
780
781  # tracer: function
782  #
783  # entries-in-buffer/entries-written: 140080/250280   #P:4
784  #
785  #                              _-----=> irqs-off
786  #                             / _----=> need-resched
787  #                            | / _---=> hardirq/softirq
788  #                            || / _--=> preempt-depth
789  #                            ||| /     delay
790  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
791  #              | |       |   ||||       |         |
792              bash-1977  [000] .... 17284.993652: sys_close <-system_call_fastpath
793              bash-1977  [000] .... 17284.993653: __close_fd <-sys_close
794              bash-1977  [000] .... 17284.993653: _raw_spin_lock <-__close_fd
795              sshd-1974  [003] .... 17284.993653: __srcu_read_unlock <-fsnotify
796              bash-1977  [000] .... 17284.993654: add_preempt_count <-_raw_spin_lock
797              bash-1977  [000] ...1 17284.993655: _raw_spin_unlock <-__close_fd
798              bash-1977  [000] ...1 17284.993656: sub_preempt_count <-_raw_spin_unlock
799              bash-1977  [000] .... 17284.993657: filp_close <-__close_fd
800              bash-1977  [000] .... 17284.993657: dnotify_flush <-filp_close
801              sshd-1974  [003] .... 17284.993658: sys_select <-system_call_fastpath
802              ....
803
804A header is printed with the tracer name that is represented by
805the trace. In this case the tracer is "function". Then it shows the
806number of events in the buffer as well as the total number of entries
807that were written. The difference is the number of entries that were
808lost due to the buffer filling up (250280 - 140080 = 110200 events
809lost).
810
811The header explains the content of the events. Task name "bash", the task
812PID "1977", the CPU that it was running on "000", the latency format
813(explained below), the timestamp in <secs>.<usecs> format, the
814function name that was traced "sys_close" and the parent function that
815called this function "system_call_fastpath". The timestamp is the time
816at which the function was entered.
817
818Latency trace format
819--------------------
820
821When the latency-format option is enabled or when one of the latency
822tracers is set, the trace file gives somewhat more information to see
823why a latency happened. Here is a typical trace::
824
825  # tracer: irqsoff
826  #
827  # irqsoff latency trace v1.1.5 on 3.8.0-test+
828  # --------------------------------------------------------------------
829  # latency: 259 us, #4/4, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
830  #    -----------------
831  #    | task: ps-6143 (uid:0 nice:0 policy:0 rt_prio:0)
832  #    -----------------
833  #  => started at: __lock_task_sighand
834  #  => ended at:   _raw_spin_unlock_irqrestore
835  #
836  #
837  #                  _------=> CPU#
838  #                 / _-----=> irqs-off
839  #                | / _----=> need-resched
840  #                || / _---=> hardirq/softirq
841  #                ||| / _--=> preempt-depth
842  #                |||| /     delay
843  #  cmd     pid   ||||| time  |   caller
844  #     \   /      |||||  \    |   /
845        ps-6143    2d...    0us!: trace_hardirqs_off <-__lock_task_sighand
846        ps-6143    2d..1  259us+: trace_hardirqs_on <-_raw_spin_unlock_irqrestore
847        ps-6143    2d..1  263us+: time_hardirqs_on <-_raw_spin_unlock_irqrestore
848        ps-6143    2d..1  306us : <stack trace>
849   => trace_hardirqs_on_caller
850   => trace_hardirqs_on
851   => _raw_spin_unlock_irqrestore
852   => do_task_stat
853   => proc_tgid_stat
854   => proc_single_show
855   => seq_read
856   => vfs_read
857   => sys_read
858   => system_call_fastpath
859
860
861This shows that the current tracer is "irqsoff" tracing the time
862for which interrupts were disabled. It gives the trace version (which
never changes) and the version of the kernel on which this was executed
864(3.8). Then it displays the max latency in microseconds (259 us). The number
865of trace entries displayed and the total number (both are four: #4/4).
866VP, KP, SP, and HP are always zero and are reserved for later use.
867#P is the number of online CPUs (#P:4).
868
869The task is the process that was running when the latency
870occurred. (ps pid: 6143).
871
872The start and stop (the functions in which the interrupts were
873disabled and enabled respectively) that caused the latencies:
874
875  - __lock_task_sighand is where the interrupts were disabled.
876  - _raw_spin_unlock_irqrestore is where they were enabled again.
877
878The next lines after the header are the trace itself. The header
879explains which is which.
880
881  cmd: The name of the process in the trace.
882
883  pid: The PID of that process.
884
885  CPU#: The CPU which the process was running on.
886
887  irqs-off: 'd' interrupts are disabled. '.' otherwise.
888	.. caution:: If the architecture does not support a way to
889		read the irq flags variable, an 'X' will always
890		be printed here.
891
892  need-resched:
	- 'N' both TIF_NEED_RESCHED and PREEMPT_NEED_RESCHED are set,
894	- 'n' only TIF_NEED_RESCHED is set,
895	- 'p' only PREEMPT_NEED_RESCHED is set,
896	- '.' otherwise.
897
898  hardirq/softirq:
899	- 'Z' - NMI occurred inside a hardirq
900	- 'z' - NMI is running
901	- 'H' - hard irq occurred inside a softirq.
902	- 'h' - hard irq is running
903	- 's' - soft irq is running
904	- '.' - normal context.
905
906  preempt-depth: The level of preempt_disabled
907
908The above is mostly meaningful for kernel developers.
909
910  time:
911	When the latency-format option is enabled, the trace file
912	output includes a timestamp relative to the start of the
913	trace. This differs from the output when latency-format
914	is disabled, which includes an absolute timestamp.
915
916  delay:
	This is just to help catch your eye a bit better. It
	needs to be fixed to be only relative to the same CPU.
919	The marks are determined by the difference between this
920	current trace and the next trace.
921
922	  - '$' - greater than 1 second
923	  - '@' - greater than 100 millisecond
924	  - '*' - greater than 10 millisecond
925	  - '#' - greater than 1000 microsecond
926	  - '!' - greater than 100 microsecond
927	  - '+' - greater than 10 microsecond
928	  - ' ' - less than or equal to 10 microsecond.
929
930  The rest is the same as the 'trace' file.
931
932  Note, the latency tracers will usually end with a back trace
933  to easily find where the latency occurred.
934
935trace_options
936-------------
937
938The trace_options file (or the options directory) is used to control
939what gets printed in the trace output, or manipulate the tracers.
940To see what is available, simply cat the file::
941
942  cat trace_options
943	print-parent
944	nosym-offset
945	nosym-addr
946	noverbose
947	noraw
948	nohex
949	nobin
950	noblock
951	trace_printk
952	annotate
953	nouserstacktrace
954	nosym-userobj
955	noprintk-msg-only
956	context-info
957	nolatency-format
958	record-cmd
959	norecord-tgid
960	overwrite
961	nodisable_on_free
962	irq-info
963	markers
964	noevent-fork
965	function-trace
966	nofunction-fork
967	nodisplay-graph
968	nostacktrace
969	nobranch
970
971To disable one of the options, echo in the option prepended with
972"no"::
973
974  echo noprint-parent > trace_options
975
976To enable an option, leave off the "no"::
977
978  echo sym-offset > trace_options
979
980Here are the available options:
981
982  print-parent
983	On function traces, display the calling (parent)
984	function as well as the function being traced.
985	::
986
987	  print-parent:
988	   bash-4000  [01]  1477.606694: simple_strtoul <-kstrtoul
989
990	  noprint-parent:
991	   bash-4000  [01]  1477.606694: simple_strtoul
992
993
994  sym-offset
995	Display not only the function name, but also the
996	offset in the function. For example, instead of
997	seeing just "ktime_get", you will see
998	"ktime_get+0xb/0x20".
999	::
1000
1001	  sym-offset:
1002	   bash-4000  [01]  1477.606694: simple_strtoul+0x6/0xa0
1003
1004  sym-addr
1005	This will also display the function address as well
1006	as the function name.
1007	::
1008
1009	  sym-addr:
1010	   bash-4000  [01]  1477.606694: simple_strtoul <c0339346>
1011
1012  verbose
1013	This deals with the trace file when the
1014        latency-format option is enabled.
1015	::
1016
1017	    bash  4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \
1018	    (+0.000ms): simple_strtoul (kstrtoul)
1019
1020  raw
1021	This will display raw numbers. This option is best for
1022	use with user applications that can translate the raw
1023	numbers better than having it done in the kernel.
1024
1025  hex
1026	Similar to raw, but the numbers will be in a hexadecimal format.
1027
1028  bin
1029	This will print out the formats in raw binary.
1030
1031  block
1032	When set, reading trace_pipe will not block when polled.
1033
1034  trace_printk
1035	Can disable trace_printk() from writing into the buffer.
1036
1037  annotate
1038	It is sometimes confusing when the CPU buffers are full
1039	and one CPU buffer had a lot of events recently, thus
	a shorter time frame, where another CPU may have only had
1041	a few events, which lets it have older events. When
1042	the trace is reported, it shows the oldest events first,
1043	and it may look like only one CPU ran (the one with the
1044	oldest events). When the annotate option is set, it will
1045	display when a new CPU buffer started::
1046
1047			  <idle>-0     [001] dNs4 21169.031481: wake_up_idle_cpu <-add_timer_on
1048			  <idle>-0     [001] dNs4 21169.031482: _raw_spin_unlock_irqrestore <-add_timer_on
1049			  <idle>-0     [001] .Ns4 21169.031484: sub_preempt_count <-_raw_spin_unlock_irqrestore
1050		##### CPU 2 buffer started ####
1051			  <idle>-0     [002] .N.1 21169.031484: rcu_idle_exit <-cpu_idle
1052			  <idle>-0     [001] .Ns3 21169.031484: _raw_spin_unlock <-clocksource_watchdog
1053			  <idle>-0     [001] .Ns3 21169.031485: sub_preempt_count <-_raw_spin_unlock
1054
1055  userstacktrace
1056	This option changes the trace. It records a
1057	stacktrace of the current user space thread after
1058	each trace event.
1059
1060  sym-userobj
	When user stacktraces are enabled, look up which
	object the address belongs to, and print a
	relative address. This is especially useful when
	ASLR is on, otherwise you don't get a chance to
	resolve the address to object/file/line after
	the app is no longer running.

	The lookup is performed when you read
	trace or trace_pipe. Example::
1070
1071		  a.out-1623  [000] 40874.465068: /root/a.out[+0x480] <-/root/a.out[+0
1072		  x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
1073
1074
1075  printk-msg-only
1076	When set, trace_printk()s will only show the format
1077	and not their parameters (if trace_bprintk() or
1078	trace_bputs() was used to save the trace_printk()).
1079
1080  context-info
	When enabled, show the comm, PID, timestamp, CPU, and other
	context information for each event. When disabled, only the
	event data itself is shown.
1083
1084  latency-format
1085	This option changes the trace output. When it is enabled,
1086	the trace displays additional information about the
1087	latency, as described in "Latency trace format".
1088
1089  record-cmd
1090	When any event or tracer is enabled, a hook is enabled
1091	in the sched_switch trace point to fill comm cache
1092	with mapped pids and comms. But this may cause some
1093	overhead, and if you only care about pids, and not the
1094	name of the task, disabling this option can lower the
1095	impact of tracing. See "saved_cmdlines".
1096
1097  record-tgid
1098	When any event or tracer is enabled, a hook is enabled
	in the sched_switch trace point to fill the cache that
	maps Thread Group IDs (TGIDs) to PIDs. See
1101	"saved_tgids".
1102
1103  overwrite
1104	This controls what happens when the trace buffer is
1105	full. If "1" (default), the oldest events are
1106	discarded and overwritten. If "0", then the newest
1107	events are discarded.
1108	(see per_cpu/cpu0/stats for overrun and dropped)
1109
1110  disable_on_free
1111	When the free_buffer is closed, tracing will
1112	stop (tracing_on set to 0).
1113
1114  irq-info
1115	Shows the interrupt, preempt count, need resched data.
1116	When disabled, the trace looks like::
1117
1118		# tracer: function
1119		#
1120		# entries-in-buffer/entries-written: 144405/9452052   #P:4
1121		#
1122		#           TASK-PID   CPU#      TIMESTAMP  FUNCTION
1123		#              | |       |          |         |
1124			  <idle>-0     [002]  23636.756054: ttwu_do_activate.constprop.89 <-try_to_wake_up
1125			  <idle>-0     [002]  23636.756054: activate_task <-ttwu_do_activate.constprop.89
1126			  <idle>-0     [002]  23636.756055: enqueue_task <-activate_task
1127
1128
1129  markers
1130	When set, the trace_marker is writable (only by root).
1131	When disabled, the trace_marker will error with EINVAL
1132	on write.
1133
1134  event-fork
1135	When set, tasks with PIDs listed in set_event_pid will have
1136	the PIDs of their children added to set_event_pid when those
1137	tasks fork. Also, when tasks with PIDs in set_event_pid exit,
1138	their PIDs will be removed from the file.
1139
1140  function-trace
1141	The latency tracers will enable function tracing
	if this option is enabled (it is by default). When
1143	it is disabled, the latency tracers do not trace
1144	functions. This keeps the overhead of the tracer down
1145	when performing latency tests.
1146
1147  function-fork
1148	When set, tasks with PIDs listed in set_ftrace_pid will
1149	have the PIDs of their children added to set_ftrace_pid
1150	when those tasks fork. Also, when tasks with PIDs in
1151	set_ftrace_pid exit, their PIDs will be removed from the
1152	file.
1153
1154  display-graph
1155	When set, the latency tracers (irqsoff, wakeup, etc) will
1156	use function graph tracing instead of function tracing.
1157
1158  stacktrace
1159	When set, a stack trace is recorded after any trace event
1160	is recorded.
1161
1162  branch
	Enable branch tracing with the tracer. This enables the branch
	tracer along with the currently set tracer. Enabling this
1165	with the "nop" tracer is the same as just enabling the
1166	"branch" tracer.
1167
1168.. tip:: Some tracers have their own options. They only appear in this
1169       file when the tracer is active. They always appear in the
1170       options directory.
1171
1172
1173Here are the per tracer options:
1174
1175Options for function tracer:
1176
1177  func_stack_trace
1178	When set, a stack trace is recorded after every
1179	function that is recorded. NOTE! Limit the functions
1180	that are recorded before enabling this, with
1181	"set_ftrace_filter" otherwise the system performance
	"set_ftrace_filter", otherwise the system performance
1183	this option before clearing the function filter.
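
	For example, a typical session limits the functions first (the
	function name here is only illustrative)::

	  # echo schedule > set_ftrace_filter
	  # echo 1 > options/func_stack_trace
	  # echo function > current_tracer
	  # cat trace
	  # echo 0 > options/func_stack_trace
	  # echo > set_ftrace_filter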
1184
1185Options for function_graph tracer:
1186
1187 Since the function_graph tracer has a slightly different output
1188 it has its own options to control what is displayed.
1189
1190  funcgraph-overrun
1191	When set, the "overrun" of the graph stack is
1192	displayed after each function traced. The
	overrun is when the stack depth of the calls
1194	is greater than what is reserved for each task.
1195	Each task has a fixed array of functions to
1196	trace in the call graph. If the depth of the
1197	calls exceeds that, the function is not traced.
1198	The overrun is the number of functions missed
1199	due to exceeding this array.
1200
1201  funcgraph-cpu
1202	When set, the CPU number of the CPU where the trace
1203	occurred is displayed.
1204
1205  funcgraph-overhead
1206	When set, if the function takes longer than
	a certain amount, then a delay marker is
1208	displayed. See "delay" above, under the
1209	header description.
1210
1211  funcgraph-proc
1212	Unlike other tracers, the process' command line
1213	is not displayed by default, but instead only
1214	when a task is traced in and out during a context
	switch. Enabling this option displays the command
	of each process at every line.
1217
1218  funcgraph-duration
	At the end of each function (the return),
	the duration of time spent in the function
	is displayed in microseconds.
1222
1223  funcgraph-abstime
1224	When set, the timestamp is displayed at each line.
1225
1226  funcgraph-irqs
1227	When disabled, functions that happen inside an
1228	interrupt will not be traced.
1229
1230  funcgraph-tail
1231	When set, the return event will include the function
1232	that it represents. By default this is off, and
1233	only a closing curly bracket "}" is displayed for
1234	the return of a function.
1235
1236  sleep-time
	When running the function graph tracer, include
	the time a task is scheduled out in its function.
	When enabled, the time the task has been scheduled
	out is accounted as part of the function call.
1241
1242  graph-time
1243	When running function profiler with function graph tracer,
1244	to include the time to call nested functions. When this is
1245	not set, the time reported for the function will only
1246	include the time the function itself executed for, not the
1247	time for functions that it called.
1248
1249Options for blk tracer:
1250
1251  blk_classic
1252	Shows a more minimalistic output.
1253
1254
1255irqsoff
1256-------
1257
1258When interrupts are disabled, the CPU can not react to any other
1259external event (besides NMIs and SMIs). This prevents the timer
1260interrupt from triggering or the mouse interrupt from letting
the kernel know of a new mouse event. The result is added
latency in reaction time.
1263
1264The irqsoff tracer tracks the time for which interrupts are
1265disabled. When a new maximum latency is hit, the tracer saves
1266the trace leading up to that latency point so that every time a
1267new maximum is reached, the old saved trace is discarded and the
1268new trace is saved.
1269
1270To reset the maximum, echo 0 into tracing_max_latency. Here is
1271an example::
1272
1273  # echo 0 > options/function-trace
1274  # echo irqsoff > current_tracer
1275  # echo 1 > tracing_on
1276  # echo 0 > tracing_max_latency
1277  # ls -ltr
1278  [...]
1279  # echo 0 > tracing_on
1280  # cat trace
1281  # tracer: irqsoff
1282  #
1283  # irqsoff latency trace v1.1.5 on 3.8.0-test+
1284  # --------------------------------------------------------------------
1285  # latency: 16 us, #4/4, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1286  #    -----------------
1287  #    | task: swapper/0-0 (uid:0 nice:0 policy:0 rt_prio:0)
1288  #    -----------------
1289  #  => started at: run_timer_softirq
1290  #  => ended at:   run_timer_softirq
1291  #
1292  #
1293  #                  _------=> CPU#
1294  #                 / _-----=> irqs-off
1295  #                | / _----=> need-resched
1296  #                || / _---=> hardirq/softirq
1297  #                ||| / _--=> preempt-depth
1298  #                |||| /     delay
1299  #  cmd     pid   ||||| time  |   caller
1300  #     \   /      |||||  \    |   /
1301    <idle>-0       0d.s2    0us+: _raw_spin_lock_irq <-run_timer_softirq
1302    <idle>-0       0dNs3   17us : _raw_spin_unlock_irq <-run_timer_softirq
1303    <idle>-0       0dNs3   17us+: trace_hardirqs_on <-run_timer_softirq
1304    <idle>-0       0dNs3   25us : <stack trace>
1305   => _raw_spin_unlock_irq
1306   => run_timer_softirq
1307   => __do_softirq
1308   => call_softirq
1309   => do_softirq
1310   => irq_exit
1311   => smp_apic_timer_interrupt
1312   => apic_timer_interrupt
1313   => rcu_idle_exit
1314   => cpu_idle
1315   => rest_init
1316   => start_kernel
1317   => x86_64_start_reservations
1318   => x86_64_start_kernel
1319
Here we see that we had a latency of 16 microseconds (which is
1321very good). The _raw_spin_lock_irq in run_timer_softirq disabled
1322interrupts. The difference between the 16 and the displayed
1323timestamp 25us occurred because the clock was incremented
1324between the time of recording the max latency and the time of
1325recording the function that had that latency.
1326
1327Note the above example had function-trace not set. If we set
1328function-trace, we get a much larger output::
1329
1330 with echo 1 > options/function-trace
1331
1332  # tracer: irqsoff
1333  #
1334  # irqsoff latency trace v1.1.5 on 3.8.0-test+
1335  # --------------------------------------------------------------------
1336  # latency: 71 us, #168/168, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1337  #    -----------------
1338  #    | task: bash-2042 (uid:0 nice:0 policy:0 rt_prio:0)
1339  #    -----------------
1340  #  => started at: ata_scsi_queuecmd
1341  #  => ended at:   ata_scsi_queuecmd
1342  #
1343  #
1344  #                  _------=> CPU#
1345  #                 / _-----=> irqs-off
1346  #                | / _----=> need-resched
1347  #                || / _---=> hardirq/softirq
1348  #                ||| / _--=> preempt-depth
1349  #                |||| /     delay
1350  #  cmd     pid   ||||| time  |   caller
1351  #     \   /      |||||  \    |   /
1352      bash-2042    3d...    0us : _raw_spin_lock_irqsave <-ata_scsi_queuecmd
1353      bash-2042    3d...    0us : add_preempt_count <-_raw_spin_lock_irqsave
1354      bash-2042    3d..1    1us : ata_scsi_find_dev <-ata_scsi_queuecmd
1355      bash-2042    3d..1    1us : __ata_scsi_find_dev <-ata_scsi_find_dev
1356      bash-2042    3d..1    2us : ata_find_dev.part.14 <-__ata_scsi_find_dev
1357      bash-2042    3d..1    2us : ata_qc_new_init <-__ata_scsi_queuecmd
1358      bash-2042    3d..1    3us : ata_sg_init <-__ata_scsi_queuecmd
1359      bash-2042    3d..1    4us : ata_scsi_rw_xlat <-__ata_scsi_queuecmd
1360      bash-2042    3d..1    4us : ata_build_rw_tf <-ata_scsi_rw_xlat
1361  [...]
1362      bash-2042    3d..1   67us : delay_tsc <-__delay
1363      bash-2042    3d..1   67us : add_preempt_count <-delay_tsc
1364      bash-2042    3d..2   67us : sub_preempt_count <-delay_tsc
1365      bash-2042    3d..1   67us : add_preempt_count <-delay_tsc
1366      bash-2042    3d..2   68us : sub_preempt_count <-delay_tsc
1367      bash-2042    3d..1   68us+: ata_bmdma_start <-ata_bmdma_qc_issue
1368      bash-2042    3d..1   71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1369      bash-2042    3d..1   71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1370      bash-2042    3d..1   72us+: trace_hardirqs_on <-ata_scsi_queuecmd
1371      bash-2042    3d..1  120us : <stack trace>
1372   => _raw_spin_unlock_irqrestore
1373   => ata_scsi_queuecmd
1374   => scsi_dispatch_cmd
1375   => scsi_request_fn
1376   => __blk_run_queue_uncond
1377   => __blk_run_queue
1378   => blk_queue_bio
1379   => generic_make_request
1380   => submit_bio
1381   => submit_bh
1382   => __ext3_get_inode_loc
1383   => ext3_iget
1384   => ext3_lookup
1385   => lookup_real
1386   => __lookup_hash
1387   => walk_component
1388   => lookup_last
1389   => path_lookupat
1390   => filename_lookup
1391   => user_path_at_empty
1392   => user_path_at
1393   => vfs_fstatat
1394   => vfs_stat
1395   => sys_newstat
1396   => system_call_fastpath
1397
1398
1399Here we traced a 71 microsecond latency. But we also see all the
1400functions that were called during that time. Note that by
1401enabling function tracing, we incur an added overhead. This
1402overhead may extend the latency times. But nevertheless, this
1403trace has provided some very helpful debugging information.
1404
1405If we prefer function graph output instead of function, we can set
1406display-graph option::
1407
1408 with echo 1 > options/display-graph
1409
1410  # tracer: irqsoff
1411  #
1412  # irqsoff latency trace v1.1.5 on 4.20.0-rc6+
1413  # --------------------------------------------------------------------
1414  # latency: 3751 us, #274/274, CPU#0 | (M:desktop VP:0, KP:0, SP:0 HP:0 #P:4)
1415  #    -----------------
1416  #    | task: bash-1507 (uid:0 nice:0 policy:0 rt_prio:0)
1417  #    -----------------
1418  #  => started at: free_debug_processing
1419  #  => ended at:   return_to_handler
1420  #
1421  #
1422  #                                       _-----=> irqs-off
1423  #                                      / _----=> need-resched
1424  #                                     | / _---=> hardirq/softirq
1425  #                                     || / _--=> preempt-depth
1426  #                                     ||| /
1427  #   REL TIME      CPU  TASK/PID       ||||     DURATION                  FUNCTION CALLS
1428  #      |          |     |    |        ||||      |   |                     |   |   |   |
1429          0 us |   0)   bash-1507    |  d... |   0.000 us    |  _raw_spin_lock_irqsave();
1430          0 us |   0)   bash-1507    |  d..1 |   0.378 us    |    do_raw_spin_trylock();
1431          1 us |   0)   bash-1507    |  d..2 |               |    set_track() {
1432          2 us |   0)   bash-1507    |  d..2 |               |      save_stack_trace() {
1433          2 us |   0)   bash-1507    |  d..2 |               |        __save_stack_trace() {
1434          3 us |   0)   bash-1507    |  d..2 |               |          __unwind_start() {
1435          3 us |   0)   bash-1507    |  d..2 |               |            get_stack_info() {
1436          3 us |   0)   bash-1507    |  d..2 |   0.351 us    |              in_task_stack();
1437          4 us |   0)   bash-1507    |  d..2 |   1.107 us    |            }
1438  [...]
1439       3750 us |   0)   bash-1507    |  d..1 |   0.516 us    |      do_raw_spin_unlock();
1440       3750 us |   0)   bash-1507    |  d..1 |   0.000 us    |  _raw_spin_unlock_irqrestore();
1441       3764 us |   0)   bash-1507    |  d..1 |   0.000 us    |  tracer_hardirqs_on();
1442      bash-1507    0d..1 3792us : <stack trace>
1443   => free_debug_processing
1444   => __slab_free
1445   => kmem_cache_free
1446   => vm_area_free
1447   => remove_vma
1448   => exit_mmap
1449   => mmput
1450   => flush_old_exec
1451   => load_elf_binary
1452   => search_binary_handler
1453   => __do_execve_file.isra.32
1454   => __x64_sys_execve
1455   => do_syscall_64
1456   => entry_SYSCALL_64_after_hwframe
1457
1458preemptoff
1459----------
1460
1461When preemption is disabled, we may be able to receive
1462interrupts but the task cannot be preempted and a higher
1463priority task must wait for preemption to be enabled again
1464before it can preempt a lower priority task.
1465
1466The preemptoff tracer traces the places that disable preemption.
1467Like the irqsoff tracer, it records the maximum latency for
1468which preemption was disabled. The control of preemptoff tracer
1469is much like the irqsoff tracer.
1470::
1471
1472  # echo 0 > options/function-trace
1473  # echo preemptoff > current_tracer
1474  # echo 1 > tracing_on
1475  # echo 0 > tracing_max_latency
1476  # ls -ltr
1477  [...]
1478  # echo 0 > tracing_on
1479  # cat trace
1480  # tracer: preemptoff
1481  #
1482  # preemptoff latency trace v1.1.5 on 3.8.0-test+
1483  # --------------------------------------------------------------------
1484  # latency: 46 us, #4/4, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1485  #    -----------------
1486  #    | task: sshd-1991 (uid:0 nice:0 policy:0 rt_prio:0)
1487  #    -----------------
1488  #  => started at: do_IRQ
1489  #  => ended at:   do_IRQ
1490  #
1491  #
1492  #                  _------=> CPU#
1493  #                 / _-----=> irqs-off
1494  #                | / _----=> need-resched
1495  #                || / _---=> hardirq/softirq
1496  #                ||| / _--=> preempt-depth
1497  #                |||| /     delay
1498  #  cmd     pid   ||||| time  |   caller
1499  #     \   /      |||||  \    |   /
1500      sshd-1991    1d.h.    0us+: irq_enter <-do_IRQ
1501      sshd-1991    1d..1   46us : irq_exit <-do_IRQ
1502      sshd-1991    1d..1   47us+: trace_preempt_on <-do_IRQ
1503      sshd-1991    1d..1   52us : <stack trace>
1504   => sub_preempt_count
1505   => irq_exit
1506   => do_IRQ
1507   => ret_from_intr
1508
1509
1510This has some more changes. Preemption was disabled when an
1511interrupt came in (notice the 'h'), and was enabled on exit.
1512But we also see that interrupts have been disabled when entering
1513the preempt off section and leaving it (the 'd'). We do not know if
1514interrupts were enabled in the mean time or shortly after this
1515was over.
1516::
1517
1518  # tracer: preemptoff
1519  #
1520  # preemptoff latency trace v1.1.5 on 3.8.0-test+
1521  # --------------------------------------------------------------------
1522  # latency: 83 us, #241/241, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1523  #    -----------------
1524  #    | task: bash-1994 (uid:0 nice:0 policy:0 rt_prio:0)
1525  #    -----------------
1526  #  => started at: wake_up_new_task
1527  #  => ended at:   task_rq_unlock
1528  #
1529  #
1530  #                  _------=> CPU#
1531  #                 / _-----=> irqs-off
1532  #                | / _----=> need-resched
1533  #                || / _---=> hardirq/softirq
1534  #                ||| / _--=> preempt-depth
1535  #                |||| /     delay
1536  #  cmd     pid   ||||| time  |   caller
1537  #     \   /      |||||  \    |   /
1538      bash-1994    1d..1    0us : _raw_spin_lock_irqsave <-wake_up_new_task
1539      bash-1994    1d..1    0us : select_task_rq_fair <-select_task_rq
1540      bash-1994    1d..1    1us : __rcu_read_lock <-select_task_rq_fair
1541      bash-1994    1d..1    1us : source_load <-select_task_rq_fair
1542      bash-1994    1d..1    1us : source_load <-select_task_rq_fair
1543  [...]
1544      bash-1994    1d..1   12us : irq_enter <-smp_apic_timer_interrupt
1545      bash-1994    1d..1   12us : rcu_irq_enter <-irq_enter
1546      bash-1994    1d..1   13us : add_preempt_count <-irq_enter
1547      bash-1994    1d.h1   13us : exit_idle <-smp_apic_timer_interrupt
1548      bash-1994    1d.h1   13us : hrtimer_interrupt <-smp_apic_timer_interrupt
1549      bash-1994    1d.h1   13us : _raw_spin_lock <-hrtimer_interrupt
1550      bash-1994    1d.h1   14us : add_preempt_count <-_raw_spin_lock
1551      bash-1994    1d.h2   14us : ktime_get_update_offsets <-hrtimer_interrupt
1552  [...]
1553      bash-1994    1d.h1   35us : lapic_next_event <-clockevents_program_event
1554      bash-1994    1d.h1   35us : irq_exit <-smp_apic_timer_interrupt
1555      bash-1994    1d.h1   36us : sub_preempt_count <-irq_exit
1556      bash-1994    1d..2   36us : do_softirq <-irq_exit
1557      bash-1994    1d..2   36us : __do_softirq <-call_softirq
1558      bash-1994    1d..2   36us : __local_bh_disable <-__do_softirq
1559      bash-1994    1d.s2   37us : add_preempt_count <-_raw_spin_lock_irq
1560      bash-1994    1d.s3   38us : _raw_spin_unlock <-run_timer_softirq
1561      bash-1994    1d.s3   39us : sub_preempt_count <-_raw_spin_unlock
1562      bash-1994    1d.s2   39us : call_timer_fn <-run_timer_softirq
1563  [...]
1564      bash-1994    1dNs2   81us : cpu_needs_another_gp <-rcu_process_callbacks
1565      bash-1994    1dNs2   82us : __local_bh_enable <-__do_softirq
1566      bash-1994    1dNs2   82us : sub_preempt_count <-__local_bh_enable
1567      bash-1994    1dN.2   82us : idle_cpu <-irq_exit
1568      bash-1994    1dN.2   83us : rcu_irq_exit <-irq_exit
1569      bash-1994    1dN.2   83us : sub_preempt_count <-irq_exit
1570      bash-1994    1.N.1   84us : _raw_spin_unlock_irqrestore <-task_rq_unlock
1571      bash-1994    1.N.1   84us+: trace_preempt_on <-task_rq_unlock
1572      bash-1994    1.N.1  104us : <stack trace>
1573   => sub_preempt_count
1574   => _raw_spin_unlock_irqrestore
1575   => task_rq_unlock
1576   => wake_up_new_task
1577   => do_fork
1578   => sys_clone
1579   => stub_clone
1580
1581
The above is an example of the preemptoff trace with
function-trace set. Here we see that interrupts were not disabled
the entire time. The irq_enter code lets us know that we entered
an interrupt 'h'. Before that, the flags on the traced functions
still show that we are not in an interrupt, but the functions
themselves (irq_enter and the functions it calls) tell us that an
interrupt is in fact being entered.
1588
1589preemptirqsoff
1590--------------
1591
Knowing the locations that have interrupts disabled or
preemption disabled for the longest times is helpful. But
sometimes we would like to track the time during which either
preemption or interrupts (or both) are disabled.
1596
1597Consider the following code::
1598
1599    local_irq_disable();
1600    call_function_with_irqs_off();
1601    preempt_disable();
1602    call_function_with_irqs_and_preemption_off();
1603    local_irq_enable();
1604    call_function_with_preemption_off();
1605    preempt_enable();
1606
1607The irqsoff tracer will record the total length of
1608call_function_with_irqs_off() and
1609call_function_with_irqs_and_preemption_off().
1610
1611The preemptoff tracer will record the total length of
1612call_function_with_irqs_and_preemption_off() and
1613call_function_with_preemption_off().
1614
But neither will trace the total time during which either
interrupts or preemption (or both) are disabled. This total time
is the time that we can not schedule. To record this time, use
the preemptirqsoff tracer.
1619
1620Again, using this trace is much like the irqsoff and preemptoff
1621tracers.
1622::
1623
1624  # echo 0 > options/function-trace
1625  # echo preemptirqsoff > current_tracer
1626  # echo 1 > tracing_on
1627  # echo 0 > tracing_max_latency
1628  # ls -ltr
1629  [...]
1630  # echo 0 > tracing_on
1631  # cat trace
1632  # tracer: preemptirqsoff
1633  #
1634  # preemptirqsoff latency trace v1.1.5 on 3.8.0-test+
1635  # --------------------------------------------------------------------
1636  # latency: 100 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1637  #    -----------------
1638  #    | task: ls-2230 (uid:0 nice:0 policy:0 rt_prio:0)
1639  #    -----------------
1640  #  => started at: ata_scsi_queuecmd
1641  #  => ended at:   ata_scsi_queuecmd
1642  #
1643  #
1644  #                  _------=> CPU#
1645  #                 / _-----=> irqs-off
1646  #                | / _----=> need-resched
1647  #                || / _---=> hardirq/softirq
1648  #                ||| / _--=> preempt-depth
1649  #                |||| /     delay
1650  #  cmd     pid   ||||| time  |   caller
1651  #     \   /      |||||  \    |   /
1652        ls-2230    3d...    0us+: _raw_spin_lock_irqsave <-ata_scsi_queuecmd
1653        ls-2230    3...1  100us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1654        ls-2230    3...1  101us+: trace_preempt_on <-ata_scsi_queuecmd
1655        ls-2230    3...1  111us : <stack trace>
1656   => sub_preempt_count
1657   => _raw_spin_unlock_irqrestore
1658   => ata_scsi_queuecmd
1659   => scsi_dispatch_cmd
1660   => scsi_request_fn
1661   => __blk_run_queue_uncond
1662   => __blk_run_queue
1663   => blk_queue_bio
1664   => generic_make_request
1665   => submit_bio
1666   => submit_bh
1667   => ext3_bread
1668   => ext3_dir_bread
1669   => htree_dirblock_to_tree
1670   => ext3_htree_fill_tree
1671   => ext3_readdir
1672   => vfs_readdir
1673   => sys_getdents
1674   => system_call_fastpath
1675
1676
1677The trace_hardirqs_off_thunk is called from assembly on x86 when
1678interrupts are disabled in the assembly code. Without the
1679function tracing, we do not know if interrupts were enabled
1680within the preemption points. We do see that it started with
1681preemption enabled.
1682
1683Here is a trace with function-trace set::
1684
1685  # tracer: preemptirqsoff
1686  #
1687  # preemptirqsoff latency trace v1.1.5 on 3.8.0-test+
1688  # --------------------------------------------------------------------
1689  # latency: 161 us, #339/339, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1690  #    -----------------
1691  #    | task: ls-2269 (uid:0 nice:0 policy:0 rt_prio:0)
1692  #    -----------------
1693  #  => started at: schedule
1694  #  => ended at:   mutex_unlock
1695  #
1696  #
1697  #                  _------=> CPU#
1698  #                 / _-----=> irqs-off
1699  #                | / _----=> need-resched
1700  #                || / _---=> hardirq/softirq
1701  #                ||| / _--=> preempt-depth
1702  #                |||| /     delay
1703  #  cmd     pid   ||||| time  |   caller
1704  #     \   /      |||||  \    |   /
1705  kworker/-59      3...1    0us : __schedule <-schedule
1706  kworker/-59      3d..1    0us : rcu_preempt_qs <-rcu_note_context_switch
1707  kworker/-59      3d..1    1us : add_preempt_count <-_raw_spin_lock_irq
1708  kworker/-59      3d..2    1us : deactivate_task <-__schedule
1709  kworker/-59      3d..2    1us : dequeue_task <-deactivate_task
1710  kworker/-59      3d..2    2us : update_rq_clock <-dequeue_task
1711  kworker/-59      3d..2    2us : dequeue_task_fair <-dequeue_task
1712  kworker/-59      3d..2    2us : update_curr <-dequeue_task_fair
1713  kworker/-59      3d..2    2us : update_min_vruntime <-update_curr
1714  kworker/-59      3d..2    3us : cpuacct_charge <-update_curr
1715  kworker/-59      3d..2    3us : __rcu_read_lock <-cpuacct_charge
1716  kworker/-59      3d..2    3us : __rcu_read_unlock <-cpuacct_charge
1717  kworker/-59      3d..2    3us : update_cfs_rq_blocked_load <-dequeue_task_fair
1718  kworker/-59      3d..2    4us : clear_buddies <-dequeue_task_fair
1719  kworker/-59      3d..2    4us : account_entity_dequeue <-dequeue_task_fair
1720  kworker/-59      3d..2    4us : update_min_vruntime <-dequeue_task_fair
1721  kworker/-59      3d..2    4us : update_cfs_shares <-dequeue_task_fair
1722  kworker/-59      3d..2    5us : hrtick_update <-dequeue_task_fair
1723  kworker/-59      3d..2    5us : wq_worker_sleeping <-__schedule
1724  kworker/-59      3d..2    5us : kthread_data <-wq_worker_sleeping
1725  kworker/-59      3d..2    5us : put_prev_task_fair <-__schedule
1726  kworker/-59      3d..2    6us : pick_next_task_fair <-pick_next_task
1727  kworker/-59      3d..2    6us : clear_buddies <-pick_next_task_fair
1728  kworker/-59      3d..2    6us : set_next_entity <-pick_next_task_fair
1729  kworker/-59      3d..2    6us : update_stats_wait_end <-set_next_entity
1730        ls-2269    3d..2    7us : finish_task_switch <-__schedule
1731        ls-2269    3d..2    7us : _raw_spin_unlock_irq <-finish_task_switch
1732        ls-2269    3d..2    8us : do_IRQ <-ret_from_intr
1733        ls-2269    3d..2    8us : irq_enter <-do_IRQ
1734        ls-2269    3d..2    8us : rcu_irq_enter <-irq_enter
1735        ls-2269    3d..2    9us : add_preempt_count <-irq_enter
1736        ls-2269    3d.h2    9us : exit_idle <-do_IRQ
1737  [...]
1738        ls-2269    3d.h3   20us : sub_preempt_count <-_raw_spin_unlock
1739        ls-2269    3d.h2   20us : irq_exit <-do_IRQ
1740        ls-2269    3d.h2   21us : sub_preempt_count <-irq_exit
1741        ls-2269    3d..3   21us : do_softirq <-irq_exit
1742        ls-2269    3d..3   21us : __do_softirq <-call_softirq
1743        ls-2269    3d..3   21us+: __local_bh_disable <-__do_softirq
1744        ls-2269    3d.s4   29us : sub_preempt_count <-_local_bh_enable_ip
1745        ls-2269    3d.s5   29us : sub_preempt_count <-_local_bh_enable_ip
1746        ls-2269    3d.s5   31us : do_IRQ <-ret_from_intr
1747        ls-2269    3d.s5   31us : irq_enter <-do_IRQ
1748        ls-2269    3d.s5   31us : rcu_irq_enter <-irq_enter
1749  [...]
1750        ls-2269    3d.s5   31us : rcu_irq_enter <-irq_enter
1751        ls-2269    3d.s5   32us : add_preempt_count <-irq_enter
1752        ls-2269    3d.H5   32us : exit_idle <-do_IRQ
1753        ls-2269    3d.H5   32us : handle_irq <-do_IRQ
1754        ls-2269    3d.H5   32us : irq_to_desc <-handle_irq
1755        ls-2269    3d.H5   33us : handle_fasteoi_irq <-handle_irq
1756  [...]
1757        ls-2269    3d.s5  158us : _raw_spin_unlock_irqrestore <-rtl8139_poll
1758        ls-2269    3d.s3  158us : net_rps_action_and_irq_enable.isra.65 <-net_rx_action
1759        ls-2269    3d.s3  159us : __local_bh_enable <-__do_softirq
1760        ls-2269    3d.s3  159us : sub_preempt_count <-__local_bh_enable
1761        ls-2269    3d..3  159us : idle_cpu <-irq_exit
1762        ls-2269    3d..3  159us : rcu_irq_exit <-irq_exit
1763        ls-2269    3d..3  160us : sub_preempt_count <-irq_exit
1764        ls-2269    3d...  161us : __mutex_unlock_slowpath <-mutex_unlock
1765        ls-2269    3d...  162us+: trace_hardirqs_on <-mutex_unlock
1766        ls-2269    3d...  186us : <stack trace>
1767   => __mutex_unlock_slowpath
1768   => mutex_unlock
1769   => process_output
1770   => n_tty_write
1771   => tty_write
1772   => vfs_write
1773   => sys_write
1774   => system_call_fastpath
1775
1776This is an interesting trace. It started with kworker running and
1777scheduling out and ls taking over. But as soon as ls released the
1778rq lock and enabled interrupts (but not preemption) an interrupt
1779triggered. When the interrupt finished, it started running softirqs.
1780But while the softirq was running, another interrupt triggered.
1781When an interrupt is running inside a softirq, the annotation is 'H'.
1782
1783
1784wakeup
1785------
1786
One common case that people are interested in tracing is the
time it takes for a task that is woken to actually wake up.
Now for non Real-Time tasks, this can be arbitrary. But tracing
it nonetheless can be interesting.
1791
1792Without function tracing::
1793
1794  # echo 0 > options/function-trace
1795  # echo wakeup > current_tracer
1796  # echo 1 > tracing_on
1797  # echo 0 > tracing_max_latency
1798  # chrt -f 5 sleep 1
1799  # echo 0 > tracing_on
1800  # cat trace
1801  # tracer: wakeup
1802  #
1803  # wakeup latency trace v1.1.5 on 3.8.0-test+
1804  # --------------------------------------------------------------------
1805  # latency: 15 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1806  #    -----------------
1807  #    | task: kworker/3:1H-312 (uid:0 nice:-20 policy:0 rt_prio:0)
1808  #    -----------------
1809  #
1810  #                  _------=> CPU#
1811  #                 / _-----=> irqs-off
1812  #                | / _----=> need-resched
1813  #                || / _---=> hardirq/softirq
1814  #                ||| / _--=> preempt-depth
1815  #                |||| /     delay
1816  #  cmd     pid   ||||| time  |   caller
1817  #     \   /      |||||  \    |   /
1818    <idle>-0       3dNs7    0us :      0:120:R   + [003]   312:100:R kworker/3:1H
1819    <idle>-0       3dNs7    1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
1820    <idle>-0       3d..3   15us : __schedule <-schedule
1821    <idle>-0       3d..3   15us :      0:120:R ==> [003]   312:100:R kworker/3:1H
1822
The tracer only traces the highest priority task in the system
to avoid tracing the normal circumstances. Here we see that
the kworker with a nice priority of -20 (not very nice) took
just 15 microseconds from the time it woke up to the time it
ran.
1828
1829Non Real-Time tasks are not that interesting. A more interesting
1830trace is to concentrate only on Real-Time tasks.
1831
1832wakeup_rt
1833---------
1834
In a Real-Time environment it is very important to know the
time it takes from when the highest priority task is woken up
to when it actually executes. This is also known as "schedule
latency". I stress the point that this is about RT tasks. It is
also important to know the scheduling latency of non-RT tasks,
but the average schedule latency is better for non-RT tasks.
Tools like LatencyTop are more appropriate for such
measurements.
1843
1844Real-Time environments are interested in the worst case latency.
1845That is the longest latency it takes for something to happen,
1846and not the average. We can have a very fast scheduler that may
1847only have a large latency once in a while, but that would not
1848work well with Real-Time tasks.  The wakeup_rt tracer was designed
1849to record the worst case wakeups of RT tasks. Non-RT tasks are
1850not recorded because the tracer only records one worst case and
1851tracing non-RT tasks that are unpredictable will overwrite the
1852worst case latency of RT tasks (just run the normal wakeup
1853tracer for a while to see that effect).
1854
1855Since this tracer only deals with RT tasks, we will run this
1856slightly differently than we did with the previous tracers.
1857Instead of performing an 'ls', we will run 'sleep 1' under
1858'chrt' which changes the priority of the task.
1859::
1860
1861  # echo 0 > options/function-trace
1862  # echo wakeup_rt > current_tracer
1863  # echo 1 > tracing_on
1864  # echo 0 > tracing_max_latency
1865  # chrt -f 5 sleep 1
1866  # echo 0 > tracing_on
1867  # cat trace
1870  # tracer: wakeup_rt
1871  #
1872  # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
1873  # --------------------------------------------------------------------
1874  # latency: 5 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1875  #    -----------------
1876  #    | task: sleep-2389 (uid:0 nice:0 policy:1 rt_prio:5)
1877  #    -----------------
1878  #
1879  #                  _------=> CPU#
1880  #                 / _-----=> irqs-off
1881  #                | / _----=> need-resched
1882  #                || / _---=> hardirq/softirq
1883  #                ||| / _--=> preempt-depth
1884  #                |||| /     delay
1885  #  cmd     pid   ||||| time  |   caller
1886  #     \   /      |||||  \    |   /
1887    <idle>-0       3d.h4    0us :      0:120:R   + [003]  2389: 94:R sleep
1888    <idle>-0       3d.h4    1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
1889    <idle>-0       3d..3    5us : __schedule <-schedule
1890    <idle>-0       3d..3    5us :      0:120:R ==> [003]  2389: 94:R sleep
1891
1892
Running this on an idle system, we see that it only took 5 microseconds
to perform the task switch.  Note, since the trace point in the scheduler
is before the actual "switch", we stop the tracing when the recorded task
is about to schedule in. This may change if we add a new marker at the
end of the scheduler.
1898
1899Notice that the recorded task is 'sleep' with the PID of 2389
1900and it has an rt_prio of 5. This priority is user-space priority
1901and not the internal kernel priority. The policy is 1 for
1902SCHED_FIFO and 2 for SCHED_RR.
1903
Note that the trace data shows the internal priority (99 - rtprio).
1905::
1906
1907  <idle>-0       3d..3    5us :      0:120:R ==> [003]  2389: 94:R sleep
1908
The 0:120:R means idle was running with a nice priority of 0 (120 - 120)
and in the running state 'R'. The sleep task was scheduled in with
2389: 94:R. That is, the priority shown is the kernel rtprio (99 - 5 = 94)
and it too is in the running state.
1913
1914Doing the same with chrt -r 5 and function-trace set.
1915::
1916
1917  echo 1 > options/function-trace
1918
1919  # tracer: wakeup_rt
1920  #
1921  # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
1922  # --------------------------------------------------------------------
1923  # latency: 29 us, #85/85, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1924  #    -----------------
1925  #    | task: sleep-2448 (uid:0 nice:0 policy:1 rt_prio:5)
1926  #    -----------------
1927  #
1928  #                  _------=> CPU#
1929  #                 / _-----=> irqs-off
1930  #                | / _----=> need-resched
1931  #                || / _---=> hardirq/softirq
1932  #                ||| / _--=> preempt-depth
1933  #                |||| /     delay
1934  #  cmd     pid   ||||| time  |   caller
1935  #     \   /      |||||  \    |   /
1936    <idle>-0       3d.h4    1us+:      0:120:R   + [003]  2448: 94:R sleep
1937    <idle>-0       3d.h4    2us : ttwu_do_activate.constprop.87 <-try_to_wake_up
1938    <idle>-0       3d.h3    3us : check_preempt_curr <-ttwu_do_wakeup
1939    <idle>-0       3d.h3    3us : resched_curr <-check_preempt_curr
1940    <idle>-0       3dNh3    4us : task_woken_rt <-ttwu_do_wakeup
1941    <idle>-0       3dNh3    4us : _raw_spin_unlock <-try_to_wake_up
1942    <idle>-0       3dNh3    4us : sub_preempt_count <-_raw_spin_unlock
1943    <idle>-0       3dNh2    5us : ttwu_stat <-try_to_wake_up
1944    <idle>-0       3dNh2    5us : _raw_spin_unlock_irqrestore <-try_to_wake_up
1945    <idle>-0       3dNh2    6us : sub_preempt_count <-_raw_spin_unlock_irqrestore
1946    <idle>-0       3dNh1    6us : _raw_spin_lock <-__run_hrtimer
1947    <idle>-0       3dNh1    6us : add_preempt_count <-_raw_spin_lock
1948    <idle>-0       3dNh2    7us : _raw_spin_unlock <-hrtimer_interrupt
1949    <idle>-0       3dNh2    7us : sub_preempt_count <-_raw_spin_unlock
1950    <idle>-0       3dNh1    7us : tick_program_event <-hrtimer_interrupt
1951    <idle>-0       3dNh1    7us : clockevents_program_event <-tick_program_event
1952    <idle>-0       3dNh1    8us : ktime_get <-clockevents_program_event
1953    <idle>-0       3dNh1    8us : lapic_next_event <-clockevents_program_event
1954    <idle>-0       3dNh1    8us : irq_exit <-smp_apic_timer_interrupt
1955    <idle>-0       3dNh1    9us : sub_preempt_count <-irq_exit
1956    <idle>-0       3dN.2    9us : idle_cpu <-irq_exit
1957    <idle>-0       3dN.2    9us : rcu_irq_exit <-irq_exit
1958    <idle>-0       3dN.2   10us : rcu_eqs_enter_common.isra.45 <-rcu_irq_exit
1959    <idle>-0       3dN.2   10us : sub_preempt_count <-irq_exit
1960    <idle>-0       3.N.1   11us : rcu_idle_exit <-cpu_idle
1961    <idle>-0       3dN.1   11us : rcu_eqs_exit_common.isra.43 <-rcu_idle_exit
1962    <idle>-0       3.N.1   11us : tick_nohz_idle_exit <-cpu_idle
1963    <idle>-0       3dN.1   12us : menu_hrtimer_cancel <-tick_nohz_idle_exit
1964    <idle>-0       3dN.1   12us : ktime_get <-tick_nohz_idle_exit
1965    <idle>-0       3dN.1   12us : tick_do_update_jiffies64 <-tick_nohz_idle_exit
1966    <idle>-0       3dN.1   13us : cpu_load_update_nohz <-tick_nohz_idle_exit
1967    <idle>-0       3dN.1   13us : _raw_spin_lock <-cpu_load_update_nohz
1968    <idle>-0       3dN.1   13us : add_preempt_count <-_raw_spin_lock
1969    <idle>-0       3dN.2   13us : __cpu_load_update <-cpu_load_update_nohz
1970    <idle>-0       3dN.2   14us : sched_avg_update <-__cpu_load_update
1971    <idle>-0       3dN.2   14us : _raw_spin_unlock <-cpu_load_update_nohz
1972    <idle>-0       3dN.2   14us : sub_preempt_count <-_raw_spin_unlock
1973    <idle>-0       3dN.1   15us : calc_load_nohz_stop <-tick_nohz_idle_exit
1974    <idle>-0       3dN.1   15us : touch_softlockup_watchdog <-tick_nohz_idle_exit
1975    <idle>-0       3dN.1   15us : hrtimer_cancel <-tick_nohz_idle_exit
1976    <idle>-0       3dN.1   15us : hrtimer_try_to_cancel <-hrtimer_cancel
1977    <idle>-0       3dN.1   16us : lock_hrtimer_base.isra.18 <-hrtimer_try_to_cancel
1978    <idle>-0       3dN.1   16us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18
1979    <idle>-0       3dN.1   16us : add_preempt_count <-_raw_spin_lock_irqsave
1980    <idle>-0       3dN.2   17us : __remove_hrtimer <-remove_hrtimer.part.16
1981    <idle>-0       3dN.2   17us : hrtimer_force_reprogram <-__remove_hrtimer
1982    <idle>-0       3dN.2   17us : tick_program_event <-hrtimer_force_reprogram
1983    <idle>-0       3dN.2   18us : clockevents_program_event <-tick_program_event
1984    <idle>-0       3dN.2   18us : ktime_get <-clockevents_program_event
1985    <idle>-0       3dN.2   18us : lapic_next_event <-clockevents_program_event
1986    <idle>-0       3dN.2   19us : _raw_spin_unlock_irqrestore <-hrtimer_try_to_cancel
1987    <idle>-0       3dN.2   19us : sub_preempt_count <-_raw_spin_unlock_irqrestore
1988    <idle>-0       3dN.1   19us : hrtimer_forward <-tick_nohz_idle_exit
1989    <idle>-0       3dN.1   20us : ktime_add_safe <-hrtimer_forward
1990    <idle>-0       3dN.1   20us : ktime_add_safe <-hrtimer_forward
1991    <idle>-0       3dN.1   20us : hrtimer_start_range_ns <-hrtimer_start_expires.constprop.11
1992    <idle>-0       3dN.1   20us : __hrtimer_start_range_ns <-hrtimer_start_range_ns
1993    <idle>-0       3dN.1   21us : lock_hrtimer_base.isra.18 <-__hrtimer_start_range_ns
1994    <idle>-0       3dN.1   21us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18
1995    <idle>-0       3dN.1   21us : add_preempt_count <-_raw_spin_lock_irqsave
1996    <idle>-0       3dN.2   22us : ktime_add_safe <-__hrtimer_start_range_ns
1997    <idle>-0       3dN.2   22us : enqueue_hrtimer <-__hrtimer_start_range_ns
1998    <idle>-0       3dN.2   22us : tick_program_event <-__hrtimer_start_range_ns
1999    <idle>-0       3dN.2   23us : clockevents_program_event <-tick_program_event
2000    <idle>-0       3dN.2   23us : ktime_get <-clockevents_program_event
2001    <idle>-0       3dN.2   23us : lapic_next_event <-clockevents_program_event
2002    <idle>-0       3dN.2   24us : _raw_spin_unlock_irqrestore <-__hrtimer_start_range_ns
2003    <idle>-0       3dN.2   24us : sub_preempt_count <-_raw_spin_unlock_irqrestore
2004    <idle>-0       3dN.1   24us : account_idle_ticks <-tick_nohz_idle_exit
2005    <idle>-0       3dN.1   24us : account_idle_time <-account_idle_ticks
2006    <idle>-0       3.N.1   25us : sub_preempt_count <-cpu_idle
2007    <idle>-0       3.N..   25us : schedule <-cpu_idle
2008    <idle>-0       3.N..   25us : __schedule <-preempt_schedule
2009    <idle>-0       3.N..   26us : add_preempt_count <-__schedule
2010    <idle>-0       3.N.1   26us : rcu_note_context_switch <-__schedule
2011    <idle>-0       3.N.1   26us : rcu_sched_qs <-rcu_note_context_switch
2012    <idle>-0       3dN.1   27us : rcu_preempt_qs <-rcu_note_context_switch
2013    <idle>-0       3.N.1   27us : _raw_spin_lock_irq <-__schedule
2014    <idle>-0       3dN.1   27us : add_preempt_count <-_raw_spin_lock_irq
2015    <idle>-0       3dN.2   28us : put_prev_task_idle <-__schedule
2016    <idle>-0       3dN.2   28us : pick_next_task_stop <-pick_next_task
2017    <idle>-0       3dN.2   28us : pick_next_task_rt <-pick_next_task
2018    <idle>-0       3dN.2   29us : dequeue_pushable_task <-pick_next_task_rt
2019    <idle>-0       3d..3   29us : __schedule <-preempt_schedule
2020    <idle>-0       3d..3   30us :      0:120:R ==> [003]  2448: 94:R sleep
2021
2022This isn't that big of a trace, even with function tracing enabled,
2023so I included the entire trace.
2024
The interrupt went off while the system was idle. Somewhere
before task_woken_rt() was called, the NEED_RESCHED flag was set;
this is indicated by the first occurrence of the 'N' flag.
2028
2029Latency tracing and events
2030--------------------------

Function tracing can induce a much larger latency, but without
seeing what happens within the latency it is hard to know what
caused it. There is a middle ground, and that is to enable
events.
2035::
2036
2037  # echo 0 > options/function-trace
2038  # echo wakeup_rt > current_tracer
2039  # echo 1 > events/enable
2040  # echo 1 > tracing_on
2041  # echo 0 > tracing_max_latency
2042  # chrt -f 5 sleep 1
2043  # echo 0 > tracing_on
2044  # cat trace
2045  # tracer: wakeup_rt
2046  #
2047  # wakeup_rt latency trace v1.1.5 on 3.8.0-test+
2048  # --------------------------------------------------------------------
2049  # latency: 6 us, #12/12, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2050  #    -----------------
2051  #    | task: sleep-5882 (uid:0 nice:0 policy:1 rt_prio:5)
2052  #    -----------------
2053  #
2054  #                  _------=> CPU#
2055  #                 / _-----=> irqs-off
2056  #                | / _----=> need-resched
2057  #                || / _---=> hardirq/softirq
2058  #                ||| / _--=> preempt-depth
2059  #                |||| /     delay
2060  #  cmd     pid   ||||| time  |   caller
2061  #     \   /      |||||  \    |   /
2062    <idle>-0       2d.h4    0us :      0:120:R   + [002]  5882: 94:R sleep
2063    <idle>-0       2d.h4    0us : ttwu_do_activate.constprop.87 <-try_to_wake_up
2064    <idle>-0       2d.h4    1us : sched_wakeup: comm=sleep pid=5882 prio=94 success=1 target_cpu=002
2065    <idle>-0       2dNh2    1us : hrtimer_expire_exit: hrtimer=ffff88007796feb8
2066    <idle>-0       2.N.2    2us : power_end: cpu_id=2
2067    <idle>-0       2.N.2    3us : cpu_idle: state=4294967295 cpu_id=2
2068    <idle>-0       2dN.3    4us : hrtimer_cancel: hrtimer=ffff88007d50d5e0
2069    <idle>-0       2dN.3    4us : hrtimer_start: hrtimer=ffff88007d50d5e0 function=tick_sched_timer expires=34311211000000 softexpires=34311211000000
2070    <idle>-0       2.N.2    5us : rcu_utilization: Start context switch
2071    <idle>-0       2.N.2    5us : rcu_utilization: End context switch
2072    <idle>-0       2d..3    6us : __schedule <-schedule
2073    <idle>-0       2d..3    6us :      0:120:R ==> [002]  5882: 94:R sleep
2074
2075
2076Hardware Latency Detector
2077-------------------------
2078
2079The hardware latency detector is executed by enabling the "hwlat" tracer.
2080
2081NOTE, this tracer will affect the performance of the system as it will
2082periodically make a CPU constantly busy with interrupts disabled.
2083::
2084
2085  # echo hwlat > current_tracer
2086  # sleep 100
2087  # cat trace
2088  # tracer: hwlat
2089  #
2090  #                              _-----=> irqs-off
2091  #                             / _----=> need-resched
2092  #                            | / _---=> hardirq/softirq
2093  #                            || / _--=> preempt-depth
2094  #                            ||| /     delay
2095  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
2096  #              | |       |   ||||       |         |
2097             <...>-3638  [001] d... 19452.055471: #1     inner/outer(us):   12/14    ts:1499801089.066141940
2098             <...>-3638  [003] d... 19454.071354: #2     inner/outer(us):   11/9     ts:1499801091.082164365
2099             <...>-3638  [002] dn.. 19461.126852: #3     inner/outer(us):   12/9     ts:1499801098.138150062
2100             <...>-3638  [001] d... 19488.340960: #4     inner/outer(us):    8/12    ts:1499801125.354139633
2101             <...>-3638  [003] d... 19494.388553: #5     inner/outer(us):    8/12    ts:1499801131.402150961
2102             <...>-3638  [003] d... 19501.283419: #6     inner/outer(us):    0/12    ts:1499801138.297435289 nmi-total:4 nmi-count:1
2103
2104
The header of the above output is much the same as for the other
tracers. All events will have interrupts disabled 'd'. Under the
FUNCTION title there is:
2107
2108 #1
2109	This is the count of events recorded that were greater than the
2110	tracing_threshold (See below).
2111
2112 inner/outer(us):   12/14
2113
      This shows two numbers as "inner latency" and "outer latency". The test
      runs in a loop checking a timestamp twice. The latency detected within
      the two timestamps is the "inner latency" and the latency detected
      between the previous timestamp and the next timestamp in the loop is
      the "outer latency".
2119
2120 ts:1499801089.066141940
2121
2122      The absolute timestamp that the event happened.
2123
2124 nmi-total:4 nmi-count:1
2125
2126      On architectures that support it, if an NMI comes in during the
2127      test, the time spent in NMI is reported in "nmi-total" (in
2128      microseconds).
2129
2130      All architectures that have NMIs will show the "nmi-count" if an
2131      NMI comes in during the test.
2132
2133hwlat files:
2134
2135  tracing_threshold
2136	This gets automatically set to "10" to represent 10
2137	microseconds. This is the threshold of latency that
2138	needs to be detected before the trace will be recorded.
2139
2140	Note, when hwlat tracer is finished (another tracer is
2141	written into "current_tracer"), the original value for
2142	tracing_threshold is placed back into this file.
2143
2144  hwlat_detector/width
2145	The length of time the test runs with interrupts disabled.
2146
  hwlat_detector/window
	The length of time of the window in which the test
	runs. That is, the test will run for "width"
	microseconds per "window" microseconds.
2151
  tracing_cpumask
	When the test is started, a kernel thread is created to
	run the test. This thread will alternate between the CPUs
	listed in the tracing_cpumask between each period
	(one "window"). To limit the test to specific CPUs,
	set the mask in this file to only the CPUs that the test
	should run on (see the example below).
2159
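As an example, the following sketch (the numeric values are arbitrary
and only for illustration) limits the test to CPUs 0 and 1 (cpumask 0x3),
raises the threshold to 20 microseconds, and runs the detector for
100000 microseconds out of every 1000000 microsecond window::

  # echo 3 > tracing_cpumask
  # echo 100000 > hwlat_detector/width
  # echo 1000000 > hwlat_detector/window
  # echo hwlat > current_tracer
  # echo 20 > tracing_threshold
  # sleep 60
  # cat trace
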
2160function
2161--------
2162
This tracer is the function tracer. Enabling the function tracer
can be done from the tracefs file system. Make sure the
ftrace_enabled sysctl is set; otherwise this tracer is a nop.
See the "ftrace_enabled" section below.
2167::
2168
2169  # sysctl kernel.ftrace_enabled=1
2170  # echo function > current_tracer
2171  # echo 1 > tracing_on
2172  # usleep 1
2173  # echo 0 > tracing_on
2174  # cat trace
2175  # tracer: function
2176  #
2177  # entries-in-buffer/entries-written: 24799/24799   #P:4
2178  #
2179  #                              _-----=> irqs-off
2180  #                             / _----=> need-resched
2181  #                            | / _---=> hardirq/softirq
2182  #                            || / _--=> preempt-depth
2183  #                            ||| /     delay
2184  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
2185  #              | |       |   ||||       |         |
2186              bash-1994  [002] ....  3082.063030: mutex_unlock <-rb_simple_write
2187              bash-1994  [002] ....  3082.063031: __mutex_unlock_slowpath <-mutex_unlock
2188              bash-1994  [002] ....  3082.063031: __fsnotify_parent <-fsnotify_modify
2189              bash-1994  [002] ....  3082.063032: fsnotify <-fsnotify_modify
2190              bash-1994  [002] ....  3082.063032: __srcu_read_lock <-fsnotify
2191              bash-1994  [002] ....  3082.063032: add_preempt_count <-__srcu_read_lock
2192              bash-1994  [002] ...1  3082.063032: sub_preempt_count <-__srcu_read_lock
2193              bash-1994  [002] ....  3082.063033: __srcu_read_unlock <-fsnotify
2194  [...]
2195
2196
2197Note: function tracer uses ring buffers to store the above
2198entries. The newest data may overwrite the oldest data.
2199Sometimes using echo to stop the trace is not sufficient because
2200the tracing could have overwritten the data that you wanted to
2201record. For this reason, it is sometimes better to disable
2202tracing directly from a program. This allows you to stop the
2203tracing at the point that you hit the part that you are
interested in. To disable the tracing directly from a C program,
something like the following code snippet can be used (the
tracing_file() helper used here is defined in the example program
under "Single thread tracing" below)::
2206
2207	int trace_fd;
2208	[...]
2209	int main(int argc, char *argv[]) {
2210		[...]
2211		trace_fd = open(tracing_file("tracing_on"), O_WRONLY);
2212		[...]
2213		if (condition_hit()) {
2214			write(trace_fd, "0", 1);
2215		}
2216		[...]
2217	}
2218
2219
2220Single thread tracing
2221---------------------
2222
2223By writing into set_ftrace_pid you can trace a
2224single thread. For example::
2225
2226  # cat set_ftrace_pid
2227  no pid
2228  # echo 3111 > set_ftrace_pid
2229  # cat set_ftrace_pid
2230  3111
2231  # echo function > current_tracer
2232  # cat trace | head
2233  # tracer: function
2234  #
2235  #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
2236  #              | |       |          |         |
2237      yum-updatesd-3111  [003]  1637.254676: finish_task_switch <-thread_return
2238      yum-updatesd-3111  [003]  1637.254681: hrtimer_cancel <-schedule_hrtimeout_range
2239      yum-updatesd-3111  [003]  1637.254682: hrtimer_try_to_cancel <-hrtimer_cancel
2240      yum-updatesd-3111  [003]  1637.254683: lock_hrtimer_base <-hrtimer_try_to_cancel
2241      yum-updatesd-3111  [003]  1637.254685: fget_light <-do_sys_poll
2242      yum-updatesd-3111  [003]  1637.254686: pipe_poll <-do_sys_poll
2243  # echo > set_ftrace_pid
2244  # cat trace |head
2245  # tracer: function
2246  #
2247  #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
2248  #              | |       |          |         |
2249  ##### CPU 3 buffer started ####
2250      yum-updatesd-3111  [003]  1701.957688: free_poll_entry <-poll_freewait
2251      yum-updatesd-3111  [003]  1701.957689: remove_wait_queue <-free_poll_entry
2252      yum-updatesd-3111  [003]  1701.957691: fput <-free_poll_entry
2253      yum-updatesd-3111  [003]  1701.957692: audit_syscall_exit <-sysret_audit
2254      yum-updatesd-3111  [003]  1701.957693: path_put <-audit_syscall_exit
2255
If you want to trace the functions that a command executes, you
could use something like this simple program.
2258::
2259
2260	#include <stdio.h>
2261	#include <stdlib.h>
2262	#include <sys/types.h>
2263	#include <sys/stat.h>
2264	#include <fcntl.h>
2265	#include <unistd.h>
2266	#include <string.h>
2267
2268	#define _STR(x) #x
2269	#define STR(x) _STR(x)
2270	#define MAX_PATH 256
2271
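	/* Locate the tracefs mount point by scanning /proc/mounts. */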
2272	const char *find_tracefs(void)
2273	{
2274	       static char tracefs[MAX_PATH+1];
2275	       static int tracefs_found;
2276	       char type[100];
2277	       FILE *fp;
2278
2279	       if (tracefs_found)
2280		       return tracefs;
2281
2282	       if ((fp = fopen("/proc/mounts","r")) == NULL) {
2283		       perror("/proc/mounts");
2284		       return NULL;
2285	       }
2286
2287	       while (fscanf(fp, "%*s %"
2288		             STR(MAX_PATH)
2289		             "s %99s %*s %*d %*d\n",
2290		             tracefs, type) == 2) {
2291		       if (strcmp(type, "tracefs") == 0)
2292		               break;
2293	       }
2294	       fclose(fp);
2295
2296	       if (strcmp(type, "tracefs") != 0) {
2297		       fprintf(stderr, "tracefs not mounted");
2298		       return NULL;
2299	       }
2300
	       tracefs_found = 1;
2303
2304	       return tracefs;
2305	}
2306
2307	const char *tracing_file(const char *file_name)
2308	{
2309	       static char trace_file[MAX_PATH+1];
2310	       snprintf(trace_file, MAX_PATH, "%s/%s", find_tracefs(), file_name);
2311	       return trace_file;
2312	}
2313
2314	int main (int argc, char **argv)
2315	{
		if (argc < 2)
		        exit(-1);
2318
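		/* The parent sets up function tracing of its own pid and
		 * then execs the given command, so that command runs traced. */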
2319		if (fork() > 0) {
2320		        int fd, ffd;
2321		        char line[64];
2322		        int s;
2323
2324		        ffd = open(tracing_file("current_tracer"), O_WRONLY);
2325		        if (ffd < 0)
2326		                exit(-1);
2327		        write(ffd, "nop", 3);
2328
2329		        fd = open(tracing_file("set_ftrace_pid"), O_WRONLY);
2330		        s = sprintf(line, "%d\n", getpid());
2331		        write(fd, line, s);
2332
2333		        write(ffd, "function", 8);
2334
2335		        close(fd);
2336		        close(ffd);
2337
2338		        execvp(argv[1], argv+1);
2339		}
2340
2341		return 0;
2342	}
2343
2344Or this simple script!
2345::
2346
2347  #!/bin/bash
2348
  tracefs=`sed -ne 's/^tracefs \(.*\) tracefs.*/\1/p' /proc/mounts | head -1`
  echo nop > $tracefs/current_tracer
  echo 0 > $tracefs/tracing_on
  echo $$ > $tracefs/set_ftrace_pid
  echo function > $tracefs/current_tracer
  echo 1 > $tracefs/tracing_on
2355  exec "$@"
2356
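For example, assuming the script above has been saved as an executable
file named ftrace-me (the name is arbitrary), a single command can be
traced with::

  # ./ftrace-me ls -l
  # cat trace | head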
2357
2358function graph tracer
2359---------------------------
2360
This tracer is similar to the function tracer except that it
probes a function on its entry and its exit. This is done by
using a dynamically allocated stack of return addresses in each
task_struct. On function entry the tracer overwrites the return
address of each function traced to set a custom probe. Thus the
original return address is stored on the stack of return addresses
in the task_struct.
2368
2369Probing on both ends of a function leads to special features
2370such as:
2371
- measure of a function's execution time
2373- having a reliable call stack to draw function calls graph
2374
2375This tracer is useful in several situations:
2376
- you want to find the reason for strange kernel behavior and
  need to see what happens in detail in any area (or in specific
  ones).

- you are experiencing weird latencies but it's difficult to
  find their origin.
2383
2384- you want to find quickly which path is taken by a specific
2385  function
2386
2387- you just want to peek inside a working kernel and want to see
2388  what happens there.
2389
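To use it, write its name into current_tracer just as with the other
tracers::

  # echo function_graph > current_tracer

The output then looks like the following:
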
2390::
2391
2392  # tracer: function_graph
2393  #
2394  # CPU  DURATION                  FUNCTION CALLS
2395  # |     |   |                     |   |   |   |
2396
2397   0)               |  sys_open() {
2398   0)               |    do_sys_open() {
2399   0)               |      getname() {
2400   0)               |        kmem_cache_alloc() {
2401   0)   1.382 us    |          __might_sleep();
2402   0)   2.478 us    |        }
2403   0)               |        strncpy_from_user() {
2404   0)               |          might_fault() {
2405   0)   1.389 us    |            __might_sleep();
2406   0)   2.553 us    |          }
2407   0)   3.807 us    |        }
2408   0)   7.876 us    |      }
2409   0)               |      alloc_fd() {
2410   0)   0.668 us    |        _spin_lock();
2411   0)   0.570 us    |        expand_files();
2412   0)   0.586 us    |        _spin_unlock();
2413
2414
2415There are several columns that can be dynamically
2416enabled/disabled. You can use every combination of options you
2417want, depending on your needs.
2418
- The cpu number on which the function executed is enabled by
  default.  It is sometimes better to only trace one cpu (see the
  tracing_cpumask file), or you might sometimes see unordered
  function calls when the trace switches between CPUs.
2423
2424	- hide: echo nofuncgraph-cpu > trace_options
2425	- show: echo funcgraph-cpu > trace_options
2426
- The duration (function's time of execution) is displayed on
  the closing bracket line of a function or on the same line
  as the current function in the case of a leaf one. It is
  enabled by default.
2431
2432	- hide: echo nofuncgraph-duration > trace_options
2433	- show: echo funcgraph-duration > trace_options
2434
- The overhead field precedes the duration field when certain
  duration thresholds are reached.
2437
2438	- hide: echo nofuncgraph-overhead > trace_options
2439	- show: echo funcgraph-overhead > trace_options
2440	- depends on: funcgraph-duration
2441
2442  ie::
2443
2444    3) # 1837.709 us |          } /* __switch_to */
2445    3)               |          finish_task_switch() {
2446    3)   0.313 us    |            _raw_spin_unlock_irq();
2447    3)   3.177 us    |          }
2448    3) # 1889.063 us |        } /* __schedule */
2449    3) ! 140.417 us  |      } /* __schedule */
2450    3) # 2034.948 us |    } /* schedule */
2451    3) * 33998.59 us |  } /* schedule_preempt_disabled */
2452
2453    [...]
2454
2455    1)   0.260 us    |              msecs_to_jiffies();
2456    1)   0.313 us    |              __rcu_read_unlock();
2457    1) + 61.770 us   |            }
2458    1) + 64.479 us   |          }
2459    1)   0.313 us    |          rcu_bh_qs();
2460    1)   0.313 us    |          __local_bh_enable();
2461    1) ! 217.240 us  |        }
2462    1)   0.365 us    |        idle_cpu();
2463    1)               |        rcu_irq_exit() {
2464    1)   0.417 us    |          rcu_eqs_enter_common.isra.47();
2465    1)   3.125 us    |        }
2466    1) ! 227.812 us  |      }
2467    1) ! 457.395 us  |    }
2468    1) @ 119760.2 us |  }
2469
2470    [...]
2471
2472    2)               |    handle_IPI() {
2473    1)   6.979 us    |                  }
2474    2)   0.417 us    |      scheduler_ipi();
2475    1)   9.791 us    |                }
2476    1) + 12.917 us   |              }
2477    2)   3.490 us    |    }
2478    1) + 15.729 us   |            }
2479    1) + 18.542 us   |          }
2480    2) $ 3594274 us  |  }
2481
2482Flags::
2483
2484  + means that the function exceeded 10 usecs.
2485  ! means that the function exceeded 100 usecs.
2486  # means that the function exceeded 1000 usecs.
2487  * means that the function exceeded 10 msecs.
2488  @ means that the function exceeded 100 msecs.
2489  $ means that the function exceeded 1 sec.
2490
2491
- The task/pid field displays the thread cmdline and pid which
  executed the function. It is disabled by default.
2494
2495	- hide: echo nofuncgraph-proc > trace_options
2496	- show: echo funcgraph-proc > trace_options
2497
2498  ie::
2499
2500    # tracer: function_graph
2501    #
2502    # CPU  TASK/PID        DURATION                  FUNCTION CALLS
2503    # |    |    |           |   |                     |   |   |   |
2504    0)    sh-4802     |               |                  d_free() {
2505    0)    sh-4802     |               |                    call_rcu() {
2506    0)    sh-4802     |               |                      __call_rcu() {
2507    0)    sh-4802     |   0.616 us    |                        rcu_process_gp_end();
2508    0)    sh-4802     |   0.586 us    |                        check_for_new_grace_period();
2509    0)    sh-4802     |   2.899 us    |                      }
2510    0)    sh-4802     |   4.040 us    |                    }
2511    0)    sh-4802     |   5.151 us    |                  }
2512    0)    sh-4802     | + 49.370 us   |                }
2513
2514
- The absolute time field is an absolute timestamp given by the
  system clock since it started. A snapshot of this time is
  given on each entry/exit of functions.
2518
2519	- hide: echo nofuncgraph-abstime > trace_options
2520	- show: echo funcgraph-abstime > trace_options
2521
2522  ie::
2523
2524    #
2525    #      TIME       CPU  DURATION                  FUNCTION CALLS
2526    #       |         |     |   |                     |   |   |   |
2527    360.774522 |   1)   0.541 us    |                                          }
2528    360.774522 |   1)   4.663 us    |                                        }
2529    360.774523 |   1)   0.541 us    |                                        __wake_up_bit();
2530    360.774524 |   1)   6.796 us    |                                      }
2531    360.774524 |   1)   7.952 us    |                                    }
2532    360.774525 |   1)   9.063 us    |                                  }
2533    360.774525 |   1)   0.615 us    |                                  journal_mark_dirty();
2534    360.774527 |   1)   0.578 us    |                                  __brelse();
2535    360.774528 |   1)               |                                  reiserfs_prepare_for_journal() {
2536    360.774528 |   1)               |                                    unlock_buffer() {
2537    360.774529 |   1)               |                                      wake_up_bit() {
2538    360.774529 |   1)               |                                        bit_waitqueue() {
2539    360.774530 |   1)   0.594 us    |                                          __phys_addr();
2540
2541
2542The function name is always displayed after the closing bracket
2543for a function if the start of that function is not in the
2544trace buffer.
2545
Display of the function name after the closing bracket may be
enabled for functions whose start is in the trace buffer,
allowing easier searching with grep for function durations.
It is disabled by default.
2550
2551	- hide: echo nofuncgraph-tail > trace_options
2552	- show: echo funcgraph-tail > trace_options
2553
2554  Example with nofuncgraph-tail (default)::
2555
2556    0)               |      putname() {
2557    0)               |        kmem_cache_free() {
2558    0)   0.518 us    |          __phys_addr();
2559    0)   1.757 us    |        }
2560    0)   2.861 us    |      }
2561
2562  Example with funcgraph-tail::
2563
2564    0)               |      putname() {
2565    0)               |        kmem_cache_free() {
2566    0)   0.518 us    |          __phys_addr();
2567    0)   1.757 us    |        } /* kmem_cache_free() */
2568    0)   2.861 us    |      } /* putname() */
2569
You can put some comments on specific functions by using
trace_printk(). For example, if you want to put a comment inside
the __might_sleep() function, you just have to include
<linux/ftrace.h> and call trace_printk() inside __might_sleep()::
2574
2575	trace_printk("I'm a comment!\n")
2576
2577will produce::
2578
2579   1)               |             __might_sleep() {
2580   1)               |                /* I'm a comment! */
2581   1)   1.449 us    |             }
2582
2583
2584You might find other useful features for this tracer in the
2585following "dynamic ftrace" section such as tracing only specific
2586functions or tasks.
2587
2588dynamic ftrace
2589--------------
2590
If CONFIG_DYNAMIC_FTRACE is set, the system will run with
virtually no overhead when function tracing is disabled. The way
this works is that the mcount function call (placed at the start of
every kernel function, produced by the -pg switch in gcc)
starts off pointing to a simple return. (Enabling FTRACE will
include the -pg switch in the compiling of the kernel.)
2597
At compile time every C file object is run through the
recordmcount program (located in the scripts directory). This
program will parse the ELF headers in the C object to find all
the locations in the .text section that call mcount. Starting
with gcc version 4.6, the -mfentry switch has been added for x86,
which calls "__fentry__" instead of "mcount". The "__fentry__"
call is made before the creation of the stack frame.
2605
Note, not all functions end up being traced. They may be prevented
by a notrace annotation, or blocked in some other way, and inline
functions are never traced. Check the "available_filter_functions"
file to see what functions can be traced.
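
For example, to check whether a particular function can be traced, you
can simply grep for it in this file (schedule is used here just as an
illustration)::

  # grep -w schedule available_filter_functions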
2610
2611A section called "__mcount_loc" is created that holds
2612references to all the mcount/fentry call sites in the .text section.
2613The recordmcount program re-links this section back into the
2614original object. The final linking stage of the kernel will add all these
2615references into a single table.
2616
2617On boot up, before SMP is initialized, the dynamic ftrace code
2618scans this table and updates all the locations into nops. It
2619also records the locations, which are added to the
2620available_filter_functions list.  Modules are processed as they
2621are loaded and before they are executed.  When a module is
2622unloaded, it also removes its functions from the ftrace function
2623list. This is automatic in the module unload code, and the
2624module author does not need to worry about it.
2625
2626When tracing is enabled, the process of modifying the function
2627tracepoints is dependent on architecture. The old method is to use
2628kstop_machine to prevent races with the CPUs executing code being
2629modified (which can cause the CPU to do undesirable things, especially
2630if the modified code crosses cache (or page) boundaries), and the nops are
2631patched back to calls. But this time, they do not call mcount
2632(which is just a function stub). They now call into the ftrace
2633infrastructure.
2634
The new method of modifying the function tracepoints is to place
a breakpoint at the location to be modified, sync all CPUs, modify
the rest of the instruction not covered by the breakpoint. Sync
all CPUs again, and then remove the breakpoint, replacing it with
the finished version of the ftrace call site.
2640
2641Some archs do not even need to monkey around with the synchronization,
2642and can just slap the new code on top of the old without any
2643problems with other CPUs executing it at the same time.
2644
2645One special side-effect to the recording of the functions being
2646traced is that we can now selectively choose which functions we
2647wish to trace and which ones we want the mcount calls to remain
2648as nops.
2649
2650Two files are used, one for enabling and one for disabling the
2651tracing of specified functions. They are:
2652
2653  set_ftrace_filter
2654
2655and
2656
2657  set_ftrace_notrace
2658
2659A list of available functions that you can add to these files is
2660listed in:
2661
2662   available_filter_functions
2663
2664::
2665
2666  # cat available_filter_functions
2667  put_prev_task_idle
2668  kmem_cache_create
2669  pick_next_task_rt
2670  get_online_cpus
2671  pick_next_task_fair
2672  mutex_lock
2673  [...]
2674
2675If I am only interested in sys_nanosleep and hrtimer_interrupt::
2676
2677  # echo sys_nanosleep hrtimer_interrupt > set_ftrace_filter
2678  # echo function > current_tracer
2679  # echo 1 > tracing_on
2680  # usleep 1
2681  # echo 0 > tracing_on
2682  # cat trace
2683  # tracer: function
2684  #
2685  # entries-in-buffer/entries-written: 5/5   #P:4
2686  #
2687  #                              _-----=> irqs-off
2688  #                             / _----=> need-resched
2689  #                            | / _---=> hardirq/softirq
2690  #                            || / _--=> preempt-depth
2691  #                            ||| /     delay
2692  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
2693  #              | |       |   ||||       |         |
2694            usleep-2665  [001] ....  4186.475355: sys_nanosleep <-system_call_fastpath
2695            <idle>-0     [001] d.h1  4186.475409: hrtimer_interrupt <-smp_apic_timer_interrupt
2696            usleep-2665  [001] d.h1  4186.475426: hrtimer_interrupt <-smp_apic_timer_interrupt
2697            <idle>-0     [003] d.h1  4186.475426: hrtimer_interrupt <-smp_apic_timer_interrupt
2698            <idle>-0     [002] d.h1  4186.475427: hrtimer_interrupt <-smp_apic_timer_interrupt
2699
2700To see which functions are being traced, you can cat the file:
2701::
2702
2703  # cat set_ftrace_filter
2704  hrtimer_interrupt
2705  sys_nanosleep
2706
2707
2708Perhaps this is not enough. The filters also allow glob(7) matching.
2709
2710  ``<match>*``
2711	will match functions that begin with <match>
2712  ``*<match>``
2713	will match functions that end with <match>
2714  ``*<match>*``
2715	will match functions that have <match> in it
2716  ``<match1>*<match2>``
2717	will match functions that begin with <match1> and end with <match2>
2718
2719.. note::
2720      It is better to use quotes to enclose the wild cards,
2721      otherwise the shell may expand the parameters into names
2722      of files in the local directory.
2723
2724::
2725
2726  # echo 'hrtimer_*' > set_ftrace_filter
2727
2728Produces::
2729
2730  # tracer: function
2731  #
2732  # entries-in-buffer/entries-written: 897/897   #P:4
2733  #
2734  #                              _-----=> irqs-off
2735  #                             / _----=> need-resched
2736  #                            | / _---=> hardirq/softirq
2737  #                            || / _--=> preempt-depth
2738  #                            ||| /     delay
2739  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
2740  #              | |       |   ||||       |         |
2741            <idle>-0     [003] dN.1  4228.547803: hrtimer_cancel <-tick_nohz_idle_exit
2742            <idle>-0     [003] dN.1  4228.547804: hrtimer_try_to_cancel <-hrtimer_cancel
2743            <idle>-0     [003] dN.2  4228.547805: hrtimer_force_reprogram <-__remove_hrtimer
2744            <idle>-0     [003] dN.1  4228.547805: hrtimer_forward <-tick_nohz_idle_exit
2745            <idle>-0     [003] dN.1  4228.547805: hrtimer_start_range_ns <-hrtimer_start_expires.constprop.11
2746            <idle>-0     [003] d..1  4228.547858: hrtimer_get_next_event <-get_next_timer_interrupt
2747            <idle>-0     [003] d..1  4228.547859: hrtimer_start <-__tick_nohz_idle_enter
2748            <idle>-0     [003] d..2  4228.547860: hrtimer_force_reprogram <-__rem
2749
2750Notice that we lost the sys_nanosleep.
2751::
2752
2753  # cat set_ftrace_filter
2754  hrtimer_run_queues
2755  hrtimer_run_pending
2756  hrtimer_init
2757  hrtimer_cancel
2758  hrtimer_try_to_cancel
2759  hrtimer_forward
2760  hrtimer_start
2761  hrtimer_reprogram
2762  hrtimer_force_reprogram
2763  hrtimer_get_next_event
2764  hrtimer_interrupt
2765  hrtimer_nanosleep
2766  hrtimer_wakeup
2767  hrtimer_get_remaining
2768  hrtimer_get_res
2769  hrtimer_init_sleeper
2770
2771
2772This is because the '>' and '>>' act just like they do in bash.
To rewrite the filters, use '>'.
To append to the filters, use '>>'.
2775
2776To clear out a filter so that all functions will be recorded
2777again::
2778
2779 # echo > set_ftrace_filter
2780 # cat set_ftrace_filter
2781 #
2782
2783Again, now we want to append.
2784
2785::
2786
2787  # echo sys_nanosleep > set_ftrace_filter
2788  # cat set_ftrace_filter
2789  sys_nanosleep
2790  # echo 'hrtimer_*' >> set_ftrace_filter
2791  # cat set_ftrace_filter
2792  hrtimer_run_queues
2793  hrtimer_run_pending
2794  hrtimer_init
2795  hrtimer_cancel
2796  hrtimer_try_to_cancel
2797  hrtimer_forward
2798  hrtimer_start
2799  hrtimer_reprogram
2800  hrtimer_force_reprogram
2801  hrtimer_get_next_event
2802  hrtimer_interrupt
2803  sys_nanosleep
2804  hrtimer_nanosleep
2805  hrtimer_wakeup
2806  hrtimer_get_remaining
2807  hrtimer_get_res
2808  hrtimer_init_sleeper
2809
2810
The set_ftrace_notrace file prevents the functions listed in it
from being traced.
2813::
2814
2815  # echo '*preempt*' '*lock*' > set_ftrace_notrace
2816
2817Produces::
2818
2819  # tracer: function
2820  #
2821  # entries-in-buffer/entries-written: 39608/39608   #P:4
2822  #
2823  #                              _-----=> irqs-off
2824  #                             / _----=> need-resched
2825  #                            | / _---=> hardirq/softirq
2826  #                            || / _--=> preempt-depth
2827  #                            ||| /     delay
2828  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
2829  #              | |       |   ||||       |         |
2830              bash-1994  [000] ....  4342.324896: file_ra_state_init <-do_dentry_open
2831              bash-1994  [000] ....  4342.324897: open_check_o_direct <-do_last
2832              bash-1994  [000] ....  4342.324897: ima_file_check <-do_last
2833              bash-1994  [000] ....  4342.324898: process_measurement <-ima_file_check
2834              bash-1994  [000] ....  4342.324898: ima_get_action <-process_measurement
2835              bash-1994  [000] ....  4342.324898: ima_match_policy <-ima_get_action
2836              bash-1994  [000] ....  4342.324899: do_truncate <-do_last
2837              bash-1994  [000] ....  4342.324899: should_remove_suid <-do_truncate
2838              bash-1994  [000] ....  4342.324899: notify_change <-do_truncate
2839              bash-1994  [000] ....  4342.324900: current_fs_time <-notify_change
2840              bash-1994  [000] ....  4342.324900: current_kernel_time <-current_fs_time
2841              bash-1994  [000] ....  4342.324900: timespec_trunc <-current_fs_time
2842
2843We can see that there's no more lock or preempt tracing.
2844
2845Selecting function filters via index
2846------------------------------------
2847
2848Because processing of strings is expensive (the address of the function
2849needs to be looked up before comparing to the string being passed in),
2850an index can be used as well to enable functions. This is useful in the
2851case of setting thousands of specific functions at a time. By passing
2852in a list of numbers, no string processing will occur. Instead, the function
2853at the specific location in the internal array (which corresponds to the
2854functions in the "available_filter_functions" file), is selected.
2855
2856::
2857
2858  # echo 1 > set_ftrace_filter
2859
2860Will select the first function listed in "available_filter_functions"
2861
2862::
2863
2864  # head -1 available_filter_functions
2865  trace_initcall_finish_cb
2866
2867  # cat set_ftrace_filter
2868  trace_initcall_finish_cb
2869
2870  # head -50 available_filter_functions | tail -1
2871  x86_pmu_commit_txn
2872
2873  # echo 1 50 > set_ftrace_filter
2874  # cat set_ftrace_filter
2875  trace_initcall_finish_cb
2876  x86_pmu_commit_txn
2877
2878Dynamic ftrace with the function graph tracer
2879---------------------------------------------
2880
2881Although what has been explained above concerns both the
2882function tracer and the function-graph-tracer, there are some
2883special features only available in the function-graph tracer.
2884
2885If you want to trace only one function and all of its children,
2886you just have to echo its name into set_graph_function::
2887
2888 echo __do_fault > set_graph_function
2889
2890will produce the following "expanded" trace of the __do_fault()
2891function::
2892
2893   0)               |  __do_fault() {
2894   0)               |    filemap_fault() {
2895   0)               |      find_lock_page() {
2896   0)   0.804 us    |        find_get_page();
2897   0)               |        __might_sleep() {
2898   0)   1.329 us    |        }
2899   0)   3.904 us    |      }
2900   0)   4.979 us    |    }
2901   0)   0.653 us    |    _spin_lock();
2902   0)   0.578 us    |    page_add_file_rmap();
2903   0)   0.525 us    |    native_set_pte_at();
2904   0)   0.585 us    |    _spin_unlock();
2905   0)               |    unlock_page() {
2906   0)   0.541 us    |      page_waitqueue();
2907   0)   0.639 us    |      __wake_up_bit();
2908   0)   2.786 us    |    }
2909   0) + 14.237 us   |  }
2910   0)               |  __do_fault() {
2911   0)               |    filemap_fault() {
2912   0)               |      find_lock_page() {
2913   0)   0.698 us    |        find_get_page();
2914   0)               |        __might_sleep() {
2915   0)   1.412 us    |        }
2916   0)   3.950 us    |      }
2917   0)   5.098 us    |    }
2918   0)   0.631 us    |    _spin_lock();
2919   0)   0.571 us    |    page_add_file_rmap();
2920   0)   0.526 us    |    native_set_pte_at();
2921   0)   0.586 us    |    _spin_unlock();
2922   0)               |    unlock_page() {
2923   0)   0.533 us    |      page_waitqueue();
2924   0)   0.638 us    |      __wake_up_bit();
2925   0)   2.793 us    |    }
2926   0) + 14.012 us   |  }
2927
2928You can also expand several functions at once::
2929
2930 echo sys_open > set_graph_function
2931 echo sys_close >> set_graph_function
2932
2933Now if you want to go back to trace all functions you can clear
2934this special filter via::
2935
2936 echo > set_graph_function
2937
2938
2939ftrace_enabled
2940--------------
2941
2942Note, the proc sysctl ftrace_enable is a big on/off switch for the
2943function tracer. By default it is enabled (when function tracing is
2944enabled in the kernel). If it is disabled, all function tracing is
2945disabled. This includes not only the function tracers for ftrace, but
2946also for any other uses (perf, kprobes, stack tracing, profiling, etc).
2947
2948Please disable this with care.
2949
2950This can be disable (and enabled) with::
2951
2952  sysctl kernel.ftrace_enabled=0
2953  sysctl kernel.ftrace_enabled=1
2954
2955 or
2956
2957  echo 0 > /proc/sys/kernel/ftrace_enabled
2958  echo 1 > /proc/sys/kernel/ftrace_enabled
2959
2960
2961Filter commands
2962---------------
2963
2964A few commands are supported by the set_ftrace_filter interface.
2965Trace commands have the following format::
2966
2967  <function>:<command>:<parameter>
2968
2969The following commands are supported:
2970
2971- mod:
2972  This command enables function filtering per module. The
2973  parameter defines the module. For example, if only the write*
2974  functions in the ext3 module are desired, run:
2975
2976   echo 'write*:mod:ext3' > set_ftrace_filter
2977
2978  This command interacts with the filter in the same way as
2979  filtering based on function names. Thus, adding more functions
2980  in a different module is accomplished by appending (>>) to the
2981  filter file. Remove specific module functions by prepending
2982  '!'::
2983
2984   echo '!writeback*:mod:ext3' >> set_ftrace_filter
2985
2986  Mod command supports module globbing. Disable tracing for all
2987  functions except a specific module::
2988
2989   echo '!*:mod:!ext3' >> set_ftrace_filter
2990
2991  Disable tracing for all modules, but still trace kernel::
2992
2993   echo '!*:mod:*' >> set_ftrace_filter
2994
2995  Enable filter only for kernel::
2996
2997   echo '*write*:mod:!*' >> set_ftrace_filter
2998
2999  Enable filter for module globbing::
3000
3001   echo '*write*:mod:*snd*' >> set_ftrace_filter
3002
3003- traceon/traceoff:
3004  These commands turn tracing on and off when the specified
3005  functions are hit. The parameter determines how many times the
3006  tracing system is turned on and off. If unspecified, there is
3007  no limit. For example, to disable tracing when a schedule bug
3008  is hit the first 5 times, run::
3009
3010   echo '__schedule_bug:traceoff:5' > set_ftrace_filter
3011
3012  To always disable tracing when __schedule_bug is hit::
3013
3014   echo '__schedule_bug:traceoff' > set_ftrace_filter
3015
3016  These commands are cumulative whether or not they are appended
3017  to set_ftrace_filter. To remove a command, prepend it by '!'
3018  and drop the parameter::
3019
3020   echo '!__schedule_bug:traceoff:0' > set_ftrace_filter
3021
3022  The above removes the traceoff command for __schedule_bug
3023  that have a counter. To remove commands without counters::
3024
3025   echo '!__schedule_bug:traceoff' > set_ftrace_filter
3026
3027- snapshot:
3028  Will cause a snapshot to be triggered when the function is hit.
3029  ::
3030
3031   echo 'native_flush_tlb_others:snapshot' > set_ftrace_filter
3032
3033  To only snapshot once:
3034  ::
3035
3036   echo 'native_flush_tlb_others:snapshot:1' > set_ftrace_filter
3037
3038  To remove the above commands::
3039
3040   echo '!native_flush_tlb_others:snapshot' > set_ftrace_filter
3041   echo '!native_flush_tlb_others:snapshot:0' > set_ftrace_filter
3042
3043- enable_event/disable_event:
3044  These commands can enable or disable a trace event. Note, because
3045  function tracing callbacks are very sensitive, when these commands
3046  are registered, the trace point is activated, but disabled in
3047  a "soft" mode. That is, the tracepoint will be called, but
3048  just will not be traced. The event tracepoint stays in this mode
3049  as long as there's a command that triggers it.
3050  ::
3051
3052   echo 'try_to_wake_up:enable_event:sched:sched_switch:2' > \
3053   	 set_ftrace_filter
3054
3055  The format is::
3056
3057    <function>:enable_event:<system>:<event>[:count]
3058    <function>:disable_event:<system>:<event>[:count]
3059
3060  To remove the events commands::
3061
3062   echo '!try_to_wake_up:enable_event:sched:sched_switch:0' > \
3063   	 set_ftrace_filter
3064   echo '!schedule:disable_event:sched:sched_switch' > \
3065   	 set_ftrace_filter
3066
3067- dump:
3068  When the function is hit, it will dump the contents of the ftrace
3069  ring buffer to the console. This is useful if you need to debug
3070  something, and want to dump the trace when a certain function
3071  is hit. Perhaps it's a function that is called before a triple
3072  fault happens and does not allow you to get a regular dump.
3073
3074- cpudump:
3075  When the function is hit, it will dump the contents of the ftrace
3076  ring buffer for the current CPU to the console. Unlike the "dump"
3077  command, it only prints out the contents of the ring buffer for the
3078  CPU that executed the function that triggered the dump.
3079
3080- stacktrace:
3081  When the function is hit, a stack trace is recorded.
3082
3083trace_pipe
3084----------
3085
3086The trace_pipe outputs the same content as the trace file, but
3087the effect on the tracing is different. Every read from
3088trace_pipe is consumed. This means that subsequent reads will be
3089different. The trace is live.
3090::
3091
3092  # echo function > current_tracer
3093  # cat trace_pipe > /tmp/trace.out &
3094  [1] 4153
3095  # echo 1 > tracing_on
3096  # usleep 1
3097  # echo 0 > tracing_on
3098  # cat trace
3099  # tracer: function
3100  #
3101  # entries-in-buffer/entries-written: 0/0   #P:4
3102  #
3103  #                              _-----=> irqs-off
3104  #                             / _----=> need-resched
3105  #                            | / _---=> hardirq/softirq
3106  #                            || / _--=> preempt-depth
3107  #                            ||| /     delay
3108  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3109  #              | |       |   ||||       |         |
3110
3111  #
3112  # cat /tmp/trace.out
3113             bash-1994  [000] ....  5281.568961: mutex_unlock <-rb_simple_write
3114             bash-1994  [000] ....  5281.568963: __mutex_unlock_slowpath <-mutex_unlock
3115             bash-1994  [000] ....  5281.568963: __fsnotify_parent <-fsnotify_modify
3116             bash-1994  [000] ....  5281.568964: fsnotify <-fsnotify_modify
3117             bash-1994  [000] ....  5281.568964: __srcu_read_lock <-fsnotify
3118             bash-1994  [000] ....  5281.568964: add_preempt_count <-__srcu_read_lock
3119             bash-1994  [000] ...1  5281.568965: sub_preempt_count <-__srcu_read_lock
3120             bash-1994  [000] ....  5281.568965: __srcu_read_unlock <-fsnotify
3121             bash-1994  [000] ....  5281.568967: sys_dup2 <-system_call_fastpath
3122
3123
3124Note, reading the trace_pipe file will block until more input is
3125added.
3126
3127trace entries
3128-------------
3129
3130Having too much or not enough data can be troublesome in
3131diagnosing an issue in the kernel. The file buffer_size_kb is
3132used to modify the size of the internal trace buffers. The
3133number listed is the number of entries that can be recorded per
3134CPU. To know the full size, multiply the number of possible CPUs
3135with the number of entries.
3136::
3137
3138  # cat buffer_size_kb
3139  1408 (units kilobytes)
3140
3141Or simply read buffer_total_size_kb
3142::
3143
3144  # cat buffer_total_size_kb
3145  5632
3146
3147To modify the buffer, simple echo in a number (in 1024 byte segments).
3148::
3149
3150  # echo 10000 > buffer_size_kb
3151  # cat buffer_size_kb
3152  10000 (units kilobytes)
3153
3154It will try to allocate as much as possible. If you allocate too
3155much, it can cause Out-Of-Memory to trigger.
3156::
3157
3158  # echo 1000000000000 > buffer_size_kb
3159  -bash: echo: write error: Cannot allocate memory
3160  # cat buffer_size_kb
3161  85
3162
3163The per_cpu buffers can be changed individually as well:
3164::
3165
3166  # echo 10000 > per_cpu/cpu0/buffer_size_kb
3167  # echo 100 > per_cpu/cpu1/buffer_size_kb
3168
3169When the per_cpu buffers are not the same, the buffer_size_kb
3170at the top level will just show an X
3171::
3172
3173  # cat buffer_size_kb
3174  X
3175
3176This is where the buffer_total_size_kb is useful:
3177::
3178
3179  # cat buffer_total_size_kb
3180  12916
3181
3182Writing to the top level buffer_size_kb will reset all the buffers
3183to be the same again.
3184
3185Snapshot
3186--------
3187CONFIG_TRACER_SNAPSHOT makes a generic snapshot feature
3188available to all non latency tracers. (Latency tracers which
3189record max latency, such as "irqsoff" or "wakeup", can't use
3190this feature, since those are already using the snapshot
3191mechanism internally.)
3192
3193Snapshot preserves a current trace buffer at a particular point
3194in time without stopping tracing. Ftrace swaps the current
3195buffer with a spare buffer, and tracing continues in the new
3196current (=previous spare) buffer.
3197
3198The following tracefs files in "tracing" are related to this
3199feature:
3200
3201  snapshot:
3202
3203	This is used to take a snapshot and to read the output
3204	of the snapshot. Echo 1 into this file to allocate a
3205	spare buffer and to take a snapshot (swap), then read
3206	the snapshot from this file in the same format as
3207	"trace" (described above in the section "The File
3208	System"). Both reads snapshot and tracing are executable
3209	in parallel. When the spare buffer is allocated, echoing
3210	0 frees it, and echoing else (positive) values clear the
3211	snapshot contents.
3212	More details are shown in the table below.
3213
3214	+--------------+------------+------------+------------+
3215	|status\\input |     0      |     1      |    else    |
3216	+==============+============+============+============+
3217	|not allocated |(do nothing)| alloc+swap |(do nothing)|
3218	+--------------+------------+------------+------------+
3219	|allocated     |    free    |    swap    |   clear    |
3220	+--------------+------------+------------+------------+
3221
3222Here is an example of using the snapshot feature.
3223::
3224
3225  # echo 1 > events/sched/enable
3226  # echo 1 > snapshot
3227  # cat snapshot
3228  # tracer: nop
3229  #
3230  # entries-in-buffer/entries-written: 71/71   #P:8
3231  #
3232  #                              _-----=> irqs-off
3233  #                             / _----=> need-resched
3234  #                            | / _---=> hardirq/softirq
3235  #                            || / _--=> preempt-depth
3236  #                            ||| /     delay
3237  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3238  #              | |       |   ||||       |         |
3239            <idle>-0     [005] d...  2440.603828: sched_switch: prev_comm=swapper/5 prev_pid=0 prev_prio=120   prev_state=R ==> next_comm=snapshot-test-2 next_pid=2242 next_prio=120
3240             sleep-2242  [005] d...  2440.603846: sched_switch: prev_comm=snapshot-test-2 prev_pid=2242 prev_prio=120   prev_state=R ==> next_comm=kworker/5:1 next_pid=60 next_prio=120
3241  [...]
3242          <idle>-0     [002] d...  2440.707230: sched_switch: prev_comm=swapper/2 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2229 next_prio=120
3243
3244  # cat trace
3245  # tracer: nop
3246  #
3247  # entries-in-buffer/entries-written: 77/77   #P:8
3248  #
3249  #                              _-----=> irqs-off
3250  #                             / _----=> need-resched
3251  #                            | / _---=> hardirq/softirq
3252  #                            || / _--=> preempt-depth
3253  #                            ||| /     delay
3254  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3255  #              | |       |   ||||       |         |
3256            <idle>-0     [007] d...  2440.707395: sched_switch: prev_comm=swapper/7 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2243 next_prio=120
3257   snapshot-test-2-2229  [002] d...  2440.707438: sched_switch: prev_comm=snapshot-test-2 prev_pid=2229 prev_prio=120 prev_state=S ==> next_comm=swapper/2 next_pid=0 next_prio=120
3258  [...]
3259
3260
3261If you try to use this snapshot feature when current tracer is
3262one of the latency tracers, you will get the following results.
3263::
3264
3265  # echo wakeup > current_tracer
3266  # echo 1 > snapshot
3267  bash: echo: write error: Device or resource busy
3268  # cat snapshot
3269  cat: snapshot: Device or resource busy
3270
3271
3272Instances
3273---------
3274In the tracefs tracing directory is a directory called "instances".
3275This directory can have new directories created inside of it using
3276mkdir, and removing directories with rmdir. The directory created
3277with mkdir in this directory will already contain files and other
3278directories after it is created.
3279::
3280
3281  # mkdir instances/foo
3282  # ls instances/foo
3283  buffer_size_kb  buffer_total_size_kb  events  free_buffer  per_cpu
3284  set_event  snapshot  trace  trace_clock  trace_marker  trace_options
3285  trace_pipe  tracing_on
3286
3287As you can see, the new directory looks similar to the tracing directory
3288itself. In fact, it is very similar, except that the buffer and
3289events are agnostic from the main director, or from any other
3290instances that are created.
3291
3292The files in the new directory work just like the files with the
3293same name in the tracing directory except the buffer that is used
3294is a separate and new buffer. The files affect that buffer but do not
3295affect the main buffer with the exception of trace_options. Currently,
3296the trace_options affect all instances and the top level buffer
3297the same, but this may change in future releases. That is, options
3298may become specific to the instance they reside in.
3299
3300Notice that none of the function tracer files are there, nor is
3301current_tracer and available_tracers. This is because the buffers
3302can currently only have events enabled for them.
3303::
3304
3305  # mkdir instances/foo
3306  # mkdir instances/bar
3307  # mkdir instances/zoot
3308  # echo 100000 > buffer_size_kb
3309  # echo 1000 > instances/foo/buffer_size_kb
3310  # echo 5000 > instances/bar/per_cpu/cpu1/buffer_size_kb
3311  # echo function > current_trace
3312  # echo 1 > instances/foo/events/sched/sched_wakeup/enable
3313  # echo 1 > instances/foo/events/sched/sched_wakeup_new/enable
3314  # echo 1 > instances/foo/events/sched/sched_switch/enable
3315  # echo 1 > instances/bar/events/irq/enable
3316  # echo 1 > instances/zoot/events/syscalls/enable
3317  # cat trace_pipe
3318  CPU:2 [LOST 11745 EVENTS]
3319              bash-2044  [002] .... 10594.481032: _raw_spin_lock_irqsave <-get_page_from_freelist
3320              bash-2044  [002] d... 10594.481032: add_preempt_count <-_raw_spin_lock_irqsave
3321              bash-2044  [002] d..1 10594.481032: __rmqueue <-get_page_from_freelist
3322              bash-2044  [002] d..1 10594.481033: _raw_spin_unlock <-get_page_from_freelist
3323              bash-2044  [002] d..1 10594.481033: sub_preempt_count <-_raw_spin_unlock
3324              bash-2044  [002] d... 10594.481033: get_pageblock_flags_group <-get_pageblock_migratetype
3325              bash-2044  [002] d... 10594.481034: __mod_zone_page_state <-get_page_from_freelist
3326              bash-2044  [002] d... 10594.481034: zone_statistics <-get_page_from_freelist
3327              bash-2044  [002] d... 10594.481034: __inc_zone_state <-zone_statistics
3328              bash-2044  [002] d... 10594.481034: __inc_zone_state <-zone_statistics
3329              bash-2044  [002] .... 10594.481035: arch_dup_task_struct <-copy_process
3330  [...]
3331
3332  # cat instances/foo/trace_pipe
3333              bash-1998  [000] d..4   136.676759: sched_wakeup: comm=kworker/0:1 pid=59 prio=120 success=1 target_cpu=000
3334              bash-1998  [000] dN.4   136.676760: sched_wakeup: comm=bash pid=1998 prio=120 success=1 target_cpu=000
3335            <idle>-0     [003] d.h3   136.676906: sched_wakeup: comm=rcu_preempt pid=9 prio=120 success=1 target_cpu=003
3336            <idle>-0     [003] d..3   136.676909: sched_switch: prev_comm=swapper/3 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_preempt next_pid=9 next_prio=120
3337       rcu_preempt-9     [003] d..3   136.676916: sched_switch: prev_comm=rcu_preempt prev_pid=9 prev_prio=120 prev_state=S ==> next_comm=swapper/3 next_pid=0 next_prio=120
3338              bash-1998  [000] d..4   136.677014: sched_wakeup: comm=kworker/0:1 pid=59 prio=120 success=1 target_cpu=000
3339              bash-1998  [000] dN.4   136.677016: sched_wakeup: comm=bash pid=1998 prio=120 success=1 target_cpu=000
3340              bash-1998  [000] d..3   136.677018: sched_switch: prev_comm=bash prev_pid=1998 prev_prio=120 prev_state=R+ ==> next_comm=kworker/0:1 next_pid=59 next_prio=120
3341       kworker/0:1-59    [000] d..4   136.677022: sched_wakeup: comm=sshd pid=1995 prio=120 success=1 target_cpu=001
3342       kworker/0:1-59    [000] d..3   136.677025: sched_switch: prev_comm=kworker/0:1 prev_pid=59 prev_prio=120 prev_state=S ==> next_comm=bash next_pid=1998 next_prio=120
3343  [...]
3344
3345  # cat instances/bar/trace_pipe
3346       migration/1-14    [001] d.h3   138.732674: softirq_raise: vec=3 [action=NET_RX]
3347            <idle>-0     [001] dNh3   138.732725: softirq_raise: vec=3 [action=NET_RX]
3348              bash-1998  [000] d.h1   138.733101: softirq_raise: vec=1 [action=TIMER]
3349              bash-1998  [000] d.h1   138.733102: softirq_raise: vec=9 [action=RCU]
3350              bash-1998  [000] ..s2   138.733105: softirq_entry: vec=1 [action=TIMER]
3351              bash-1998  [000] ..s2   138.733106: softirq_exit: vec=1 [action=TIMER]
3352              bash-1998  [000] ..s2   138.733106: softirq_entry: vec=9 [action=RCU]
3353              bash-1998  [000] ..s2   138.733109: softirq_exit: vec=9 [action=RCU]
3354              sshd-1995  [001] d.h1   138.733278: irq_handler_entry: irq=21 name=uhci_hcd:usb4
3355              sshd-1995  [001] d.h1   138.733280: irq_handler_exit: irq=21 ret=unhandled
3356              sshd-1995  [001] d.h1   138.733281: irq_handler_entry: irq=21 name=eth0
3357              sshd-1995  [001] d.h1   138.733283: irq_handler_exit: irq=21 ret=handled
3358  [...]
3359
3360  # cat instances/zoot/trace
3361  # tracer: nop
3362  #
3363  # entries-in-buffer/entries-written: 18996/18996   #P:4
3364  #
3365  #                              _-----=> irqs-off
3366  #                             / _----=> need-resched
3367  #                            | / _---=> hardirq/softirq
3368  #                            || / _--=> preempt-depth
3369  #                            ||| /     delay
3370  #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
3371  #              | |       |   ||||       |         |
3372              bash-1998  [000] d...   140.733501: sys_write -> 0x2
3373              bash-1998  [000] d...   140.733504: sys_dup2(oldfd: a, newfd: 1)
3374              bash-1998  [000] d...   140.733506: sys_dup2 -> 0x1
3375              bash-1998  [000] d...   140.733508: sys_fcntl(fd: a, cmd: 1, arg: 0)
3376              bash-1998  [000] d...   140.733509: sys_fcntl -> 0x1
3377              bash-1998  [000] d...   140.733510: sys_close(fd: a)
3378              bash-1998  [000] d...   140.733510: sys_close -> 0x0
3379              bash-1998  [000] d...   140.733514: sys_rt_sigprocmask(how: 0, nset: 0, oset: 6e2768, sigsetsize: 8)
3380              bash-1998  [000] d...   140.733515: sys_rt_sigprocmask -> 0x0
3381              bash-1998  [000] d...   140.733516: sys_rt_sigaction(sig: 2, act: 7fff718846f0, oact: 7fff71884650, sigsetsize: 8)
3382              bash-1998  [000] d...   140.733516: sys_rt_sigaction -> 0x0
3383
3384You can see that the trace of the top most trace buffer shows only
3385the function tracing. The foo instance displays wakeups and task
3386switches.
3387
3388To remove the instances, simply delete their directories:
3389::
3390
3391  # rmdir instances/foo
3392  # rmdir instances/bar
3393  # rmdir instances/zoot
3394
3395Note, if a process has a trace file open in one of the instance
3396directories, the rmdir will fail with EBUSY.
3397
3398
3399Stack trace
3400-----------
3401Since the kernel has a fixed sized stack, it is important not to
3402waste it in functions. A kernel developer must be conscience of
3403what they allocate on the stack. If they add too much, the system
3404can be in danger of a stack overflow, and corruption will occur,
3405usually leading to a system panic.
3406
3407There are some tools that check this, usually with interrupts
3408periodically checking usage. But if you can perform a check
3409at every function call that will become very useful. As ftrace provides
3410a function tracer, it makes it convenient to check the stack size
3411at every function call. This is enabled via the stack tracer.
3412
3413CONFIG_STACK_TRACER enables the ftrace stack tracing functionality.
3414To enable it, write a '1' into /proc/sys/kernel/stack_tracer_enabled.
3415::
3416
3417 # echo 1 > /proc/sys/kernel/stack_tracer_enabled
3418
3419You can also enable it from the kernel command line to trace
3420the stack size of the kernel during boot up, by adding "stacktrace"
3421to the kernel command line parameter.
3422
3423After running it for a few minutes, the output looks like:
3424::
3425
3426  # cat stack_max_size
3427  2928
3428
3429  # cat stack_trace
3430          Depth    Size   Location    (18 entries)
3431          -----    ----   --------
3432    0)     2928     224   update_sd_lb_stats+0xbc/0x4ac
3433    1)     2704     160   find_busiest_group+0x31/0x1f1
3434    2)     2544     256   load_balance+0xd9/0x662
3435    3)     2288      80   idle_balance+0xbb/0x130
3436    4)     2208     128   __schedule+0x26e/0x5b9
3437    5)     2080      16   schedule+0x64/0x66
3438    6)     2064     128   schedule_timeout+0x34/0xe0
3439    7)     1936     112   wait_for_common+0x97/0xf1
3440    8)     1824      16   wait_for_completion+0x1d/0x1f
3441    9)     1808     128   flush_work+0xfe/0x119
3442   10)     1680      16   tty_flush_to_ldisc+0x1e/0x20
3443   11)     1664      48   input_available_p+0x1d/0x5c
3444   12)     1616      48   n_tty_poll+0x6d/0x134
3445   13)     1568      64   tty_poll+0x64/0x7f
3446   14)     1504     880   do_select+0x31e/0x511
3447   15)      624     400   core_sys_select+0x177/0x216
3448   16)      224      96   sys_select+0x91/0xb9
3449   17)      128     128   system_call_fastpath+0x16/0x1b
3450
3451Note, if -mfentry is being used by gcc, functions get traced before
3452they set up the stack frame. This means that leaf level functions
3453are not tested by the stack tracer when -mfentry is used.
3454
3455Currently, -mfentry is used by gcc 4.6.0 and above on x86 only.
3456
3457More
3458----
3459More details can be found in the source code, in the `kernel/trace/*.c` files.
3460