The Kernel Address Sanitizer (KASAN)
====================================

Overview
--------

KernelAddressSANitizer (KASAN) is a dynamic memory error detector designed to
find out-of-bounds and use-after-free bugs. KASAN has two modes: generic KASAN
(similar to userspace ASan) and software tag-based KASAN (similar to userspace
HWASan).

KASAN uses compile-time instrumentation to insert validity checks before every
memory access, and therefore requires a compiler version that supports that.

Generic KASAN is supported in both GCC and Clang. With GCC it requires version
8.3.0 or later. With Clang it requires version 7.0.0 or later, but detection of
out-of-bounds accesses for global variables is only supported since Clang 11.

Tag-based KASAN is only supported in Clang and requires version 7.0.0 or later.

Currently generic KASAN is supported for the x86_64, arm64, xtensa, s390 and
riscv architectures, and tag-based KASAN is supported only for arm64.

Usage
-----

To enable KASAN, configure the kernel with::

    CONFIG_KASAN=y

and choose between CONFIG_KASAN_GENERIC (to enable generic KASAN) and
CONFIG_KASAN_SW_TAGS (to enable software tag-based KASAN).

You also need to choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE.
Outline and inline are compiler instrumentation types. The former produces a
smaller binary, while the latter is 1.1-2 times faster.

Both KASAN modes work with both SLUB and SLAB memory allocators.
For better bug detection and nicer reporting, enable CONFIG_STACKTRACE.

To augment reports with the last allocation and freeing stack traces of the
physical page, it is recommended to also enable CONFIG_PAGE_OWNER and boot
with page_owner=on.
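
For example, a plausible .config fragment enabling generic KASAN with inline
instrumentation, together with the options recommended above (shown only as a
starting point; adjust to your needs), is::

    CONFIG_KASAN=y
    CONFIG_KASAN_GENERIC=y
    CONFIG_KASAN_INLINE=y
    CONFIG_STACKTRACE=y
    CONFIG_PAGE_OWNER=y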

To disable instrumentation for specific files or directories, add a line
similar to the following to the respective kernel Makefile:

- For a single file (e.g. main.o)::

    KASAN_SANITIZE_main.o := n

- For all files in one directory::

    KASAN_SANITIZE := n
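
Instrumentation can also be suppressed for a single function with the
``__no_sanitize_address`` attribute (defined in the include/linux/compiler*
headers); a minimal sketch, with a hypothetical function name::

    /* KASAN does not instrument memory accesses in this function. */
    static noinline void __no_sanitize_address not_checked_fn(void)
    {
    }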

Error reports
~~~~~~~~~~~~~

A typical generic KASAN report for an out-of-bounds access looks like this::

    ==================================================================
    BUG: KASAN: slab-out-of-bounds in kmalloc_oob_right+0xa8/0xbc [test_kasan]
    Write of size 1 at addr ffff8801f44ec37b by task insmod/2760

    CPU: 1 PID: 2760 Comm: insmod Not tainted 4.19.0-rc3+ #698
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
    Call Trace:
     dump_stack+0x94/0xd8
     print_address_description+0x73/0x280
     kasan_report+0x144/0x187
     __asan_report_store1_noabort+0x17/0x20
     kmalloc_oob_right+0xa8/0xbc [test_kasan]
     kmalloc_tests_init+0x16/0x700 [test_kasan]
     do_one_initcall+0xa5/0x3ae
     do_init_module+0x1b6/0x547
     load_module+0x75df/0x8070
     __do_sys_init_module+0x1c6/0x200
     __x64_sys_init_module+0x6e/0xb0
     do_syscall_64+0x9f/0x2c0
     entry_SYSCALL_64_after_hwframe+0x44/0xa9
    RIP: 0033:0x7f96443109da
    RSP: 002b:00007ffcf0b51b08 EFLAGS: 00000202 ORIG_RAX: 00000000000000af
    RAX: ffffffffffffffda RBX: 000055dc3ee521a0 RCX: 00007f96443109da
    RDX: 00007f96445cff88 RSI: 0000000000057a50 RDI: 00007f9644992000
    RBP: 000055dc3ee510b0 R08: 0000000000000003 R09: 0000000000000000
    R10: 00007f964430cd0a R11: 0000000000000202 R12: 00007f96445cff88
    R13: 000055dc3ee51090 R14: 0000000000000000 R15: 0000000000000000

    Allocated by task 2760:
     save_stack+0x43/0xd0
     kasan_kmalloc+0xa7/0xd0
     kmem_cache_alloc_trace+0xe1/0x1b0
     kmalloc_oob_right+0x56/0xbc [test_kasan]
     kmalloc_tests_init+0x16/0x700 [test_kasan]
     do_one_initcall+0xa5/0x3ae
     do_init_module+0x1b6/0x547
     load_module+0x75df/0x8070
     __do_sys_init_module+0x1c6/0x200
     __x64_sys_init_module+0x6e/0xb0
     do_syscall_64+0x9f/0x2c0
     entry_SYSCALL_64_after_hwframe+0x44/0xa9

    Freed by task 815:
     save_stack+0x43/0xd0
     __kasan_slab_free+0x135/0x190
     kasan_slab_free+0xe/0x10
     kfree+0x93/0x1a0
     umh_complete+0x6a/0xa0
     call_usermodehelper_exec_async+0x4c3/0x640
     ret_from_fork+0x35/0x40

    The buggy address belongs to the object at ffff8801f44ec300
     which belongs to the cache kmalloc-128 of size 128
    The buggy address is located 123 bytes inside of
     128-byte region [ffff8801f44ec300, ffff8801f44ec380)
    The buggy address belongs to the page:
    page:ffffea0007d13b00 count:1 mapcount:0 mapping:ffff8801f7001640 index:0x0
    flags: 0x200000000000100(slab)
    raw: 0200000000000100 ffffea0007d11dc0 0000001a0000001a ffff8801f7001640
    raw: 0000000000000000 0000000080150015 00000001ffffffff 0000000000000000
    page dumped because: kasan: bad access detected

    Memory state around the buggy address:
     ffff8801f44ec200: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
     ffff8801f44ec280: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
    >ffff8801f44ec300: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 03
                                                                    ^
     ffff8801f44ec380: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
     ffff8801f44ec400: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
    ==================================================================

The header of the report provides a short summary of what kind of bug happened
and what kind of access caused it. It is followed by a stack trace of the bad
access, a stack trace of where the accessed memory was allocated (in case the
bad access happened on a slab object), and a stack trace of where the object
was freed (in case of a use-after-free bug report). Next comes a description
of the accessed slab object and information about the accessed memory page.

In the last section the report shows memory state around the accessed address.
Reading this part requires some understanding of how KASAN works.

The state of each aligned 8-byte region of memory is encoded in one shadow
byte. Those 8 bytes can be accessible, partially accessible, freed, or be a
redzone. We use the following encoding for each shadow byte: 0 means that all
8 bytes of the corresponding memory region are accessible; a number N
(1 <= N <= 7) means that the first N bytes are accessible, and the other
(8 - N) bytes are not; any negative value indicates that the entire 8-byte
region is inaccessible. We use different negative values to distinguish
between different kinds of inaccessible memory like redzones or freed memory
(see mm/kasan/kasan.h); in the report above, for example, fc marks slab
redzones and fb marks freed slab memory.

In the report above the arrows point to the shadow byte 03, which means that
the accessed address is partially accessible: only the first 3 bytes of that
8-byte region are valid, so the reported 1-byte write at offset 123 into the
object (byte 3 of the region, since 123 = 15 * 8 + 3) falls just past the 123
accessible bytes.

For tag-based KASAN this last report section shows the memory tags around the
accessed address (see the Implementation details section).


Implementation details
----------------------

Generic KASAN
~~~~~~~~~~~~~

From a high level, our approach to memory error detection is similar to that
of kmemcheck: use shadow memory to record whether each byte of memory is safe
to access, and use compile-time instrumentation to insert checks of shadow
memory on each memory access.

Generic KASAN dedicates 1/8th of kernel memory to its shadow memory (e.g. 16TB
to cover 128TB on x86_64) and uses direct mapping with a scale and offset to
translate a memory address to its corresponding shadow address.

Here is the function which translates an address to its corresponding shadow
address::

    static inline void *kasan_mem_to_shadow(const void *addr)
    {
	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
		+ KASAN_SHADOW_OFFSET;
    }

where ``KASAN_SHADOW_SCALE_SHIFT = 3``.
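
As a worked example, assuming the x86_64 value of ``KASAN_SHADOW_OFFSET``
(0xdffffc0000000000, per CONFIG_KASAN_SHADOW_OFFSET in arch/x86/Kconfig), the
shadow byte for the buggy address in the report above would be located at::

    kasan_mem_to_shadow((void *)0xffff8801f44ec37b)
	== (void *)(0xffff8801f44ec37b >> 3) + 0xdffffc0000000000
	== (void *)0xffffed003e89d86f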

Compile-time instrumentation is used to insert memory access checks. The
compiler inserts function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16. These functions check
whether the memory access is valid by checking the corresponding shadow
memory.
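
As an illustration, the check these callbacks perform for a 1-byte access
boils down to something like the following sketch (closely modeled on the
generic KASAN code in mm/kasan, with ``KASAN_SHADOW_MASK`` equal to 7)::

    static __always_inline bool memory_is_poisoned_1(unsigned long addr)
    {
	s8 shadow_value = *(s8 *)kasan_mem_to_shadow((void *)addr);

	if (unlikely(shadow_value)) {
		/*
		 * A positive value N means that only the first N bytes
		 * of the 8-byte region are accessible; any negative
		 * value means none of them are.
		 */
		s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
		return unlikely(last_accessible_byte >= shadow_value);
	}

	return false;
    }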

GCC 5.0 and later can perform inline instrumentation. Instead of making
function calls, GCC directly inserts the code to check the shadow memory.
This option significantly enlarges the kernel, but it gives an x1.1-x2
performance boost over an outline-instrumented kernel.
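
With inline instrumentation, an 8-byte store conceptually becomes something
like the following (an illustrative sketch, not literal compiler output)::

    s8 *shadow = (s8 *)(((unsigned long)ptr >> 3) + KASAN_SHADOW_OFFSET);

    if (unlikely(*shadow))	/* any non-zero shadow value is invalid here */
	__asan_report_store8_noabort((unsigned long)ptr);
    *ptr = val;			/* the original access */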

Generic KASAN prints up to two call_rcu() call stacks in reports: the last one
and the second to last.

Software tag-based KASAN
~~~~~~~~~~~~~~~~~~~~~~~~

Tag-based KASAN uses the Top Byte Ignore (TBI) feature of modern arm64 CPUs to
store a pointer tag in the top byte of kernel pointers. Like generic KASAN it
uses shadow memory to store memory tags associated with each 16-byte memory
cell (therefore it dedicates 1/16th of the kernel memory for shadow memory).

On each memory allocation tag-based KASAN generates a random tag, tags the
allocated memory with this tag, and embeds this tag into the returned pointer.
Software tag-based KASAN uses compile-time instrumentation to insert checks
before each memory access. These checks make sure that the tag of the memory
that is being accessed is equal to the tag of the pointer that is used to
access this memory. In case of a tag mismatch tag-based KASAN prints a bug
report.
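
A sketch of that check, assuming the TBI pointer layout described above (the
helper name is illustrative, not an actual mm/kasan function)::

    static inline bool tags_match(const void *ptr)
    {
	/* TBI lets the tag live in the top byte of the pointer. */
	u8 ptr_tag = (unsigned long)ptr >> 56;
	/* One shadow byte stores the tag of each 16-byte memory cell. */
	u8 mem_tag = *(u8 *)kasan_mem_to_shadow(ptr);

	return ptr_tag == mem_tag;
    }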

Software tag-based KASAN also has two instrumentation modes (outline, which
emits callbacks to check memory accesses, and inline, which performs the
shadow memory checks inline). With outline instrumentation mode, a bug report
is simply printed from the function that performs the access check. With
inline instrumentation a brk instruction is emitted by the compiler, and a
dedicated brk handler is used to print bug reports.

A potential expansion of this mode is a hardware tag-based mode, which would
use hardware memory tagging support instead of compiler instrumentation and
manual shadow memory manipulation.

What memory accesses are sanitised by KASAN?
--------------------------------------------

The kernel maps memory in a number of different parts of the address
space. This poses something of a problem for KASAN, which requires
that all addresses accessed by instrumented code have a valid shadow
region.

The range of kernel virtual addresses is large: there is not enough
real memory to support a real shadow region for every address that
could be accessed by the kernel.

By default
~~~~~~~~~~

By default, architectures only map real memory over the shadow region
for the linear mapping (and potentially other small areas). For all
other areas - such as vmalloc and vmemmap space - a single read-only
page is mapped over the shadow area. This read-only shadow page
declares all memory accesses as permitted.

This presents a problem for modules: they do not live in the linear
mapping, but in a dedicated module space. By hooking into the module
allocator, KASAN can temporarily map real shadow memory to cover
them. This allows detection of invalid accesses to module globals, for
example.

This also creates an incompatibility with ``VMAP_STACK``: if the stack
lives in vmalloc space, it will be shadowed by the read-only page, and
the kernel will fault when trying to set up the shadow data for stack
variables.

CONFIG_KASAN_VMALLOC
~~~~~~~~~~~~~~~~~~~~

With ``CONFIG_KASAN_VMALLOC``, KASAN can cover vmalloc space at the
cost of greater memory usage. Currently this is only supported on x86.

This works by hooking into vmalloc and vmap, and dynamically
allocating real shadow memory to back the mappings.

Most mappings in vmalloc space are small, requiring less than a full
page of shadow space. Allocating a full shadow page per mapping would
therefore be wasteful. Furthermore, to ensure that different mappings
use different shadow pages, mappings would have to be aligned to
``KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE``.
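
For example, with ``KASAN_SHADOW_SCALE_SIZE = 8`` and 4 KB pages::

    KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE = 8 * 4096 = 32768 bytes (32 KB)

so one shadow page covers 32 KB of vmalloc space, and mappings would need
32 KB alignment for each to get its own shadow page.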

Instead, we share backing space across multiple mappings. We allocate
a backing page when a mapping in vmalloc space uses a particular page
of the shadow region. This page can be shared by other vmalloc
mappings later on.

We hook into the vmap infrastructure to lazily clean up unused shadow
memory.

To avoid the difficulties around swapping mappings around, we expect
that the part of the shadow region that covers the vmalloc space will
not be covered by the early shadow page, but will be left
unmapped. This will require changes in arch-specific code.

This allows ``VMAP_STACK`` support on x86, and can simplify support of
architectures that do not have a fixed module region.