# SPDX-License-Identifier: GPL-2.0
#
# General architecture dependent options
#

#
# Note: arch/$(SRCARCH)/Kconfig needs to be included first so that it can
# override the default values in this file.
#
source "arch/$(SRCARCH)/Kconfig"

menu "General architecture-dependent options"

config CRASH_CORE
	bool

config KEXEC_CORE
	select CRASH_CORE
	bool

config KEXEC_ELF
	bool

config HAVE_IMA_KEXEC
	bool

config ARCH_HAS_SUBPAGE_FAULTS
	bool
	help
	  Select if the architecture can check permissions at sub-page
	  granularity (e.g. arm64 MTE). The probe_user_*() functions
	  must be implemented.

config HOTPLUG_SMT
	bool

config GENERIC_ENTRY
	bool

config KPROBES
	bool "Kprobes"
	depends on MODULES
	depends on HAVE_KPROBES
	select KALLSYMS
	select TASKS_RCU if PREEMPTION
	help
	  Kprobes allows you to trap at almost any kernel address and
	  execute a callback function. register_kprobe() establishes
	  a probepoint and specifies the callback. Kprobes is useful
	  for kernel debugging, non-intrusive instrumentation and testing.
	  If in doubt, say "N".

config JUMP_LABEL
	bool "Optimize very unlikely/likely branches"
	depends on HAVE_ARCH_JUMP_LABEL
	select OBJTOOL if HAVE_JUMP_LABEL_HACK
	help
	  This option enables a transparent branch optimization that
	  makes certain almost-always-true or almost-always-false branch
	  conditions even cheaper to execute within the kernel.

	  Certain performance-sensitive kernel code, such as trace points,
	  scheduler functionality, networking code and KVM have such
	  branches and include support for this optimization technique.

	  If it is detected that the compiler has support for "asm goto",
	  the kernel will compile such branches with just a nop
	  instruction. When the condition flag is toggled to true, the
	  nop will be converted to a jump instruction to execute the
	  conditional block of instructions.
	  This technique lowers overhead and stress on the branch prediction
	  of the processor and generally makes the kernel faster. The update
	  of the condition is slower, but such updates are rare.

	  ( On 32-bit x86, the necessary options added to the compiler
	    flags may increase the size of the kernel slightly. )

config STATIC_KEYS_SELFTEST
	bool "Static key selftest"
	depends on JUMP_LABEL
	help
	  Boot time self-test of the branch patching code.

config STATIC_CALL_SELFTEST
	bool "Static call selftest"
	depends on HAVE_STATIC_CALL
	help
	  Boot time self-test of the call patching code.

config OPTPROBES
	def_bool y
	depends on KPROBES && HAVE_OPTPROBES
	select TASKS_RCU if PREEMPTION

config KPROBES_ON_FTRACE
	def_bool y
	depends on KPROBES && HAVE_KPROBES_ON_FTRACE
	depends on DYNAMIC_FTRACE_WITH_REGS
	help
	  If function tracer is enabled and the arch supports full
	  passing of pt_regs to function tracing, then kprobes can
	  optimize on top of function tracing.

config UPROBES
	def_bool n
	depends on ARCH_SUPPORTS_UPROBES
	help
	  Uprobes is the user-space counterpart to kprobes: they
	  enable instrumentation applications (such as 'perf probe')
	  to establish unintrusive probes in user-space binaries and
	  libraries, by executing handler functions when the probes
	  are hit by user-space applications.

	  ( These probes come in the form of single-byte breakpoints,
	    managed by the kernel and kept transparent to the probed
	    application. )

config HAVE_64BIT_ALIGNED_ACCESS
	def_bool 64BIT && !HAVE_EFFICIENT_UNALIGNED_ACCESS
	help
	  Some architectures require 64 bit accesses to be 64 bit
	  aligned, which also requires structs containing 64 bit values
	  to be 64 bit aligned too. This includes some 32 bit
	  architectures which can do 64 bit accesses, as well as 64 bit
	  architectures without unaligned access.
	  This symbol should be selected by an architecture if 64 bit
	  accesses are required to be 64 bit aligned in this way even
	  though it is not a 64 bit architecture.

	  See Documentation/core-api/unaligned-memory-access.rst for
	  more information on the topic of unaligned memory accesses.

config HAVE_EFFICIENT_UNALIGNED_ACCESS
	bool
	help
	  Some architectures are unable to perform unaligned accesses
	  without the use of get_unaligned/put_unaligned. Others are
	  unable to perform such accesses efficiently (e.g. trap on
	  unaligned access and require fixing it up in the exception
	  handler.)

	  This symbol should be selected by an architecture if it can
	  perform unaligned accesses efficiently to allow different
	  code paths to be selected for these cases. Some network
	  drivers, for example, could opt to not fix up alignment
	  problems with received packets if doing so would not help
	  much.

	  See Documentation/core-api/unaligned-memory-access.rst for more
	  information on the topic of unaligned memory accesses.

config ARCH_USE_BUILTIN_BSWAP
	bool
	help
	  Modern versions of GCC (since 4.4) have builtin functions
	  for handling byte-swapping. Using these, instead of the old
	  inline assembler that the architecture code provides in the
	  __arch_bswapXX() macros, allows the compiler to see what's
	  happening and offers more opportunity for optimisation. In
	  particular, the compiler will be able to combine the byteswap
	  with a nearby load or store and use load-and-swap or
	  store-and-swap instructions if the architecture has them. It
	  should almost *never* result in code which is worse than the
	  hand-coded assembler in <asm/swab.h>. But just in case it
	  does, the use of the builtins is optional.

	  Any architecture with load-and-swap or store-and-swap
	  instructions should set this.
	  And it shouldn't hurt to set it
	  on architectures that don't have such instructions.

config KRETPROBES
	def_bool y
	depends on KPROBES && (HAVE_KRETPROBES || HAVE_RETHOOK)

config KRETPROBE_ON_RETHOOK
	def_bool y
	depends on HAVE_RETHOOK
	depends on KRETPROBES
	select RETHOOK

config USER_RETURN_NOTIFIER
	bool
	depends on HAVE_USER_RETURN_NOTIFIER
	help
	  Provide a kernel-internal notification when a cpu is about to
	  switch to user mode.

config HAVE_IOREMAP_PROT
	bool

config HAVE_KPROBES
	bool

config HAVE_KRETPROBES
	bool

config HAVE_OPTPROBES
	bool

config HAVE_KPROBES_ON_FTRACE
	bool

config ARCH_CORRECT_STACKTRACE_ON_KRETPROBE
	bool
	help
	  Since kretprobes modifies the return address on the stack, the
	  stacktrace may see the kretprobe trampoline address instead
	  of the correct one. If the architecture stacktrace code and
	  unwinder can adjust such entries, select this configuration.
config HAVE_FUNCTION_ERROR_INJECTION
	bool

config HAVE_NMI
	bool

config HAVE_FUNCTION_DESCRIPTORS
	bool

config TRACE_IRQFLAGS_SUPPORT
	bool

config TRACE_IRQFLAGS_NMI_SUPPORT
	bool

#
# An arch should select this if it provides all these things:
#
#	task_pt_regs()			in asm/processor.h or asm/ptrace.h
#	arch_has_single_step()		if there is hardware single-step support
#	arch_has_block_step()		if there is hardware block-step support
#	asm/syscall.h			supplying asm-generic/syscall.h interface
#	linux/regset.h			user_regset interfaces
#	CORE_DUMP_USE_REGSET		#define'd in linux/elf.h
#	TIF_SYSCALL_TRACE		calls ptrace_report_syscall_{entry,exit}
#	TIF_NOTIFY_RESUME		calls resume_user_mode_work()
#
config HAVE_ARCH_TRACEHOOK
	bool

config HAVE_DMA_CONTIGUOUS
	bool

config GENERIC_SMP_IDLE_THREAD
	bool

config GENERIC_IDLE_POLL_SETUP
	bool

config ARCH_HAS_FORTIFY_SOURCE
	bool
	help
	  An architecture should select this when it can successfully
	  build and run with CONFIG_FORTIFY_SOURCE.

#
# Select if the arch provides a historic keepinit alias for the retain_initrd
# command line option
#
config ARCH_HAS_KEEPINITRD
	bool

# Select if arch has all set_memory_ro/rw/x/nx() functions in asm/cacheflush.h
config ARCH_HAS_SET_MEMORY
	bool

# Select if arch has all set_direct_map_invalid/default() functions
config ARCH_HAS_SET_DIRECT_MAP
	bool

#
# Select if the architecture provides the arch_dma_set_uncached symbol to
# either provide an uncached segment alias for a DMA allocation, or
# to remap the page tables in place.
#
config ARCH_HAS_DMA_SET_UNCACHED
	bool

#
# Select if the architecture provides the arch_dma_clear_uncached symbol
# to undo an in-place page table remap for uncached access.
#
config ARCH_HAS_DMA_CLEAR_UNCACHED
	bool

# Select if arch init_task must go in the __init_task_data section
config ARCH_TASK_STRUCT_ON_STACK
	bool

# Select if arch has its private alloc_task_struct() function
config ARCH_TASK_STRUCT_ALLOCATOR
	bool

config HAVE_ARCH_THREAD_STRUCT_WHITELIST
	bool
	depends on !ARCH_TASK_STRUCT_ALLOCATOR
	help
	  An architecture should select this to provide hardened usercopy
	  knowledge about what region of the thread_struct should be
	  whitelisted for copying to userspace. Normally this is only the
	  FPU registers. Specifically, arch_thread_struct_whitelist()
	  should be implemented. Without this, the entire thread_struct
	  field in task_struct will be left whitelisted.

# Select if arch has its private alloc_thread_stack() function
config ARCH_THREAD_STACK_ALLOCATOR
	bool

# Select if arch wants to size task_struct dynamically via arch_task_struct_size:
config ARCH_WANTS_DYNAMIC_TASK_STRUCT
	bool

config ARCH_WANTS_NO_INSTR
	bool
	help
	  An architecture should select this if the noinstr macro is being used on
	  functions to denote that the toolchain should avoid instrumenting such
	  functions and is required for correctness.

config ARCH_32BIT_OFF_T
	bool
	depends on !64BIT
	help
	  All new 32-bit architectures should have a 64-bit off_t type on
	  the userspace side, corresponding to the loff_t kernel type. This
	  is the requirement for modern ABIs. Some existing architectures
	  still support 32-bit off_t. This option is enabled for all such
	  architectures explicitly.
# Selected by 64 bit architectures which have a 32 bit f_tinode in struct ustat
config ARCH_32BIT_USTAT_F_TINODE
	bool

config HAVE_ASM_MODVERSIONS
	bool
	help
	  This symbol should be selected by an architecture if it provides
	  <asm/asm-prototypes.h> to support the module versioning for symbols
	  exported from assembly code.

config HAVE_REGS_AND_STACK_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access registers and stack entries from pt_regs,
	  declared in asm/ptrace.h.
	  For example the kprobes-based event tracer needs this API.

config HAVE_RSEQ
	bool
	depends on HAVE_REGS_AND_STACK_ACCESS_API
	help
	  This symbol should be selected by an architecture if it
	  supports an implementation of restartable sequences.

config HAVE_RUST
	bool
	help
	  This symbol should be selected by an architecture if it
	  supports Rust.

config HAVE_FUNCTION_ARG_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access function arguments from pt_regs,
	  declared in asm/ptrace.h.

config HAVE_HW_BREAKPOINT
	bool
	depends on PERF_EVENTS

config HAVE_MIXED_BREAKPOINTS_REGS
	bool
	depends on HAVE_HW_BREAKPOINT
	help
	  Depending on the arch implementation of hardware breakpoints,
	  some of them have separate registers for data and instruction
	  breakpoint addresses, others have mixed registers to store
	  them but define the access type in a control register.
	  Select this option if your arch implements breakpoints in the
	  latter fashion.

config HAVE_USER_RETURN_NOTIFIER
	bool

config HAVE_PERF_EVENTS_NMI
	bool
	help
	  System hardware can generate an NMI using the perf event
	  subsystem. It also supports calculating CPU cycle events
	  to determine how many clock cycles elapse in a given period.
config HAVE_HARDLOCKUP_DETECTOR_PERF
	bool
	depends on HAVE_PERF_EVENTS_NMI
	help
	  The arch chooses to use the generic perf-NMI-based hardlockup
	  detector. Must define HAVE_PERF_EVENTS_NMI.

config HAVE_NMI_WATCHDOG
	depends on HAVE_NMI
	bool
	help
	  The arch provides a low level NMI watchdog. It provides
	  asm/nmi.h, and defines its own watchdog_hardlockup_probe() and
	  arch_touch_nmi_watchdog().

config HAVE_HARDLOCKUP_DETECTOR_ARCH
	bool
	select HAVE_NMI_WATCHDOG
	help
	  The arch chooses to provide its own hardlockup detector, which is
	  a superset of HAVE_NMI_WATCHDOG. It also conforms to the config
	  interfaces and parameters provided by the hardlockup detector
	  subsystem.

config HAVE_PERF_REGS
	bool
	help
	  Support selective register dumps for perf events. This includes
	  bit-mapping of each register and a unique architecture id.

config HAVE_PERF_USER_STACK_DUMP
	bool
	help
	  Support user stack dumps for perf event samples. This needs
	  access to the user stack pointer which is not unified across
	  architectures.

config HAVE_ARCH_JUMP_LABEL
	bool

config HAVE_ARCH_JUMP_LABEL_RELATIVE
	bool

config MMU_GATHER_TABLE_FREE
	bool

config MMU_GATHER_RCU_TABLE_FREE
	bool
	select MMU_GATHER_TABLE_FREE

config MMU_GATHER_PAGE_SIZE
	bool

config MMU_GATHER_NO_RANGE
	bool
	select MMU_GATHER_MERGE_VMAS

config MMU_GATHER_NO_FLUSH_CACHE
	bool

config MMU_GATHER_MERGE_VMAS
	bool

config MMU_GATHER_NO_GATHER
	bool
	depends on MMU_GATHER_TABLE_FREE

config ARCH_WANT_IRQS_OFF_ACTIVATE_MM
	bool
	help
	  Temporary select until all architectures can be converted to have
	  irqs disabled over activate_mm. Architectures that do IPI based TLB
	  shootdowns should enable this.

# Use normal mm refcounting for MMU_LAZY_TLB kernel thread references.
# MMU_LAZY_TLB_REFCOUNT=n can improve the scalability of context switching
# to/from kernel threads when the same mm is running on a lot of CPUs (a large
# multi-threaded application), by reducing contention on the mm refcount.
#
# This can be disabled if the architecture ensures no CPUs are using an mm as a
# "lazy tlb" beyond its final refcount (i.e., by the time __mmdrop frees the mm
# or its kernel page tables). This could be arranged by arch_exit_mmap(), or
# final exit(2) TLB flush, for example.
#
# To implement this, an arch *must*:
# Ensure the _lazy_tlb variants of mmgrab/mmdrop are used when manipulating
# the lazy tlb reference of a kthread's ->active_mm (non-arch code has been
# converted already).
config MMU_LAZY_TLB_REFCOUNT
	def_bool y
	depends on !MMU_LAZY_TLB_SHOOTDOWN

# This option allows MMU_LAZY_TLB_REFCOUNT=n. It ensures no CPUs are using an
# mm as a lazy tlb beyond its last reference count, by shooting down these
# users before the mm is deallocated. __mmdrop() first IPIs all CPUs that may
# be using the mm as a lazy tlb, so that they may switch themselves to using
# init_mm for their active mm. mm_cpumask(mm) is used to determine which CPUs
# may be using mm as a lazy tlb mm.
#
# To implement this, an arch *must*:
# - At the time of the final mmdrop of the mm, ensure mm_cpumask(mm) contains
#   at least all possible CPUs in which the mm is lazy.
# - It must meet the requirements for MMU_LAZY_TLB_REFCOUNT=n (see above).
config MMU_LAZY_TLB_SHOOTDOWN
	bool

config ARCH_HAVE_NMI_SAFE_CMPXCHG
	bool

config ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
	bool

config HAVE_ALIGNED_STRUCT_PAGE
	bool
	help
	  This makes sure that struct pages are double word aligned and that
	  e.g. the SLUB allocator can perform double word atomic operations
	  on a struct page for better performance.
	  However, selecting this
	  might increase the size of a struct page by a word.

config HAVE_CMPXCHG_LOCAL
	bool

config HAVE_CMPXCHG_DOUBLE
	bool

config ARCH_WEAK_RELEASE_ACQUIRE
	bool

config ARCH_WANT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_OLD_COMPAT_IPC
	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config HAVE_ARCH_SECCOMP
	bool
	help
	  An arch should select this symbol to support seccomp mode 1 (the fixed
	  syscall policy), and must provide overrides for __NR_seccomp_sigreturn
	  and the compat syscalls if the asm-generic/seccomp.h defaults need
	  adjustment:
	  - __NR_seccomp_read_32
	  - __NR_seccomp_write_32
	  - __NR_seccomp_exit_32
	  - __NR_seccomp_sigreturn_32

config HAVE_ARCH_SECCOMP_FILTER
	bool
	select HAVE_ARCH_SECCOMP
	help
	  An arch should select this symbol if it provides all of these things:
	  - all the requirements for HAVE_ARCH_SECCOMP
	  - syscall_get_arch()
	  - syscall_get_arguments()
	  - syscall_rollback()
	  - syscall_set_return_value()
	  - SIGSYS siginfo_t support
	  - secure_computing is called from a ptrace_event()-safe context
	  - secure_computing return value is checked and a return value of -1
	    results in the system call being skipped immediately.
	  - seccomp syscall wired up
	  - if !HAVE_SPARSE_SYSCALL_NR, have SECCOMP_ARCH_NATIVE,
	    SECCOMP_ARCH_NATIVE_NR, SECCOMP_ARCH_NATIVE_NAME defined. If
	    COMPAT is supported, have the SECCOMP_ARCH_COMPAT* defines too.

config SECCOMP
	prompt "Enable seccomp to safely execute untrusted bytecode"
	def_bool y
	depends on HAVE_ARCH_SECCOMP
	help
	  This kernel feature is useful for number crunching applications
	  that may need to handle untrusted bytecode during their
	  execution.
	  By using pipes or other transports made available
	  to the process as file descriptors supporting the read/write
	  syscalls, it's possible to isolate those applications in their
	  own address space using seccomp. Once seccomp is enabled via
	  prctl(PR_SET_SECCOMP) or the seccomp() syscall, it cannot be
	  disabled and the task is only allowed to execute a few safe
	  syscalls defined by each seccomp mode.

	  If unsure, say Y.

config SECCOMP_FILTER
	def_bool y
	depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
	help
	  Enable tasks to build secure computing environments defined
	  in terms of Berkeley Packet Filter programs which implement
	  task-defined system call filtering policies.

	  See Documentation/userspace-api/seccomp_filter.rst for details.

config SECCOMP_CACHE_DEBUG
	bool "Show seccomp filter cache status in /proc/pid/seccomp_cache"
	depends on SECCOMP_FILTER && !HAVE_SPARSE_SYSCALL_NR
	depends on PROC_FS
	help
	  This enables the /proc/pid/seccomp_cache interface to monitor
	  seccomp cache data. The file format is subject to change. Reading
	  the file requires CAP_SYS_ADMIN.

	  This option is for debugging only. Enabling presents the risk that
	  an adversary may be able to infer the seccomp filter logic.

	  If unsure, say N.

config HAVE_ARCH_STACKLEAK
	bool
	help
	  An architecture should select this if it has the code which
	  fills the used part of the kernel stack with the STACKLEAK_POISON
	  value before returning from system calls.

config HAVE_STACKPROTECTOR
	bool
	help
	  An arch should select this symbol if:
	  - it has implemented a stack canary (e.g. __stack_chk_guard)

config STACKPROTECTOR
	bool "Stack Protector buffer overflow detection"
	depends on HAVE_STACKPROTECTOR
	depends on $(cc-option,-fstack-protector)
	default y
	help
	  This option turns on the "stack-protector" GCC feature.
	  This
	  feature puts, at the beginning of functions, a canary value on
	  the stack just before the return address, and validates
	  the value just before actually returning. Stack based buffer
	  overflows (that need to overwrite this return address) now also
	  overwrite the canary, which gets detected and the attack is then
	  neutralized via a kernel panic.

	  Functions will have the stack-protector canary logic added if they
	  have an 8-byte or larger character array on the stack.

	  This feature requires gcc version 4.2 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 3% of all kernel functions, which increases kernel code size
	  by about 0.3%.

config STACKPROTECTOR_STRONG
	bool "Strong Stack Protector"
	depends on STACKPROTECTOR
	depends on $(cc-option,-fstack-protector-strong)
	default y
	help
	  Functions will have the stack-protector canary logic added in any
	  of the following conditions:

	  - local variable's address used as part of the right hand side of an
	    assignment or function argument
	  - local variable is an array (or union containing an array),
	    regardless of array type or length
	  - uses register local variables

	  This feature requires gcc version 4.9 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector-strong").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 20% of all kernel functions, which increases the kernel code
	  size by about 2%.

config ARCH_SUPPORTS_SHADOW_CALL_STACK
	bool
	help
	  An architecture should select this if it supports the compiler's
	  Shadow Call Stack and implements runtime support for shadow stack
	  switching.
config SHADOW_CALL_STACK
	bool "Shadow Call Stack"
	depends on ARCH_SUPPORTS_SHADOW_CALL_STACK
	depends on DYNAMIC_FTRACE_WITH_ARGS || DYNAMIC_FTRACE_WITH_REGS || !FUNCTION_GRAPH_TRACER
	help
	  This option enables the compiler's Shadow Call Stack, which
	  uses a shadow stack to protect function return addresses from
	  being overwritten by an attacker. More information can be found
	  in the compiler's documentation:

	  - Clang: https://clang.llvm.org/docs/ShadowCallStack.html
	  - GCC: https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html#Instrumentation-Options

	  Note that security guarantees in the kernel differ from the
	  ones documented for user space. The kernel must store addresses
	  of shadow stacks in memory, which means an attacker capable of
	  reading and writing arbitrary memory may be able to locate them
	  and hijack control flow by modifying the stacks.

config DYNAMIC_SCS
	bool
	help
	  Set by the arch code if it relies on code patching to insert the
	  shadow call stack push and pop instructions rather than on the
	  compiler.

config LTO
	bool
	help
	  Selected if the kernel will be built using the compiler's LTO feature.

config LTO_CLANG
	bool
	select LTO
	help
	  Selected if the kernel will be built using Clang's LTO feature.

config ARCH_SUPPORTS_LTO_CLANG
	bool
	help
	  An architecture should select this option if it supports:
	  - compiling with Clang,
	  - compiling inline assembly with Clang's integrated assembler,
	  - and linking with LLD.

config ARCH_SUPPORTS_LTO_CLANG_THIN
	bool
	help
	  An architecture should select this option if it can support Clang's
	  ThinLTO mode.
config HAS_LTO_CLANG
	def_bool y
	depends on CC_IS_CLANG && LD_IS_LLD && AS_IS_LLVM
	depends on $(success,$(NM) --help | head -n 1 | grep -qi llvm)
	depends on $(success,$(AR) --help | head -n 1 | grep -qi llvm)
	depends on ARCH_SUPPORTS_LTO_CLANG
	depends on !FTRACE_MCOUNT_USE_RECORDMCOUNT
	depends on !KASAN || KASAN_HW_TAGS
	depends on !GCOV_KERNEL
	help
	  The compiler and Kconfig options support building with Clang's
	  LTO.

choice
	prompt "Link Time Optimization (LTO)"
	default LTO_NONE
	help
	  This option enables Link Time Optimization (LTO), which allows the
	  compiler to optimize binaries globally.

	  If unsure, select LTO_NONE. Note that LTO is very resource-intensive
	  so it's disabled by default.

config LTO_NONE
	bool "None"
	help
	  Build the kernel normally, without Link Time Optimization (LTO).

config LTO_CLANG_FULL
	bool "Clang Full LTO (EXPERIMENTAL)"
	depends on HAS_LTO_CLANG
	depends on !COMPILE_TEST
	select LTO_CLANG
	help
	  This option enables Clang's full Link Time Optimization (LTO), which
	  allows the compiler to optimize the kernel globally. If you enable
	  this option, the compiler generates LLVM bitcode instead of ELF
	  object files, and the actual compilation from bitcode happens at
	  the LTO link step, which may take several minutes depending on the
	  kernel configuration. More information can be found in LLVM's
	  documentation:

	    https://llvm.org/docs/LinkTimeOptimization.html

	  During link time, this option can use a large amount of RAM, and
	  may take much longer than the ThinLTO option.

config LTO_CLANG_THIN
	bool "Clang ThinLTO (EXPERIMENTAL)"
	depends on HAS_LTO_CLANG && ARCH_SUPPORTS_LTO_CLANG_THIN
	select LTO_CLANG
	help
	  This option enables Clang's ThinLTO, which allows for parallel
	  optimization and faster incremental compiles compared to the
	  CONFIG_LTO_CLANG_FULL option.
	  More information can be found
	  in Clang's documentation:

	    https://clang.llvm.org/docs/ThinLTO.html

	  If unsure, say Y.
endchoice

config ARCH_SUPPORTS_CFI_CLANG
	bool
	help
	  An architecture should select this option if it can support Clang's
	  Control-Flow Integrity (CFI) checking.

config ARCH_USES_CFI_TRAPS
	bool

config CFI_CLANG
	bool "Use Clang's Control Flow Integrity (CFI)"
	depends on ARCH_SUPPORTS_CFI_CLANG
	depends on $(cc-option,-fsanitize=kcfi)
	help
	  This option enables Clang's forward-edge Control Flow Integrity
	  (CFI) checking, where the compiler injects a runtime check before
	  each indirect function call to ensure the target is a valid function
	  with the correct static type. This restricts possible call targets
	  and makes it more difficult for an attacker to exploit bugs that
	  allow the modification of stored function pointers. More information
	  can be found in Clang's documentation:

	    https://clang.llvm.org/docs/ControlFlowIntegrity.html

config CFI_PERMISSIVE
	bool "Use CFI in permissive mode"
	depends on CFI_CLANG
	help
	  When selected, Control Flow Integrity (CFI) violations result in a
	  warning instead of a kernel panic. This option should only be used
	  for finding indirect call type mismatches during development.

	  If unsure, say N.

config HAVE_ARCH_WITHIN_STACK_FRAMES
	bool
	help
	  An architecture should select this if it can walk the kernel stack
	  frames to determine if an object is part of either the arguments
	  or local variables (i.e. that it excludes saved return addresses,
	  and similar) by implementing an inline arch_within_stack_frames(),
	  which is used by CONFIG_HARDENED_USERCOPY.

config HAVE_CONTEXT_TRACKING_USER
	bool
	help
	  Provide kernel/user boundary probes necessary for subsystems
	  that need it, such as userspace RCU extended quiescent state.
	  Syscalls need to be wrapped inside user_exit()-user_enter(), either
	  optimized behind a static key or through the slow path using the
	  TIF_NOHZ flag. Exception handlers must be wrapped as well. Irqs are
	  already protected inside ct_irq_enter/ct_irq_exit() but preemption
	  or signal handling on irq exit still needs to be protected.

config HAVE_CONTEXT_TRACKING_USER_OFFSTACK
	bool
	help
	  Architecture neither relies on exception_enter()/exception_exit()
	  nor on schedule_user(). Also preempt_schedule_notrace() and
	  preempt_schedule_irq() can't be called in a preemptible section
	  while context tracking is CONTEXT_USER. This feature reflects a sane
	  entry implementation where the following requirements are met on
	  critical entry code, ie: before user_exit() or after user_enter():

	  - Critical entry code isn't preemptible (or better yet:
	    not interruptible).
	  - No use of RCU read side critical sections, unless ct_nmi_enter()
	    got called.
	  - No use of instrumentation, unless instrumentation_begin() got
	    called.

config HAVE_TIF_NOHZ
	bool
	help
	  Arch relies on TIF_NOHZ and the syscall slow path to implement
	  context tracking calls to user_enter()/user_exit().

config HAVE_VIRT_CPU_ACCOUNTING
	bool

config HAVE_VIRT_CPU_ACCOUNTING_IDLE
	bool
	help
	  Architecture has its own way to account idle CPU time and therefore
	  doesn't implement vtime_account_idle().

config ARCH_HAS_SCALED_CPUTIME
	bool

config HAVE_VIRT_CPU_ACCOUNTING_GEN
	bool
	default y if 64BIT
	help
	  With VIRT_CPU_ACCOUNTING_GEN, cputime_t becomes 64-bit.
	  Before enabling this option, arch code must be audited
	  to ensure there are no races in concurrent read/write of
	  cputime_t. For example, reading/writing 64-bit cputime_t on
	  some 32-bit arches may require multiple accesses, so proper
	  locking is needed to protect against concurrent accesses.
config HAVE_IRQ_TIME_ACCOUNTING
	bool
	help
	  Archs need to ensure they use a high enough resolution clock to
	  support irq time accounting and then call enable_sched_clock_irqtime().

config HAVE_MOVE_PUD
	bool
	help
	  Architectures that select this are able to move page tables at the
	  PUD level. If there are only 3 page table levels, the move effectively
	  happens at the PGD level.

config HAVE_MOVE_PMD
	bool
	help
	  Archs that select this are able to move page tables at the PMD level.

config HAVE_ARCH_TRANSPARENT_HUGEPAGE
	bool

config HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
	bool

config HAVE_ARCH_HUGE_VMAP
	bool

#
# Archs that select this would be capable of PMD-sized vmaps (i.e.,
# arch_vmap_pmd_supported() returns true). The VM_ALLOW_HUGE_VMAP flag
# must be used to enable allocations to use hugepages.
#
config HAVE_ARCH_HUGE_VMALLOC
	depends on HAVE_ARCH_HUGE_VMAP
	bool

config ARCH_WANT_HUGE_PMD_SHARE
	bool

config HAVE_ARCH_SOFT_DIRTY
	bool

config HAVE_MOD_ARCH_SPECIFIC
	bool
	help
	  The arch uses struct mod_arch_specific to store data. Many arches
	  just need a simple module loader without arch specific data - those
	  should not enable this.

config MODULES_USE_ELF_RELA
	bool
	help
	  Modules only use ELF RELA relocations. Modules with ELF REL
	  relocations will give an error.

config MODULES_USE_ELF_REL
	bool
	help
	  Modules only use ELF REL relocations. Modules with ELF RELA
	  relocations will give an error.

config ARCH_WANTS_MODULES_DATA_IN_VMALLOC
	bool
	help
	  For architectures like powerpc/32 which have constraints on module
	  allocation and need to allocate module data outside of the module
	  area.

config HAVE_IRQ_EXIT_ON_IRQ_STACK
	bool
	help
	  Architecture doesn't only execute the irq handler on the irq stack
	  but also irq_exit().
	  This way we can process softirqs on this irq
	  stack instead of switching to a new one when we call __do_softirq()
	  at the end of a hardirq.
	  This spares a stack switch and improves cache usage on softirq
	  processing.

config HAVE_SOFTIRQ_ON_OWN_STACK
	bool
	help
	  Architecture provides a function to run __do_softirq() on a
	  separate stack.

config SOFTIRQ_ON_OWN_STACK
	def_bool HAVE_SOFTIRQ_ON_OWN_STACK && !PREEMPT_RT

config ALTERNATE_USER_ADDRESS_SPACE
	bool
	help
	  Architectures set this when the CPU uses separate address
	  spaces for kernel and user space pointers. In this case, the
	  access_ok() check on a __user pointer is skipped.

config PGTABLE_LEVELS
	int
	default 2

config ARCH_HAS_ELF_RANDOMIZE
	bool
	help
	  An architecture supports choosing randomized locations for
	  stack, mmap, brk, and ET_DYN. Defined functions:
	  - arch_mmap_rnd()
	  - arch_randomize_brk()

config HAVE_ARCH_MMAP_RND_BITS
	bool
	help
	  An arch should select this symbol if it supports setting a variable
	  number of bits for use in establishing the base address for mmap
	  allocations, has MMU enabled and provides values for both:
	  - ARCH_MMAP_RND_BITS_MIN
	  - ARCH_MMAP_RND_BITS_MAX

config HAVE_EXIT_THREAD
	bool
	help
	  An architecture implements exit_thread.
config ARCH_MMAP_RND_BITS_MIN
	int

config ARCH_MMAP_RND_BITS_MAX
	int

config ARCH_MMAP_RND_BITS_DEFAULT
	int

config ARCH_MMAP_RND_BITS
	int "Number of bits to use for ASLR of mmap base address" if EXPERT
	range ARCH_MMAP_RND_BITS_MIN ARCH_MMAP_RND_BITS_MAX
	default ARCH_MMAP_RND_BITS_DEFAULT if ARCH_MMAP_RND_BITS_DEFAULT
	default ARCH_MMAP_RND_BITS_MIN
	depends on HAVE_ARCH_MMAP_RND_BITS
	help
	  This value can be used to select the number of bits to use to
	  determine the random offset to the base address of vma regions
	  resulting from mmap allocations. This value will be bounded
	  by the architecture's minimum and maximum supported values.

	  This value can be changed after boot using the
	  /proc/sys/vm/mmap_rnd_bits tunable.

config HAVE_ARCH_MMAP_RND_COMPAT_BITS
	bool
	help
	  An arch should select this symbol if it supports running applications
	  in compatibility mode, supports setting a variable number of bits for
	  use in establishing the base address for mmap allocations, has MMU
	  enabled and provides values for both:
	  - ARCH_MMAP_RND_COMPAT_BITS_MIN
	  - ARCH_MMAP_RND_COMPAT_BITS_MAX

config ARCH_MMAP_RND_COMPAT_BITS_MIN
	int

config ARCH_MMAP_RND_COMPAT_BITS_MAX
	int

config ARCH_MMAP_RND_COMPAT_BITS_DEFAULT
	int

config ARCH_MMAP_RND_COMPAT_BITS
	int "Number of bits to use for ASLR of mmap base address for compatible applications" if EXPERT
	range ARCH_MMAP_RND_COMPAT_BITS_MIN ARCH_MMAP_RND_COMPAT_BITS_MAX
	default ARCH_MMAP_RND_COMPAT_BITS_DEFAULT if ARCH_MMAP_RND_COMPAT_BITS_DEFAULT
	default ARCH_MMAP_RND_COMPAT_BITS_MIN
	depends on HAVE_ARCH_MMAP_RND_COMPAT_BITS
	help
	  This value can be used to select the number of bits to use to
	  determine the random offset to the base address of vma regions
	  resulting from mmap allocations for compatible applications. This
	  value will be bounded by the architecture's minimum and maximum
	  supported values.

	  This value can be changed after boot using the
	  /proc/sys/vm/mmap_rnd_compat_bits tunable.

config HAVE_ARCH_COMPAT_MMAP_BASES
	bool
	help
	  This allows 64-bit applications to invoke the 32-bit mmap() syscall
	  and vice-versa 32-bit applications to call the 64-bit mmap().
	  Required for applications doing different bitness syscalls.

config PAGE_SIZE_LESS_THAN_64KB
	def_bool y
	depends on !ARM64_64K_PAGES
	depends on !IA64_PAGE_SIZE_64KB
	depends on !PAGE_SIZE_64KB
	depends on !PARISC_PAGE_SIZE_64KB
	depends on PAGE_SIZE_LESS_THAN_256KB

config PAGE_SIZE_LESS_THAN_256KB
	def_bool y
	depends on !PAGE_SIZE_256KB

# This allows the use of a set of generic functions to determine the mmap
# base address by giving priority to the top-down scheme only if the process
# is not in legacy mode (compat task, unlimited stack size or
# sysctl_legacy_va_layout).
# An architecture that selects this option can provide its own version of:
# - STACK_RND_MASK
config ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
	bool
	depends on MMU
	select ARCH_HAS_ELF_RANDOMIZE

config HAVE_OBJTOOL
	bool

config HAVE_JUMP_LABEL_HACK
	bool

config HAVE_NOINSTR_HACK
	bool

config HAVE_NOINSTR_VALIDATION
	bool

config HAVE_UACCESS_VALIDATION
	bool
	select OBJTOOL

config HAVE_STACK_VALIDATION
	bool
	help
	  Architecture supports objtool compile-time frame pointer rule
	  validation.

config HAVE_RELIABLE_STACKTRACE
	bool
	help
	  Architecture has either save_stack_trace_tsk_reliable() or
	  arch_stack_walk_reliable() function which only returns a stack trace
	  if it can guarantee the trace is reliable.
config HAVE_ARCH_HASH
	bool
	default n
	help
	  If this is set, the architecture provides an <asm/hash.h>
	  file which provides platform-specific implementations of some
	  functions in <linux/hash.h> or fs/namei.c.

config HAVE_ARCH_NVRAM_OPS
	bool

config ISA_BUS_API
	def_bool ISA

#
# ABI hall of shame
#
config CLONE_BACKWARDS
	bool
	help
	  Architecture has tls passed as the 4th argument of clone(2),
	  not the 5th one.

config CLONE_BACKWARDS2
	bool
	help
	  Architecture has the first two arguments of clone(2) swapped.

config CLONE_BACKWARDS3
	bool
	help
	  Architecture has tls passed as the 3rd argument of clone(2),
	  not the 5th one.

config ODD_RT_SIGACTION
	bool
	help
	  Architecture has unusual rt_sigaction(2) arguments

config OLD_SIGSUSPEND
	bool
	help
	  Architecture has old sigsuspend(2) syscall, of one-argument variety

config OLD_SIGSUSPEND3
	bool
	help
	  Even weirder antique ABI - three-argument sigsuspend(2)

config OLD_SIGACTION
	bool
	help
	  Architecture has old sigaction(2) syscall. Nope, not the same
	  as OLD_SIGSUSPEND | OLD_SIGSUSPEND3 - alpha has sigsuspend(2),
	  but fairly different variant of sigaction(2), thanks to OSF/1
	  compatibility...

config COMPAT_OLD_SIGACTION
	bool

config COMPAT_32BIT_TIME
	bool "Provide system calls for 32-bit time_t"
	default !64BIT || COMPAT
	help
	  This enables 32 bit time_t support in addition to 64 bit time_t support.
	  This is relevant on all 32-bit architectures, and 64-bit architectures
	  as part of compat syscall handling.
config ARCH_NO_PREEMPT
	bool

config ARCH_EPHEMERAL_INODES
	def_bool n
	help
	  An arch should select this symbol if it doesn't keep track of inode
	  instances on its own, but instead relies on something else (e.g. the
	  host kernel for a UML kernel).

config ARCH_SUPPORTS_RT
	bool

config CPU_NO_EFFICIENT_FFS
	def_bool n

config HAVE_ARCH_VMAP_STACK
	def_bool n
	help
	  An arch should select this symbol if it can support kernel stacks
	  in vmalloc space. This means:

	  - vmalloc space must be large enough to hold many kernel stacks.
	    This may rule out many 32-bit architectures.

	  - Stacks in vmalloc space need to work reliably. For example, if
	    vmap page tables are created on demand, either this mechanism
	    needs to work while the stack points to a virtual address with
	    unpopulated page tables or arch code (switch_to() and switch_mm(),
	    most likely) needs to ensure that the stack's page table entries
	    are populated before running on a possibly unpopulated stack.

	  - If the stack overflows into a guard page, something reasonable
	    should happen. The definition of "reasonable" is flexible, but
	    instantly rebooting without logging anything would be unfriendly.

config VMAP_STACK
	default y
	bool "Use a virtually-mapped stack"
	depends on HAVE_ARCH_VMAP_STACK
	depends on !KASAN || KASAN_HW_TAGS || KASAN_VMALLOC
	help
	  Enable this if you want to use virtually-mapped kernel stacks
	  with guard pages. This causes kernel stack overflows to be
	  caught immediately rather than causing difficult-to-diagnose
	  corruption.

	  To use this with software KASAN modes, the architecture must support
	  backing virtual mappings with real shadow memory, and KASAN_VMALLOC
	  must be enabled.
config HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
	def_bool n
	help
	  An arch should select this symbol if it can support kernel stack
	  offset randomization with calls to add_random_kstack_offset()
	  during syscall entry and choose_random_kstack_offset() during
	  syscall exit. Careful removal of -fstack-protector-strong and
	  -fstack-protector should also be applied to the entry code and
	  closely examined, as the artificial stack bump looks like an array
	  to the compiler, so it will attempt to add canary checks regardless
	  of the static branch state.

config RANDOMIZE_KSTACK_OFFSET
	bool "Support for randomizing kernel stack offset on syscall entry" if EXPERT
	default y
	depends on HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
	depends on INIT_STACK_NONE || !CC_IS_CLANG || CLANG_VERSION >= 140000
	help
	  The kernel stack offset can be randomized (after pt_regs) by
	  roughly 5 bits of entropy, frustrating memory corruption
	  attacks that depend on stack address determinism or
	  cross-syscall address exposures.

	  The feature is controlled via the "randomize_kstack_offset=on/off"
	  kernel boot param, and if turned off has zero overhead due to its use
	  of static branches (see JUMP_LABEL).

	  If unsure, say Y.

config RANDOMIZE_KSTACK_OFFSET_DEFAULT
	bool "Default state of kernel stack offset randomization"
	depends on RANDOMIZE_KSTACK_OFFSET
	help
	  Kernel stack offset randomization is controlled by kernel boot param
	  "randomize_kstack_offset=on/off", and this config chooses the default
	  boot state.
config ARCH_OPTIONAL_KERNEL_RWX
	def_bool n

config ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
	def_bool n

config ARCH_HAS_STRICT_KERNEL_RWX
	def_bool n

config STRICT_KERNEL_RWX
	bool "Make kernel text and rodata read-only" if ARCH_OPTIONAL_KERNEL_RWX
	depends on ARCH_HAS_STRICT_KERNEL_RWX
	default !ARCH_OPTIONAL_KERNEL_RWX || ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
	help
	  If this is set, kernel text and rodata memory will be made read-only,
	  and non-text memory will be made non-executable. This provides
	  protection against certain security exploits (e.g. executing the heap
	  or modifying text).

	  These features are considered standard security practice these days.
	  You should say Y here in almost all cases.

config ARCH_HAS_STRICT_MODULE_RWX
	def_bool n

config STRICT_MODULE_RWX
	bool "Set loadable kernel module data as NX and text as RO" if ARCH_OPTIONAL_KERNEL_RWX
	depends on ARCH_HAS_STRICT_MODULE_RWX && MODULES
	default !ARCH_OPTIONAL_KERNEL_RWX || ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
	help
	  If this is set, module text and rodata memory will be made read-only,
	  and non-text memory will be made non-executable. This provides
	  protection against certain security exploits (e.g. writing to text).

# select if the architecture provides an asm/dma-direct.h header
config ARCH_HAS_PHYS_TO_DMA
	bool

config HAVE_ARCH_COMPILER_H
	bool
	help
	  An architecture can select this if it provides an
	  asm/compiler.h header that should be included after
	  linux/compiler-*.h in order to override macro definitions that those
	  headers generally provide.
config HAVE_ARCH_PREL32_RELOCATIONS
	bool
	help
	  May be selected by an architecture if it supports place-relative
	  32-bit relocations, both in the toolchain and in the module loader,
	  in which case relative references can be used in special sections
	  for PCI fixup, initcalls etc which are only half the size on 64 bit
	  architectures, and don't require runtime relocation on relocatable
	  kernels.

config ARCH_USE_MEMREMAP_PROT
	bool

config LOCK_EVENT_COUNTS
	bool "Locking event counts collection"
	depends on DEBUG_FS
	help
	  Enable light-weight counting of various locking related events
	  in the system with minimal performance impact. This reduces
	  the chance of application behavior change because of timing
	  differences. The counts are reported via debugfs.

# Select if the architecture has support for applying RELR relocations.
config ARCH_HAS_RELR
	bool

config RELR
	bool "Use RELR relocation packing"
	depends on ARCH_HAS_RELR && TOOLS_SUPPORT_RELR
	default y
	help
	  Store the kernel's dynamic relocations in the RELR relocation packing
	  format. Requires a compatible linker (LLD supports this feature), as
	  well as compatible NM and OBJCOPY utilities (llvm-nm and llvm-objcopy
	  are compatible).

config ARCH_HAS_MEM_ENCRYPT
	bool

config ARCH_HAS_CC_PLATFORM
	bool

config HAVE_SPARSE_SYSCALL_NR
	bool
	help
	  An architecture should select this if its syscall numbering is sparse
	  to save space. For example, MIPS architecture has a syscall array with
	  entries at 4000, 5000 and 6000 locations. This option turns on syscall
	  related optimizations for a given architecture.
config ARCH_HAS_VDSO_DATA
	bool

config HAVE_STATIC_CALL
	bool

config HAVE_STATIC_CALL_INLINE
	bool
	depends on HAVE_STATIC_CALL
	select OBJTOOL

config HAVE_PREEMPT_DYNAMIC
	bool

config HAVE_PREEMPT_DYNAMIC_CALL
	bool
	depends on HAVE_STATIC_CALL
	select HAVE_PREEMPT_DYNAMIC
	help
	  An architecture should select this if it can handle the preemption
	  model being selected at boot time using static calls.

	  Where an architecture selects HAVE_STATIC_CALL_INLINE, any call to a
	  preemption function will be patched directly.

	  Where an architecture does not select HAVE_STATIC_CALL_INLINE, any
	  call to a preemption function will go through a trampoline, and the
	  trampoline will be patched.

	  It is strongly advised to support inline static call to avoid any
	  overhead.

config HAVE_PREEMPT_DYNAMIC_KEY
	bool
	depends on HAVE_ARCH_JUMP_LABEL
	select HAVE_PREEMPT_DYNAMIC
	help
	  An architecture should select this if it can handle the preemption
	  model being selected at boot time using static keys.

	  Each preemption function will be given an early return based on a
	  static key. This should have slightly lower overhead than non-inline
	  static calls, as this effectively inlines each trampoline into the
	  start of its callee. This may avoid redundant work, and may
	  integrate better with CFI schemes.

	  This will have greater overhead than using inline static calls as
	  the call to the preemption function cannot be entirely elided.

config ARCH_WANT_LD_ORPHAN_WARN
	bool
	help
	  An arch should select this symbol once all linker sections are explicitly
	  included, size-asserted, or discarded in the linker scripts. This is
	  important because we never want expected sections to be placed heuristically
	  by the linker, since the locations of such sections can change between linker
	  versions.

config HAVE_ARCH_PFN_VALID
	bool

config ARCH_SUPPORTS_DEBUG_PAGEALLOC
	bool

config ARCH_SUPPORTS_PAGE_TABLE_CHECK
	bool

config ARCH_SPLIT_ARG64
	bool
	help
	  If a 32-bit architecture requires 64-bit arguments to be split into
	  pairs of 32-bit arguments, select this option.

config ARCH_HAS_ELFCORE_COMPAT
	bool

config ARCH_HAS_PARANOID_L1D_FLUSH
	bool

config ARCH_HAVE_TRACE_MMIO_ACCESS
	bool

config DYNAMIC_SIGFRAME
	bool

# Select, if arch has a named attribute group bound to NUMA device nodes.
config HAVE_ARCH_NODE_DEV_GROUP
	bool

config ARCH_HAS_NONLEAF_PMD_YOUNG
	bool
	help
	  Architectures that select this option are capable of setting the
	  accessed bit in non-leaf PMD entries when using them as part of linear
	  address translations. Page table walkers that clear the accessed bit
	  may use this capability to reduce their search space.

source "kernel/gcov/Kconfig"

source "scripts/gcc-plugins/Kconfig"

config FUNCTION_ALIGNMENT_4B
	bool

config FUNCTION_ALIGNMENT_8B
	bool

config FUNCTION_ALIGNMENT_16B
	bool

config FUNCTION_ALIGNMENT_32B
	bool

config FUNCTION_ALIGNMENT_64B
	bool

config FUNCTION_ALIGNMENT
	int
	default 64 if FUNCTION_ALIGNMENT_64B
	default 32 if FUNCTION_ALIGNMENT_32B
	default 16 if FUNCTION_ALIGNMENT_16B
	default 8 if FUNCTION_ALIGNMENT_8B
	default 4 if FUNCTION_ALIGNMENT_4B
	default 0

endmenu