# SPDX-License-Identifier: GPL-2.0
#
# General architecture dependent options
#

#
# Note: arch/$(SRCARCH)/Kconfig needs to be included first so that it can
# override the default values in this file.
#
source "arch/$(SRCARCH)/Kconfig"

menu "General architecture-dependent options"

config CRASH_CORE
	bool

config KEXEC_CORE
	select CRASH_CORE
	bool

config KEXEC_ELF
	bool

config HAVE_IMA_KEXEC
	bool

config SET_FS
	bool

config HOTPLUG_SMT
	bool

config GENERIC_ENTRY
	bool

config OPROFILE
	tristate "OProfile system profiling"
	depends on PROFILING
	depends on HAVE_OPROFILE
	select RING_BUFFER
	select RING_BUFFER_ALLOW_SWAP
	help
	  OProfile is a profiling system capable of profiling the
	  whole system, including the kernel, kernel modules, libraries,
	  and applications.

	  If unsure, say N.

config OPROFILE_EVENT_MULTIPLEX
	bool "OProfile multiplexing support (EXPERIMENTAL)"
	default n
	depends on OPROFILE && X86
	help
	  The number of hardware counters is limited. The multiplexing
	  feature enables OProfile to gather more events than counters
	  are provided by the hardware. This is realized by switching
	  between events at a user specified time interval.

	  If unsure, say N.

config HAVE_OPROFILE
	bool

config OPROFILE_NMI_TIMER
	def_bool y
	depends on PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !PPC64

config KPROBES
	bool "Kprobes"
	depends on MODULES
	depends on HAVE_KPROBES
	select KALLSYMS
	help
	  Kprobes allows you to trap at almost any kernel address and
	  execute a callback function. register_kprobe() establishes
	  a probepoint and specifies the callback. Kprobes is useful
	  for kernel debugging, non-intrusive instrumentation and testing.
	  If in doubt, say "N".

config JUMP_LABEL
	bool "Optimize very unlikely/likely branches"
	depends on HAVE_ARCH_JUMP_LABEL
	depends on CC_HAS_ASM_GOTO
	help
	  This option enables a transparent branch optimization that
	  makes certain almost-always-true or almost-always-false branch
	  conditions even cheaper to execute within the kernel.

	  Certain performance-sensitive kernel code, such as trace points,
	  scheduler functionality, networking code and KVM have such
	  branches and include support for this optimization technique.

	  If it is detected that the compiler has support for "asm goto",
	  the kernel will compile such branches with just a nop
	  instruction. When the condition flag is toggled to true, the
	  nop will be converted to a jump instruction to execute the
	  conditional block of instructions.

	  This technique lowers overhead and stress on the branch prediction
	  of the processor and generally makes the kernel faster. The update
	  of the condition is slower, but those are always very rare.

	  ( On 32-bit x86, the necessary options added to the compiler
	    flags may increase the size of the kernel slightly. )

config STATIC_KEYS_SELFTEST
	bool "Static key selftest"
	depends on JUMP_LABEL
	help
	  Boot time self-test of the branch patching code.

config STATIC_CALL_SELFTEST
	bool "Static call selftest"
	depends on HAVE_STATIC_CALL
	help
	  Boot time self-test of the call patching code.
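
#
# Illustrative sketch (not part of the real config space): an architecture
# advertises the kprobes and jump-label capabilities above by selecting the
# corresponding HAVE_* symbols from its own Kconfig. "FOO" and arch/foo are
# placeholders, not a real architecture.
#
#	config FOO
#		def_bool y
#		select HAVE_KPROBES
#		select HAVE_KRETPROBES
#		select HAVE_OPTPROBES
#		select HAVE_KPROBES_ON_FTRACE
#		select HAVE_ARCH_JUMP_LABEL
#
# With those selected, the user-visible KPROBES and JUMP_LABEL prompts become
# available for that architecture.
#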

config OPTPROBES
	def_bool y
	depends on KPROBES && HAVE_OPTPROBES
	select TASKS_RCU if PREEMPTION

config KPROBES_ON_FTRACE
	def_bool y
	depends on KPROBES && HAVE_KPROBES_ON_FTRACE
	depends on DYNAMIC_FTRACE_WITH_REGS
	help
	  If function tracer is enabled and the arch supports full
	  passing of pt_regs to function tracing, then kprobes can
	  optimize on top of function tracing.

config UPROBES
	def_bool n
	depends on ARCH_SUPPORTS_UPROBES
	help
	  Uprobes is the user-space counterpart to kprobes: they
	  enable instrumentation applications (such as 'perf probe')
	  to establish unintrusive probes in user-space binaries and
	  libraries, by executing handler functions when the probes
	  are hit by user-space applications.

	  ( These probes come in the form of single-byte breakpoints,
	    managed by the kernel and kept transparent to the probed
	    application. )

config HAVE_64BIT_ALIGNED_ACCESS
	def_bool 64BIT && !HAVE_EFFICIENT_UNALIGNED_ACCESS
	help
	  Some architectures require 64 bit accesses to be 64 bit
	  aligned, which also requires structs containing 64 bit values
	  to be 64 bit aligned too. This includes some 32 bit
	  architectures which can do 64 bit accesses, as well as 64 bit
	  architectures without unaligned access.

	  This symbol should be selected by an architecture if 64 bit
	  accesses are required to be 64 bit aligned in this way even
	  though it is not a 64 bit architecture.

	  See Documentation/core-api/unaligned-memory-access.rst for more
	  information on the topic of unaligned memory accesses.

config HAVE_EFFICIENT_UNALIGNED_ACCESS
	bool
	help
	  Some architectures are unable to perform unaligned accesses
	  without the use of get_unaligned/put_unaligned. Others are
	  unable to perform such accesses efficiently (e.g. trap on
	  unaligned access and require fixing it up in the exception
	  handler.)

	  This symbol should be selected by an architecture if it can
	  perform unaligned accesses efficiently to allow different
	  code paths to be selected for these cases. Some network
	  drivers, for example, could opt to not fix up alignment
	  problems with received packets if doing so would not help
	  much.

	  See Documentation/core-api/unaligned-memory-access.rst for more
	  information on the topic of unaligned memory accesses.

config ARCH_USE_BUILTIN_BSWAP
	bool
	help
	  Modern versions of GCC (since 4.4) have builtin functions
	  for handling byte-swapping. Using these, instead of the old
	  inline assembler that the architecture code provides in the
	  __arch_bswapXX() macros, allows the compiler to see what's
	  happening and offers more opportunity for optimisation. In
	  particular, the compiler will be able to combine the byteswap
	  with a nearby load or store and use load-and-swap or
	  store-and-swap instructions if the architecture has them. It
	  should almost *never* result in code which is worse than the
	  hand-coded assembler in <asm/swab.h>. But just in case it
	  does, the use of the builtins is optional.

	  Any architecture with load-and-swap or store-and-swap
	  instructions should set this. And it shouldn't hurt to set it
	  on architectures that don't have such instructions.
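
#
# Illustrative sketch (placeholder arch, not part of the real config space):
# the unaligned-access and byteswap capabilities described above are likewise
# advertised via select from arch/foo/Kconfig, e.g.:
#
#	config FOO
#		def_bool y
#		select HAVE_EFFICIENT_UNALIGNED_ACCESS
#		select ARCH_USE_BUILTIN_BSWAP
#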

config KRETPROBES
	def_bool y
	depends on KPROBES && HAVE_KRETPROBES

config USER_RETURN_NOTIFIER
	bool
	depends on HAVE_USER_RETURN_NOTIFIER
	help
	  Provide a kernel-internal notification when a cpu is about to
	  switch to user mode.

config HAVE_IOREMAP_PROT
	bool

config HAVE_KPROBES
	bool

config HAVE_KRETPROBES
	bool

config HAVE_OPTPROBES
	bool

config HAVE_KPROBES_ON_FTRACE
	bool

config HAVE_FUNCTION_ERROR_INJECTION
	bool

config HAVE_NMI
	bool

#
# An arch should select this if it provides all these things:
#
#	task_pt_regs()		in asm/processor.h or asm/ptrace.h
#	arch_has_single_step()	if there is hardware single-step support
#	arch_has_block_step()	if there is hardware block-step support
#	asm/syscall.h		supplying asm-generic/syscall.h interface
#	linux/regset.h		user_regset interfaces
#	CORE_DUMP_USE_REGSET	#define'd in linux/elf.h
#	TIF_SYSCALL_TRACE	calls tracehook_report_syscall_{entry,exit}
#	TIF_NOTIFY_RESUME	calls tracehook_notify_resume()
#	signal delivery		calls tracehook_signal_handler()
#
config HAVE_ARCH_TRACEHOOK
	bool

config HAVE_DMA_CONTIGUOUS
	bool

config GENERIC_SMP_IDLE_THREAD
	bool

config GENERIC_IDLE_POLL_SETUP
	bool

config ARCH_HAS_FORTIFY_SOURCE
	bool
	help
	  An architecture should select this when it can successfully
	  build and run with CONFIG_FORTIFY_SOURCE.

#
# Select if the arch provides a historic keepinit alias for the retain_initrd
# command line option
#
config ARCH_HAS_KEEPINITRD
	bool

# Select if arch has all set_memory_ro/rw/x/nx() functions in asm/cacheflush.h
config ARCH_HAS_SET_MEMORY
	bool

# Select if arch has all set_direct_map_invalid/default() functions
config ARCH_HAS_SET_DIRECT_MAP
	bool

#
# Select if the architecture provides the arch_dma_set_uncached symbol to
# either provide an uncached segment alias for a DMA allocation, or
# to remap the page tables in place.
#
config ARCH_HAS_DMA_SET_UNCACHED
	bool

#
# Select if the architecture provides the arch_dma_clear_uncached symbol
# to undo an in-place page table remap for uncached access.
#
config ARCH_HAS_DMA_CLEAR_UNCACHED
	bool

# Select if arch init_task must go in the __init_task_data section
config ARCH_TASK_STRUCT_ON_STACK
	bool

# Select if arch has its private alloc_task_struct() function
config ARCH_TASK_STRUCT_ALLOCATOR
	bool

config HAVE_ARCH_THREAD_STRUCT_WHITELIST
	bool
	depends on !ARCH_TASK_STRUCT_ALLOCATOR
	help
	  An architecture should select this to provide hardened usercopy
	  knowledge about what region of the thread_struct should be
	  whitelisted for copying to userspace. Normally this is only the
	  FPU registers. Specifically, arch_thread_struct_whitelist()
	  should be implemented. Without this, the entire thread_struct
	  field in task_struct will be left whitelisted.

# Select if arch has its private alloc_thread_stack() function
config ARCH_THREAD_STACK_ALLOCATOR
	bool

# Select if arch wants to size task_struct dynamically via arch_task_struct_size:
config ARCH_WANTS_DYNAMIC_TASK_STRUCT
	bool

config ARCH_32BIT_OFF_T
	bool
	depends on !64BIT
	help
	  All new 32-bit architectures should have 64-bit off_t type on
	  userspace side which corresponds to the loff_t kernel type.
	  This is a requirement for modern ABIs. Some existing architectures
	  still support 32-bit off_t. This option is enabled for all such
	  architectures explicitly.

config HAVE_ASM_MODVERSIONS
	bool
	help
	  This symbol should be selected by an architecture if it provides
	  <asm/asm-prototypes.h> to support the module versioning for symbols
	  exported from assembly code.

config HAVE_REGS_AND_STACK_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access registers and stack entries from pt_regs,
	  declared in asm/ptrace.h.
	  For example, the kprobes-based event tracer needs this API.

config HAVE_RSEQ
	bool
	depends on HAVE_REGS_AND_STACK_ACCESS_API
	help
	  This symbol should be selected by an architecture if it
	  supports an implementation of restartable sequences.

config HAVE_FUNCTION_ARG_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access function arguments from pt_regs,
	  declared in asm/ptrace.h.

config HAVE_HW_BREAKPOINT
	bool
	depends on PERF_EVENTS

config HAVE_MIXED_BREAKPOINTS_REGS
	bool
	depends on HAVE_HW_BREAKPOINT
	help
	  Depending on the arch implementation of hardware breakpoints,
	  some of them have separate registers for data and instruction
	  breakpoint addresses, while others have mixed registers that store
	  them but define the access type in a control register.
	  Select this option if your arch implements breakpoints in the
	  latter fashion.

config HAVE_USER_RETURN_NOTIFIER
	bool

config HAVE_PERF_EVENTS_NMI
	bool
	help
	  System hardware can generate an NMI using the perf event
	  subsystem. Also has support for calculating CPU cycle events
	  to determine how many clock cycles elapsed in a given period.

config HAVE_HARDLOCKUP_DETECTOR_PERF
	bool
	depends on HAVE_PERF_EVENTS_NMI
	help
	  The arch chooses to use the generic perf-NMI-based hardlockup
	  detector. Must define HAVE_PERF_EVENTS_NMI.

config HAVE_NMI_WATCHDOG
	depends on HAVE_NMI
	bool
	help
	  The arch provides a low level NMI watchdog. It provides
	  asm/nmi.h, and defines its own arch_touch_nmi_watchdog().

config HAVE_HARDLOCKUP_DETECTOR_ARCH
	bool
	select HAVE_NMI_WATCHDOG
	help
	  The arch chooses to provide its own hardlockup detector, which is
	  a superset of HAVE_NMI_WATCHDOG. It also conforms to the config
	  interfaces and parameters provided by the hardlockup detector
	  subsystem.

config HAVE_PERF_REGS
	bool
	help
	  Support selective register dumps for perf events. This includes
	  bit-mapping of each register and a unique architecture id.

config HAVE_PERF_USER_STACK_DUMP
	bool
	help
	  Support user stack dumps for perf event samples. This needs
	  access to the user stack pointer which is not unified across
	  architectures.
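
#
# Illustrative sketch (placeholder arch): an architecture whose PMU can raise
# NMIs and that implements perf register and user stack dumping would
# typically select:
#
#	config FOO
#		def_bool y
#		select HAVE_PERF_EVENTS_NMI
#		select HAVE_HARDLOCKUP_DETECTOR_PERF
#		select HAVE_PERF_REGS
#		select HAVE_PERF_USER_STACK_DUMP
#
# HAVE_HARDLOCKUP_DETECTOR_PERF only makes sense together with
# HAVE_PERF_EVENTS_NMI, as noted in its help text above.
#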

config HAVE_ARCH_JUMP_LABEL
	bool

config HAVE_ARCH_JUMP_LABEL_RELATIVE
	bool

config MMU_GATHER_TABLE_FREE
	bool

config MMU_GATHER_RCU_TABLE_FREE
	bool
	select MMU_GATHER_TABLE_FREE

config MMU_GATHER_PAGE_SIZE
	bool

config MMU_GATHER_NO_RANGE
	bool

config MMU_GATHER_NO_GATHER
	bool
	depends on MMU_GATHER_TABLE_FREE

config ARCH_WANT_IRQS_OFF_ACTIVATE_MM
	bool
	help
	  Temporary select until all architectures can be converted to have
	  irqs disabled over activate_mm. Architectures that do IPI based TLB
	  shootdowns should enable this.

config ARCH_HAVE_NMI_SAFE_CMPXCHG
	bool

config HAVE_ALIGNED_STRUCT_PAGE
	bool
	help
	  This makes sure that struct pages are double word aligned and that
	  e.g. the SLUB allocator can perform double word atomic operations
	  on a struct page for better performance. However selecting this
	  might increase the size of a struct page by a word.

config HAVE_CMPXCHG_LOCAL
	bool

config HAVE_CMPXCHG_DOUBLE
	bool

config ARCH_WEAK_RELEASE_ACQUIRE
	bool

config ARCH_WANT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_OLD_COMPAT_IPC
	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config HAVE_ARCH_SECCOMP
	bool
	help
	  An arch should select this symbol to support seccomp mode 1 (the fixed
	  syscall policy), and must provide an override for __NR_seccomp_sigreturn,
	  and compat syscalls if the asm-generic/seccomp.h defaults need adjustment:
	  - __NR_seccomp_read_32
	  - __NR_seccomp_write_32
	  - __NR_seccomp_exit_32
	  - __NR_seccomp_sigreturn_32

config HAVE_ARCH_SECCOMP_FILTER
	bool
	select HAVE_ARCH_SECCOMP
	help
	  An arch should select this symbol if it provides all of these things:
	  - all the requirements for HAVE_ARCH_SECCOMP
	  - syscall_get_arch()
	  - syscall_get_arguments()
	  - syscall_rollback()
	  - syscall_set_return_value()
	  - SIGSYS siginfo_t support
	  - secure_computing is called from a ptrace_event()-safe context
	  - secure_computing return value is checked and a return value of -1
	    results in the system call being skipped immediately.
	  - seccomp syscall wired up
	  - if !HAVE_SPARSE_SYSCALL_NR, have SECCOMP_ARCH_NATIVE,
	    SECCOMP_ARCH_NATIVE_NR, SECCOMP_ARCH_NATIVE_NAME defined. If
	    COMPAT is supported, have the SECCOMP_ARCH_COMPAT* defines too.
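
#
# Illustrative sketch (placeholder arch): once all of the requirements listed
# above are implemented, the architecture only needs:
#
#	config FOO
#		def_bool y
#		select HAVE_ARCH_SECCOMP_FILTER
#
# HAVE_ARCH_SECCOMP is pulled in automatically via the select above, which in
# turn makes the user-visible SECCOMP and SECCOMP_FILTER options below
# available.
#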

config SECCOMP
	prompt "Enable seccomp to safely execute untrusted bytecode"
	def_bool y
	depends on HAVE_ARCH_SECCOMP
	help
	  This kernel feature is useful for number crunching applications
	  that may need to handle untrusted bytecode during their
	  execution. By using pipes or other transports made available
	  to the process as file descriptors supporting the read/write
	  syscalls, it's possible to isolate those applications in their
	  own address space using seccomp. Once seccomp is enabled via
	  prctl(PR_SET_SECCOMP) or the seccomp() syscall, it cannot be
	  disabled and the task is only allowed to execute a few safe
	  syscalls defined by each seccomp mode.

	  If unsure, say Y.

config SECCOMP_FILTER
	def_bool y
	depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
	help
	  Enable tasks to build secure computing environments defined
	  in terms of Berkeley Packet Filter programs which implement
	  task-defined system call filtering policies.

	  See Documentation/userspace-api/seccomp_filter.rst for details.

config SECCOMP_CACHE_DEBUG
	bool "Show seccomp filter cache status in /proc/pid/seccomp_cache"
	depends on SECCOMP_FILTER && !HAVE_SPARSE_SYSCALL_NR
	depends on PROC_FS
	help
	  This enables the /proc/pid/seccomp_cache interface to monitor
	  seccomp cache data. The file format is subject to change. Reading
	  the file requires CAP_SYS_ADMIN.

	  This option is for debugging only. Enabling presents the risk that
	  an adversary may be able to infer the seccomp filter logic.

	  If unsure, say N.

config HAVE_ARCH_STACKLEAK
	bool
	help
	  An architecture should select this if it has the code which
	  fills the used part of the kernel stack with the STACKLEAK_POISON
	  value before returning from system calls.

config HAVE_STACKPROTECTOR
	bool
	help
	  An arch should select this symbol if:
	  - it has implemented a stack canary (e.g. __stack_chk_guard)

config STACKPROTECTOR
	bool "Stack Protector buffer overflow detection"
	depends on HAVE_STACKPROTECTOR
	depends on $(cc-option,-fstack-protector)
	default y
	help
	  This option turns on the "stack-protector" GCC feature. This
	  feature puts, at the beginning of functions, a canary value on
	  the stack just before the return address, and validates
	  the value just before actually returning. Stack based buffer
	  overflows (that need to overwrite this return address) now also
	  overwrite the canary, which gets detected and the attack is then
	  neutralized via a kernel panic.

	  Functions will have the stack-protector canary logic added if they
	  have an 8-byte or larger character array on the stack.

	  This feature requires gcc version 4.2 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 3% of all kernel functions, which increases kernel code size
	  by about 0.3%.

config STACKPROTECTOR_STRONG
	bool "Strong Stack Protector"
	depends on STACKPROTECTOR
	depends on $(cc-option,-fstack-protector-strong)
	default y
	help
	  Functions will have the stack-protector canary logic added in any
	  of the following conditions:

	  - local variable's address used as part of the right hand side of an
	    assignment or function argument
	  - local variable is an array (or union containing an array),
	    regardless of array type or length
	  - uses register local variables

	  This feature requires gcc version 4.9 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector-strong").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 20% of all kernel functions, which increases the kernel code
	  size by about 2%.

config ARCH_SUPPORTS_SHADOW_CALL_STACK
	bool
	help
	  An architecture should select this if it supports Clang's Shadow
	  Call Stack and implements runtime support for shadow stack
	  switching.

config SHADOW_CALL_STACK
	bool "Clang Shadow Call Stack"
	depends on CC_IS_CLANG && ARCH_SUPPORTS_SHADOW_CALL_STACK
	depends on DYNAMIC_FTRACE_WITH_REGS || !FUNCTION_GRAPH_TRACER
	help
	  This option enables Clang's Shadow Call Stack, which uses a
	  shadow stack to protect function return addresses from being
	  overwritten by an attacker. More information can be found in
	  Clang's documentation:

	    https://clang.llvm.org/docs/ShadowCallStack.html

	  Note that security guarantees in the kernel differ from the
	  ones documented for user space. The kernel must store addresses
	  of shadow stacks in memory, which means an attacker capable of
	  reading and writing arbitrary memory may be able to locate them
	  and hijack control flow by modifying the stacks.
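
#
# Illustrative sketch (placeholder arch): stack protector and shadow call
# stack support are advertised the same way; the arch must also provide the
# runtime pieces each help text above describes (e.g. __stack_chk_guard,
# shadow stack switching).
#
#	config FOO
#		def_bool y
#		select HAVE_STACKPROTECTOR
#		select ARCH_SUPPORTS_SHADOW_CALL_STACK
#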

config HAVE_ARCH_WITHIN_STACK_FRAMES
	bool
	help
	  An architecture should select this if it can walk the kernel stack
	  frames to determine if an object is part of either the arguments
	  or local variables (i.e. that it excludes saved return addresses,
	  and similar) by implementing an inline arch_within_stack_frames(),
	  which is used by CONFIG_HARDENED_USERCOPY.

config HAVE_CONTEXT_TRACKING
	bool
	help
	  Provide kernel/user boundary probes necessary for subsystems
	  that need it, such as userspace RCU extended quiescent state.
	  Syscalls need to be wrapped inside user_exit()-user_enter(), either
	  optimized behind a static key or through the slow path using the
	  TIF_NOHZ flag. Exception handlers must be wrapped as well. Irqs are
	  already protected inside rcu_irq_enter/rcu_irq_exit() but preemption
	  or signal handling on irq exit still needs to be protected.

config HAVE_CONTEXT_TRACKING_OFFSTACK
	bool
	help
	  Architecture neither relies on exception_enter()/exception_exit()
	  nor on schedule_user(). Also preempt_schedule_notrace() and
	  preempt_schedule_irq() can't be called in a preemptible section
	  while context tracking is CONTEXT_USER. This feature reflects a sane
	  entry implementation where the following requirements are met on
	  critical entry code, i.e. before user_exit() or after user_enter():

	  - Critical entry code isn't preemptible (or better yet:
	    not interruptible).
	  - No use of RCU read side critical sections, unless rcu_nmi_enter()
	    got called.
	  - No use of instrumentation, unless instrumentation_begin() got
	    called.

config HAVE_TIF_NOHZ
	bool
	help
	  Arch relies on TIF_NOHZ and syscall slow path to implement context
	  tracking calls to user_enter()/user_exit().

config HAVE_VIRT_CPU_ACCOUNTING
	bool

config HAVE_VIRT_CPU_ACCOUNTING_IDLE
	bool
	help
	  Architecture has its own way to account idle CPU time and therefore
	  doesn't implement vtime_account_idle().

config ARCH_HAS_SCALED_CPUTIME
	bool

config HAVE_VIRT_CPU_ACCOUNTING_GEN
	bool
	default y if 64BIT
	help
	  With VIRT_CPU_ACCOUNTING_GEN, cputime_t becomes 64-bit.
	  Before enabling this option, arch code must be audited
	  to ensure there are no races in concurrent read/write of
	  cputime_t. For example, reading/writing 64-bit cputime_t on
	  some 32-bit arches may require multiple accesses, so proper
	  locking is needed to protect against concurrent accesses.

config HAVE_IRQ_TIME_ACCOUNTING
	bool
	help
	  Archs need to ensure they use a high enough resolution clock to
	  support irq time accounting and then call enable_sched_clock_irqtime().

config HAVE_MOVE_PUD
	bool
	help
	  Architectures that select this are able to move page tables at the
	  PUD level. If there are only 3 page table levels, the move effectively
	  happens at the PGD level.

config HAVE_MOVE_PMD
	bool
	help
	  Archs that select this are able to move page tables at the PMD level.
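
#
# Illustrative sketch (placeholder arch): the context tracking, irq-time
# accounting and page-table-move capabilities above follow the same select
# pattern, provided the arch implements the hooks each help text describes.
#
#	config FOO
#		def_bool y
#		select HAVE_CONTEXT_TRACKING
#		select HAVE_IRQ_TIME_ACCOUNTING
#		select HAVE_MOVE_PMD
#		select HAVE_MOVE_PUD
#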

config HAVE_ARCH_TRANSPARENT_HUGEPAGE
	bool

config HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
	bool

config HAVE_ARCH_HUGE_VMAP
	bool

config ARCH_WANT_HUGE_PMD_SHARE
	bool

config HAVE_ARCH_SOFT_DIRTY
	bool

config HAVE_MOD_ARCH_SPECIFIC
	bool
	help
	  The arch uses struct mod_arch_specific to store data. Many arches
	  just need a simple module loader without arch specific data - those
	  should not enable this.

config MODULES_USE_ELF_RELA
	bool
	help
	  Modules only use ELF RELA relocations. Modules with ELF REL
	  relocations will give an error.

config MODULES_USE_ELF_REL
	bool
	help
	  Modules only use ELF REL relocations. Modules with ELF RELA
	  relocations will give an error.

config HAVE_IRQ_EXIT_ON_IRQ_STACK
	bool
	help
	  Architecture doesn't only execute the irq handler on the irq stack
	  but also irq_exit(). This way we can process softirqs on this irq
	  stack instead of switching to a new one when we call __do_softirq()
	  at the end of a hardirq.
	  This spares a stack switch and improves cache usage on softirq
	  processing.

config PGTABLE_LEVELS
	int
	default 2

config ARCH_HAS_ELF_RANDOMIZE
	bool
	help
	  An architecture supports choosing randomized locations for
	  stack, mmap, brk, and ET_DYN. Defined functions:
	  - arch_mmap_rnd()
	  - arch_randomize_brk()

config HAVE_ARCH_MMAP_RND_BITS
	bool
	help
	  An arch should select this symbol if it supports setting a variable
	  number of bits for use in establishing the base address for mmap
	  allocations, has MMU enabled and provides values for both:
	  - ARCH_MMAP_RND_BITS_MIN
	  - ARCH_MMAP_RND_BITS_MAX

config HAVE_EXIT_THREAD
	bool
	help
	  An architecture implements exit_thread.

config ARCH_MMAP_RND_BITS_MIN
	int

config ARCH_MMAP_RND_BITS_MAX
	int

config ARCH_MMAP_RND_BITS_DEFAULT
	int

config ARCH_MMAP_RND_BITS
	int "Number of bits to use for ASLR of mmap base address" if EXPERT
	range ARCH_MMAP_RND_BITS_MIN ARCH_MMAP_RND_BITS_MAX
	default ARCH_MMAP_RND_BITS_DEFAULT if ARCH_MMAP_RND_BITS_DEFAULT
	default ARCH_MMAP_RND_BITS_MIN
	depends on HAVE_ARCH_MMAP_RND_BITS
	help
	  This value can be used to select the number of bits to use to
	  determine the random offset to the base address of vma regions
	  resulting from mmap allocations. This value will be bounded
	  by the architecture's minimum and maximum supported values.

	  This value can be changed after boot using the
	  /proc/sys/vm/mmap_rnd_bits tunable.
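
#
# Illustrative sketch (placeholder arch, made-up values): an architecture opts
# into a tunable mmap ASLR range by selecting HAVE_ARCH_MMAP_RND_BITS and
# defining the bounds used by the range/default lines above.
#
#	config FOO
#		def_bool y
#		select HAVE_ARCH_MMAP_RND_BITS
#
#	config ARCH_MMAP_RND_BITS_MIN
#		default 8 if !64BIT
#		default 18
#
#	config ARCH_MMAP_RND_BITS_MAX
#		default 16 if !64BIT
#		default 33
#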

config HAVE_ARCH_MMAP_RND_COMPAT_BITS
	bool
	help
	  An arch should select this symbol if it supports running applications
	  in compatibility mode, supports setting a variable number of bits for
	  use in establishing the base address for mmap allocations, has MMU
	  enabled and provides values for both:
	  - ARCH_MMAP_RND_COMPAT_BITS_MIN
	  - ARCH_MMAP_RND_COMPAT_BITS_MAX

config ARCH_MMAP_RND_COMPAT_BITS_MIN
	int

config ARCH_MMAP_RND_COMPAT_BITS_MAX
	int

config ARCH_MMAP_RND_COMPAT_BITS_DEFAULT
	int

config ARCH_MMAP_RND_COMPAT_BITS
	int "Number of bits to use for ASLR of mmap base address for compatible applications" if EXPERT
	range ARCH_MMAP_RND_COMPAT_BITS_MIN ARCH_MMAP_RND_COMPAT_BITS_MAX
	default ARCH_MMAP_RND_COMPAT_BITS_DEFAULT if ARCH_MMAP_RND_COMPAT_BITS_DEFAULT
	default ARCH_MMAP_RND_COMPAT_BITS_MIN
	depends on HAVE_ARCH_MMAP_RND_COMPAT_BITS
	help
	  This value can be used to select the number of bits to use to
	  determine the random offset to the base address of vma regions
	  resulting from mmap allocations for compatible applications. This
	  value will be bounded by the architecture's minimum and maximum
	  supported values.

	  This value can be changed after boot using the
	  /proc/sys/vm/mmap_rnd_compat_bits tunable.

config HAVE_ARCH_COMPAT_MMAP_BASES
	bool
	help
	  This allows 64-bit applications to invoke the 32-bit mmap() syscall
	  and, vice versa, 32-bit applications to call the 64-bit mmap().
	  Required for applications doing different bitness syscalls.

# This allows the use of a set of generic functions to determine the mmap
# base address by giving priority to top-down scheme only if the process
# is not in legacy mode (compat task, unlimited stack size or
# sysctl_legacy_va_layout).
# An architecture that selects this option can provide its own version of:
# - STACK_RND_MASK
config ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
	bool
	depends on MMU
	select ARCH_HAS_ELF_RANDOMIZE

config HAVE_STACK_VALIDATION
	bool
	help
	  Architecture supports the 'objtool check' host tool command, which
	  performs compile-time stack metadata validation.

config HAVE_RELIABLE_STACKTRACE
	bool
	help
	  Architecture has either save_stack_trace_tsk_reliable() or
	  arch_stack_walk_reliable() function which only returns a stack trace
	  if it can guarantee the trace is reliable.

config HAVE_ARCH_HASH
	bool
	default n
	help
	  If this is set, the architecture provides an <asm/hash.h>
	  file which provides platform-specific implementations of some
	  functions in <linux/hash.h> or fs/namei.c.

config HAVE_ARCH_NVRAM_OPS
	bool

config ISA_BUS_API
	def_bool ISA

#
# ABI hall of shame
#
config CLONE_BACKWARDS
	bool
	help
	  Architecture has tls passed as the 4th argument of clone(2),
	  not the 5th one.

config CLONE_BACKWARDS2
	bool
	help
	  Architecture has the first two arguments of clone(2) swapped.

config CLONE_BACKWARDS3
	bool
	help
	  Architecture has tls passed as the 3rd argument of clone(2),
	  not the 5th one.

config ODD_RT_SIGACTION
	bool
	help
	  Architecture has unusual rt_sigaction(2) arguments.

config OLD_SIGSUSPEND
	bool
	help
	  Architecture has old sigsuspend(2) syscall, of one-argument variety.

config OLD_SIGSUSPEND3
	bool
	help
	  Even weirder antique ABI - three-argument sigsuspend(2).

config OLD_SIGACTION
	bool
	help
	  Architecture has old sigaction(2) syscall. Nope, not the same
	  as OLD_SIGSUSPEND | OLD_SIGSUSPEND3 - alpha has sigsuspend(2),
	  but fairly different variant of sigaction(2), thanks to OSF/1
	  compatibility...

config COMPAT_OLD_SIGACTION
	bool

config COMPAT_32BIT_TIME
	bool "Provide system calls for 32-bit time_t"
	default !64BIT || COMPAT
	help
	  This enables 32 bit time_t support in addition to 64 bit time_t support.
	  This is relevant on all 32-bit architectures, and 64-bit architectures
	  as part of compat syscall handling.

config ARCH_NO_PREEMPT
	bool

config ARCH_SUPPORTS_RT
	bool

config CPU_NO_EFFICIENT_FFS
	def_bool n

config HAVE_ARCH_VMAP_STACK
	def_bool n
	help
	  An arch should select this symbol if it can support kernel stacks
	  in vmalloc space. This means:

	  - vmalloc space must be large enough to hold many kernel stacks.
	    This may rule out many 32-bit architectures.

	  - Stacks in vmalloc space need to work reliably. For example, if
	    vmap page tables are created on demand, either this mechanism
	    needs to work while the stack points to a virtual address with
	    unpopulated page tables or arch code (switch_to() and switch_mm(),
	    most likely) needs to ensure that the stack's page table entries
	    are populated before running on a possibly unpopulated stack.

	  - If the stack overflows into a guard page, something reasonable
	    should happen. The definition of "reasonable" is flexible, but
	    instantly rebooting without logging anything would be unfriendly.

config VMAP_STACK
	default y
	bool "Use a virtually-mapped stack"
	depends on HAVE_ARCH_VMAP_STACK
	depends on !KASAN || KASAN_HW_TAGS || KASAN_VMALLOC
	help
	  Enable this if you want to use virtually-mapped kernel stacks
	  with guard pages. This causes kernel stack overflows to be
	  caught immediately rather than causing difficult-to-diagnose
	  corruption.

	  To use this with software KASAN modes, the architecture must support
	  backing virtual mappings with real shadow memory, and KASAN_VMALLOC
	  must be enabled.

config ARCH_OPTIONAL_KERNEL_RWX
	def_bool n

config ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
	def_bool n

config ARCH_HAS_STRICT_KERNEL_RWX
	def_bool n

config STRICT_KERNEL_RWX
	bool "Make kernel text and rodata read-only" if ARCH_OPTIONAL_KERNEL_RWX
	depends on ARCH_HAS_STRICT_KERNEL_RWX
	default !ARCH_OPTIONAL_KERNEL_RWX || ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
	help
	  If this is set, kernel text and rodata memory will be made read-only,
	  and non-text memory will be made non-executable. This provides
	  protection against certain security exploits (e.g. executing the heap
	  or modifying text).

	  These features are considered standard security practice these days.
	  You should say Y here in almost all cases.
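
#
# Illustrative sketch (placeholder arch): an architecture where strict RWX
# works but should remain user-selectable can combine the knobs above like
# this:
#
#	config FOO
#		def_bool y
#		select ARCH_HAS_STRICT_KERNEL_RWX
#		select ARCH_OPTIONAL_KERNEL_RWX
#		select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
#
# STRICT_KERNEL_RWX then appears as a prompt and still defaults to y.
#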

config ARCH_HAS_STRICT_MODULE_RWX
	def_bool n

config STRICT_MODULE_RWX
	bool "Set loadable kernel module data as NX and text as RO" if ARCH_OPTIONAL_KERNEL_RWX
	depends on ARCH_HAS_STRICT_MODULE_RWX && MODULES
	default !ARCH_OPTIONAL_KERNEL_RWX || ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
	help
	  If this is set, module text and rodata memory will be made read-only,
	  and non-text memory will be made non-executable. This provides
	  protection against certain security exploits (e.g. writing to text).

# select if the architecture provides an asm/dma-direct.h header
config ARCH_HAS_PHYS_TO_DMA
	bool

config HAVE_ARCH_COMPILER_H
	bool
	help
	  An architecture can select this if it provides an
	  asm/compiler.h header that should be included after
	  linux/compiler-*.h in order to override macro definitions that those
	  headers generally provide.

config HAVE_ARCH_PREL32_RELOCATIONS
	bool
	help
	  May be selected by an architecture if it supports place-relative
	  32-bit relocations, both in the toolchain and in the module loader,
	  in which case relative references can be used in special sections
	  for PCI fixup, initcalls etc. which are only half the size on 64 bit
	  architectures, and don't require runtime relocation on relocatable
	  kernels.

config ARCH_USE_MEMREMAP_PROT
	bool

config LOCK_EVENT_COUNTS
	bool "Locking event counts collection"
	depends on DEBUG_FS
	help
	  Enable light-weight counting of various locking related events
	  in the system with minimal performance impact. This reduces
	  the chance of application behavior change because of timing
	  differences. The counts are reported via debugfs.

# Select if the architecture has support for applying RELR relocations.
config ARCH_HAS_RELR
	bool

config RELR
	bool "Use RELR relocation packing"
	depends on ARCH_HAS_RELR && TOOLS_SUPPORT_RELR
	default y
	help
	  Store the kernel's dynamic relocations in the RELR relocation packing
	  format. Requires a compatible linker (LLD supports this feature), as
	  well as compatible NM and OBJCOPY utilities (llvm-nm and llvm-objcopy
	  are compatible).

config ARCH_HAS_MEM_ENCRYPT
	bool

config HAVE_SPARSE_SYSCALL_NR
	bool
	help
	  An architecture should select this if its syscall numbering is sparse
	  to save space. For example, the MIPS architecture has a syscall array
	  with entries at locations 4000, 5000 and 6000. This option turns on
	  syscall related optimizations for a given architecture.

config ARCH_HAS_VDSO_DATA
	bool

config HAVE_STATIC_CALL
	bool

config HAVE_STATIC_CALL_INLINE
	bool
	depends on HAVE_STATIC_CALL

config ARCH_WANT_LD_ORPHAN_WARN
	bool
	help
	  An arch should select this symbol once all linker sections are explicitly
	  included, size-asserted, or discarded in the linker scripts. This is
	  important because we never want expected sections to be placed heuristically
	  by the linker, since the locations of such sections can change between linker
	  versions.

config HAVE_ARCH_PFN_VALID
	bool

config ARCH_SUPPORTS_DEBUG_PAGEALLOC
	bool

source "kernel/gcov/Kconfig"

source "scripts/gcc-plugins/Kconfig"

endmenu