==============
BPF Design Q&A
==============

BPF's extensibility and its applicability to networking, tracing and
security in the Linux kernel, together with several user space
implementations of the BPF virtual machine, have led to a number of
misunderstandings about what BPF actually is. This short Q&A is an
attempt to address that and to outline the direction in which BPF is
heading long term.

.. contents::
   :local:
   :depth: 3

Questions and Answers
=====================

Q: Is BPF a generic instruction set similar to x64 and arm64?
-------------------------------------------------------------
A: NO.

Q: Is BPF a generic virtual machine?
-------------------------------------
A: NO.

BPF is a generic instruction set *with* C calling convention.
-------------------------------------------------------------

Q: Why was the C calling convention chosen?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A: Because BPF programs are designed to run in the Linux kernel,
which is written in C. Hence BPF defines an instruction set compatible
with the two most used architectures, x64 and arm64 (and takes into
consideration important quirks of other architectures), and
defines a calling convention that is compatible with the C calling
convention of the Linux kernel on those architectures.

Q: Can multiple return values be supported in the future?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A: NO. BPF allows only register R0 to be used as the return value.

Q: Can more than 5 function arguments be supported in the future?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A: NO. The BPF calling convention only allows registers R1-R5 to be
used as arguments. BPF is not a standalone instruction set
(unlike the x64 ISA that allows msft, cdecl and other conventions).

Q: Can BPF programs access the instruction pointer or return address?
---------------------------------------------------------------------
A: NO.
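The fixed register contract described above can be sketched in plain C. This is an illustrative user-space sketch, not kernel code: ``emulated_helper`` is a made-up name, and the comments merely mirror how the BPF calling convention maps C arguments to R1-R5 and the result to R0::

.. code-block:: c

    #include <stdio.h>

    /* Hypothetical sketch: a BPF helper, as seen from C, takes at most
     * five arguments (passed in registers R1-R5) and returns a single
     * value (in R0).  The name is illustrative, not a real helper. */
    static long emulated_helper(long a, long b, long c, long d, long e)
    {
        /* R1..R5 hold a..e; the single result travels back in R0. */
        return a + b + c + d + e;
    }

    int main(void)
    {
        /* A sixth argument would have no register to live in, which is
         * why the BPF calling convention caps helpers at five. */
        printf("%ld\n", emulated_helper(1, 2, 3, 4, 5)); /* prints 15 */
        return 0;
    }
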
Q: Can BPF programs access the stack pointer?
---------------------------------------------
A: NO.

Only the frame pointer (register R10) is accessible.
From the compiler's point of view it's necessary to have a stack
pointer. For example, LLVM defines register R11 as the stack pointer
in its BPF backend, but it makes sure that the generated code never
uses it.

Q: Does the C calling convention diminish possible use cases?
-------------------------------------------------------------
A: YES.

The BPF design forces addition of major functionality in the form
of kernel helper functions and kernel objects like BPF maps, with
seamless interoperability between them. It lets the kernel call into
BPF programs and programs call kernel helpers with zero overhead,
as if all of them were native C code. That is particularly the case
for JITed BPF programs, which are indistinguishable from
native kernel C code.

Q: Does it mean that 'innovative' extensions to BPF code are disallowed?
------------------------------------------------------------------------
A: Soft yes.

At least for now, until the BPF core has support for
bpf-to-bpf calls, indirect calls, loops, global variables,
jump tables, read-only sections, and all other normal constructs
that C code can produce.

Q: Can loops be supported in a safe way?
----------------------------------------
A: It's not clear yet.

BPF developers are trying to find a way to
support bounded loops.

Q: What are the verifier limits?
--------------------------------
A: The only limit known to user space is BPF_MAXINSNS (4096).
It's the maximum number of instructions that an unprivileged bpf
program can have. The verifier has various internal limits, such as
the maximum number of instructions that can be explored during
program analysis. Currently, that limit is set to 1 million,
which essentially means that the largest program can consist
of 1 million NOP instructions.
There is also a limit on the maximum number
of subsequent branches, a limit on the number of nested bpf-to-bpf
calls, a limit on the number of verifier states per instruction,
and a limit on the number of maps used by the program.
All these limits can be hit with a sufficiently complex program.
There are also non-numerical limits that can cause the program
to be rejected. The verifier used to recognize only pointer + constant
expressions. Now it can recognize pointer + bounded_register.
bpf_map_lookup_elem(key) had a requirement that 'key' must be
a pointer to the stack. Now, 'key' can be a pointer to a map value.
The verifier is steadily getting 'smarter', and the limits are
being removed. The only way to know whether a program is going to
be accepted by the verifier is to try to load it.
The bpf development process guarantees that future kernel
versions will accept all bpf programs that were accepted by
earlier versions.


Instruction level questions
---------------------------

Q: LD_ABS and LD_IND instructions vs C code
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Q: How come the LD_ABS and LD_IND instructions are present in BPF
whereas C code cannot express them and has to use builtin intrinsics?

A: This is an artifact of compatibility with classic BPF. Modern
networking code in BPF performs better without them.
See 'direct packet access'.

Q: BPF instructions mapping not one-to-one to native CPU
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Q: It seems not all BPF instructions map one-to-one to native CPU
instructions. For example, why are BPF_JNE and other compare-and-jump
instructions not cpu-like?

A: This was necessary to avoid introducing flags into the ISA, which
are impossible to make generic and efficient across CPU architectures.

Q: Why doesn't BPF_DIV map to the x64 div insn?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A: Because if we picked a one-to-one relationship to x64 it would have
made BPF more complicated to support on arm64 and other architectures.
Also, it needs a div-by-zero runtime check.

Q: Why is there no BPF_SDIV for signed divide operation?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A: Because it would be rarely used. llvm errors in such a case and
prints a suggestion to use unsigned divide instead.

Q: Why does BPF have an implicit prologue and epilogue?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A: Because architectures like sparc have register windows, and in general
there are enough subtle differences between architectures that a naive
store of the return address onto the stack won't work. Another reason is
that BPF has to be safe from division by zero (and from the legacy
exception path of the LD_ABS insn). Those instructions need to invoke
the epilogue and return implicitly.

Q: Why weren't BPF_JLT and BPF_JLE instructions introduced in the beginning?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A: Because classic BPF didn't have them, and the BPF authors felt that a
compiler workaround would be acceptable. It turned out that programs lose
performance due to the lack of these compare instructions, so they were added.
These two instructions are a perfect example of what kind of new BPF
instructions are acceptable and can be added in the future.
These two already had equivalent instructions in native CPUs.
New instructions that don't have a one-to-one mapping to HW instructions
will not be accepted.

Q: BPF 32-bit subregister requirements
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Q: BPF 32-bit subregisters have a requirement to zero the upper 32 bits
of BPF registers, which makes BPF an inefficient virtual machine for
32-bit CPU architectures and 32-bit HW accelerators. Can true 32-bit
registers be added to BPF in the future?

A: NO.
But some optimizations on zeroing the upper 32 bits of BPF registers are
available, and can be leveraged to improve the performance of JITed BPF
programs for 32-bit architectures.

Starting with version 7, LLVM is able to generate instructions that operate
on 32-bit subregisters, provided the option -mattr=+alu32 is passed for
compiling a program. Furthermore, the verifier can now mark the
instructions for which zeroing the upper bits of the destination register
is required, and insert an explicit zero-extension (zext) instruction
(a mov32 variant). This means that for architectures without zext hardware
support, the JIT back-ends do not need to clear the upper bits for
subregisters written by alu32 instructions or narrow loads. Instead, the
back-ends simply need to support code generation for that mov32 variant,
and to override bpf_jit_needs_zext() to make it return "true" (in order to
enable zext insertion in the verifier).

Note that it is possible for a JIT back-end to have partial hardware
support for zext. In that case, if verifier zext insertion is enabled,
it could lead to the insertion of unnecessary zext instructions. Such
instructions could be removed by creating a simple peephole inside the JIT
back-end: if one instruction has hardware support for zext and the next
instruction is an explicit zext, then the latter can be skipped when doing
the code generation.

Q: Does BPF have a stable ABI?
------------------------------
A: YES. BPF instructions, arguments to BPF programs, the set of helper
functions and their arguments, and the recognized return codes are all part
of the ABI. However, there is one specific exception for tracing programs
which use helpers like bpf_probe_read() to walk kernel internal
data structures and compile against kernel internal headers.
Both of these
kernel internals are subject to change and can break with newer kernels,
such that the program needs to be adapted accordingly.

Q: Are tracepoints part of the stable ABI?
------------------------------------------
A: NO. Tracepoints are tied to internal implementation details, hence they are
subject to change and can break with newer kernels. BPF programs need to change
accordingly when this happens.

Q: How much stack space does a BPF program use?
-----------------------------------------------
A: Currently all program types are limited to 512 bytes of stack
space, but the verifier computes the actual amount of stack used,
and both the interpreter and most JITed code consume only the
necessary amount.

Q: Can BPF be offloaded to HW?
------------------------------
A: YES. BPF HW offload is supported by the NFP driver.

Q: Does the classic BPF interpreter still exist?
------------------------------------------------
A: NO. Classic BPF programs are converted into extended BPF instructions.

Q: Can BPF call arbitrary kernel functions?
-------------------------------------------
A: NO. BPF programs can only call a set of helper functions which
is defined for every program type.

Q: Can BPF overwrite arbitrary kernel memory?
---------------------------------------------
A: NO.

Tracing bpf programs can *read* arbitrary memory with the bpf_probe_read()
and bpf_probe_read_str() helpers. Networking programs cannot read
arbitrary memory, since they don't have access to these helpers.
Programs can never read or write arbitrary memory directly.

Q: Can BPF overwrite arbitrary user memory?
-------------------------------------------
A: Sort-of.

Tracing BPF programs can overwrite the user memory
of the current task with bpf_probe_write_user().
Every time such a
program is loaded, the kernel will print a warning message, so
this helper is only useful for experiments and prototypes.
Tracing BPF programs are root only.

Q: New functionality via kernel modules?
----------------------------------------
Q: Can BPF functionality such as new program or map types, new
helpers, etc. be added out of kernel module code?

A: NO.
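As a rough user-space illustration of the per-program-type helper restriction answered earlier ("Q: Can BPF call arbitrary kernel functions?"), each program type can be modeled as an allow-list of helper IDs. All names and tables below are hypothetical and exist only for illustration; they are not kernel code::

.. code-block:: c

    #include <stdbool.h>
    #include <stdio.h>

    /* Sketch of the idea behind per-program-type helper sets: a
     * program type resolves helper IDs through its own table, and a
     * helper absent from that table is simply unavailable.  The IDs
     * and the table are made up for this example. */
    enum helper_id { HELPER_MAP_LOOKUP, HELPER_PROBE_READ, HELPER_MAX };

    static const bool networking_helpers[HELPER_MAX] = {
        [HELPER_MAP_LOOKUP] = true,   /* available to networking progs */
        [HELPER_PROBE_READ] = false,  /* tracing only */
    };

    static bool helper_allowed(const bool *table, enum helper_id id)
    {
        return id < HELPER_MAX && table[id];
    }

    int main(void)
    {
        printf("%d\n", helper_allowed(networking_helpers, HELPER_MAP_LOOKUP));
        printf("%d\n", helper_allowed(networking_helpers, HELPER_PROBE_READ));
        return 0;
    }

This prints ``1`` then ``0``: the same helper ID that resolves for one program type fails to resolve for another, which is how tracing-only helpers such as bpf_probe_read() stay out of reach of networking programs.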