1================================
2Documentation for /proc/sys/net/
3================================
4
5Copyright
6
7Copyright (c) 1999
8
9	- Terrehon Bowden <terrehon@pacbell.net>
10	- Bodo Bauer <bb@ricochet.net>
11
12Copyright (c) 2000
13
14	- Jorge Nerin <comandante@zaralinux.com>
15
16Copyright (c) 2009
17
18	- Shen Feng <shen@cn.fujitsu.com>
19
20For general info and legal blurb, please look in index.rst.
21
22------------------------------------------------------------------------------
23
This file contains the documentation for the sysctl files in
/proc/sys/net.

The interface to the networking parts of the kernel is located in
/proc/sys/net. The following table shows all possible subdirectories. You may
see only some of them, depending on your kernel's configuration.
30
31
32Table : Subdirectories in /proc/sys/net
33
34 ========= =================== = ========== ==================
35 Directory Content               Directory  Content
36 ========= =================== = ========== ==================
37 core      General parameter     appletalk  Appletalk protocol
38 unix      Unix domain sockets   netrom     NET/ROM
39 802       E802 protocol         ax25       AX25
40 ethernet  Ethernet protocol     rose       X.25 PLP layer
41 ipv4      IP version 4          x25        X.25 protocol
42 bridge    Bridging              decnet     DEC net
43 ipv6      IP version 6          tipc       TIPC
44 ========= =================== = ========== ==================
45
461. /proc/sys/net/core - Network core options
47============================================
48
49bpf_jit_enable
50--------------
51
52This enables the BPF Just in Time (JIT) compiler. BPF is a flexible
53and efficient infrastructure allowing to execute bytecode at various
54hook points. It is used in a number of Linux kernel subsystems such
55as networking (e.g. XDP, tc), tracing (e.g. kprobes, uprobes, tracepoints)
56and security (e.g. seccomp). LLVM has a BPF back end that can compile
restricted C into a sequence of BPF instructions. After a program has been
loaded through bpf(2) and has passed the in-kernel verifier, the JIT
translates these BPF programs into native CPU instructions. There are
two flavors of JITs; the newer eBPF JIT is currently supported on:
61
62  - x86_64
63  - x86_32
64  - arm64
65  - arm32
66  - ppc64
67  - sparc64
68  - mips64
69  - s390x
70  - riscv64
71  - riscv32
72
73And the older cBPF JIT supported on the following archs:
74
75  - mips
76  - ppc
77  - sparc
78
79eBPF JITs are a superset of cBPF JITs, meaning the kernel will
80migrate cBPF instructions into eBPF instructions and then JIT
compile them transparently. Older cBPF JITs can only translate classic
workloads such as tcpdump filters and seccomp rules, but not the
aforementioned eBPF programs loaded through bpf(2).
84
85Values:
86
87	- 0 - disable the JIT (default value)
88	- 1 - enable the JIT
	- 2 - enable the JIT and ask the compiler to emit traces to the kernel log.
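
For example, the JIT can be switched on at run time either through the sysctl
interface or by writing to the procfs file directly; both forms operate on
the same knob.

::

  # sysctl -w net.core.bpf_jit_enable=1
  net.core.bpf_jit_enable = 1
  # cat /proc/sys/net/core/bpf_jit_enable
  1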
90
91bpf_jit_harden
92--------------
93
This enables hardening for the BPF JIT compiler. It is supported by the eBPF
JIT backends. Enabling hardening trades off performance, but can mitigate
JIT spraying.
97
98Values:
99
100	- 0 - disable JIT hardening (default value)
101	- 1 - enable JIT hardening for unprivileged users only
102	- 2 - enable JIT hardening for all users
103
104bpf_jit_kallsyms
105----------------
106
When the BPF JIT compiler is enabled, the compiled images reside at addresses
unknown to the rest of the kernel, meaning they show up neither in traces nor
in /proc/kallsyms. This option enables export of these addresses, which can
be used for debugging/tracing. If bpf_jit_harden is enabled, this feature is
disabled.
112
Values:
114
115	- 0 - disable JIT kallsyms export (default value)
116	- 1 - enable JIT kallsyms export for privileged users only
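
As an illustration, once the export is enabled (and bpf_jit_harden is left
off), the symbols of JIT-compiled programs can be listed from /proc/kallsyms;
loaded programs typically appear as bpf_prog_<tag> entries.

::

  # sysctl -w net.core.bpf_jit_kallsyms=1
  net.core.bpf_jit_kallsyms = 1
  # grep bpf_prog_ /proc/kallsyms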
117
118bpf_jit_limit
119-------------
120
This enforces a global limit on the amount of memory that may be allocated
for the BPF JIT compiler; unprivileged JIT requests are rejected once the
limit has been surpassed. bpf_jit_limit contains the value of the global
limit in bytes.
125
126dev_weight
127----------
128
The maximum number of packets that the kernel can handle on a NAPI interrupt;
it is a per-CPU variable. For drivers that support LRO or GRO_HW, a hardware
aggregated packet is counted as one packet in this context.
132
133Default: 64
134
135dev_weight_rx_bias
136------------------
137
RPS (and its variants RFS and aRFS) processing competes with the driver's
registered NAPI poll function for the per-softirq-cycle netdev_budget. This
parameter influences the proportion of the configured netdev_budget that is
spent on RPS-based packet processing during RX softirq cycles. It is further
meant for making the current dev_weight adaptable to asymmetric CPU needs on
the RX/TX side of the network stack (see dev_weight_tx_bias). It is effective
on a per-CPU basis, and the value is derived from dev_weight multiplicatively
(dev_weight * dev_weight_rx_bias).
145
146Default: 1
147
148dev_weight_tx_bias
149------------------
150
151Scales the maximum number of packets that can be processed during a TX softirq cycle.
152Effective on a per CPU basis. Allows scaling of current dev_weight for asymmetric
153net stack processing needs. Be careful to avoid making TX softirq processing a CPU hog.
154
155Calculation is based on dev_weight (dev_weight * dev_weight_tx_bias).
156
157Default: 1
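
As a worked example under the defaults listed above, the effective per-cycle
quotas are simply the products of dev_weight and the two bias values.

::

  # cat /proc/sys/net/core/dev_weight
  64
  # cat /proc/sys/net/core/dev_weight_rx_bias
  1
  # cat /proc/sys/net/core/dev_weight_tx_bias
  1

With these settings both the RX and TX quotas evaluate to 64 * 1 = 64 packets
per softirq cycle on each CPU. Raising dev_weight_tx_bias to 4, for instance,
would allow a TX cycle to process up to 64 * 4 = 256 packets while leaving
the RX quota unchanged.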
158
159default_qdisc
160-------------
161
162The default queuing discipline to use for network devices. This allows
overriding the default of pfifo_fast with an alternative. Since the default
queuing discipline is created without additional parameters, it is best suited
to queuing disciplines that work well without configuration, like stochastic
fair queue (sfq), CoDel (codel) or fair queue CoDel (fq_codel). Don't use
167queuing disciplines like Hierarchical Token Bucket or Deficit Round Robin
168which require setting up classes and bandwidths. Note that physical multiqueue
169interfaces still use mq as root qdisc, which in turn uses this default for its
170leaves. Virtual devices (like e.g. lo or veth) ignore this setting and instead
171default to noqueue.
172
173Default: pfifo_fast
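
For example, to switch the default to fq_codel (one of the parameterless
disciplines mentioned above) and confirm the change:

::

  # sysctl -w net.core.default_qdisc=fq_codel
  net.core.default_qdisc = fq_codel
  # cat /proc/sys/net/core/default_qdisc
  fq_codel

Note that the new default is only picked up by qdiscs created after the
change, e.g. when an interface is brought up or its qdisc is replaced;
existing qdiscs are left untouched.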
174
175busy_read
176---------
177
178Low latency busy poll timeout for socket reads. (needs CONFIG_NET_RX_BUSY_POLL)
179Approximate time in us to busy loop waiting for packets on the device queue.
180This sets the default value of the SO_BUSY_POLL socket option.
It can be set or overridden per socket via the SO_BUSY_POLL socket option,
which is the preferred method of enabling it. If you need to enable the
feature globally via sysctl, a value of 50 is recommended.
184
185Will increase power usage.
186
187Default: 0 (off)
188
189busy_poll
190----------------
191Low latency busy poll timeout for poll and select. (needs CONFIG_NET_RX_BUSY_POLL)
192Approximate time in us to busy loop waiting for events.
193Recommended value depends on the number of sockets you poll on.
194For several sockets 50, for several hundreds 100.
195For more than that you probably want to use epoll.
196Note that only sockets with SO_BUSY_POLL set will be busy polled,
so you want to either selectively set SO_BUSY_POLL on those sockets or set
the net.core.busy_read sysctl globally.
199
200Will increase power usage.
201
202Default: 0 (off)
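
A minimal example of enabling both knobs globally with the values recommended
above (this assumes a kernel built with CONFIG_NET_RX_BUSY_POLL):

::

  # sysctl -w net.core.busy_read=50
  net.core.busy_read = 50
  # sysctl -w net.core.busy_poll=50
  net.core.busy_poll = 50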
203
204rmem_default
205------------
206
207The default setting of the socket receive buffer in bytes.
208
209rmem_max
210--------
211
212The maximum receive socket buffer size in bytes.
213
tstamp_allow_data
-----------------

216Allow processes to receive tx timestamps looped together with the original
217packet contents. If disabled, transmit timestamp requests from unprivileged
218processes are dropped unless socket option SOF_TIMESTAMPING_OPT_TSONLY is set.
219
220Default: 1 (on)
221
222
223wmem_default
224------------
225
226The default setting (in bytes) of the socket send buffer.
227
228wmem_max
229--------
230
231The maximum send socket buffer size in bytes.
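
For illustration, both maxima can be raised through sysctl so that
applications may then request larger buffers via SO_RCVBUF/SO_SNDBUF; the
4 MB figure below is an arbitrary example, not a recommendation.

::

  # sysctl -w net.core.rmem_max=4194304
  net.core.rmem_max = 4194304
  # sysctl -w net.core.wmem_max=4194304
  net.core.wmem_max = 4194304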
232
233message_burst and message_cost
234------------------------------
235
These parameters are used to limit the warning messages written to the kernel
log from the networking code. They enforce a rate limit so that a flood of
such messages cannot be used as a denial-of-service attack. A higher
message_cost factor results in fewer messages being written. message_burst
controls when messages will be dropped. The default settings limit warning
messages to one every five seconds.
242
243warnings
244--------
245
246This sysctl is now unused.
247
248This was used to control console messages from the networking stack that
occur because of problems on the network like duplicate addresses or bad
checksums.
251
252These messages are now emitted at KERN_DEBUG and can generally be enabled
253and controlled by the dynamic_debug facility.
254
255netdev_budget
256-------------
257
258Maximum number of packets taken from all interfaces in one polling cycle (NAPI
259poll). In one polling cycle interfaces which are registered to polling are
260probed in a round-robin manner. Also, a polling cycle may not exceed
261netdev_budget_usecs microseconds, even if netdev_budget has not been
262exhausted.
263
264netdev_budget_usecs
265---------------------
266
267Maximum number of microseconds in one NAPI polling cycle. Polling
268will exit when either netdev_budget_usecs have elapsed during the
269poll cycle or the number of packets processed reaches netdev_budget.
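
Both limits apply to the same polling loop, so they are usually tuned
together; a brief sketch, where the figures are arbitrary examples rather
than recommended values:

::

  # sysctl -w net.core.netdev_budget=600
  net.core.netdev_budget = 600
  # sysctl -w net.core.netdev_budget_usecs=4000
  net.core.netdev_budget_usecs = 4000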
270
271netdev_max_backlog
272------------------
273
Maximum number of packets queued on the INPUT side when the interface
receives packets faster than the kernel can process them.
276
277netdev_rss_key
278--------------
279
RSS (Receive Side Scaling) enabled drivers use a 40-byte host key that is
randomly generated.
282Some user space might need to gather its content even if drivers do not
283provide ethtool -x support yet.
284
285::
286
287  myhost:~# cat /proc/sys/net/core/netdev_rss_key
288  84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8: ... (52 bytes total)
289
The file contains nul bytes if no driver has ever called the
netdev_rss_key_fill() function.
291
292Note:
293  /proc/sys/net/core/netdev_rss_key contains 52 bytes of key,
294  but most drivers only use 40 bytes of it.
295
296::
297
298  myhost:~# ethtool -x eth0
299  RX flow hash indirection table for eth0 with 8 RX ring(s):
300      0:    0     1     2     3     4     5     6     7
301  RSS hash key:
302  84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8:43:e3:c9:0c:fd:17:55:c2:3a:4d:69:ed:f1:42:89
303
304netdev_tstamp_prequeue
305----------------------
306
If set to 0, RX packet timestamps can be sampled after RPS processing, when
the target CPU processes packets. This might delay the timestamps somewhat,
but permits distributing the load across several CPUs.
310
311If set to 1 (default), timestamps are sampled as soon as possible, before
312queueing.
313
314optmem_max
315----------
316
317Maximum ancillary buffer size allowed per socket. Ancillary data is a sequence
318of struct cmsghdr structures with appended data.
319
320fb_tunnels_only_for_init_net
321----------------------------
322
Controls whether fallback tunnels (like tunl0, gre0, gretap0, erspan0,
sit0, ip6tnl0, ip6gre0) are automatically created when a new network
namespace is created, provided the corresponding tunnel is present in
the initial network namespace.
If set to 1, these devices are not automatically created, and
user space is responsible for creating them if needed.
329
Default: 0 (for compatibility reasons)
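
For example, to suppress automatic creation of the fallback devices in
network namespaces created after the change:

::

  # sysctl -w net.core.fb_tunnels_only_for_init_net=1
  net.core.fb_tunnels_only_for_init_net = 1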
331
332devconf_inherit_init_net
333------------------------
334
Controls whether a new network namespace should inherit all current
336settings under /proc/sys/net/{ipv4,ipv6}/conf/{all,default}/. By
337default, we keep the current behavior: for IPv4 we inherit all current
338settings from init_net and for IPv6 we reset all settings to default.
339
340If set to 1, both IPv4 and IPv6 settings are forced to inherit from
341current ones in init_net. If set to 2, both IPv4 and IPv6 settings are
342forced to reset to their default values.
343
Default: 0 (for compatibility reasons)
345
3462. /proc/sys/net/unix - Parameters for Unix domain sockets
347----------------------------------------------------------
348
There is only one file in this directory.
unix_dgram_qlen limits the maximum number of datagrams queued in a Unix
domain socket's buffer. It has no effect unless PF_UNIX sockets are used.
352
353
3543. /proc/sys/net/ipv4 - IPV4 settings
355-------------------------------------
356Please see: Documentation/networking/ip-sysctl.txt and ipvs-sysctl.txt for
357descriptions of these entries.
358
359
3604. Appletalk
361------------
362
The /proc/sys/net/appletalk directory holds the Appletalk configuration data
when Appletalk is loaded. The configurable parameters are:
365
366aarp-expiry-time
367----------------
368
The amount of time we keep an AARP entry before expiring it. Used to age out
old hosts.
371
372aarp-resolve-time
373-----------------
374
375The amount of time we will spend trying to resolve an Appletalk address.
376
377aarp-retransmit-limit
378---------------------
379
380The number of times we will retransmit a query before giving up.
381
382aarp-tick-time
383--------------
384
Controls the rate at which expiry checks are run.
386
387The directory  /proc/net/appletalk  holds the list of active Appletalk sockets
388on a machine.
389
The fields indicate the DDP type, the local address (in network:node format),
the remote address, the size of the transmit pending queue, the size of the
received queue (bytes waiting for applications to read), the state and the
uid owning the socket.
394
/proc/net/atalk_iface lists all the interfaces configured for Appletalk. It
shows the name of the interface, its Appletalk address, the network range on
that address (or network number for phase 1 networks), and the status of the
interface.
399
/proc/net/atalk_route lists each known network route. It lists the target
(network) that the route leads to, the router (may be directly connected), the
route flags, and the device the route is using.
403
4045. TIPC
405-------
406
407tipc_rmem
408---------
409
410The TIPC protocol now has a tunable for the receive memory, similar to the
411tcp_rmem - i.e. a vector of 3 INTEGERs: (min, default, max)
412
413::
414
415    # cat /proc/sys/net/tipc/tipc_rmem
416    4252725 34021800        68043600
417    #
418
419The max value is set to CONN_OVERLOAD_LIMIT, and the default and min values
are scaled (shifted) versions of that same value. Note that the min value
is currently not used in any meaningful way, but the triplet is preserved
in order to be consistent with things like tcp_rmem.
423
424named_timeout
425-------------
426
427TIPC name table updates are distributed asynchronously in a cluster, without
428any form of transaction handling. This means that different race scenarios are
possible. One such scenario is that a name withdrawal sent out by one node
and received by another node may arrive after a second, overlapping name
publication has already been accepted from a third node, even though the
conflicting updates may originally have been issued in the correct
sequential order.
If named_timeout is nonzero, failed topology updates will be placed on a defer
queue until another event arrives that clears the error, or until the timeout
expires. The value is in milliseconds.
436