.. SPDX-License-Identifier: GPL-2.0+

===========================================================================
Linux Base Driver for the Intel(R) Ethernet 10 Gigabit PCI Express Adapters
===========================================================================

Intel 10 Gigabit Linux driver.
Copyright(c) 1999-2018 Intel Corporation.

Contents
========

- Identifying Your Adapter
- Command Line Parameters
- Additional Configurations
- Known Issues
- Support

Identifying Your Adapter
========================
The driver is compatible with devices based on the following:

 * Intel(R) Ethernet Controller 82598
 * Intel(R) Ethernet Controller 82599
 * Intel(R) Ethernet Controller X520
 * Intel(R) Ethernet Controller X540
 * Intel(R) Ethernet Controller X550
 * Intel(R) Ethernet Controller X552
 * Intel(R) Ethernet Controller X553

For information on how to identify your adapter, and for the latest Intel
network drivers, refer to the Intel Support website:
https://www.intel.com/support
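
A quick way to confirm locally which devices are present and which driver an
interface uses is with standard tools (the interface name ethX below is a
placeholder)::

  lspci | grep -i ethernet   # list Ethernet controllers in the system
  ethtool -i ethX            # reports "driver: ixgbe" plus firmware and bus info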

SFP+ Devices with Pluggable Optics
----------------------------------

82599-BASED ADAPTERS
~~~~~~~~~~~~~~~~~~~~
NOTES:
- If your 82599-based Intel(R) Network Adapter came with Intel optics or is an
  Intel(R) Ethernet Server Adapter X520-2, then it only supports Intel optics
  and/or the direct attach cables listed below.
- When 82599-based SFP+ devices are connected back to back, they should be set
  to the same Speed setting via ethtool. Results may vary if you mix speed
  settings.

+---------------+---------------------------------------+------------------+
| Supplier      | Type                                  | Part Numbers     |
+===============+=======================================+==================+
| SR Modules                                                               |
+---------------+---------------------------------------+------------------+
| Intel         | DUAL RATE 1G/10G SFP+ SR (bailed)     | FTLX8571D3BCV-IT |
+---------------+---------------------------------------+------------------+
| Intel         | DUAL RATE 1G/10G SFP+ SR (bailed)     | AFBR-703SDZ-IN2  |
+---------------+---------------------------------------+------------------+
| Intel         | DUAL RATE 1G/10G SFP+ SR (bailed)     | AFBR-703SDDZ-IN1 |
+---------------+---------------------------------------+------------------+
| LR Modules                                                               |
+---------------+---------------------------------------+------------------+
| Intel         | DUAL RATE 1G/10G SFP+ LR (bailed)     | FTLX1471D3BCV-IT |
+---------------+---------------------------------------+------------------+
| Intel         | DUAL RATE 1G/10G SFP+ LR (bailed)     | AFCT-701SDZ-IN2  |
+---------------+---------------------------------------+------------------+
| Intel         | DUAL RATE 1G/10G SFP+ LR (bailed)     | AFCT-701SDDZ-IN1 |
+---------------+---------------------------------------+------------------+

The following is a list of 3rd party SFP+ modules that have received some
testing. Not all modules are applicable to all devices.

+---------------+---------------------------------------+------------------+
| Supplier      | Type                                  | Part Numbers     |
+===============+=======================================+==================+
| Finisar       | SFP+ SR bailed, 10g single rate       | FTLX8571D3BCL    |
+---------------+---------------------------------------+------------------+
| Avago         | SFP+ SR bailed, 10g single rate       | AFBR-700SDZ      |
+---------------+---------------------------------------+------------------+
| Finisar       | SFP+ LR bailed, 10g single rate       | FTLX1471D3BCL    |
+---------------+---------------------------------------+------------------+
| Finisar       | DUAL RATE 1G/10G SFP+ SR (No Bail)    | FTLX8571D3QCV-IT |
+---------------+---------------------------------------+------------------+
| Avago         | DUAL RATE 1G/10G SFP+ SR (No Bail)    | AFBR-703SDZ-IN1  |
+---------------+---------------------------------------+------------------+
| Finisar       | DUAL RATE 1G/10G SFP+ LR (No Bail)    | FTLX1471D3QCV-IT |
+---------------+---------------------------------------+------------------+
| Avago         | DUAL RATE 1G/10G SFP+ LR (No Bail)    | AFCT-701SDZ-IN1  |
+---------------+---------------------------------------+------------------+
| Finisar       | 1000BASE-T SFP                        | FCLF8522P2BTL    |
+---------------+---------------------------------------+------------------+
| Avago         | 1000BASE-T                            | ABCU-5710RZ      |
+---------------+---------------------------------------+------------------+
| HP            | 1000BASE-SX SFP                       | 453153-001       |
+---------------+---------------------------------------+------------------+

82599-based adapters support all passive and active limiting direct attach
cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.

Laser turns off for SFP+ when ifconfig ethX down
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"ifconfig ethX down" turns off the laser for 82599-based SFP+ fiber adapters.
"ifconfig ethX up" turns on the laser.
Alternatively, you can use "ip link set [down/up] dev ethX" to turn the
laser off and on.
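
For example, assuming the SFP+ interface is ethX::

  ip link set down dev ethX   # turns the laser off
  ip link set up dev ethX     # turns the laser on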

82599-based QSFP+ Adapters
~~~~~~~~~~~~~~~~~~~~~~~~~~
NOTES:
- If your 82599-based Intel(R) Network Adapter came with Intel optics, it only
  supports Intel optics.
- 82599-based QSFP+ adapters only support 4x10 Gbps connections. 1x40 Gbps
  connections are not supported. QSFP+ link partners must be configured for
  4x10 Gbps.
- 82599-based QSFP+ adapters do not support automatic link speed detection.
  The link speed must be configured to either 10 Gbps or 1 Gbps to match the
  link partner's speed capabilities. Incorrect speed configurations will
  result in failure to link.
- Intel(R) Ethernet Converged Network Adapter X520-Q1 only supports the optics
  and direct attach cables listed below.

+---------------+---------------------------------------+------------------+
| Supplier      | Type                                  | Part Numbers     |
+===============+=======================================+==================+
| Intel         | DUAL RATE 1G/10G QSFP+ SRL (bailed)   | E10GQSFPSR       |
+---------------+---------------------------------------+------------------+

82599-based QSFP+ adapters support all passive and active limiting QSFP+
direct attach cables that comply with SFF-8436 v4.1 specifications.

82598-BASED ADAPTERS
~~~~~~~~~~~~~~~~~~~~
NOTES:
- Intel(R) Ethernet Network Adapters that support removable optical modules
  only support their original module type (for example, the Intel(R) 10
  Gigabit SR Dual Port Express Module only supports SR optical modules). If
  you plug in a different type of module, the driver will not load.
- Hot swapping/hot plugging optical modules is not supported.
- Only single speed, 10 gigabit modules are supported.
- LAN on Motherboard (LOMs) may support DA, SR, or LR modules. Other module
  types are not supported. Please see your system documentation for details.

The following is a list of SFP+ modules and direct attach cables that have
received some testing. Not all modules are applicable to all devices.

+---------------+---------------------------------------+------------------+
| Supplier      | Type                                  | Part Numbers     |
+===============+=======================================+==================+
| Finisar       | SFP+ SR bailed, 10g single rate       | FTLX8571D3BCL    |
+---------------+---------------------------------------+------------------+
| Avago         | SFP+ SR bailed, 10g single rate       | AFBR-700SDZ      |
+---------------+---------------------------------------+------------------+
| Finisar       | SFP+ LR bailed, 10g single rate       | FTLX1471D3BCL    |
+---------------+---------------------------------------+------------------+

82598-based adapters support all passive direct attach cables that comply with
SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach cables
are not supported.

Third party optic modules and cables referred to above are listed only for the
purpose of highlighting third party specifications and potential
compatibility, and are not recommendations or endorsements or sponsorship of
any third party's product by Intel. Intel is not endorsing or promoting
products made by any third party and the third party reference is provided
only to share information regarding certain optic modules and cables with the
above specifications. There may be other manufacturers or suppliers, producing
or supplying optic modules and cables with similar or matching descriptions.
Customers must use their own discretion and diligence to purchase optic
modules and cables from any third party of their choice. Customers are solely
responsible for assessing the suitability of the product and/or devices and
for the selection of the vendor for purchasing any product. THE OPTIC MODULES
AND CABLES REFERRED TO ABOVE ARE NOT WARRANTED OR SUPPORTED BY INTEL. INTEL
ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED
WARRANTY, RELATING TO SALE AND/OR USE OF SUCH THIRD PARTY PRODUCTS OR
SELECTION OF VENDOR BY CUSTOMERS.

Command Line Parameters
=======================

max_vfs
-------
:Valid Range: 1-63

This parameter adds support for SR-IOV. It causes the driver to spawn up to
max_vfs worth of virtual functions.
If the value is greater than 0, it will also force the VMDq parameter to be 1
or more.

NOTE: This parameter is only used on kernel 3.7.x and below. On kernel 3.8.x
and above, use sysfs to enable VFs. Also, for Red Hat distributions, this
parameter is only used on version 6.6 and older. For version 6.7 and newer, use
sysfs. For example::

  # echo $num_vf_enabled > /sys/class/net/$dev/device/sriov_numvfs  # enable VFs
  # echo 0 > /sys/class/net/$dev/device/sriov_numvfs                # disable VFs
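
The maximum number of VFs the device can expose can be checked before enabling
them (assuming the standard SR-IOV sysfs layout)::

  cat /sys/class/net/$dev/device/sriov_totalvfs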

The parameters for the driver are referenced by position. Thus, if you have a
dual port adapter, or more than one adapter in your system, and want N virtual
functions per port, you must specify a number for each port with each parameter
separated by a comma. For example::

  modprobe ixgbe max_vfs=4

This will spawn 4 VFs on the first port.

::

  modprobe ixgbe max_vfs=2,4

This will spawn 2 VFs on the first port and 4 VFs on the second port.

NOTE: Caution must be used in loading the driver with these parameters.
Depending on your system configuration, number of slots, etc., it is not
always possible to predict which position on the command line corresponds to
which port.

NOTE: Neither the device nor the driver controls how VFs are mapped into config
space. Bus layout will vary by operating system. On operating systems that
support it, you can check sysfs to find the mapping.
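
For example, on a PF interface named ethX, the mapping of each VF to its PCI
address can typically be read from the virtfn symlinks in sysfs::

  ls -l /sys/class/net/ethX/device/virtfn*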

NOTE: When either SR-IOV mode or VMDq mode is enabled, hardware VLAN filtering
and VLAN tag stripping/insertion will remain enabled. Please remove the old
VLAN filter before the new VLAN filter is added. For example,

::

  ip link set eth0 vf 0 vlan 100  # set VLAN 100 for VF 0
  ip link set eth0 vf 0 vlan 0    # delete VLAN 100
  ip link set eth0 vf 0 vlan 200  # set a new VLAN 200 for VF 0

With kernel 3.6, the driver supports the simultaneous usage of max_vfs and DCB
features, subject to the constraints described below. Prior to kernel 3.6, the
driver did not support the simultaneous operation of max_vfs greater than 0 and
the DCB features (multiple traffic classes utilizing Priority Flow Control and
Extended Transmission Selection).

When DCB is enabled, network traffic is transmitted and received through
multiple traffic classes (packet buffers in the NIC). The traffic is associated
with a specific class based on priority, which has a value of 0 through 7 used
in the VLAN tag. When SR-IOV is not enabled, each traffic class is associated
with a set of receive/transmit descriptor queue pairs. The number of queue
pairs for a given traffic class depends on the hardware configuration. When
SR-IOV is enabled, the descriptor queue pairs are grouped into pools. The
Physical Function (PF) and each Virtual Function (VF) is allocated a pool of
receive/transmit descriptor queue pairs. When multiple traffic classes are
configured (for example, DCB is enabled), each pool contains a queue pair from
each traffic class. When a single traffic class is configured in the hardware,
the pools contain multiple queue pairs from the single traffic class.

The number of VFs that can be allocated depends on the number of traffic
classes that can be enabled. The configurable number of traffic classes for
each enabled VF is as follows:

- 0 - 15 VFs = Up to 8 traffic classes, depending on device support
- 16 - 31 VFs = Up to 4 traffic classes
- 32 - 63 VFs = 1 traffic class

When VFs are configured, the PF is allocated one pool as well. The PF supports
the DCB features with the constraint that each traffic class will only use a
single queue pair. When zero VFs are configured, the PF can support multiple
queue pairs per traffic class.

allow_unsupported_sfp
---------------------
:Valid Range: 0,1
:Default Value: 0 (disabled)

This parameter allows unsupported and untested SFP+ modules on 82599-based
adapters, as long as the type of module is known to the driver.
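
For example, the parameter can be set when loading the module (a minimal
sketch; run "modinfo ixgbe" to confirm which parameters your kernel's driver
build exposes)::

  modprobe ixgbe allow_unsupported_sfp=1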

debug
-----
:Valid Range: 0-16 (0=none,...,16=all)
:Default Value: 0

This parameter adjusts the level of debug messages displayed in the system
logs.
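
For example (an illustrative sketch; a persistent setting would normally be
placed in a file under /etc/modprobe.d/)::

  modprobe ixgbe debug=16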

Additional Features and Configurations
======================================

Flow Control
------------
Ethernet Flow Control (IEEE 802.3x) can be configured with ethtool to enable
receiving and transmitting pause frames for ixgbe. When transmit is enabled,
pause frames are generated when the receive packet buffer crosses a predefined
threshold. When receive is enabled, the transmit unit will halt for the time
delay specified when a pause frame is received.

NOTE: You must have a flow control capable link partner.

Flow Control is enabled by default.
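
The current flow control settings can be queried with::

  ethtool -a eth?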

Use ethtool to change the flow control settings. To enable or disable Rx or
Tx Flow Control::

  ethtool -A eth? rx <on|off> tx <on|off>

Note: This command only enables or disables Flow Control if auto-negotiation is
disabled. If auto-negotiation is enabled, this command changes the parameters
used for auto-negotiation with the link partner.

To enable or disable auto-negotiation::

  ethtool -s eth? autoneg <on|off>

Note: Flow Control auto-negotiation is part of link auto-negotiation. Depending
on your device, you may not be able to change the auto-negotiation setting.

NOTE: For 82598 backplane cards entering 1 gigabit mode, flow control default
behavior is changed to off. Flow control in 1 gigabit mode on these devices can
lead to transmit hangs.

Intel(R) Ethernet Flow Director
-------------------------------
The Intel Ethernet Flow Director performs the following tasks:

- Directs receive packets according to their flows to different queues.
- Enables tight control on routing a flow in the platform.
- Matches flows and CPU cores for flow affinity.
- Supports multiple parameters for flexible flow classification and load
  balancing (in SFP mode only).

NOTE: Intel Ethernet Flow Director masking works in the opposite manner from
subnet masking. In the following command::

  # ethtool -N eth11 flow-type ip4 src-ip 172.4.1.2 m 255.0.0.0 dst-ip \
  172.21.1.1 m 255.128.0.0 action 31

The src-ip value that is written to the filter will be 0.4.1.2, not 172.0.0.0
as might be expected. Similarly, the dst-ip value written to the filter will be
0.21.1.1, not 172.0.0.0.

To enable or disable the Intel Ethernet Flow Director::

  # ethtool -K ethX ntuple <on|off>

When disabling ntuple filters, all the user programmed filters are flushed from
the driver cache and hardware. All needed filters must be re-added when ntuple
is re-enabled.

To add a filter that directs packets to queue 2, use the -U or -N switch::

  # ethtool -N ethX flow-type tcp4 src-ip 192.168.10.1 dst-ip \
  192.168.10.2 src-port 2000 dst-port 2001 action 2 [loc 1]

To see the list of filters currently present::

  # ethtool <-u|-n> ethX

Sideband Perfect Filters
------------------------
Sideband Perfect Filters are used to direct traffic that matches specified
characteristics. They are enabled through ethtool's ntuple interface. To add a
new filter, use the following command::

  ethtool -U <device> flow-type <type> src-ip <ip> dst-ip <ip> src-port <port> \
  dst-port <port> action <queue>

Where:

  <device> - the ethernet device to program
  <type> - can be ip4, tcp4, udp4, or sctp4
  <ip> - the IP address to match on
  <port> - the port number to match on
  <queue> - the queue to direct traffic towards (-1 discards the matched traffic)

Use the following command to delete a filter::

  ethtool -U <device> delete <N>

Where <N> is the filter ID displayed when printing all the active filters, and
may also have been specified using "loc <N>" when adding the filter.

The following example matches TCP traffic sent from 192.168.0.1, port 5300,
directed to 192.168.0.5, port 80, and sends it to queue 7::

  ethtool -U enp130s0 flow-type tcp4 src-ip 192.168.0.1 dst-ip 192.168.0.5 \
  src-port 5300 dst-port 80 action 7

For each flow-type, the programmed filters must all have the same matching
input set. For example, issuing the following two commands is acceptable::

  ethtool -U enp130s0 flow-type ip4 src-ip 192.168.0.1 src-port 5300 action 7
  ethtool -U enp130s0 flow-type ip4 src-ip 192.168.0.5 src-port 55 action 10

Issuing the next two commands, however, is not acceptable, since the first
specifies src-ip and the second specifies dst-ip::

  ethtool -U enp130s0 flow-type ip4 src-ip 192.168.0.1 src-port 5300 action 7
  ethtool -U enp130s0 flow-type ip4 dst-ip 192.168.0.5 src-port 55 action 10

The second command will fail with an error. You may program multiple filters
with the same fields, using different values, but, on one device, you may not
program two TCP4 filters with different matching fields.

Matching on a sub-portion of a field is not supported by the ixgbe driver, thus
partial mask fields are not supported.

To create filters that direct traffic to a specific Virtual Function, use the
"user-def" parameter. Specify user-def as a 64-bit value, where the lower 32
bits represent the queue number and the next 8 bits represent which VF.
Note that 0 is the PF, so the VF identifier is offset by 1. For example::

  ... user-def 0x800000002 ...

directs traffic to Virtual Function 7 (8 minus 1), into queue 2 of that VF.

Note that these filters will not break internal routing rules, and will not
route traffic that otherwise would not have been sent to the specified Virtual
Function.

Jumbo Frames
------------
Jumbo Frames support is enabled by changing the Maximum Transmission Unit (MTU)
to a value larger than the default value of 1500.

Use the ifconfig command to increase the MTU size. For example, enter the
following where <x> is the interface number::

  ifconfig eth<x> mtu 9000 up

Alternatively, you can use the ip command as follows::

  ip link set mtu 9000 dev eth<x>
  ip link set up dev eth<x>

This setting is not saved across reboots. The setting change can be made
permanent by adding 'MTU=9000' to the file::

  /etc/sysconfig/network-scripts/ifcfg-eth<x>   # for RHEL
  /etc/sysconfig/network/<config_file>          # for SLES

NOTE: The maximum MTU setting for Jumbo Frames is 9710. This value coincides
with the maximum Jumbo Frames size of 9728 bytes.

NOTE: This driver will attempt to use multiple page sized buffers to receive
each jumbo packet. This should help to avoid buffer starvation issues when
allocating receive packets.

NOTE: For 82599-based network connections, if you are enabling jumbo frames in
a virtual function (VF), jumbo frames must first be enabled in the physical
function (PF). The VF MTU setting cannot be larger than the PF MTU.
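
As an illustrative sequence (interface names are placeholders; the VF
interface is configured from within the guest or namespace that owns it)::

  ip link set dev eth<x> mtu 9000    # raise the PF MTU on the host first
  ip link set dev eth<vf> mtu 9000   # then raise the MTU on the VF interface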

Generic Receive Offload, aka GRO
--------------------------------
The driver supports the in-kernel software implementation of GRO. GRO has
shown that by coalescing Rx traffic into larger chunks of data, CPU
utilization can be significantly reduced when under large Rx load. GRO is an
evolution of the previously-used LRO interface. GRO is able to coalesce
other protocols besides TCP. It's also safe to use with configurations that
are problematic for LRO, namely bridging and iSCSI.
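
GRO state can be inspected and toggled through ethtool's offload interface::

  ethtool -k ethX | grep generic-receive-offload
  ethtool -K ethX gro <on|off>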

Data Center Bridging (DCB)
--------------------------
NOTE:
The kernel assumes that TC0 is available, and will disable Priority Flow
Control (PFC) on the device if TC0 is not available. To fix this, ensure TC0 is
enabled when setting up DCB on your switch.

DCB is a configuration Quality of Service implementation in hardware. It uses
the VLAN priority tag (802.1p) to filter traffic. That means that there are 8
different priorities that traffic can be filtered into. It also enables
priority flow control (802.1Qbb) which can limit or eliminate the number of
dropped packets during network stress. Bandwidth can be allocated to each of
these priorities, which is enforced at the hardware level (802.1Qaz).

Adapter firmware implements LLDP and DCBX protocol agents as per 802.1AB and
802.1Qaz respectively. The firmware-based DCBX agent runs in willing mode only
and can accept settings from a DCBX capable peer. Software configuration of
DCBX parameters via dcbtool/lldptool is not supported.

The ixgbe driver implements the DCB netlink interface layer to allow user-space
to communicate with the driver and query DCB configuration for the port.

ethtool
-------
The driver utilizes the ethtool interface for driver configuration and
diagnostics, as well as displaying statistical information. The latest ethtool
version is required for this functionality. Download it at:
https://www.kernel.org/pub/software/network/ethtool/
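
For example, driver and firmware information and adapter statistics can be
retrieved with::

  ethtool -i ethX   # driver name, version, firmware version, bus address
  ethtool -S ethX   # adapter and per-queue statistics counters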

FCoE
----
The ixgbe driver supports Fibre Channel over Ethernet (FCoE) and Data Center
Bridging (DCB). This code has no default effect on the regular driver
operation. Configuring DCB and FCoE is outside the scope of this README. Refer
to http://www.open-fcoe.org/ for FCoE project information and contact
ixgbe-eedc@lists.sourceforge.net for DCB information.

MAC and VLAN anti-spoofing feature
----------------------------------
When a malicious driver attempts to send a spoofed packet, it is dropped by the
hardware and not transmitted.

An interrupt is sent to the PF driver notifying it of the spoof attempt. When a
spoofed packet is detected, the PF driver will send the following message to
the system log (displayed by the "dmesg" command)::

  ixgbe ethX: ixgbe_spoof_check: n spoofed packets detected

where "X" is the PF interface number and "n" is the number of spoofed packets.

NOTE: This feature can be disabled for a specific Virtual Function (VF)::

  ip link set <pf dev> vf <vf id> spoofchk {off|on}
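
The current per-VF anti-spoof setting is shown in the PF's link output, for
example::

  ip link show <pf dev>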

IPsec Offload
-------------
The ixgbe driver supports IPsec Hardware Offload. When creating Security
Associations with "ip xfrm ..." the 'offload' tag option can be used to
register the IPsec SA with the driver in order to get higher throughput in
the secure communications.

The offload is also supported for ixgbe's VFs, but the VF must be set as
'trusted' and the support must be enabled with::

  ethtool --set-priv-flags eth<x> vf-ipsec on
  ip link set eth<x> vf <y> trust on
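
A minimal sketch of registering an offloaded inbound SA with "ip xfrm" is
shown below; the addresses, SPI, and key are placeholders, and the algorithms
accepted depend on the device and kernel::

  ip xfrm state add src 192.168.1.10 dst 192.168.1.20 proto esp spi 0x100 \
      reqid 0x100 mode transport aead 'rfc4106(gcm(aes))' \
      0x1234567890abcdef1234567890abcdef12345678 128 \
      offload dev eth<x> dir in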

Known Issues/Troubleshooting
============================

Enabling SR-IOV in a 64-bit Microsoft Windows Server 2012/R2 guest OS
---------------------------------------------------------------------
Linux KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM.
This includes traditional PCIe devices, as well as SR-IOV-capable devices based
on the Intel Ethernet Controller XL710.


Support
=======
For general information, go to the Intel support website at:

https://www.intel.com/support/

or the Intel Wired Networking project hosted by Sourceforge at:

https://sourceforge.net/projects/e1000

If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
to e1000-devel@lists.sf.net.