.. SPDX-License-Identifier: GPL-2.0

==============================
The QorIQ DPAA Ethernet Driver
==============================

Authors:
- Madalin Bucur <madalin.bucur@nxp.com>
- Camelia Groza <camelia.groza@nxp.com>

.. Contents

    - DPAA Ethernet Overview
    - DPAA Ethernet Supported SoCs
    - Configuring DPAA Ethernet in your kernel
    - DPAA Ethernet Frame Processing
    - DPAA Ethernet Features
    - DPAA IRQ Affinity and Receive Side Scaling
    - Debugging

DPAA Ethernet Overview
======================

DPAA stands for Data Path Acceleration Architecture and it is a
set of networking acceleration IPs that are available on several
generations of SoCs, both on PowerPC and ARM64.

The Freescale DPAA architecture consists of a series of hardware blocks
that support Ethernet connectivity. The Ethernet driver depends upon the
following drivers in the Linux kernel:

 - Peripheral Access Memory Unit (PAMU) (* needed only for PPC platforms)
    drivers/iommu/fsl_*
 - Frame Manager (FMan)
    drivers/net/ethernet/freescale/fman
 - Queue Manager (QMan), Buffer Manager (BMan)
    drivers/soc/fsl/qbman

A simplified view of the dpaa_eth interfaces mapped to FMan MACs::

  dpaa_eth       /eth0\     ...       /ethN\
  driver        |      |             |      |
  -------------   ----   -----------   ----   -------------
       -Ports  / Tx  Rx \    ...    / Tx  Rx \
  FMan        |          |         |          |
       -MACs  |   MAC0   |         |   MACN   |
             /   dtsec0   \  ...  /   dtsecN   \ (or tgec)
            /              \     /              \(or memac)
  ---------  --------------  ---  --------------  ---------
      FMan, FMan Port, FMan SP, FMan MURAM drivers
  ---------------------------------------------------------
      FMan HW blocks: MURAM, MACs, Ports, SP
  ---------------------------------------------------------

The dpaa_eth relation to the QMan, BMan and FMan::

              ________________________________
  dpaa_eth   /            eth0                \
  driver    /                                  \
  ---------   -^-   -^-   -^-   ---     ---------
  QMan driver / \   / \   / \   \ /    | BMan    |
             |Rx | |Rx | |Tx | |Tx |   | driver  |
  ---------  |Dfl| |Err| |Cnf| |FQs|   |         |
  QMan HW    |FQ | |FQ | |FQs| |   |   |         |
             /   \ /   \ /   \  \ /    |         |
  ---------   ---   ---   ---   -v-     ---------
            |        FMan QMI         |          |
            | FMan HW       FMan BMI  | BMan HW  |
              -----------------------   ---------

where the acronyms used above (and in the code) are:

=============== ===========================================================
DPAA            Data Path Acceleration Architecture
FMan            DPAA Frame Manager
QMan            DPAA Queue Manager
BMan            DPAA Buffers Manager
QMI             QMan interface in FMan
BMI             BMan interface in FMan
FMan SP         FMan Storage Profiles
MURAM           Multi-user RAM in FMan
FQ              QMan Frame Queue
Rx Dfl FQ       default reception FQ
Rx Err FQ       Rx error frames FQ
Tx Cnf FQ       Tx confirmation FQs
Tx FQs          transmission frame queues
dtsec           datapath three speed Ethernet controller (10/100/1000 Mbps)
tgec            ten gigabit Ethernet controller (10 Gbps)
memac           multirate Ethernet MAC (10/100/1000/10000)
=============== ===========================================================

DPAA Ethernet Supported SoCs
============================

The DPAA drivers enable the Ethernet controllers present on the following SoCs:

PPC
- P1023
- P2041
- P3041
- P4080
- P5020
- P5040
- T1023
- T1024
- T1040
- T1042
- T2080
- T4240
- B4860

ARM
- LS1043A
- LS1046A

Configuring DPAA Ethernet in your kernel
========================================

To enable the DPAA Ethernet driver, the following Kconfig options are required::

  # common for arch/arm64 and arch/powerpc platforms
  CONFIG_FSL_DPAA=y
  CONFIG_FSL_FMAN=y
  CONFIG_FSL_DPAA_ETH=y
  CONFIG_FSL_XGMAC_MDIO=y

  # for arch/powerpc only
  CONFIG_FSL_PAMU=y

  # common options needed for the PHYs used on the RDBs
  CONFIG_VITESSE_PHY=y
  CONFIG_REALTEK_PHY=y
  CONFIG_AQUANTIA_PHY=y
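A quick way to check that these options made it into a given kernel build is
to grep the resulting configuration. The commands below are only an
illustrative sketch: the ``.config`` location depends on your build directory,
and ``/proc/config.gz`` is only available on the running target when
``CONFIG_IKCONFIG_PROC`` is enabled::

  # grep -E 'FSL_DPAA|FSL_FMAN|FSL_XGMAC_MDIO|FSL_PAMU' .config
  # zcat /proc/config.gz | grep -E 'FSL_DPAA|FSL_FMAN'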
DPAA Ethernet Frame Processing
==============================

On Rx, buffers for the incoming frames are retrieved from the dedicated
interface buffer pool. The driver initializes and seeds this pool with
one page sized buffers.

On Tx, all transmitted frames are returned to the driver through Tx
confirmation frame queues. The driver is then responsible for freeing the
buffers. In order to do this properly, a backpointer to the skb is added
to the buffer before transmission. When the buffer returns to the driver
on a confirmation FQ, the skb can be correctly consumed.

DPAA Ethernet Features
======================

Currently the DPAA Ethernet driver enables the basic features required for
a Linux Ethernet driver. Support for advanced features will be added
gradually.

The driver has Rx and Tx checksum offloading for UDP and TCP. Currently the Rx
checksum offload feature is enabled by default and cannot be controlled through
ethtool. Support for rx-flow-hash and rx-hashing has also been added. The
addition of RSS provides a significant performance boost for forwarding
scenarios, allowing different traffic flows received by one interface to be
processed by different CPUs in parallel.

The driver has support for multiple prioritized Tx traffic classes. Priorities
range from 0 (lowest) to 3 (highest). These are mapped to HW workqueues with
strict priority levels. Each traffic class contains NR_CPUS Tx queues. By
default, only one traffic class is enabled and the lowest priority Tx queues
are used. Higher priority traffic classes can be enabled with the mqprio
qdisc. skb priority levels are mapped to traffic classes as follows:

 * priorities 0 to 3 - traffic class 0 (low priority)
 * priorities 4 to 7 - traffic class 1 (medium-low priority)
 * priorities 8 to 11 - traffic class 2 (medium-high priority)
 * priorities 12 to 15 - traffic class 3 (high priority)

For example, all four traffic classes are enabled on an interface with the
following command::

  tc qdisc add dev <int> root handle 1: \
       mqprio num_tc 4 map 0 0 0 0 1 1 1 1 2 2 2 2 3 3 3 3 hw 1
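The resulting mqprio configuration and the per-class statistics can then be
inspected with ``tc``. This is only an illustrative check, reusing the
``fm1-mac9`` interface name from the ethtool examples later in this document::

  # tc qdisc show dev fm1-mac9
  # tc -s class show dev fm1-mac9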
DPAA IRQ Affinity and Receive Side Scaling
==========================================

Traffic coming on the DPAA Rx queues or on the DPAA Tx confirmation
queues is seen by the CPU as ingress traffic on a certain portal.
The DPAA QMan portal interrupts are each affined to a certain CPU.
The same portal interrupt services all the QMan portal consumers.

By default the DPAA Ethernet driver enables RSS, making use of the
DPAA FMan Parser and Keygen blocks to distribute traffic on 128
hardware frame queues using a hash on the IPv4/v6 source and destination
addresses and the L4 source and destination ports, if present in the
received frame. When RSS is disabled, all traffic received by a certain
interface is received on the default Rx frame queue. The default DPAA Rx
frame queues are configured to put the received traffic into a pool channel
that allows any available CPU portal to dequeue the ingress traffic.
The default frame queues have the HOLDACTIVE option set, ensuring that
traffic bursts from a certain queue are serviced by the same CPU.
This ensures a very low rate of frame reordering. A drawback of this
is that only one CPU at a time can service the traffic received by a
certain interface when RSS is not enabled.

To implement RSS, the DPAA Ethernet driver allocates an extra set of
128 Rx frame queues that are configured to dedicated channels, in a
round-robin manner. The mapping of the frame queues to CPUs is
hardcoded; there is no indirection table to move traffic for a certain
FQ (hash result) to another CPU. The ingress traffic arriving on one
of these frame queues will arrive at the same portal and will always
be processed by the same CPU. This ensures intra-flow order preservation
and workload distribution for multiple traffic flows.

RSS can be turned off for a certain interface using ethtool, e.g.::

  # ethtool -N fm1-mac9 rx-flow-hash tcp4 ""

To turn it back on, one needs to set rx-flow-hash for tcp4/6 or udp4/6::

  # ethtool -N fm1-mac9 rx-flow-hash udp4 sfdn

There is no independent control for individual protocols: any command
run for one of tcp4|udp4|ah4|esp4|sctp4|tcp6|udp6|ah6|esp6|sctp6 is
going to control the rx-flow-hashing for all protocols on that interface.

Besides using the FMan Keygen computed hash for spreading traffic on the
128 Rx FQs, the DPAA Ethernet driver also sets the skb hash value when
the NETIF_F_RXHASH feature is on (active by default). This can be turned
on or off through ethtool, e.g.::

  # ethtool -K fm1-mac9 rx-hashing off
  # ethtool -k fm1-mac9 | grep hash
  receive-hashing: off
  # ethtool -K fm1-mac9 rx-hashing on
  Actual changes:
  receive-hashing: on
  # ethtool -k fm1-mac9 | grep hash
  receive-hashing: on

Please note that Rx hashing depends upon rx-flow-hashing being on for
that interface: turning off rx-flow-hashing will also disable rx-hashing
(without ethtool reporting it as off, as that depends on the
NETIF_F_RXHASH feature flag).

Debugging
=========

The following statistics are exported for each interface through ethtool:

 - interrupt count per CPU
 - Rx packets count per CPU
 - Tx packets count per CPU
 - Tx confirmed packets count per CPU
 - Tx S/G frames count per CPU
 - Tx error count per CPU
 - Rx error count per CPU
 - Rx error count per type
 - congestion related statistics:

     - congestion status
     - time spent in congestion
     - number of times the device entered congestion
     - dropped packets count per cause

The driver also exports the following information in sysfs:

 - the FQ IDs for each FQ type
   /sys/devices/platform/soc/<addr>.fman/<addr>.ethernet/dpaa-ethernet.<id>/net/fm<nr>-mac<nr>/fqids

 - the ID of the buffer pool in use
   /sys/devices/platform/soc/<addr>.fman/<addr>.ethernet/dpaa-ethernet.<id>/net/fm<nr>-mac<nr>/bpids
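As a quick reference, the counters and IDs listed above can be read as shown
below. This is only an illustrative sketch, reusing the ``fm1-mac9`` interface
name from the examples above and assuming that ``/sys/class/net/<netdev>``
resolves to the ``net/fm<nr>-mac<nr>`` directory in the paths listed above::

  # ethtool -S fm1-mac9
  # cat /sys/class/net/fm1-mac9/fqids
  # cat /sys/class/net/fm1-mac9/bpids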