History log of /openbmc/linux/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c (Results 451 – 475 of 568)
Revision  Date  Author  Comments
# e4d86a4a 07-Jan-2018 Tariq Toukan <tariqt@mellanox.com>

net/mlx5e: Do not busy-wait for UMR completion in Striding RQ

Do not busy-wait on a pending UMR completion. Under high HW load,
busy-waiting on a delayed completion would fully utilize the CPU core
and mistakenly indicate a SW bottleneck.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
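
A minimal sketch of the approach, with hypothetical names rather than the
driver's exact code: instead of spinning until the UMR completes, the
posting function bails out and lets the next NAPI poll retry.

    /* hypothetical sketch, not the driver's exact code */
    static bool post_rx_wqes(struct my_rq *rq)
    {
        if (rq->umr_in_progress)
            return false;  /* don't busy-wait; NAPI will poll again */
        /* ... post new WQEs ... */
        return true;
    }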



# 18187fb2 18-Jul-2017 Tariq Toukan <tariqt@mellanox.com>

net/mlx5e: Code movements in RX UMR WQE post

Gathers the process of a UMR WQE post into one function,
in preparation for a downstream patch that inlines
the WQE data.
No functional change here.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>



# 472a1e44 12-Mar-2018 Tariq Toukan <tariqt@mellanox.com>

net/mlx5e: Save MTU in channels params

Knowing the MTU is required for the RQ creation flow.
By design, the channel creation flow is fully isolated
from priv/netdev, and can be completed with access to
the channels params and mdev.
Adding the MTU to the channels params helps preserve that.
In addition, we save it in the RQ to make its access faster in
datapath checks.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>



# 24fd07ab 15-Feb-2018 Tariq Toukan <tariqt@mellanox.com>

net/mlx5e: Use no-offset function in skb header copy

When copying the skb header to skb->data, replace the call to
skb_copy_to_linear_data_offset() with a zero offset by a call to
the no-offset function skb_copy_to_linear_data().

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
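
The change amounts to the following substitution (illustrative call sites
with generic argument names):

    /* before: explicit zero offset */
    skb_copy_to_linear_data_offset(skb, 0, from, len);
    /* after: equivalent no-offset helper */
    skb_copy_to_linear_data(skb, from, len);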



# bd658dda 27-Nov-2017 Tariq Toukan <tariqt@mellanox.com>

net/mlx5e: Separate dma base address and offset in dma_sync call

Pass the base dma address and offset to dma_sync_single_range_for_cpu(),
instead of doing the pre-calculation.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
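
One plausible before/after, with generic names (the driver passes its own
device and buffer fields):

    /* before: offset pre-calculated into the address */
    dma_sync_single_range_for_cpu(dev, addr + frag_offset, 0, len,
                                  DMA_FROM_DEVICE);
    /* after: base address and offset passed separately */
    dma_sync_single_range_for_cpu(dev, addr, frag_offset, len,
                                  DMA_FROM_DEVICE);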



# f74290fd 23-Feb-2018 David S. Miller <davem@davemloft.net>

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net


# 8babd44d 20-Dec-2017 Gal Pressman <galp@mellanox.com>

net/mlx5e: Fix TCP checksum in LRO buffers

When receiving an LRO packet, the checksum field is set by the hardware
to the checksum of the first coalesced packet. Obviously, this checksum
is not valid for the merged LRO packet and should be fixed. We can use
the CQE checksum which covers the checksum of the entire merged packet
TCP payload to help us calculate the checksum incrementally.

Tested by sending IPv4/6 traffic with LRO enabled, RX checksum disabled
and watching nstat checksum error counters (in addition to the obvious
bandwidth drop caused by checksum errors).

This bug is usually "hidden" since LRO packets would go through the
CHECKSUM_UNNECESSARY flow which does not validate the packet checksum.

It's important to note that prior to this patch, LRO packets provided
with CHECKSUM_UNNECESSARY are indeed packets with a correct validated
checksum (even though the checksum inside the TCP header is incorrect),
since the hardware LRO aggregation is terminated upon receiving a packet
with bad checksum.

Fixes: e586b3b0baee ("net/mlx5: Ethernet Datapath files")
Signed-off-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
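
A hedged sketch of the incremental recalculation for IPv4, using the
standard kernel checksum helpers (cqe_check_sum, ip4 and tot_len are
assumed locals; IPv6 is analogous):

    __wsum check;

    /* fold the CQE checksum (whole TCP payload) into a fresh header sum */
    tcp->check = 0;
    check = csum_partial(tcp, tcp->doff * 4,
                         csum_unfold((__force __sum16)cqe_check_sum));
    tcp->check = csum_tcpudp_magic(ip4->saddr, ip4->daddr,
                                   tot_len - sizeof(struct iphdr),
                                   IPPROTO_TCP, check);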



# 388ca8be 02-Jan-2018 Yonatan Cohen <yonatanc@mellanox.com>

IB/mlx5: Implement fragmented completion queue (CQ)

The current implementation of CQ creation requires contiguous
memory; such a requirement is problematic once memory is
fragmented or the system is low on memory, as it causes
failures in dma_zalloc_coherent().

This patch implements a new scheme of fragmented CQs to overcome
this issue by introducing a new type, 'struct mlx5_frag_buf_ctrl',
to allocate fragmented buffers rather than contiguous ones.

Base the Completion Queues (CQs) on this new fragmented buffer.

It fixes the following crashes:
kworker/29:0: page allocation failure: order:6, mode:0x80d0
CPU: 29 PID: 8374 Comm: kworker/29:0 Tainted: G OE 3.10.0
Workqueue: ib_cm cm_work_handler [ib_cm]
Call Trace:
[<>] dump_stack+0x19/0x1b
[<>] warn_alloc_failed+0x110/0x180
[<>] __alloc_pages_slowpath+0x6b7/0x725
[<>] __alloc_pages_nodemask+0x405/0x420
[<>] dma_generic_alloc_coherent+0x8f/0x140
[<>] x86_swiotlb_alloc_coherent+0x21/0x50
[<>] mlx5_dma_zalloc_coherent_node+0xad/0x110 [mlx5_core]
[<>] ? mlx5_db_alloc_node+0x69/0x1b0 [mlx5_core]
[<>] mlx5_buf_alloc_node+0x3e/0xa0 [mlx5_core]
[<>] mlx5_buf_alloc+0x14/0x20 [mlx5_core]
[<>] create_cq_kernel+0x90/0x1f0 [mlx5_ib]
[<>] mlx5_ib_create_cq+0x3b0/0x4e0 [mlx5_ib]

Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
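
A minimal sketch of the fragmented-buffer idea with a hypothetical
structure (not the mlx5 definitions): allocate many page-sized coherent
chunks instead of one large order-N block.

    struct frag_buf {              /* hypothetical */
        int nfrags;
        void **frags;              /* one PAGE_SIZE chunk each */
        dma_addr_t *daddrs;
    };

    static int frag_buf_alloc(struct device *dev, struct frag_buf *buf,
                              size_t size)
    {
        int i;

        buf->nfrags = DIV_ROUND_UP(size, PAGE_SIZE);
        /* ... allocate the frags/daddrs arrays ... */
        for (i = 0; i < buf->nfrags; i++) {
            buf->frags[i] = dma_alloc_coherent(dev, PAGE_SIZE,
                                               &buf->daddrs[i], GFP_KERNEL);
            if (!buf->frags[i])
                return -ENOMEM;    /* caller unwinds */
        }
        return 0;
    }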



# 63a612f9 09-Jan-2018 Gal Pressman <galp@mellanox.com>

net/mlx5e: Add likely to the common RX checksum flow

Most packets return true for is_last_ethertype_ip(); surround it
with the likely() compiler hint.

Signed-off-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
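
The change boils down to (call site abbreviated):

    /* before */
    if (is_last_ethertype_ip(skb, &network_depth)) { ... }
    /* after */
    if (likely(is_last_ethertype_ip(skb, &network_depth))) { ... }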



# 36e564b7 31-Oct-2017 Feras Daoud <ferasda@mellanox.com>

net/mlx5e: IPoIB, Use correct timestamp in child receive flow

The current implementation takes the child timestamp object from
the parent since the rq in mlx5i_complete_rx_cqe belongs to the parent.
This change fixes the issue by taking the correct timestamp.

Fixes: 7e7f4780c340 ("net/mlx5e: IPoIB, Use hash-table to map between QPN to child netdev")
Signed-off-by: Feras Daoud <ferasda@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>



# cd4a87df 07-Jan-2018 Gal Pressman <galp@mellanox.com>

net/mlx5e: Replace WARN_ONCE with netdev_WARN_ONCE

Use the more appropriate netdev_WARN_ONCE instead of WARN_ONCE macro.

Signed-off-by: Gal Pressman <galp@mellanox.com>
Reviewed-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
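
An illustrative substitution (message text hypothetical; netdev_WARN_ONCE
prefixes the message with the device name automatically):

    /* before */
    WARN_ONCE(1, "%s: invalid CQE\n", netdev->name);
    /* after */
    netdev_WARN_ONCE(netdev, "invalid CQE\n");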



# 0ddf5432 03-Jan-2018 Jesper Dangaard Brouer <brouer@redhat.com>

xdp/mlx5: setup xdp_rxq_info

The mlx5 driver has a special drop-RQ queue (one per interface) that
simply drops all incoming traffic. It helps the driver keep other HW
objects (flow steering) alive upon down/up operations. It is
temporarily pointed to by flow steering objects during interface
setup, and when the interface is down. It lacks many fields that are set
in a regular RQ (for example, its state is never switched to
MLX5_RQC_STATE_RDY). (Thanks to Tariq Toukan for the explanation.)

The XDP RX-queue info for this drop-RQ is marked as unused, which
allows us to use the same takedown/free code path as for other RX-queues.

Driver hook points for xdp_rxq_info:
* reg : mlx5e_alloc_rq()
* unused: mlx5e_alloc_drop_rq()
* unreg : mlx5e_free_rq()

Tested on actual hardware with samples/bpf program

Cc: Saeed Mahameed <saeedm@mellanox.com>
Cc: Matan Barak <matanb@mellanox.com>
Cc: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
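
A hedged sketch of the three hook points, using the xdp_rxq_info API of
that era (rq field names assumed):

    /* reg: in the regular RQ alloc path */
    err = xdp_rxq_info_reg(&rq->xdp_rxq, rq->netdev, rq->ix);

    /* unused: in the drop-RQ alloc path */
    xdp_rxq_info_unused(&rq->xdp_rxq);

    /* unreg: in the shared free path */
    xdp_rxq_info_unreg(&rq->xdp_rxq);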



# fdae5f37 11-Nov-2017 David S. Miller <davem@davemloft.net>

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net


# 2e50b261 15-Oct-2017 Inbar Karmy <inbark@mellanox.com>

net/mlx5e: Set page to null in case dma mapping fails

Currently, when dma mapping fails, put_page is called,
but the page is not set to null. Later, in the page_reuse treatment in
mlx5e_free_rx_descs(), mlx5e_page_release() is called a second time,
improperly doing a dma_unmap (of a non-mapped address) and an extra put_page.
Prevent this by nullifying the page pointer when dma_map fails.

Fixes: accd58833237 ("net/mlx5e: Introduce RX Page-Reuse")
Signed-off-by: Inbar Karmy <inbark@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Cc: kernel-team@fb.com
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
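
A minimal sketch of the fix with generic names:

    dma_addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
    if (unlikely(dma_mapping_error(dev, dma_addr))) {
        put_page(page);
        page = NULL;   /* prevent a second unmap/release later */
        return -ENOMEM;
    }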



# f938daee 30-Aug-2017 Gal Pressman <galp@mellanox.com>

net/mlx5e: CHECKSUM_COMPLETE offload for VLAN/QinQ packets

When the VLAN tag is present in the packet buffer (i.e. VLAN stripping
disabled, or QinQ), the driver currently reports CHECKSUM_UNNECESSARY.
Instead of using the CHECKSUM_COMPLETE offload only for packets whose
first ethertype is IPv4/6, use it for packets whose last ethertype is
IPv4/6, to cover the former cases as well.

The checksum field present in the CQE is calculated from the IP header
until the end of the packet. When the first ethertype is different from
IPv4/6 (e.g. an 802.1Q VLAN), the checksum of the VLAN header(s) should
be added. This small header checksum calculation allows us to use
CHECKSUM_COMPLETE instead of CHECKSUM_UNNECESSARY.

Testing bandwidth of one and 8 TCP streams to a single RQ,
LRO and VLAN stripping offloads disabled:
CPU: Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
NIC: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]

Before:
+--------------+--------------------+---------------------+----------------------+
| Traffic type | 1 Stream BW [Mbps] | 8 Streams BW [Mbps] | Checksum offload |
+--------------+--------------------+---------------------+----------------------+
| Untagged | 28,247.35 | 24,716.88 | CHECKSUM_COMPLETE |
| VLAN | 27,516.69 | 23,752.26 | CHECKSUM_UNNECESSARY |
| QinQ | 6,961.30 | 20,667.04 | CHECKSUM_UNNECESSARY |
+--------------+--------------------+---------------------+----------------------+

Now:
+--------------+--------------------+---------------------+-------------------+
| Traffic type | 1 Stream BW [Mbps] | 8 Streams BW [Mbps] | Checksum offload |
+--------------+--------------------+---------------------+-------------------+
| Untagged | 28,521.28 | 24,926.32 | CHECKSUM_COMPLETE |
| VLAN | 27,389.37 | 23,715.34 | CHECKSUM_COMPLETE |
| QinQ | 6,901.77 | 20,845.73 | CHECKSUM_COMPLETE |
+--------------+--------------------+---------------------+-------------------+

No performance degradation observed.

Signed-off-by: Gal Pressman <galp@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
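
A hedged sketch of the header adjustment described above (network_depth
is the assumed offset of the IP header within the buffer):

    /* add the VLAN header bytes (between ETH_HLEN and the IP header)
     * back into the CQE checksum so CHECKSUM_COMPLETE stays correct */
    if (network_depth > ETH_HLEN)
        skb->csum = csum_partial(skb->data + ETH_HLEN,
                                 network_depth - ETH_HLEN,
                                 skb->csum);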



# f24686e8 10-Sep-2017 Gal Pressman <galp@mellanox.com>

net/mlx5e: Add VLAN offloads statistics

The following counters are now exposed through ethtool -S:
rx[i]_removed_vlan_packets (per channel)
rx_removed_vlan_packets
tx[i]_added_vlan_packets (per channel)
tx_added_vlan_packets

rx_removed_vlan_packets: The number of packets that had their
outer VLAN header stripped to the CQE by the hardware.
tx_added_vlan_packets: The number of packets that had their
outer VLAN header inserted by the hardware.

Signed-off-by: Gal Pressman <galp@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
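
Illustratively, the RX counter is bumped where the stripped tag is
restored from the CQE (sketch; exact field names may differ):

    if (cqe_has_vlan(cqe)) {
        __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
                               be16_to_cpu(cqe->vlan_info));
        rq->stats.removed_vlan_packets++;
    }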



# af28f6f2 15-Oct-2017 David S. Miller <davem@davemloft.net>

Merge tag 'mlx5-updates-2017-10-11' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux

Saeed Mahameed says:

====================
mlx5-updates-2017-10-11: IPoIB Multi Pkey support

This series provides support for IPoIB Multi Pkey.
InfiniBand Pkeys are the equivalent of Ethernet vlans.
Currently the IPoIB device driver supports only the default Pkey, and
IPoIB Pkey child interfaces are not supported in IPoIB offloads mode.
This series adds that support by allowing the creation of multiple mlx5
IPoIB netdevices with a non-default Pkey.

An mlx5 IPoIB Pkey child interface is a smaller version of the mlx5i
IPoIB interface and shares most of its resources with the parent IPoIB
interface, namely RX steering and ring queue resources.

The only mlx5 resources a child Pkey interface will be creating are the TX rings,
since they should be assigned to a specific Pkey.

mlx5i Pkey netdev is implemented via new mlx5e netdev profile implemented in
mlx5/core/ipoib/ipoib_vlan.c.

The series starts with a refactoring of mlx5e PTP and mlx5 clock implementation
to move the code to be part of mlx5 core rather than mlx5e netdevice, in order to
make mlx5 clock and PTP registration part of the core to be shared with mlx5e
master Ethernet netdev/IPoIB parent netdev and mlx5_ib in the near future.

Add the support for attaching multiple underlay QPs for the different Pkeys
in mlx5 core RX steering.

Add Pkey index to rdma_netdev to add the ability to set PKEY index to lower
IPoIB offload netdev.

Use hash-table to map between DQPN (Destination QP number) to child netdev
for the IPoIB parent netdev to forward RX packets to the corresponding
child Pkey netdev, since the RX rings are shared.

The rest of the series adds the ipoib child Pkey: mlx5e netdev profile,
netdev ops implementation, and a minimal set of ethtool callbacks.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>



# 7e7f4780 14-Sep-2017 Alex Vesker <valex@mellanox.com>

net/mlx5e: IPoIB, Use hash-table to map between QPN to child netdev

This change is needed for PKEY support, since the RQs are shared
between the child interface and the parent. The parent is responsible
for NAPI and the processing of RX completions. Using the dqpn in the
completion descriptor, we set the corresponding child IPoIB netdevice
on the SKB.
The mapping between the dqpn and the netdevice is done using a hash
table; each mlx5 IPoIB interface registers its mapping on creation.

Signed-off-by: Alex Vesker <valex@mellanox.com>
Reviewed-by: Erez Shitrit <erezsh@mellanox.com>
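
A generic sketch of such a QPN-to-netdev mapping using <linux/hashtable.h>
(hypothetical names; the driver's actual implementation differs in detail):

    struct qpn_entry {                  /* hypothetical */
        u32 qpn;
        struct net_device *netdev;
        struct hlist_node hlist;
    };

    static DEFINE_HASHTABLE(qpn_ht, 8); /* 256 buckets */

    static void qpn_ht_add(struct qpn_entry *e)
    {
        hash_add(qpn_ht, &e->hlist, e->qpn);
    }

    static struct net_device *qpn_ht_lookup(u32 qpn)
    {
        struct qpn_entry *e;

        hash_for_each_possible(qpn_ht, e, hlist, qpn)
            if (e->qpn == qpn)
                return e->netdev;
        return NULL;
    }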



# 7c39afb3 15-Aug-2017 Feras Daoud <ferasda@mellanox.com>

net/mlx5: PTP code migration to driver core section

The PTP code is moved to the core section of the mlx5 driver in order
to share it between ethernet and infiniband. This movement involves the
following changes:
- Change mlx5e_ prefix to be mlx5_
- Add clock structs to Core
- Add clock object to mlx5_core_dev
- Call Init/Uninit clock from core init/cleanup
- Rename mlx5e_tstamp to be mlx5_clock

Signed-off-by: Feras Daoud <ferasda@mellanox.com>
Signed-off-by: Eitan Rabin <rabin@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>



# 53954cf8 05-Oct-2017 David S. Miller <davem@davemloft.net>

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Just simple overlapping changes.

Signed-off-by: David S. Miller <davem@davemloft.net>


# 603e1f5b 13-Sep-2017 Gal Pressman <galp@mellanox.com>

net/mlx5e: Fix calculated checksum offloads counters

Instead of calculating the offloads counters, count them explicitly.
The calculations done for these counters would result in bugs in some
cases, for example:
When running TCP traffic over a VXLAN tunnel with TSO enabled the following
counters would increase:
tx_csum_partial: 1,333,284
tx_csum_partial_inner: 29,286
tx4_csum_partial_inner: 384
tx7_csum_partial_inner: 8
tx9_csum_partial_inner: 34
tx10_csum_partial_inner: 26,807
tx11_csum_partial_inner: 287
tx12_csum_partial_inner: 27
tx16_csum_partial_inner: 6
tx25_csum_partial_inner: 1,733

Seems like tx_csum_partial increased out of nowhere.
The issue is in the following calculation in mlx5e_update_sw_counters:
s->tx_csum_partial = s->tx_packets - tx_offload_none - s->tx_csum_partial_inner;

While tx_packets increases by the number of GSO segments for each SKB,
tx_csum_partial_inner will only increase by one, resulting in wrong
tx_csum_partial counter.

Fixes: bfe6d8d1d433 ("net/mlx5e: Reorganize ethtool statistics")
Signed-off-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
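
A hedged sketch of the explicit counting on the TX side (field names
assumed):

    /* count at the point the offload is requested, instead of deriving */
    if (skb->ip_summed == CHECKSUM_PARTIAL) {
        if (skb->encapsulation)
            sq->stats.csum_partial_inner++;
        else
            sq->stats.csum_partial++;
    }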



# de8f3a83 24-Sep-2017 Daniel Borkmann <daniel@iogearbox.net>

bpf: add meta pointer for direct access

This work enables generic transfer of metadata from XDP into skb. The
basic idea is that we can make use of the fact that the resulting skb
must be linear and already comes with a larger headroom for supporting
bpf_xdp_adjust_head(), which mangles xdp->data. Here, we base our work
on a similar principle and introduce a small helper bpf_xdp_adjust_meta()
for adjusting a new pointer called xdp->data_meta. Thus, the packet has
a flexible and programmable room for meta data, followed by the actual
packet data. struct xdp_buff is therefore laid out that we first point
to data_hard_start, then data_meta directly prepended to data followed
by data_end marking the end of packet. bpf_xdp_adjust_head() takes into
account whether we have meta data already prepended and if so, memmove()s
this along with the given offset provided there's enough room.

xdp->data_meta is optional and programs are not required to use it. The
rationale is that when we process the packet in XDP (e.g. as DoS filter),
we can push further meta data along with it for the XDP_PASS case, and
give the guarantee that a clsact ingress BPF program on the same device
can pick this up for further post-processing. Since we work with skb
there, we can also set skb->mark, skb->priority or other skb meta data
out of BPF, thus having this scratch space generic and programmable
allows for more flexibility than defining a direct 1:1 transfer of
potentially new XDP members into skb (it's also more efficient as we
don't need to initialize/handle each of such new members). The facility
also works together with GRO aggregation. The scratch space at the head
of the packet can be a multiple of 4 bytes, up to 32 bytes large. Drivers
not yet supporting xdp->data_meta can simply be set up with xdp->data_meta
as xdp->data + 1, as bpf_xdp_adjust_meta() will detect this and bail out,
such that the subsequent match against xdp->data for later access is
guaranteed to fail.

The verifier treats xdp->data_meta/xdp->data the same way as we treat
xdp->data/xdp->data_end pointer comparisons. The requirement for doing
the compare against xdp->data is that it hasn't been modified from its
original address we got from ctx access. It may have a range marking
already from prior successful xdp->data/xdp->data_end pointer comparisons
though.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
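
A hedged sketch of an XDP program using the new metadata area (the helper
and ctx->data_meta are what this work introduces; the program itself is
hypothetical and assumes libbpf's bpf_helpers.h):

    SEC("xdp")
    int xdp_store_meta(struct xdp_md *ctx)
    {
        void *data, *data_meta;
        __u32 *meta;

        /* grow the metadata area by 4 bytes in front of the packet */
        if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(*meta)))
            return XDP_PASS;

        data      = (void *)(long)ctx->data;
        data_meta = (void *)(long)ctx->data_meta;
        meta = data_meta;
        if ((void *)(meta + 1) > data)  /* verifier bounds check */
            return XDP_PASS;

        *meta = 0xcafe; /* later visible to a clsact BPF program */
        return XDP_PASS;
    }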



# 70871f1e 13-Jul-2017 Tariq Toukan <tariqt@mellanox.com>

net/mlx5e: Don't recycle page if moved to far NUMA

Avoid recycling an RX page if it moved to another NUMA node.
Add an ethtool counter to count such events.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
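
The recycling condition boils down to a locality check (sketch):

    /* don't recycle a page that has migrated to a remote NUMA node */
    if (page_to_nid(page) != numa_node_id())
        return false;   /* release instead of recycling */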



# 3b56f7b2 17-Jul-2017 Tariq Toukan <tariqt@mellanox.com>

net/mlx5e: Remove unnecessary fields in ICO SQ

As of the current design, in each NAPI cycle only a single UMR WQE
completion could be available in the completion queue of the
internal control operations (ICO) send queue, in addition
to nop operations that require no action upon completion.
This renders the consumer index obsolete, as the wqe_counter
field in the CQE is sufficient.

This helps remove a memory barrier, and obsoletes the need
to track num_wqebbs in order to update the consumer counter.

In addition, remove other unused fields in icosq struct:
pdev, dma_fifo_pc, and prev_cc.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>



# 7cc6d77b 17-Jul-2017 Tariq Toukan <tariqt@mellanox.com>

net/mlx5e: Type-specific optimizations for RX post WQEs function

Separate the RX post WQEs function of the different RQ types.
This enables RQ type-specific optimizations in data-path.

Poll the ICOSQ completion queue only for Striding RQ,
and only when a UMR post completion could be possibly available.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>


