History log of /openbmc/linux/drivers/net/ethernet/mellanox/mlx5/core/en_main.c (Results 1 – 25 of 1731)
Revision Date Author Comments
Revision tags: v6.6.35, v6.6.34, v6.6.33
# 110764a0 06-Jun-2024 Gal Pressman <gal@nvidia.com>

net/mlx5e: Fix features validation check for tunneled UDP (non-VXLAN) packets

[ Upstream commit 791b4089e326271424b78f2fae778b20e53d071b ]

Move the vxlan_features_check() call to after we verified the packet is
a tunneled VXLAN packet.

Without this, tunneled UDP non-VXLAN packets (e.g. GENEVE) might
wrongly not get offloaded.
In some cases this worked by chance, as the GENEVE header is the same
size as the VXLAN header, but it is obviously incorrect.

Fixes: e3cfc7e6b7bd ("net/mlx5e: TX, Add geneve tunnel stateless offload support")
Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Wojciech Drewek <wojciech.drewek@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
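
A minimal sketch of the reordered check, assuming a helper shaped like
the driver's tunnel features path (the helper name is illustrative;
vxlan_features_check() and IANA_VXLAN_UDP_PORT are existing kernel
symbols):

    static netdev_features_t
    mlx5e_tunnel_features_check_sketch(struct sk_buff *skb,
                                       netdev_features_t features)
    {
            struct udphdr *udph = udp_hdr(skb);

            /* Apply the VXLAN-specific validation only once the
             * destination port confirms this is a VXLAN packet. */
            if (udph->dest == htons(IANA_VXLAN_UDP_PORT))
                    return vxlan_features_check(skb, features);

            /* Other UDP tunnels (e.g. GENEVE) keep their features
             * instead of being wrongly filtered by VXLAN rules. */
            return features;
    }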


Revision tags: v6.6.32
# 33933f00 22-May-2024 Carolina Jubran <cjubran@nvidia.com>

net/mlx5e: Use rx_missed_errors instead of rx_dropped for reporting buffer exhaustion

[ Upstream commit 5c74195d5dd977e97556e6fa76909b831c241230 ]

Previously, the driver incorrectly used rx_dropped to report device
buffer exhaustion.

According to the documentation, rx_dropped should not be used to count
packets dropped due to buffer exhaustion, which is the purpose of
rx_missed_errors.

Use rx_missed_errors as intended for counting packets dropped due to
buffer exhaustion.

Fixes: 269e6b3af3bf ("net/mlx5e: Report additional error statistics in get stats ndo")
Signed-off-by: Carolina Jubran <cjubran@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
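
A hedged sketch of the resulting counter mapping in the driver's
.ndo_get_stats64 path (the qcnt.rx_out_of_buffer field follows mlx5e's
device counters, but treat the details here as assumptions):

    static void mlx5e_fold_stats_sketch(struct mlx5e_priv *priv,
                                        struct rtnl_link_stats64 *stats)
    {
            /* Packets the NIC dropped because no RX buffer was posted
             * count as "missed", not as generic "dropped". */
            stats->rx_missed_errors = priv->stats.qcnt.rx_out_of_buffer;
    }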


Revision tags: v6.6.31, v6.6.30, v6.6.29, v6.6.28, v6.6.27, v6.6.26
# f9ac93b6 09-Apr-2024 Carolina Jubran <cjubran@nvidia.com>

net/mlx5e: Fix mlx5e_priv_init() cleanup flow

[ Upstream commit ecb829459a841198e142f72fadab56424ae96519 ]

When mlx5e_priv_init() fails, the cleanup flow calls mlx5e_selq_cleanup(),
which calls mlx5e_selq_apply(); that function asserts, via
lockdep_is_held(), that `priv->state_lock` is held.

Acquire the state_lock in mlx5e_selq_cleanup().

Kernel log:
=============================
WARNING: suspicious RCU usage
6.8.0-rc3_net_next_841a9b5 #1 Not tainted
-----------------------------
drivers/net/ethernet/mellanox/mlx5/core/en/selq.c:124 suspicious rcu_dereference_protected() usage!

other info that might help us debug this:

rcu_scheduler_active = 2, debug_locks = 1
2 locks held by systemd-modules/293:
#0: ffffffffa05067b0 (devices_rwsem){++++}-{3:3}, at: ib_register_client+0x109/0x1b0 [ib_core]
#1: ffff8881096c65c0 (&device->client_data_rwsem){++++}-{3:3}, at: add_client_context+0x104/0x1c0 [ib_core]

stack backtrace:
CPU: 4 PID: 293 Comm: systemd-modules Not tainted 6.8.0-rc3_net_next_841a9b5 #1
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
Call Trace:
<TASK>
dump_stack_lvl+0x8a/0xa0
lockdep_rcu_suspicious+0x154/0x1a0
mlx5e_selq_apply+0x94/0xa0 [mlx5_core]
mlx5e_selq_cleanup+0x3a/0x60 [mlx5_core]
mlx5e_priv_init+0x2be/0x2f0 [mlx5_core]
mlx5_rdma_setup_rn+0x7c/0x1a0 [mlx5_core]
rdma_init_netdev+0x4e/0x80 [ib_core]
? mlx5_rdma_netdev_free+0x70/0x70 [mlx5_core]
ipoib_intf_init+0x64/0x550 [ib_ipoib]
ipoib_intf_alloc+0x4e/0xc0 [ib_ipoib]
ipoib_add_one+0xb0/0x360 [ib_ipoib]
add_client_context+0x112/0x1c0 [ib_core]
ib_register_client+0x166/0x1b0 [ib_core]
? 0xffffffffa0573000
ipoib_init_module+0xeb/0x1a0 [ib_ipoib]
do_one_initcall+0x61/0x250
do_init_module+0x8a/0x270
init_module_from_file+0x8b/0xd0
idempotent_init_module+0x17d/0x230
__x64_sys_finit_module+0x61/0xb0
do_syscall_64+0x71/0x140
entry_SYSCALL_64_after_hwframe+0x46/0x4e
</TASK>

Fixes: 8bf30be75069 ("net/mlx5e: Introduce select queue parameters")
Signed-off-by: Carolina Jubran <cjubran@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://lore.kernel.org/r/20240409190820.227554-8-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
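
A minimal sketch of the fix, assuming the selq keeps a pointer to the
owning priv's state_lock (as set up in mlx5e_selq_init()); the freeing
details are illustrative:

    void mlx5e_selq_cleanup(struct mlx5e_selq *selq)
    {
            mutex_lock(selq->state_lock);
            /* mlx5e_selq_apply() asserts this lock via
             * lockdep_is_held(), so hold it across the call. */
            mlx5e_selq_apply(selq);
            mutex_unlock(selq->state_lock);

            kvfree(selq->standby);
            selq->standby = NULL;
    }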


Revision tags: v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3, v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7, v6.5.6, v6.5.5, v6.5.4, v6.5.3, v6.5.2, v6.1.51, v6.5.1, v6.1.50, v6.5, v6.1.49, v6.1.48, v6.1.46, v6.1.45, v6.1.44, v6.1.43, v6.1.42, v6.1.41, v6.1.40, v6.1.39, v6.1.38, v6.1.37, v6.1.36, v6.4, v6.1.35, v6.1.34, v6.1.33, v6.1.32, v6.1.31, v6.1.30, v6.1.29, v6.1.28, v6.1.27, v6.1.26, v6.3, v6.1.25, v6.1.24, v6.1.23, v6.1.22, v6.1.21, v6.1.20, v6.1.19, v6.1.18, v6.1.17, v6.1.16, v6.1.15, v6.1.14, v6.1.13, v6.2, v6.1.12, v6.1.11, v6.1.10, v6.1.9, v6.1.8, v6.1.7, v6.1.6, v6.1.5, v6.0.19, v6.0.18, v6.1.4, v6.1.3, v6.0.17, v6.1.2, v6.0.16, v6.1.1, v6.0.15, v6.0.14, v6.0.13, v6.1, v6.0.12, v6.0.11, v6.0.10, v5.15.80, v6.0.9, v5.15.79, v6.0.8, v5.15.78, v6.0.7, v5.15.77, v5.15.76, v6.0.6, v6.0.5, v5.15.75, v6.0.4, v6.0.3, v6.0.2, v5.15.74, v5.15.73, v6.0.1, v5.15.72, v6.0, v5.15.71, v5.15.70
# 8ce3d969 21-Sep-2022 Moshe Shemesh <moshe@nvidia.com>

net/mlx5e: Fix possible deadlock on mlx5e_tx_timeout_work

[ Upstream commit eab0da38912ebdad922ed0388209f7eb0a5163cd ]

Due to the cited patch, devlink health commands take devlink lock and
this may result in deadlock for mlx5e_tx_reporter as it takes local
state_lock before calling devlink health report and on the other hand
devlink health commands such as diagnose for same reporter take local
state_lock after taking devlink lock (see kernel log below).

To fix it, remove local state_lock from mlx5e_tx_timeout_work() before
calling devlink_health_report() and take care to cancel the work before
any call to close channels, which may free the SQs that should be
handled by the work. Before cancel_work_sync(), use current_work() to
check that we are not calling it from within the work, as
mlx5e_tx_timeout_work() itself may close and reopen the channels as part
of the recovery flow.

While removing state_lock from mlx5e_tx_timeout_work(), keep rtnl_lock to
ensure no change in netdev->real_num_tx_queues. However, use rtnl_trylock()
and a flag to avoid a deadlock, since cancel_work_sync() is called before
closing the channels while rtnl_lock is held there too.

Kernel log:
======================================================
WARNING: possible circular locking dependency detected
6.0.0-rc3_for_upstream_debug_2022_08_30_13_10 #1 Not tainted
------------------------------------------------------
kworker/u16:2/65 is trying to acquire lock:
ffff888122f6c2f8 (&devlink->lock_key#2){+.+.}-{3:3}, at: devlink_health_report+0x2f1/0x7e0

but task is already holding lock:
ffff888121d20be0 (&priv->state_lock){+.+.}-{3:3}, at: mlx5e_tx_timeout_work+0x70/0x280 [mlx5_core]

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&priv->state_lock){+.+.}-{3:3}:
__mutex_lock+0x12c/0x14b0
mlx5e_rx_reporter_diagnose+0x71/0x700 [mlx5_core]
devlink_nl_cmd_health_reporter_diagnose_doit+0x212/0xa50
genl_family_rcv_msg_doit+0x1e9/0x2f0
genl_rcv_msg+0x2e9/0x530
netlink_rcv_skb+0x11d/0x340
genl_rcv+0x24/0x40
netlink_unicast+0x438/0x710
netlink_sendmsg+0x788/0xc40
sock_sendmsg+0xb0/0xe0
__sys_sendto+0x1c1/0x290
__x64_sys_sendto+0xdd/0x1b0
do_syscall_64+0x3d/0x90
entry_SYSCALL_64_after_hwframe+0x46/0xb0

-> #0 (&devlink->lock_key#2){+.+.}-{3:3}:
__lock_acquire+0x2c8a/0x6200
lock_acquire+0x1c1/0x550
__mutex_lock+0x12c/0x14b0
devlink_health_report+0x2f1/0x7e0
mlx5e_health_report+0xc9/0xd7 [mlx5_core]
mlx5e_reporter_tx_timeout+0x2ab/0x3d0 [mlx5_core]
mlx5e_tx_timeout_work+0x1c1/0x280 [mlx5_core]
process_one_work+0x7c2/0x1340
worker_thread+0x59d/0xec0
kthread+0x28f/0x330
ret_from_fork+0x1f/0x30

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&priv->state_lock);
                               lock(&devlink->lock_key#2);
                               lock(&priv->state_lock);
  lock(&devlink->lock_key#2);

*** DEADLOCK ***

4 locks held by kworker/u16:2/65:
#0: ffff88811a55b138 ((wq_completion)mlx5e#2){+.+.}-{0:0}, at: process_one_work+0x6e2/0x1340
#1: ffff888101de7db8 ((work_completion)(&priv->tx_timeout_work)){+.+.}-{0:0}, at: process_one_work+0x70f/0x1340
#2: ffffffff84ce8328 (rtnl_mutex){+.+.}-{3:3}, at: mlx5e_tx_timeout_work+0x53/0x280 [mlx5_core]
#3: ffff888121d20be0 (&priv->state_lock){+.+.}-{3:3}, at: mlx5e_tx_timeout_work+0x70/0x280 [mlx5_core]

stack backtrace:
CPU: 1 PID: 65 Comm: kworker/u16:2 Not tainted 6.0.0-rc3_for_upstream_debug_2022_08_30_13_10 #1
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
Workqueue: mlx5e mlx5e_tx_timeout_work [mlx5_core]
Call Trace:
<TASK>
dump_stack_lvl+0x57/0x7d
check_noncircular+0x278/0x300
? print_circular_bug+0x460/0x460
? find_held_lock+0x2d/0x110
? __stack_depot_save+0x24c/0x520
? alloc_chain_hlocks+0x228/0x700
__lock_acquire+0x2c8a/0x6200
? register_lock_class+0x1860/0x1860
? kasan_save_stack+0x1e/0x40
? kasan_set_free_info+0x20/0x30
? ____kasan_slab_free+0x11d/0x1b0
? kfree+0x1ba/0x520
? devlink_health_do_dump.part.0+0x171/0x3a0
? devlink_health_report+0x3d5/0x7e0
lock_acquire+0x1c1/0x550
? devlink_health_report+0x2f1/0x7e0
? lockdep_hardirqs_on_prepare+0x400/0x400
? find_held_lock+0x2d/0x110
__mutex_lock+0x12c/0x14b0
? devlink_health_report+0x2f1/0x7e0
? devlink_health_report+0x2f1/0x7e0
? mutex_lock_io_nested+0x1320/0x1320
? trace_hardirqs_on+0x2d/0x100
? bit_wait_io_timeout+0x170/0x170
? devlink_health_do_dump.part.0+0x171/0x3a0
? kfree+0x1ba/0x520
? devlink_health_do_dump.part.0+0x171/0x3a0
devlink_health_report+0x2f1/0x7e0
mlx5e_health_report+0xc9/0xd7 [mlx5_core]
mlx5e_reporter_tx_timeout+0x2ab/0x3d0 [mlx5_core]
? lockdep_hardirqs_on_prepare+0x400/0x400
? mlx5e_reporter_tx_err_cqe+0x1b0/0x1b0 [mlx5_core]
? mlx5e_tx_reporter_timeout_dump+0x70/0x70 [mlx5_core]
? mlx5e_tx_reporter_dump_sq+0x320/0x320 [mlx5_core]
? mlx5e_tx_timeout_work+0x70/0x280 [mlx5_core]
? mutex_lock_io_nested+0x1320/0x1320
? process_one_work+0x70f/0x1340
? lockdep_hardirqs_on_prepare+0x400/0x400
? lock_downgrade+0x6e0/0x6e0
mlx5e_tx_timeout_work+0x1c1/0x280 [mlx5_core]
process_one_work+0x7c2/0x1340
? lockdep_hardirqs_on_prepare+0x400/0x400
? pwq_dec_nr_in_flight+0x230/0x230
? rwlock_bug.part.0+0x90/0x90
worker_thread+0x59d/0xec0
? process_one_work+0x1340/0x1340
kthread+0x28f/0x330
? kthread_complete_and_exit+0x20/0x20
ret_from_fork+0x1f/0x30
</TASK>

Fixes: c90005b5f75c ("devlink: Hold the instance lock in health callbacks")
Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
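
A hedged sketch of the two guards described above (the work-function
shape follows the commit text; mlx5e_cancel_tx_timeout() is a
hypothetical helper for illustration):

    static void mlx5e_tx_timeout_work(struct work_struct *work)
    {
            struct mlx5e_priv *priv = container_of(work, struct mlx5e_priv,
                                                   tx_timeout_work);

            /* A closer may hold rtnl_lock while waiting in
             * cancel_work_sync(); blocking here would deadlock,
             * so requeue and retry instead. */
            if (!rtnl_trylock()) {
                    queue_work(priv->wq, &priv->tx_timeout_work);
                    return;
            }

            /* Report the timeout WITHOUT taking priv->state_lock:
             * devlink_health_report() takes the devlink lock, and the
             * diagnose callback takes state_lock under that lock. */

            rtnl_unlock();
    }

    static void mlx5e_cancel_tx_timeout(struct mlx5e_priv *priv)
    {
            /* The work may itself close/reopen channels during
             * recovery; never cancel it from within itself. */
            if (current_work() != &priv->tx_timeout_work)
                    cancel_work_sync(&priv->tx_timeout_work);
    }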


# da6192ca 06-Oct-2023 Will Mortensen <will@extrahop.com>

net/mlx5e: Again mutually exclude RX-FCS and RX-port-timestamp

Commit 1e66220948df8 ("net/mlx5e: Update rx ring hw mtu upon each rx-fcs
flag change") seems to have accidentally inverted the logic added in
commit 0bc73ad46a76 ("net/mlx5e: Mutually exclude RX-FCS and
RX-port-timestamp").

The impact of this is a little unclear since it seems the FCS scattered
with RX-FCS is (usually?) correct regardless.

Fixes: 1e66220948df8 ("net/mlx5e: Update rx ring hw mtu upon each rx-fcs flag change")
Tested-by: Charlotte Tan <charlotte@extrahop.com>
Reviewed-by: Charlotte Tan <charlotte@extrahop.com>
Cc: Adham Faris <afaris@nvidia.com>
Cc: Aya Levin <ayal@nvidia.com>
Cc: Tariq Toukan <tariqt@nvidia.com>
Cc: Moshe Shemesh <moshe@nvidia.com>
Cc: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Will Mortensen <will@extrahop.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://lore.kernel.org/r/20231006053706.514618-1-will@extrahop.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
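
A sketch of the intended mutual exclusion only (the wrapper name is
hypothetical, and mlx5e_set_rx_port_ts() is assumed to be the
configuration helper; the polarity at the real call site in
set_feature_rx_fcs() is what this fix restores):

    static int mlx5e_rx_fcs_set_port_ts(struct mlx5_core_dev *mdev,
                                        bool rx_fcs_enabled)
    {
            /* The RX port timestamp is scattered over the FCS bytes,
             * so port timestamping may be on only while RX-FCS is off. */
            return mlx5e_set_rx_port_ts(mdev, !rx_fcs_enabled);
    }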


# 34a79876 01-Aug-2023 Dragos Tatulea <dtatulea@nvidia.com>

net/mlx5e: XDP, Fix fifo overrun on XDP_REDIRECT

Before this fix, running high rate traffic through XDP_REDIRECT
with multibuf could overrun the fifo used to release the
xdp frames after tx completion. This resulted in corrupted data
being consumed on the free side.

The culprit was a miscalculation of the fifo size: the maximum ratio
between fifo entries and data segments was incorrect. This ratio serves to
calculate the max fifo size for a full sq where each packet uses the
worst-case number of entries in the fifo.

This patch fixes the formula and names the constant. It also makes sure
that future values will use a power-of-2 number of entries for the fifo
mask to work.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Fixes: 3f734b8c594b ("net/mlx5e: XDP, Use multiple single-entry objects in xdpi_fifo")
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
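
A hedged sketch of the sizing rule (the constant name and the
worst-case ratio are illustrative, not the driver's actual values):

    #include <linux/log2.h>

    /* Worst-case fifo entries consumed per SQ data segment. */
    #define MLX5E_XDP_FIFO_ENTRIES_PER_DS_SKETCH 4

    static u32 mlx5e_xdpsq_fifo_size_sketch(u32 max_sq_wqes)
    {
            /* Size for a full SQ of worst-case packets, rounded up to
             * a power of two so "idx & (size - 1)" masking is valid. */
            return roundup_pow_of_two(max_sq_wqes *
                                      MLX5E_XDP_FIFO_ENTRIES_PER_DS_SKETCH);
    }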


# f1d152eb 07-Aug-2023 Lin Ma <linma@zju.edu.cn>

rtnetlink: remove redundant checks for nlattr IFLA_BRIDGE_MODE

The commit d73ef2d69c0d ("rtnetlink: let rtnl_bridge_setlink checks
IFLA_BRIDGE_MODE length") added the nla_len check in rtnl_bridge_setlink,
which is the only caller of the ndo_bridge_setlink handlers defined in
low-level driver code. Hence, this patch removes the redundant checks from
each ndo_bridge_setlink handler function.

Suggested-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: Lin Ma <linma@zju.edu.cn>
Acked-by: Pavan Chebbi <pavan.chebbi@broadcom.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://lore.kernel.org/r/20230807091347.3804523-1-linma@zju.edu.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
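
A hedged sketch of the now-redundant pattern being dropped from
ndo_bridge_setlink handlers (the handler itself is a made-up example;
the netlink helpers are real kernel APIs):

    static int example_bridge_setlink(struct net_device *dev,
                                      struct nlmsghdr *nlh, u16 flags,
                                      struct netlink_ext_ack *extack)
    {
            struct nlattr *br_spec, *attr;
            int rem;

            br_spec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg),
                                      IFLA_AF_SPEC);
            if (!br_spec)
                    return -EINVAL;

            nla_for_each_nested(attr, br_spec, rem) {
                    if (nla_type(attr) != IFLA_BRIDGE_MODE)
                            continue;
                    /* The "if (nla_len(attr) < sizeof(u16)) return
                     * -EINVAL;" guard that used to sit here is
                     * redundant: rtnl_bridge_setlink() has already
                     * validated the attribute length. */
                    if (nla_get_u16(attr) != BRIDGE_MODE_VEPA)
                            return -EOPNOTSUPP;
            }
            return 0;
    }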


# a9ca9f9c 04-Aug-2023 Yunsheng Lin <linyunsheng@huawei.com>

page_pool: split types and declarations from page_pool.h

Split types and pure function declarations out of page_pool.h
and add them in page_pool/types.h, so that C sources can
include page_pool.h while headers should generally only include
page_pool/types.h, as suggested by Jakub.
Rename page_pool.h to page_pool/helpers.h to have both in
one place.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Suggested-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Link: https://lore.kernel.org/r/20230804180529.2483231-2-aleksander.lobakin@intel.com
[Jakub: change microsoft/mana, fix kdoc paths in Documentation]
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
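
The resulting include convention, in brief (paths as introduced by
this commit):

    /* In headers: only the types, to keep include chains short. */
    #include <net/page_pool/types.h>

    /* In C sources that call the inline helpers: */
    #include <net/page_pool/helpers.h>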


# 72cc6549 16-Jul-2023 Gal Pressman <gal@nvidia.com>

net/mlx5e: Take RTNL lock when needed before calling xdp_set_features()

Hold RTNL lock when calling xdp_set_features() with a registered netdev,
as the call triggers the netdev notifiers. This could happen when
switching from uplink rep to nic profile for example.

This resolves the following call trace:

RTNL: assertion failed at net/core/dev.c (1953)
WARNING: CPU: 6 PID: 112670 at net/core/dev.c:1953 call_netdevice_notifiers_info+0x7c/0x80
Modules linked in: sch_mqprio sch_mqprio_lib act_tunnel_key act_mirred act_skbedit cls_matchall nfnetlink_cttimeout act_gact cls_flower sch_ingress bonding ib_umad ip_gre rdma_ucm mlx5_vfio_pci ipip tunnel4 ip6_gre gre mlx5_ib vfio_pci vfio_pci_core vfio_iommu_type1 ib_uverbs vfio mlx5_core ib_ipoib geneve nf_tables ip6_tunnel tunnel6 iptable_raw openvswitch nsh rpcrdma ib_iser libiscsi scsi_transport_iscsi rdma_cm iw_cm ib_cm ib_core xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat br_netfilter rpcsec_gss_krb5 auth_rpcgss oid_registry overlay zram zsmalloc fuse [last unloaded: ib_uverbs]
CPU: 6 PID: 112670 Comm: devlink Not tainted 6.4.0-rc7_for_upstream_min_debug_2023_06_28_17_02 #1
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
RIP: 0010:call_netdevice_notifiers_info+0x7c/0x80
Code: 90 ff 80 3d 2d 6b f7 00 00 75 c5 ba a1 07 00 00 48 c7 c6 e4 ce 0b 82 48 c7 c7 c8 f4 04 82 c6 05 11 6b f7 00 01 e8 a4 7c 8e ff <0f> 0b eb a2 0f 1f 44 00 00 55 48 89 e5 41 54 48 83 e4 f0 48 83 ec
RSP: 0018:ffff8882a21c3948 EFLAGS: 00010282
RAX: 0000000000000000 RBX: ffffffff82e6f880 RCX: 0000000000000027
RDX: ffff88885f99b5c8 RSI: 0000000000000001 RDI: ffff88885f99b5c0
RBP: 0000000000000028 R08: ffff88887ffabaa8 R09: 0000000000000003
R10: ffff88887fecbac0 R11: ffff88887ff7bac0 R12: ffff8882a21c3968
R13: ffff88811c018940 R14: 0000000000000000 R15: ffff8881274401a0
FS: 00007fe141c81800(0000) GS:ffff88885f980000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f787c28b948 CR3: 000000014bcf3005 CR4: 0000000000370ea0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
? __warn+0x79/0x120
? call_netdevice_notifiers_info+0x7c/0x80
? report_bug+0x17c/0x190
? handle_bug+0x3c/0x60
? exc_invalid_op+0x14/0x70
? asm_exc_invalid_op+0x16/0x20
? call_netdevice_notifiers_info+0x7c/0x80
? call_netdevice_notifiers_info+0x7c/0x80
call_netdevice_notifiers+0x2e/0x50
mlx5e_set_xdp_feature+0x21/0x50 [mlx5_core]
mlx5e_nic_init+0xf1/0x1a0 [mlx5_core]
mlx5e_netdev_init_profile+0x76/0x110 [mlx5_core]
mlx5e_netdev_attach_profile+0x1f/0x90 [mlx5_core]
mlx5e_netdev_change_profile+0x92/0x160 [mlx5_core]
mlx5e_netdev_attach_nic_profile+0x1b/0x30 [mlx5_core]
mlx5e_vport_rep_unload+0xaa/0xc0 [mlx5_core]
__esw_offloads_unload_rep+0x52/0x60 [mlx5_core]
mlx5_esw_offloads_rep_unload+0x52/0x70 [mlx5_core]
esw_offloads_unload_rep+0x34/0x70 [mlx5_core]
esw_offloads_disable+0x2b/0x90 [mlx5_core]
mlx5_eswitch_disable_locked+0x1b9/0x210 [mlx5_core]
mlx5_devlink_eswitch_mode_set+0xf5/0x630 [mlx5_core]
? devlink_get_from_attrs_lock+0x9e/0x110
devlink_nl_cmd_eswitch_set_doit+0x60/0xe0
genl_family_rcv_msg_doit.isra.0+0xc2/0x110
genl_rcv_msg+0x17d/0x2b0
? devlink_get_from_attrs_lock+0x110/0x110
? devlink_nl_cmd_eswitch_get_doit+0x290/0x290
? devlink_pernet_pre_exit+0xf0/0xf0
? genl_family_rcv_msg_doit.isra.0+0x110/0x110
netlink_rcv_skb+0x54/0x100
genl_rcv+0x24/0x40
netlink_unicast+0x1f6/0x2c0
netlink_sendmsg+0x232/0x4a0
sock_sendmsg+0x38/0x60
? _copy_from_user+0x2a/0x60
__sys_sendto+0x110/0x160
? __count_memcg_events+0x48/0x90
? handle_mm_fault+0x161/0x260
? do_user_addr_fault+0x278/0x6e0
__x64_sys_sendto+0x20/0x30
do_syscall_64+0x3d/0x90
entry_SYSCALL_64_after_hwframe+0x46/0xb0
RIP: 0033:0x7fe141b1340a
Code: d8 64 89 02 48 c7 c0 ff ff ff ff eb b8 0f 1f 00 f3 0f 1e fa 41 89 ca 64 8b 04 25 18 00 00 00 85 c0 75 15 b8 2c 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 7e c3 0f 1f 44 00 00 41 54 48 83 ec 30 44 89
RSP: 002b:00007fff61d03de8 EFLAGS: 00000246 ORIG_RAX: 000000000000002c
RAX: ffffffffffffffda RBX: 0000000000afab00 RCX: 00007fe141b1340a
RDX: 0000000000000038 RSI: 0000000000afab00 RDI: 0000000000000003
RBP: 0000000000afa910 R08: 00007fe141d80200 R09: 000000000000000c
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000001
</TASK>

Fixes: 4d5ab0ad964d ("net/mlx5e: take into account device reconfiguration for xdp_features flag")
Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
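
A hedged sketch of the locking rule (the helper name follows the call
trace above; the feature flags and the body are assumptions):

    static void mlx5e_set_xdp_feature(struct net_device *netdev)
    {
            xdp_features_t val = NETDEV_XDP_ACT_BASIC |
                                 NETDEV_XDP_ACT_REDIRECT;
            /* xdp_set_features_flag() fires netdev notifiers, which
             * assert RTNL; take it only once the netdev is registered
             * (e.g. on a profile change), not during first init. */
            bool take_rtnl = netdev->reg_state == NETREG_REGISTERED;

            if (take_rtnl)
                    rtnl_lock();
            xdp_set_features_flag(netdev, val);
            if (take_rtnl)
                    rtnl_unlock();
    }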
