History log of /openbmc/linux/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h (Results 1 – 25 of 507)
Revision    Date    Author    Comments
Revision tags: v6.6.35, v6.6.34, v6.6.33, v6.6.32, v6.6.31
# e93fc8d9 09-May-2024 Maher Sanalla <msanalla@nvidia.com>

net/mlx5: Reload only IB representors upon lag disable/enable

[ Upstream commit 0f06228d4a2dcc1fca5b3ddb0eefa09c05b102c4 ]

On lag disable, the bond IB device along with all of its
representors are destroyed, and then the slaves' representors get reloaded.

If a slave IB representor fails to load, the eswitch error flow
unloads all representors, including the ethernet representors, whose
netdevs get detached and removed from the lag bond. This flow is
inaccurate, as the lag driver is not responsible for loading/unloading
ethernet representors. Furthermore, the flow described above begins by
holding the lag lock to prevent bond changes during the disable flow.
However, when the ethernet representors are detached from the lag, the
lag lock is required again, triggering the following deadlock:

Call trace:
__switch_to+0xf4/0x148
__schedule+0x2c8/0x7d0
schedule+0x50/0xe0
schedule_preempt_disabled+0x18/0x28
__mutex_lock.isra.13+0x2b8/0x570
__mutex_lock_slowpath+0x1c/0x28
mutex_lock+0x4c/0x68
mlx5_lag_remove_netdev+0x3c/0x1a0 [mlx5_core]
mlx5e_uplink_rep_disable+0x70/0xa0 [mlx5_core]
mlx5e_detach_netdev+0x6c/0xb0 [mlx5_core]
mlx5e_netdev_change_profile+0x44/0x138 [mlx5_core]
mlx5e_netdev_attach_nic_profile+0x28/0x38 [mlx5_core]
mlx5e_vport_rep_unload+0x184/0x1b8 [mlx5_core]
mlx5_esw_offloads_rep_load+0xd8/0xe0 [mlx5_core]
mlx5_eswitch_reload_reps+0x74/0xd0 [mlx5_core]
mlx5_disable_lag+0x130/0x138 [mlx5_core]
mlx5_lag_disable_change+0x6c/0x70 [mlx5_core] // hold ldev->lock
mlx5_devlink_eswitch_mode_set+0xc0/0x410 [mlx5_core]
devlink_nl_cmd_eswitch_set_doit+0xdc/0x180
genl_family_rcv_msg_doit.isra.17+0xe8/0x138
genl_rcv_msg+0xe4/0x220
netlink_rcv_skb+0x44/0x108
genl_rcv+0x40/0x58
netlink_unicast+0x198/0x268
netlink_sendmsg+0x1d4/0x418
sock_sendmsg+0x54/0x60
__sys_sendto+0xf4/0x120
__arm64_sys_sendto+0x30/0x40
el0_svc_common+0x8c/0x120
do_el0_svc+0x30/0xa0
el0_svc+0x20/0x30
el0_sync_handler+0x90/0xb8
el0_sync+0x160/0x180

Thus, upon lag enable/disable, load and unload only the IB representors
of the slaves, preventing the deadlock mentioned above.

While at it, refactor the mlx5_esw_offloads_rep_load() function to have
a static helper method for its internal logic, in symmetry with the
representor unload design.

Fixes: 598fe77df855 ("net/mlx5: Lag, Create shared FDB when in switchdev mode")
Co-developed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Maher Sanalla <msanalla@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://lore.kernel.org/r/20240509112951.590184-4-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
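
Editor's note: the trace above is a recursive-lock deadlock; the devlink handler takes the lag lock in mlx5_lag_disable_change(), and the representor error flow ends up calling mlx5_lag_remove_netdev(), which tries to take the same mutex again. A minimal sketch of that pattern, assuming illustrative stand-in names (this is not the actual mlx5 code):

#include <linux/mutex.h>

static DEFINE_MUTEX(ldev_lock);         /* stand-in for ldev->lock */

/* Stand-in for mlx5_lag_remove_netdev(): called from the rep unload path. */
static void lag_remove_netdev(void)
{
	mutex_lock(&ldev_lock);         /* second acquisition: blocks forever */
	/* ... detach the netdev from the bond ... */
	mutex_unlock(&ldev_lock);
}

/* Stand-in for mlx5_lag_disable_change(): called from the devlink handler. */
static void lag_disable_change(void)
{
	mutex_lock(&ldev_lock);         /* first acquisition */
	/*
	 * Reloading the reps fails, the error flow unloads the ethernet
	 * reps as well, and that path calls lag_remove_netdev() while
	 * ldev_lock is still held -> deadlock.
	 */
	lag_remove_netdev();
	mutex_unlock(&ldev_lock);
}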


Revision tags: v6.6.30, v6.6.29, v6.6.28, v6.6.27, v6.6.26, v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3, v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7
# 96c8c465 06-Oct-2023 Vlad Buslov <vladbu@nvidia.com>

net/mlx5: Refactor mlx5_flow_destination->rep pointer to vport num

[ Upstream commit 04ad04e4fdd10f92ef4f2b3f6227ec9824682197 ]

Currently the destination rep pointer is only used for comparisons or to
obtain the vport number from it. Since it is used during both flow creation
and deletion, it may point to a representor of another eswitch instance,
which can be deallocated during driver unload even while there are rules
pointing to it[0]. Refactor the code to store the vport number and a
'valid' flag instead of the representor pointer.

[0]:
[176805.886303] ==================================================================
[176805.889433] BUG: KASAN: slab-use-after-free in esw_cleanup_dests+0x390/0x440 [mlx5_core]
[176805.892981] Read of size 2 at addr ffff888155090aa0 by task modprobe/27280

[176805.895462] CPU: 3 PID: 27280 Comm: modprobe Tainted: G B 6.6.0-rc3+ #1
[176805.896771] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
[176805.898514] Call Trace:
[176805.899026] <TASK>
[176805.899519] dump_stack_lvl+0x33/0x50
[176805.900221] print_report+0xc2/0x610
[176805.900893] ? mlx5_chains_put_table+0x33d/0x8d0 [mlx5_core]
[176805.901897] ? esw_cleanup_dests+0x390/0x440 [mlx5_core]
[176805.902852] kasan_report+0xac/0xe0
[176805.903509] ? esw_cleanup_dests+0x390/0x440 [mlx5_core]
[176805.904461] esw_cleanup_dests+0x390/0x440 [mlx5_core]
[176805.905223] __mlx5_eswitch_del_rule+0x1ae/0x460 [mlx5_core]
[176805.906044] ? esw_cleanup_dests+0x440/0x440 [mlx5_core]
[176805.906822] ? xas_find_conflict+0x420/0x420
[176805.907496] ? down_read+0x11e/0x200
[176805.908046] mlx5e_tc_rule_unoffload+0xc4/0x2a0 [mlx5_core]
[176805.908844] mlx5e_tc_del_fdb_flow+0x7da/0xb10 [mlx5_core]
[176805.909597] mlx5e_flow_put+0x4b/0x80 [mlx5_core]
[176805.910275] mlx5e_delete_flower+0x5b4/0xb70 [mlx5_core]
[176805.911010] tc_setup_cb_reoffload+0x27/0xb0
[176805.911648] fl_reoffload+0x62d/0x900 [cls_flower]
[176805.912313] ? mlx5e_rep_indr_block_unbind+0xd0/0xd0 [mlx5_core]
[176805.913151] ? __fl_put+0x230/0x230 [cls_flower]
[176805.913768] ? filter_irq_stacks+0x90/0x90
[176805.914335] ? kasan_save_stack+0x1e/0x40
[176805.914893] ? kasan_set_track+0x21/0x30
[176805.915484] ? kasan_save_free_info+0x27/0x40
[176805.916105] tcf_block_playback_offloads+0x79/0x1f0
[176805.916773] ? mlx5e_rep_indr_block_unbind+0xd0/0xd0 [mlx5_core]
[176805.917647] tcf_block_unbind+0x12d/0x330
[176805.918239] tcf_block_offload_cmd.isra.0+0x24e/0x320
[176805.918953] ? tcf_block_bind+0x770/0x770
[176805.919551] ? _raw_read_unlock_irqrestore+0x30/0x30
[176805.920236] ? mutex_lock+0x7d/0xd0
[176805.920735] ? mutex_unlock+0x80/0xd0
[176805.921255] tcf_block_offload_unbind+0xa5/0x120
[176805.921909] __tcf_block_put+0xc2/0x2d0
[176805.922467] ingress_destroy+0xf4/0x3d0 [sch_ingress]
[176805.923178] __qdisc_destroy+0x9d/0x280
[176805.923741] dev_shutdown+0x1c6/0x330
[176805.924295] unregister_netdevice_many_notify+0x6ef/0x1500
[176805.925034] ? netdev_freemem+0x50/0x50
[176805.925610] ? _raw_spin_lock_irq+0x7b/0xd0
[176805.926235] ? _raw_spin_lock_bh+0xe0/0xe0
[176805.926849] unregister_netdevice_queue+0x1e0/0x280
[176805.927592] ? unregister_netdevice_many+0x10/0x10
[176805.928275] unregister_netdev+0x18/0x20
[176805.928835] mlx5e_vport_rep_unload+0xc0/0x200 [mlx5_core]
[176805.929608] mlx5_esw_offloads_unload_rep+0x9d/0xc0 [mlx5_core]
[176805.930492] mlx5_eswitch_unload_vf_vports+0x108/0x1a0 [mlx5_core]
[176805.931422] ? mlx5_eswitch_unload_sf_vport+0x50/0x50 [mlx5_core]
[176805.932304] ? rwsem_down_write_slowpath+0x11f0/0x11f0
[176805.932987] mlx5_eswitch_disable_sriov+0x6f9/0xa60 [mlx5_core]
[176805.933807] ? mlx5_core_disable_hca+0xe1/0x130 [mlx5_core]
[176805.934576] ? mlx5_eswitch_disable_locked+0x580/0x580 [mlx5_core]
[176805.935463] mlx5_device_disable_sriov+0x138/0x490 [mlx5_core]
[176805.936308] mlx5_sriov_disable+0x8c/0xb0 [mlx5_core]
[176805.937063] remove_one+0x7f/0x210 [mlx5_core]
[176805.937711] pci_device_remove+0x96/0x1c0
[176805.938289] device_release_driver_internal+0x361/0x520
[176805.938981] ? kobject_put+0x5c/0x330
[176805.939553] driver_detach+0xd7/0x1d0
[176805.940101] bus_remove_driver+0x11f/0x290
[176805.943847] pci_unregister_driver+0x23/0x1f0
[176805.944505] mlx5_cleanup+0xc/0x20 [mlx5_core]
[176805.945189] __x64_sys_delete_module+0x2b3/0x450
[176805.945837] ? module_flags+0x300/0x300
[176805.946377] ? dput+0xc2/0x830
[176805.946848] ? __kasan_record_aux_stack+0x9c/0xb0
[176805.947555] ? __call_rcu_common.constprop.0+0x46c/0xb50
[176805.948338] ? fpregs_assert_state_consistent+0x1d/0xa0
[176805.949055] ? exit_to_user_mode_prepare+0x30/0x120
[176805.949713] do_syscall_64+0x3d/0x90
[176805.950226] entry_SYSCALL_64_after_hwframe+0x46/0xb0
[176805.950904] RIP: 0033:0x7f7f42c3f5ab
[176805.951462] Code: 73 01 c3 48 8b 0d 75 a8 1b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 45 a8 1b 00 f7 d8 64 89 01 48
[176805.953710] RSP: 002b:00007fff07dc9d08 EFLAGS: 00000206 ORIG_RAX: 00000000000000b0
[176805.954691] RAX: ffffffffffffffda RBX: 000055b6e91c01e0 RCX: 00007f7f42c3f5ab
[176805.955691] RDX: 0000000000000000 RSI: 0000000000000800 RDI: 000055b6e91c0248
[176805.956662] RBP: 000055b6e91c01e0 R08: 0000000000000000 R09: 0000000000000000
[176805.957601] R10: 00007f7f42d9eac0 R11: 0000000000000206 R12: 000055b6e91c0248
[176805.958593] R13: 0000000000000000 R14: 000055b6e91bfb38 R15: 0000000000000000
[176805.959599] </TASK>

[176805.960324] Allocated by task 20490:
[176805.960893] kasan_save_stack+0x1e/0x40
[176805.961463] kasan_set_track+0x21/0x30
[176805.962019] __kasan_kmalloc+0x77/0x90
[176805.962554] esw_offloads_init+0x1bb/0x480 [mlx5_core]
[176805.963318] mlx5_eswitch_init+0xc70/0x15c0 [mlx5_core]
[176805.964092] mlx5_init_one_devl_locked+0x366/0x1230 [mlx5_core]
[176805.964902] probe_one+0x6f7/0xc90 [mlx5_core]
[176805.965541] local_pci_probe+0xd7/0x180
[176805.966075] pci_device_probe+0x231/0x6f0
[176805.966631] really_probe+0x1d4/0xb50
[176805.967179] __driver_probe_device+0x18d/0x450
[176805.967810] driver_probe_device+0x49/0x120
[176805.968431] __driver_attach+0x1fb/0x490
[176805.968976] bus_for_each_dev+0xed/0x170
[176805.969560] bus_add_driver+0x21a/0x570
[176805.970124] driver_register+0x133/0x460
[176805.970684] 0xffffffffa0678065
[176805.971180] do_one_initcall+0x92/0x2b0
[176805.971744] do_init_module+0x22d/0x720
[176805.972318] load_module+0x58c3/0x63b0
[176805.972847] init_module_from_file+0xd2/0x130
[176805.973441] __x64_sys_finit_module+0x389/0x7c0
[176805.974045] do_syscall_64+0x3d/0x90
[176805.974556] entry_SYSCALL_64_after_hwframe+0x46/0xb0

[176805.975566] Freed by task 27280:
[176805.976077] kasan_save_stack+0x1e/0x40
[176805.976655] kasan_set_track+0x21/0x30
[176805.977221] kasan_save_free_info+0x27/0x40
[176805.977834] ____kasan_slab_free+0x11a/0x1b0
[176805.978505] __kmem_cache_free+0x163/0x2d0
[176805.979113] esw_offloads_cleanup_reps+0xb8/0x120 [mlx5_core]
[176805.979963] mlx5_eswitch_cleanup+0x182/0x270 [mlx5_core]
[176805.980763] mlx5_cleanup_once+0x9a/0x1e0 [mlx5_core]
[176805.981477] mlx5_uninit_one+0xa9/0x180 [mlx5_core]
[176805.982196] remove_one+0x8f/0x210 [mlx5_core]
[176805.982868] pci_device_remove+0x96/0x1c0
[176805.983461] device_release_driver_internal+0x361/0x520
[176805.984169] driver_detach+0xd7/0x1d0
[176805.984702] bus_remove_driver+0x11f/0x290
[176805.985261] pci_unregister_driver+0x23/0x1f0
[176805.985847] mlx5_cleanup+0xc/0x20 [mlx5_core]
[176805.986483] __x64_sys_delete_module+0x2b3/0x450
[176805.987126] do_syscall_64+0x3d/0x90
[176805.987665] entry_SYSCALL_64_after_hwframe+0x46/0xb0

[176805.988667] Last potentially related work creation:
[176805.989305] kasan_save_stack+0x1e/0x40
[176805.989839] __kasan_record_aux_stack+0x9c/0xb0
[176805.990443] kvfree_call_rcu+0x84/0xa30
[176805.990973] clean_xps_maps+0x265/0x6e0
[176805.991547] netif_reset_xps_queues.part.0+0x3f/0x80
[176805.992226] unregister_netdevice_many_notify+0xfcf/0x1500
[176805.992966] unregister_netdevice_queue+0x1e0/0x280
[176805.993638] unregister_netdev+0x18/0x20
[176805.994205] mlx5e_remove+0xba/0x1e0 [mlx5_core]
[176805.994872] auxiliary_bus_remove+0x52/0x70
[176805.995490] device_release_driver_internal+0x361/0x520
[176805.996196] bus_remove_device+0x1e1/0x3d0
[176805.996767] device_del+0x390/0x980
[176805.997270] mlx5_rescan_drivers_locked.part.0+0x130/0x540 [mlx5_core]
[176805.998195] mlx5_unregister_device+0x77/0xc0 [mlx5_core]
[176805.998989] mlx5_uninit_one+0x41/0x180 [mlx5_core]
[176805.999719] remove_one+0x8f/0x210 [mlx5_core]
[176806.000387] pci_device_remove+0x96/0x1c0
[176806.000938] device_release_driver_internal+0x361/0x520
[176806.001612] unbind_store+0xd8/0xf0
[176806.002108] kernfs_fop_write_iter+0x2c0/0x440
[176806.002748] vfs_write+0x725/0xba0
[176806.003294] ksys_write+0xed/0x1c0
[176806.003823] do_syscall_64+0x3d/0x90
[176806.004357] entry_SYSCALL_64_after_hwframe+0x46/0xb0

[176806.005317] The buggy address belongs to the object at ffff888155090a80
which belongs to the cache kmalloc-64 of size 64
[176806.006774] The buggy address is located 32 bytes inside of
freed 64-byte region [ffff888155090a80, ffff888155090ac0)

[176806.008773] The buggy address belongs to the physical page:
[176806.009480] page:00000000a407e0e6 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x155090
[176806.010633] flags: 0x200000000000800(slab|node=0|zone=2)
[176806.011352] page_type: 0xffffffff()
[176806.011905] raw: 0200000000000800 ffff888100042640 ffffea000422b1c0 dead000000000004
[176806.012949] raw: 0000000000000000 0000000000200020 00000001ffffffff 0000000000000000
[176806.013933] page dumped because: kasan: bad access detected

[176806.014935] Memory state around the buggy address:
[176806.015601] ffff888155090980: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[176806.016568] ffff888155090a00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[176806.017497] >ffff888155090a80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[176806.018438] ^
[176806.019007] ffff888155090b00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[176806.020001] ffff888155090b80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[176806.020996] ==================================================================

Fixes: a508728a4c8b ("net/mlx5e: VF tunnel RX traffic offloading")
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
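
Editor's note: a minimal sketch of the idea described above, with simplified, illustrative structures (not the exact fields changed by the patch): replace the cached representor pointer, which can dangle once the other eswitch instance frees its reps, with just the vport number and a validity flag.

#include <linux/types.h>

struct mlx5_eswitch_rep;                /* opaque here */

/* Before (simplified): the destination caches a rep pointer that may be
 * freed on driver unload while offloaded rules still reference it.
 */
struct flow_dest_old {
	struct mlx5_eswitch_rep *rep;
};

/* After (simplified): keep only what the flow code actually needs. */
struct flow_dest_new {
	u16  vport_num;                 /* formerly read from the rep */
	bool vport_valid;               /* set when vport_num is meaningful */
};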


# 594a3064 10-Oct-2023 Jianbo Liu <jianbol@nvidia.com>

net/mlx5e: Reduce eswitch mode_lock protection context

[ Upstream commit baac8351f74c543896b8fd40138b7ad9365587a3 ]

Currently the eswitch mode_lock is very heavy-handed; for example, it is
held during the whole mode-change process, which may need to take other
locks. As the mode_lock is now also used by IPsec to block mode and
encap changes, it easily creates lock dependencies.

Since some of these protections are also provided by the devlink lock,
the eswitch mode_lock is not needed in those places, which reduces the
possibility of lockdep issues.

Fixes: c8e350e62fc5 ("net/mlx5e: Make TC and IPsec offloads mutually exclusive on a netdev")
Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>


Revision tags: v6.5.6, v6.5.5, v6.5.4, v6.5.3, v6.5.2, v6.1.51, v6.5.1, v6.1.50, v6.5, v6.1.49, v6.1.48
# b691b111 25-Aug-2023 Dima Chumak <dchumak@nvidia.com>

net/mlx5: Implement devlink port function cmds to control ipsec_packet

Implement devlink port function commands to enable / disable IPsec
packet offloads. This is used to control the IPsec capability of the
device.

When ipsec_packet is enabled for a VF, it prevents adding IPsec packet
offloads on the PF, because the two cannot be active simultaneously due
to HW constraints. Conversely, if there are any active IPsec packet
offloads on the PF, it's not allowed to enable ipsec_packet on a VF,
until PF IPsec offloads are cleared.

Signed-off-by: Dima Chumak <dchumak@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20230825062836.103744-9-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>


# 06bab696 25-Aug-2023 Dima Chumak <dchumak@nvidia.com>

net/mlx5: Implement devlink port function cmds to control ipsec_crypto

Implement devlink port function commands to enable / disable IPsec
crypto offloads. This is used to control the IPsec capability of the
device.

When ipsec_crypto is enabled for a VF, it prevents adding IPsec crypto
offloads on the PF, because the two cannot be active simultaneously due
to HW constraints. Conversely, if there are any active IPsec crypto
offloads on the PF, it's not allowed to enable ipsec_crypto on a VF,
until PF IPsec offloads are cleared.

Signed-off-by: Dima Chumak <dchumak@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20230825062836.103744-8-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>


# 8efd7b17 25-Aug-2023 Leon Romanovsky <leonro@nvidia.com>

net/mlx5: Provide an interface to block change of IPsec capabilities

mlx5 HW can't perform IPsec offload operations on the PF and on VFs at
the same time. While the previous patches added devlink knobs to change
IPsec capabilities dynamically, logic is needed to block such capability
changes when IPsec is already configured.

Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20230825062836.103744-7-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>


# e2537341 25-Aug-2023 Leon Romanovsky <leonro@nvidia.com>

net/mlx5e: Rewrite IPsec vs. TC block interface

In commit 366e46242b8e ("net/mlx5e: Make IPsec offload work together
with eswitch and TC"), a new API to block IPsec vs. TC creation was introduced.

Internally, that API used the devlink lock to avoid races with userspace, but
that is not really needed as dev->priv.eswitch is stable and can't be changed.
So remove the dependency on the devlink lock and move the block-encap code back
to its original place.

Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20230825062836.103744-5-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>


Revision tags: v6.1.46, v6.1.45, v6.1.44, v6.1.43, v6.1.42, v6.1.41, v6.1.40, v6.1.39, v6.1.38, v6.1.37, v6.1.36, v6.4, v6.1.35, v6.1.34, v6.1.33, v6.1.32
# 7d833520 31-May-2023 Jiri Pirko <jiri@nvidia.com>

net/mlx5: Store vport in struct mlx5_devlink_port and use it in port ops

Instead of using internal devlink_port->index to perform vport lookup in
every devlink port op, store the vport pointer to the container struct
mlx5_devlink_port and use it directly in port ops.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>


# 5c632cc3 01-Jun-2023 Jiri Pirko <jiri@nvidia.com>

net/mlx5: Relax mlx5_devlink_eswitch_get() return value checking

When called from port ops, there is no need to perform the checks in
mlx5_devlink_eswitch_get(), because the devlink port would not have been
registered if those checks did not hold. Introduce a relaxed version,
mlx5_devlink_eswitch_nocheck_get(), and use it in port ops.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>


# 2caa2a39 31-May-2023 Jiri Pirko <jiri@nvidia.com>

net/mlx5: Reduce number of vport lookups passing vport pointer instead of index

During devlink port init/cleanup and register/unregister calls, there
are many lookups of vport. Instead of passing vport_num as argument to
functions, pass the vport struct pointer directly and avoid repeated
lookups.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
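
Editor's note: an illustrative before/after of the pattern this commit describes, assuming simplified, hypothetical helper names rather than the real mlx5 functions: the caller resolves the vport once and passes the pointer down, instead of handing each helper a vport number it has to look up again.

#include <linux/errno.h>
#include <linux/types.h>

struct eswitch;
struct vport;

struct vport *esw_vport_lookup(struct eswitch *esw, u16 vport_num);

/* Before: every helper repeats the lookup by vport number. */
int devlink_port_init_by_num(struct eswitch *esw, u16 vport_num)
{
	struct vport *vport = esw_vport_lookup(esw, vport_num);

	if (!vport)
		return -ENOENT;
	/* ... initialize the devlink port using vport ... */
	return 0;
}

/* After: the caller passes the already-resolved vport pointer. */
int devlink_port_init(struct eswitch *esw, struct vport *vport)
{
	/* ... initialize the devlink port using vport ... */
	return 0;
}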


# 2c5f33f6 31-May-2023 Jiri Pirko <jiri@nvidia.com>

net/mlx5: Embed struct devlink_port into driver structure

Struct devlink_port is usually embedded in a driver-specific struct,
which allows driver context to be carried into devlink port ops.

Introduce a container struct to hold the devlink_port struct, in
preparation for also including driver context for devlink port ops.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
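
Editor's note: this is the usual kernel container idiom; embed the generic struct devlink_port inside a driver struct and recover the container from the pointer that devlink hands to the port ops. A minimal sketch under that assumption; the vport field matches what the later "Store vport in struct mlx5_devlink_port" entry describes, but the helper name is hypothetical.

#include <linux/container_of.h>
#include <net/devlink.h>

struct mlx5_vport;

struct mlx5_devlink_port {
	struct devlink_port dl_port;    /* embedded generic devlink port */
	struct mlx5_vport *vport;       /* driver context used by port ops */
};

/* Hypothetical helper: recover the container from the devlink_port that
 * devlink passes into each port op callback.
 */
static inline struct mlx5_devlink_port *
mlx5_devlink_port_get(struct devlink_port *dl_port)
{
	return container_of(dl_port, struct mlx5_devlink_port, dl_port);
}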


# 13f878a2 01-Jun-2023 Jiri Pirko <jiri@nvidia.com>

net/mlx5: Don't register ops for non-PF/VF/SF port and avoid checks in ops

Currently, each PF/VF/SF devlink port op that calls into mlx5 code
calls is_port_function_supported() to check whether the port is a PF,
VF or SF. So make sure the ops are registered with the devlink port
only for those port types, and drop the is_port_function_supported()
checks from the ops.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>


Revision tags: v6.1.31
# b940ec4b 26-May-2023 Jiri Pirko <jiri@nvidia.com>

net/mlx5: Remove no longer used mlx5_esw_offloads_sf_vport_enable/disable()

Since the previous patch removed the only users of these functions,
remove them.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>


# e855afd7 26-May-2023 Jiri Pirko <jiri@nvidia.com>

net/mlx5: Introduce mlx5_eswitch_load/unload_sf_vport() and use it from SF code

Similar to the PF/VF helpers, introduce a set of load/unload helpers
for SF vports. From there, call mlx5_eswitch_load/unload_vport(), which
are common to the PF/VF helpers and the newly introduced SF helpers.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>


# d9833bcf 25-May-2023 Jiri Pirko <jiri@nvidia.com>

net/mlx5: Push devlink port PF/VF init/cleanup calls out of devlink_port_register/unregister()

In order to prepare for
mlx5_esw_offloads_devlink_port_register/unregister() to be used
for SFs as well, push the PF/VF specific init/cleanup calls outside.
Introduce mlx5_eswitch_load/unload_pf_vf_vport() and call them from
there. Use these new helpers for PF/VF loading and make
mlx5_eswitch_load/unload_vport() reusable for SFs.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>


# d1569537 31-Jul-2023 Jianbo Liu <jianbol@nvidia.com>

net/mlx5e: Modify and restore TC rules for IPSec TX rules

After IPsec policy/state TX rules are added, any TC flow rule that
forwards packets to the uplink is modified to forward them to the IPsec
TX tables instead. As these tables are destroyed dynamically, whenever
there is no reference left to them the destinations of such rules must
be restored to the uplink.

There is a special case for packet encapsulation, as the
packet_reformat_id in the extended destination is used to reformat
packets, but only for the VPORT destination. To forward a packet to the
IPsec table and do encapsulation in one FTE, move the
packet_reformat_id to the flow context instead of using the extended
destination. As a limitation, multiple encapsulations with table
forwarding, and one together with other VPORT destinations, are not
allowed, so add a check when offloading TC rules.

TC rules are not allowed before the IPsec TX rules are added, so the TC
rules only need to be restored after the IPsec TX rules are flushed. As
they are saved in the vport_rep rhashtables, we walk all the rules in
the rhashtables, find the TC rules whose destinations point to IPsec
tables, and modify them one by one. To avoid concurrency issues, this
handling is done under the protection of the eswitch mode_lock.

Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Link: https://lore.kernel.org/r/7bcb2c7e2ecf0e0d06b095c8dcc6a37ea7f02faf.1690802064.git.leon@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>


# 366e4624 31-Jul-2023 Jianbo Liu <jianbol@nvidia.com>

net/mlx5e: Make IPsec offload work together with eswitch and TC

The eswitch mode is not allowed to change if there are any IPsec rules.
Besides, by using mlx5_esw_try_lock() to take the eswitch mode lock,
IPsec rules are not allowed to be offloaded if there are any TC rules.

Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Link: https://lore.kernel.org/r/e442b512b21a931fbdfb87d57ae428c37badd58a.1690802064.git.leon@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>


# c6c2bf5d 31-Jul-2023 Jianbo Liu <jianbol@nvidia.com>

net/mlx5e: Support IPsec packet offload for TX in switchdev mode

The IPsec encryption is done last, so add a new prio for IPsec offload
in the FDB, placed just below the slow path prio and above the
per-vport prio.
Three levels are added for TX. The first one is for the ip xfrm policy.
The sa table is created in the second level for the ip xfrm state. The
status table is created last to count the number of packets encrypted.
The rules that forward packets to the uplink are changed to forward
them to the IPsec TX tables first. These rules are restored after those
tables are destroyed, which happens immediately when there is no
reference to them, just as in legacy mode. Support for the slow path is
added here by refreshing the uplink's channels, but the handling for
the TC fast path, which is more complicated, will be added later.
Besides, reg c4 is used instead to match the reqid.

Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Link: https://lore.kernel.org/r/cfd0e6ffaf0b8c55ebaa9fb0649b7c504b6b8ec6.1690802064.git.leon@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>


# 9eca8bb8 25-May-2023 Jiri Pirko <jiri@nvidia.com>

net/mlx5: Give esw_offloads_load/unload_rep() "mlx5_" prefix

As esw_offloads_load/unload_rep() are used outside eswitch.c, it is nicer
for them to have an "mlx5_" prefix. Add it.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
