History log of /openbmc/linux/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c (Results 1 – 25 of 36)
Revision Date Author Comments
Revision tags: v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3, v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7
# 96c8c465 06-Oct-2023 Vlad Buslov <vladbu@nvidia.com>

net/mlx5: Refactor mlx5_flow_destination->rep pointer to vport num

[ Upstream commit 04ad04e4fdd10f92ef4f2b3f6227ec9824682197 ]

Currently the destination rep pointer is only used for comparisons or to
obtain the vport number from it. Since it is used both during flow creation
and deletion, it may point to a representor of another eswitch instance which
can be deallocated during driver unload even when there are rules pointing to
it [0]. Refactor the code to store the vport number and a 'valid' flag instead
of the representor pointer.

[0]:
[176805.886303] ==================================================================
[176805.889433] BUG: KASAN: slab-use-after-free in esw_cleanup_dests+0x390/0x440 [mlx5_core]
[176805.892981] Read of size 2 at addr ffff888155090aa0 by task modprobe/27280

[176805.895462] CPU: 3 PID: 27280 Comm: modprobe Tainted: G B 6.6.0-rc3+ #1
[176805.896771] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
[176805.898514] Call Trace:
[176805.899026] <TASK>
[176805.899519] dump_stack_lvl+0x33/0x50
[176805.900221] print_report+0xc2/0x610
[176805.900893] ? mlx5_chains_put_table+0x33d/0x8d0 [mlx5_core]
[176805.901897] ? esw_cleanup_dests+0x390/0x440 [mlx5_core]
[176805.902852] kasan_report+0xac/0xe0
[176805.903509] ? esw_cleanup_dests+0x390/0x440 [mlx5_core]
[176805.904461] esw_cleanup_dests+0x390/0x440 [mlx5_core]
[176805.905223] __mlx5_eswitch_del_rule+0x1ae/0x460 [mlx5_core]
[176805.906044] ? esw_cleanup_dests+0x440/0x440 [mlx5_core]
[176805.906822] ? xas_find_conflict+0x420/0x420
[176805.907496] ? down_read+0x11e/0x200
[176805.908046] mlx5e_tc_rule_unoffload+0xc4/0x2a0 [mlx5_core]
[176805.908844] mlx5e_tc_del_fdb_flow+0x7da/0xb10 [mlx5_core]
[176805.909597] mlx5e_flow_put+0x4b/0x80 [mlx5_core]
[176805.910275] mlx5e_delete_flower+0x5b4/0xb70 [mlx5_core]
[176805.911010] tc_setup_cb_reoffload+0x27/0xb0
[176805.911648] fl_reoffload+0x62d/0x900 [cls_flower]
[176805.912313] ? mlx5e_rep_indr_block_unbind+0xd0/0xd0 [mlx5_core]
[176805.913151] ? __fl_put+0x230/0x230 [cls_flower]
[176805.913768] ? filter_irq_stacks+0x90/0x90
[176805.914335] ? kasan_save_stack+0x1e/0x40
[176805.914893] ? kasan_set_track+0x21/0x30
[176805.915484] ? kasan_save_free_info+0x27/0x40
[176805.916105] tcf_block_playback_offloads+0x79/0x1f0
[176805.916773] ? mlx5e_rep_indr_block_unbind+0xd0/0xd0 [mlx5_core]
[176805.917647] tcf_block_unbind+0x12d/0x330
[176805.918239] tcf_block_offload_cmd.isra.0+0x24e/0x320
[176805.918953] ? tcf_block_bind+0x770/0x770
[176805.919551] ? _raw_read_unlock_irqrestore+0x30/0x30
[176805.920236] ? mutex_lock+0x7d/0xd0
[176805.920735] ? mutex_unlock+0x80/0xd0
[176805.921255] tcf_block_offload_unbind+0xa5/0x120
[176805.921909] __tcf_block_put+0xc2/0x2d0
[176805.922467] ingress_destroy+0xf4/0x3d0 [sch_ingress]
[176805.923178] __qdisc_destroy+0x9d/0x280
[176805.923741] dev_shutdown+0x1c6/0x330
[176805.924295] unregister_netdevice_many_notify+0x6ef/0x1500
[176805.925034] ? netdev_freemem+0x50/0x50
[176805.925610] ? _raw_spin_lock_irq+0x7b/0xd0
[176805.926235] ? _raw_spin_lock_bh+0xe0/0xe0
[176805.926849] unregister_netdevice_queue+0x1e0/0x280
[176805.927592] ? unregister_netdevice_many+0x10/0x10
[176805.928275] unregister_netdev+0x18/0x20
[176805.928835] mlx5e_vport_rep_unload+0xc0/0x200 [mlx5_core]
[176805.929608] mlx5_esw_offloads_unload_rep+0x9d/0xc0 [mlx5_core]
[176805.930492] mlx5_eswitch_unload_vf_vports+0x108/0x1a0 [mlx5_core]
[176805.931422] ? mlx5_eswitch_unload_sf_vport+0x50/0x50 [mlx5_core]
[176805.932304] ? rwsem_down_write_slowpath+0x11f0/0x11f0
[176805.932987] mlx5_eswitch_disable_sriov+0x6f9/0xa60 [mlx5_core]
[176805.933807] ? mlx5_core_disable_hca+0xe1/0x130 [mlx5_core]
[176805.934576] ? mlx5_eswitch_disable_locked+0x580/0x580 [mlx5_core]
[176805.935463] mlx5_device_disable_sriov+0x138/0x490 [mlx5_core]
[176805.936308] mlx5_sriov_disable+0x8c/0xb0 [mlx5_core]
[176805.937063] remove_one+0x7f/0x210 [mlx5_core]
[176805.937711] pci_device_remove+0x96/0x1c0
[176805.938289] device_release_driver_internal+0x361/0x520
[176805.938981] ? kobject_put+0x5c/0x330
[176805.939553] driver_detach+0xd7/0x1d0
[176805.940101] bus_remove_driver+0x11f/0x290
[176805.943847] pci_unregister_driver+0x23/0x1f0
[176805.944505] mlx5_cleanup+0xc/0x20 [mlx5_core]
[176805.945189] __x64_sys_delete_module+0x2b3/0x450
[176805.945837] ? module_flags+0x300/0x300
[176805.946377] ? dput+0xc2/0x830
[176805.946848] ? __kasan_record_aux_stack+0x9c/0xb0
[176805.947555] ? __call_rcu_common.constprop.0+0x46c/0xb50
[176805.948338] ? fpregs_assert_state_consistent+0x1d/0xa0
[176805.949055] ? exit_to_user_mode_prepare+0x30/0x120
[176805.949713] do_syscall_64+0x3d/0x90
[176805.950226] entry_SYSCALL_64_after_hwframe+0x46/0xb0
[176805.950904] RIP: 0033:0x7f7f42c3f5ab
[176805.951462] Code: 73 01 c3 48 8b 0d 75 a8 1b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 b0 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 45 a8 1b 00 f7 d8 64 89 01 48
[176805.953710] RSP: 002b:00007fff07dc9d08 EFLAGS: 00000206 ORIG_RAX: 00000000000000b0
[176805.954691] RAX: ffffffffffffffda RBX: 000055b6e91c01e0 RCX: 00007f7f42c3f5ab
[176805.955691] RDX: 0000000000000000 RSI: 0000000000000800 RDI: 000055b6e91c0248
[176805.956662] RBP: 000055b6e91c01e0 R08: 0000000000000000 R09: 0000000000000000
[176805.957601] R10: 00007f7f42d9eac0 R11: 0000000000000206 R12: 000055b6e91c0248
[176805.958593] R13: 0000000000000000 R14: 000055b6e91bfb38 R15: 0000000000000000
[176805.959599] </TASK>

[176805.960324] Allocated by task 20490:
[176805.960893] kasan_save_stack+0x1e/0x40
[176805.961463] kasan_set_track+0x21/0x30
[176805.962019] __kasan_kmalloc+0x77/0x90
[176805.962554] esw_offloads_init+0x1bb/0x480 [mlx5_core]
[176805.963318] mlx5_eswitch_init+0xc70/0x15c0 [mlx5_core]
[176805.964092] mlx5_init_one_devl_locked+0x366/0x1230 [mlx5_core]
[176805.964902] probe_one+0x6f7/0xc90 [mlx5_core]
[176805.965541] local_pci_probe+0xd7/0x180
[176805.966075] pci_device_probe+0x231/0x6f0
[176805.966631] really_probe+0x1d4/0xb50
[176805.967179] __driver_probe_device+0x18d/0x450
[176805.967810] driver_probe_device+0x49/0x120
[176805.968431] __driver_attach+0x1fb/0x490
[176805.968976] bus_for_each_dev+0xed/0x170
[176805.969560] bus_add_driver+0x21a/0x570
[176805.970124] driver_register+0x133/0x460
[176805.970684] 0xffffffffa0678065
[176805.971180] do_one_initcall+0x92/0x2b0
[176805.971744] do_init_module+0x22d/0x720
[176805.972318] load_module+0x58c3/0x63b0
[176805.972847] init_module_from_file+0xd2/0x130
[176805.973441] __x64_sys_finit_module+0x389/0x7c0
[176805.974045] do_syscall_64+0x3d/0x90
[176805.974556] entry_SYSCALL_64_after_hwframe+0x46/0xb0

[176805.975566] Freed by task 27280:
[176805.976077] kasan_save_stack+0x1e/0x40
[176805.976655] kasan_set_track+0x21/0x30
[176805.977221] kasan_save_free_info+0x27/0x40
[176805.977834] ____kasan_slab_free+0x11a/0x1b0
[176805.978505] __kmem_cache_free+0x163/0x2d0
[176805.979113] esw_offloads_cleanup_reps+0xb8/0x120 [mlx5_core]
[176805.979963] mlx5_eswitch_cleanup+0x182/0x270 [mlx5_core]
[176805.980763] mlx5_cleanup_once+0x9a/0x1e0 [mlx5_core]
[176805.981477] mlx5_uninit_one+0xa9/0x180 [mlx5_core]
[176805.982196] remove_one+0x8f/0x210 [mlx5_core]
[176805.982868] pci_device_remove+0x96/0x1c0
[176805.983461] device_release_driver_internal+0x361/0x520
[176805.984169] driver_detach+0xd7/0x1d0
[176805.984702] bus_remove_driver+0x11f/0x290
[176805.985261] pci_unregister_driver+0x23/0x1f0
[176805.985847] mlx5_cleanup+0xc/0x20 [mlx5_core]
[176805.986483] __x64_sys_delete_module+0x2b3/0x450
[176805.987126] do_syscall_64+0x3d/0x90
[176805.987665] entry_SYSCALL_64_after_hwframe+0x46/0xb0

[176805.988667] Last potentially related work creation:
[176805.989305] kasan_save_stack+0x1e/0x40
[176805.989839] __kasan_record_aux_stack+0x9c/0xb0
[176805.990443] kvfree_call_rcu+0x84/0xa30
[176805.990973] clean_xps_maps+0x265/0x6e0
[176805.991547] netif_reset_xps_queues.part.0+0x3f/0x80
[176805.992226] unregister_netdevice_many_notify+0xfcf/0x1500
[176805.992966] unregister_netdevice_queue+0x1e0/0x280
[176805.993638] unregister_netdev+0x18/0x20
[176805.994205] mlx5e_remove+0xba/0x1e0 [mlx5_core]
[176805.994872] auxiliary_bus_remove+0x52/0x70
[176805.995490] device_release_driver_internal+0x361/0x520
[176805.996196] bus_remove_device+0x1e1/0x3d0
[176805.996767] device_del+0x390/0x980
[176805.997270] mlx5_rescan_drivers_locked.part.0+0x130/0x540 [mlx5_core]
[176805.998195] mlx5_unregister_device+0x77/0xc0 [mlx5_core]
[176805.998989] mlx5_uninit_one+0x41/0x180 [mlx5_core]
[176805.999719] remove_one+0x8f/0x210 [mlx5_core]
[176806.000387] pci_device_remove+0x96/0x1c0
[176806.000938] device_release_driver_internal+0x361/0x520
[176806.001612] unbind_store+0xd8/0xf0
[176806.002108] kernfs_fop_write_iter+0x2c0/0x440
[176806.002748] vfs_write+0x725/0xba0
[176806.003294] ksys_write+0xed/0x1c0
[176806.003823] do_syscall_64+0x3d/0x90
[176806.004357] entry_SYSCALL_64_after_hwframe+0x46/0xb0

[176806.005317] The buggy address belongs to the object at ffff888155090a80
which belongs to the cache kmalloc-64 of size 64
[176806.006774] The buggy address is located 32 bytes inside of
freed 64-byte region [ffff888155090a80, ffff888155090ac0)

[176806.008773] The buggy address belongs to the physical page:
[176806.009480] page:00000000a407e0e6 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x155090
[176806.010633] flags: 0x200000000000800(slab|node=0|zone=2)
[176806.011352] page_type: 0xffffffff()
[176806.011905] raw: 0200000000000800 ffff888100042640 ffffea000422b1c0 dead000000000004
[176806.012949] raw: 0000000000000000 0000000000200020 00000001ffffffff 0000000000000000
[176806.013933] page dumped because: kasan: bad access detected

[176806.014935] Memory state around the buggy address:
[176806.015601] ffff888155090980: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[176806.016568] ffff888155090a00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[176806.017497] >ffff888155090a80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[176806.018438] ^
[176806.019007] ffff888155090b00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[176806.020001] ffff888155090b80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[176806.020996] ==================================================================

Fixes: a508728a4c8b ("net/mlx5e: VF tunnel RX traffic offloading")
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
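
As a rough illustration of the refactor described in the commit above, the sketch below shows the pointer-to-value pattern with simplified, hypothetical types (it is not the actual mlx5 code): the destination keeps only a vport number plus a validity flag, so later comparisons never dereference a representor that another eswitch instance may already have freed.

    #include <stdbool.h>
    #include <stdint.h>

    /* Before: the destination cached a 'struct mlx5_eswitch_rep *rep';
     * dereferencing it during rule deletion can be a use-after-free if the
     * other eswitch instance was already unloaded. */
    struct esw_dest {
            uint16_t vport_num;   /* copied at rule-creation time */
            bool     vport_valid; /* replaces the "rep != NULL" check */
    };

    static bool esw_dest_matches(const struct esw_dest *dest, uint16_t vport_num)
    {
            /* Pure value comparison: no pointer into another eswitch's memory. */
            return dest->vport_valid && dest->vport_num == vport_num;
    }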

Revision tags: v6.5.6, v6.5.5, v6.5.4, v6.5.3
# 06b4eac9 11-Sep-2023 Jianbo Liu <jianbol@nvidia.com>

net/mlx5e: Don't offload internal port if filter device is out device

In the cited commit, if the routing device is ovs internal port, the
out device is set to uplink, and packets go out after encapsulation.

If filter device is uplink, it can trigger the following syndrome:
mlx5_core 0000:08:00.0: mlx5_cmd_out_err:803:(pid 3966): SET_FLOW_TABLE_ENTRY(0x936) op_mod(0x0) failed, status bad parameter(0x3), syndrome (0xcdb051), err(-22)

Fix this issue by not offloading the internal port if the filter device is the
out device. In this case, packets are not forwarded to the root table to
be processed; the termination table is used instead to forward them
from uplink to uplink.

Fixes: 100ad4e2d758 ("net/mlx5e: Offload internal port as encap route device")
Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Ariel Levkovich <lariel@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
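
The guard described above boils down to a simple condition; a hedged sketch with simplified, hypothetical names (the real check lives in the mlx5 encap setup path) is:

    #include <stdbool.h>

    struct net_device;  /* opaque stand-in */

    /* Use the internal-port offload only when the route device is an OVS
     * internal port AND the TC filter device is not itself the out (uplink)
     * device; in the latter case traffic is handled by the uplink termination
     * table, not the root table, so the internal-port rewrite must be skipped. */
    static bool use_int_port_offload(bool route_dev_is_int_port,
                                     const struct net_device *filter_dev,
                                     const struct net_device *out_dev)
    {
            return route_dev_is_int_port && filter_dev != out_dev;
    }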

Revision tags: v6.5.2, v6.1.51, v6.5.1, v6.1.50, v6.5, v6.1.49, v6.1.48, v6.1.46, v6.1.45, v6.1.44, v6.1.43, v6.1.42
# 6b5926eb 26-Jul-2023 Chris Mi <cmi@nvidia.com>

net/mlx5e: Unoffload post act rule when handling FIB events

Consider the following tc rule on the stack device:

filter parent ffff: protocol ip pref 3 flower chain 1
filter parent ffff: protocol ip pref 3 flower chain 1 handle 0x1
dst_mac 24:25:d0:e1:00:00
src_mac 02:25:d0:25:01:02
eth_type ipv4
ct_state +trk+new
in_hw in_hw_count 1
action order 1: ct commit zone 0 pipe
index 2 ref 1 bind 1 installed 3807 sec used 3779 sec firstused 3800 sec
Action statistics:
Sent 120 bytes 2 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
used_hw_stats delayed

action order 2: tunnel_key set
src_ip 192.168.1.25
dst_ip 192.168.1.26
key_id 4
dst_port 4789
csum pipe
index 3 ref 1 bind 1 installed 3807 sec used 3779 sec firstused 3800 sec
Action statistics:
Sent 120 bytes 2 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
used_hw_stats delayed

action order 3: mirred (Egress Redirect to device vxlan1) stolen
index 9 ref 1 bind 1 installed 3807 sec used 3779 sec firstused 3800 sec
Action statistics:
Sent 120 bytes 2 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
used_hw_stats delayed

When handling FIB events, the rule in the post act table will not be deleted.
And because the post act rule has packet reformat and modify header
actions, it will also hit the following syndromes:

mlx5_core 0000:08:00.0: mlx5_cmd_out_err:829:(pid 11613): DEALLOC_MODIFY_HEADER_CONTEXT(0x941) op_mod(0x0) failed, status bad resource state(0x9), syndrome (0x1ab444), err(-22)
mlx5_core 0000:08:00.0: mlx5_cmd_out_err:829:(pid 11613): DEALLOC_PACKET_REFORMAT_CONTEXT(0x93e) op_mod(0x0) failed, status bad resource state(0x9), syndrome (0x179e84), err(-22)

Fix it by unoffloading post act rule when handling FIB events.

Fixes: 314e1105831b ("net/mlx5e: Add post act offload/unoffload API")
Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

Revision tags: v6.1.41, v6.1.40, v6.1.39, v6.1.38, v6.1.37
# 93a33193 29-Jun-2023 Chris Mi <cmi@nvidia.com>

net/mlx5e: Don't hold encap tbl lock if there is no encap action

The cited commit holds encap tbl lock unconditionally when setting
up dests. But it may cause the following deadlock:

PID: 1063722 TASK: ffffa062ca5d0000 CPU: 13 COMMAND: "handler8"
#0 [ffffb14de05b7368] __schedule at ffffffffa1d5aa91
#1 [ffffb14de05b7410] schedule at ffffffffa1d5afdb
#2 [ffffb14de05b7430] schedule_preempt_disabled at ffffffffa1d5b528
#3 [ffffb14de05b7440] __mutex_lock at ffffffffa1d5d6cb
#4 [ffffb14de05b74e8] mutex_lock_nested at ffffffffa1d5ddeb
#5 [ffffb14de05b74f8] mlx5e_tc_tun_encap_dests_set at ffffffffc12f2096 [mlx5_core]
#6 [ffffb14de05b7568] post_process_attr at ffffffffc12d9fc5 [mlx5_core]
#7 [ffffb14de05b75a0] mlx5e_tc_add_fdb_flow at ffffffffc12de877 [mlx5_core]
#8 [ffffb14de05b75f0] __mlx5e_add_fdb_flow at ffffffffc12e0eef [mlx5_core]
#9 [ffffb14de05b7660] mlx5e_tc_add_flow at ffffffffc12e12f7 [mlx5_core]
#10 [ffffb14de05b76b8] mlx5e_configure_flower at ffffffffc12e1686 [mlx5_core]
#11 [ffffb14de05b7720] mlx5e_rep_indr_offload at ffffffffc12e3817 [mlx5_core]
#12 [ffffb14de05b7730] mlx5e_rep_indr_setup_tc_cb at ffffffffc12e388a [mlx5_core]
#13 [ffffb14de05b7740] tc_setup_cb_add at ffffffffa1ab2ba8
#14 [ffffb14de05b77a0] fl_hw_replace_filter at ffffffffc0bdec2f [cls_flower]
#15 [ffffb14de05b7868] fl_change at ffffffffc0be6caa [cls_flower]
#16 [ffffb14de05b7908] tc_new_tfilter at ffffffffa1ab71f0

[1031218.028143] wait_for_completion+0x24/0x30
[1031218.028589] mlx5e_update_route_decap_flows+0x9a/0x1e0 [mlx5_core]
[1031218.029256] mlx5e_tc_fib_event_work+0x1ad/0x300 [mlx5_core]
[1031218.029885] process_one_work+0x24e/0x510

Actually there is no need to hold the encap tbl lock if there is no encap
action. Fix it by checking whether an encap action exists before holding
the encap tbl lock.

Fixes: 37c3b9fa7ccf ("net/mlx5e: Prevent encap offload when neigh update is running")
Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
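
The locking change described above follows a simple conditional-locking shape. The standalone sketch below uses a pthread mutex standing in for encap_tbl_lock and simplified, hypothetical types; it only illustrates the pattern, not the driver's actual code.

    #include <pthread.h>
    #include <stdbool.h>

    struct flow_attr {
            bool has_encap_action;  /* derived from the parsed TC actions */
    };

    static pthread_mutex_t encap_tbl_lock = PTHREAD_MUTEX_INITIALIZER;

    static int setup_dests(struct flow_attr *attr)
    {
            int err = 0;

            /* Only serialize against the neigh/FIB update worker when the flow
             * actually has an encap action; taking the lock unconditionally is
             * what produced the deadlock in the trace above. */
            if (attr->has_encap_action)
                    pthread_mutex_lock(&encap_tbl_lock);

            /* ... set up destinations / encap entries ... */

            if (attr->has_encap_action)
                    pthread_mutex_unlock(&encap_tbl_lock);
            return err;
    }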

Revision tags: v6.1.36, v6.4, v6.1.35, v6.1.34, v6.1.33, v6.1.32, v6.1.31, v6.1.30, v6.1.29, v6.1.28, v6.1.27, v6.1.26, v6.3, v6.1.25, v6.1.24, v6.1.23, v6.1.22, v6.1.21, v6.1.20, v6.1.19, v6.1.18, v6.1.17, v6.1.16, v6.1.15, v6.1.14, v6.1.13
# 37c3b9fa 20-Feb-2023 Chris Mi <cmi@nvidia.com>

net/mlx5e: Prevent encap offload when neigh update is running

The cited commit adds a completion to remove the dependency on the rtnl
lock. But it causes a deadlock for multiple encapsulations:

crash> bt ffff8aece8a64000
PID: 1514557 TASK: ffff8aece8a64000 CPU: 3 COMMAND: "tc"
#0 [ffffa6d14183f368] __schedule at ffffffffb8ba7f45
#1 [ffffa6d14183f3f8] schedule at ffffffffb8ba8418
#2 [ffffa6d14183f418] schedule_preempt_disabled at ffffffffb8ba8898
#3 [ffffa6d14183f428] __mutex_lock at ffffffffb8baa7f8
#4 [ffffa6d14183f4d0] mutex_lock_nested at ffffffffb8baabeb
#5 [ffffa6d14183f4e0] mlx5e_attach_encap at ffffffffc0f48c17 [mlx5_core]
#6 [ffffa6d14183f628] mlx5e_tc_add_fdb_flow at ffffffffc0f39680 [mlx5_core]
#7 [ffffa6d14183f688] __mlx5e_add_fdb_flow at ffffffffc0f3b636 [mlx5_core]
#8 [ffffa6d14183f6f0] mlx5e_tc_add_flow at ffffffffc0f3bcdf [mlx5_core]
#9 [ffffa6d14183f728] mlx5e_configure_flower at ffffffffc0f3c1d1 [mlx5_core]
#10 [ffffa6d14183f790] mlx5e_rep_setup_tc_cls_flower at ffffffffc0f3d529 [mlx5_core]
#11 [ffffa6d14183f7a0] mlx5e_rep_setup_tc_cb at ffffffffc0f3d714 [mlx5_core]
#12 [ffffa6d14183f7b0] tc_setup_cb_add at ffffffffb8931bb8
#13 [ffffa6d14183f810] fl_hw_replace_filter at ffffffffc0dae901 [cls_flower]
#14 [ffffa6d14183f8d8] fl_change at ffffffffc0db5c57 [cls_flower]
#15 [ffffa6d14183f970] tc_new_tfilter at ffffffffb8936047
#16 [ffffa6d14183fac8] rtnetlink_rcv_msg at ffffffffb88c7c31
#17 [ffffa6d14183fb50] netlink_rcv_skb at ffffffffb8942853
#18 [ffffa6d14183fbc0] rtnetlink_rcv at ffffffffb88c1835
#19 [ffffa6d14183fbd0] netlink_unicast at ffffffffb8941f27
#20 [ffffa6d14183fc18] netlink_sendmsg at ffffffffb8942245
#21 [ffffa6d14183fc98] sock_sendmsg at ffffffffb887d482
#22 [ffffa6d14183fcb8] ____sys_sendmsg at ffffffffb887d81a
#23 [ffffa6d14183fd38] ___sys_sendmsg at ffffffffb88806e2
#24 [ffffa6d14183fe90] __sys_sendmsg at ffffffffb88807a2
#25 [ffffa6d14183ff28] __x64_sys_sendmsg at ffffffffb888080f
#26 [ffffa6d14183ff38] do_syscall_64 at ffffffffb8b9b6a8
#27 [ffffa6d14183ff50] entry_SYSCALL_64_after_hwframe at ffffffffb8c0007c
crash> bt 0xffff8aeb07544000
PID: 1110766 TASK: ffff8aeb07544000 CPU: 0 COMMAND: "kworker/u20:9"
#0 [ffffa6d14e6b7bd8] __schedule at ffffffffb8ba7f45
#1 [ffffa6d14e6b7c68] schedule at ffffffffb8ba8418
#2 [ffffa6d14e6b7c88] schedule_timeout at ffffffffb8baef88
#3 [ffffa6d14e6b7d10] wait_for_completion at ffffffffb8ba968b
#4 [ffffa6d14e6b7d60] mlx5e_take_all_encap_flows at ffffffffc0f47ec4 [mlx5_core]
#5 [ffffa6d14e6b7da0] mlx5e_rep_update_flows at ffffffffc0f3e734 [mlx5_core]
#6 [ffffa6d14e6b7df8] mlx5e_rep_neigh_update at ffffffffc0f400bb [mlx5_core]
#7 [ffffa6d14e6b7e50] process_one_work at ffffffffb80acc9c
#8 [ffffa6d14e6b7ed0] worker_thread at ffffffffb80ad012
#9 [ffffa6d14e6b7f10] kthread at ffffffffb80b615d
#10 [ffffa6d14e6b7f50] ret_from_fork at ffffffffb8001b2f

After the first encap is attached, the flow will be added to the encap
entry's flows list. If a neigh update is running at this time, the
flow's subsequent encaps can't take the encap_tbl_lock and
sleep. If the neigh update thread is waiting for that flow's init_done,
a deadlock happens.

Fix it by holding lock outside of the for loop. If neigh update is
running, prevent encap flows from offloading. Since the lock is held
outside of the for loop, concurrent creation of encap entries is not
allowed. So remove unnecessary wait_for_completion call for res_ready.

Fixes: 95435ad7999b ("net/mlx5e: Only access fully initialized flows in neigh update")
Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
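
The fix's lock-outside-the-loop shape can be sketched as below (simplified, hypothetical types; a pthread mutex stands in for encap_tbl_lock, and the bail-out models "prevent encap flows from offloading" while a neigh update runs):

    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t encap_tbl_lock = PTHREAD_MUTEX_INITIALIZER;
    static bool neigh_update_running;

    static int attach_one_encap(int index) { (void)index; return 0; }

    static int attach_all_encaps(int num_encaps)
    {
            int i, err = 0;

            /* Take the lock once for the whole flow instead of per encap, so a
             * second encap never sleeps on the lock while the neigh-update
             * worker waits for this flow's init_done. If a neigh update is in
             * progress, refuse to offload rather than risk the deadlock. */
            pthread_mutex_lock(&encap_tbl_lock);
            if (neigh_update_running) {
                    err = -EAGAIN;
                    goto out;
            }
            for (i = 0; i < num_encaps && !err; i++)
                    err = attach_one_encap(i);
    out:
            pthread_mutex_unlock(&encap_tbl_lock);
            return err;
    }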

# e2ab5aa1 01-Mar-2023 Chris Mi <cmi@nvidia.com>

net/mlx5e: Extract remaining tunnel encap code to dedicated file

Move set_encap_dests() and clean_encap_dests() to the tunnel encap
dedicated file. And rename them to mlx5e_tc_tun_encap_dests_set()
and mlx5e_tc_tun_encap_dests_unset().

No functional change in this patch. It is needed in the next patch.

Signed-off-by: Chris Mi <cmi@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

# be071cdb 03-Apr-2023 Vlad Buslov <vladbu@nvidia.com>

net/mlx5e: Use correct encap attribute during invalidation

With introduction of post action infrastructure most of the users of encap
attribute had been modified in order to obtain the correct attribute by
calling mlx5e_tc_get_encap_attr() helper instead of assuming encap action
is always on default attribute. However, the cited commit didn't modify
mlx5e_invalidate_encap() which prevents it from destroying correct modify
header action which leads to a warning [0]. Fix the issue by using correct
attribute.

[0]:

Feb 21 09:47:35 c-237-177-40-045 kernel: WARNING: CPU: 17 PID: 654 at drivers/net/ethernet/mellanox/mlx5/core/en_tc.c:684 mlx5e_tc_attach_mod_hdr+0x1cc/0x230 [mlx5_core]
Feb 21 09:47:35 c-237-177-40-045 kernel: RIP: 0010:mlx5e_tc_attach_mod_hdr+0x1cc/0x230 [mlx5_core]
Feb 21 09:47:35 c-237-177-40-045 kernel: Call Trace:
Feb 21 09:47:35 c-237-177-40-045 kernel: <TASK>
Feb 21 09:47:35 c-237-177-40-045 kernel: mlx5e_tc_fib_event_work+0x8e3/0x1f60 [mlx5_core]
Feb 21 09:47:35 c-237-177-40-045 kernel: ? mlx5e_take_all_encap_flows+0xe0/0xe0 [mlx5_core]
Feb 21 09:47:35 c-237-177-40-045 kernel: ? lock_downgrade+0x6d0/0x6d0
Feb 21 09:47:35 c-237-177-40-045 kernel: ? lockdep_hardirqs_on_prepare+0x273/0x3f0
Feb 21 09:47:35 c-237-177-40-045 kernel: ? lockdep_hardirqs_on_prepare+0x273/0x3f0
Feb 21 09:47:35 c-237-177-40-045 kernel: process_one_work+0x7c2/0x1310
Feb 21 09:47:35 c-237-177-40-045 kernel: ? lockdep_hardirqs_on_prepare+0x3f0/0x3f0
Feb 21 09:47:35 c-237-177-40-045 kernel: ? pwq_dec_nr_in_flight+0x230/0x230
Feb 21 09:47:35 c-237-177-40-045 kernel: ? rwlock_bug.part.0+0x90/0x90
Feb 21 09:47:35 c-237-177-40-045 kernel: worker_thread+0x59d/0xec0
Feb 21 09:47:35 c-237-177-40-045 kernel: ? __kthread_parkme+0xd9/0x1d0

Fixes: 8300f225268b ("net/mlx5e: Create new flow attr for multi table actions")
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

# 8cdc3223 27-Mar-2023 Kuniyuki Iwashima <kuniyu@amazon.com>

ipv6: Remove in6addr_any alternatives.

Some code defines the IPv6 wildcard address as a local variable and
uses it with memcmp() or ipv6_addr_equal().

Let's use in6addr_any and ipv6_addr_any() instead.

Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
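
A small userspace sketch of the before/after pattern the commit describes (in6addr_any is also declared in <netinet/in.h>; kernel code would instead use in6addr_any/ipv6_addr_any() from <net/ipv6.h>):

    #include <string.h>
    #include <netinet/in.h>

    /* Before: every caller keeps its own zeroed wildcard copy. */
    static int is_unspecified_old(const struct in6_addr *a)
    {
            struct in6_addr zero_addr;             /* local in6addr_any duplicate */

            memset(&zero_addr, 0, sizeof(zero_addr));
            return memcmp(a, &zero_addr, sizeof(zero_addr)) == 0;
    }

    /* After: reuse the shared in6addr_any constant (kernel code can simply
     * call ipv6_addr_any(a)). */
    static int is_unspecified_new(const struct in6_addr *a)
    {
            return memcmp(a, &in6addr_any, sizeof(in6addr_any)) == 0;
    }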

# 58de53c1 16-Mar-2023 Gavin Li <gavinl@nvidia.com>

net/mlx5e: Add helper for encap_info_equal for tunnels with options

Tunnels with options, e.g. geneve and vxlan with gbp, share the
same way of comparing the headers and options. Extract the code as a common
function for them.

Signed-off-by: Gavin Li <gavinl@nvidia.com>
Reviewed-by: Gavi Teitz <gavi@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Acked-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

Revision tags: v6.2, v6.1.12, v6.1.11, v6.1.10, v6.1.9, v6.1.8, v6.1.7, v6.1.6, v6.1.5, v6.0.19, v6.0.18, v6.1.4, v6.1.3, v6.0.17, v6.1.2, v6.0.16, v6.1.1, v6.0.15, v6.0.14, v6.0.13, v6.1, v6.0.12, v6.0.11, v6.0.10, v5.15.80, v6.0.9, v5.15.79, v6.0.8, v5.15.78, v6.0.7, v5.15.77, v5.15.76, v6.0.6, v6.0.5, v5.15.75, v6.0.4, v6.0.3, v6.0.2, v5.15.74, v5.15.73, v6.0.1, v5.15.72, v6.0, v5.15.71, v5.15.70
# ef78b8d5 21-Sep-2022 Roi Dayan <roid@nvidia.com>

net/mlx5e: TC, Use common function allocating flow mod hdr or encap mod hdr

Use mlx5e_tc_attach_mod_hdr() when allocating encap mod hdr and
remove mlx5e_tc_add_flow_mod_hdr() which is not being used now.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

# 2951b2e1 04-Dec-2022 Chris Mi <cmi@nvidia.com>

net/mlx5e: Always clear dest encap in neigh-update-del

The cited commit introduced a bug for flows with multiple encapsulations.
If one dest encap becomes invalid, the flow is marked with the slow path
flag. But when the other dest encaps become invalid, they are not cleared
because of the flow's slow path flag. When neigh-update-add is running, it
will then use an invalid encap.

Fix it by checking slow path flag after clearing dest encap.

Fixes: 9a5f9cc794e1 ("net/mlx5e: Fix possible use-after-free deleting fdb rule")
Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

# f3774220 16-Nov-2022 Chris Mi <cmi@nvidia.com>

net/mlx5e: Offload rule only when all encaps are valid

The cited commit adds a for loop to support multiple encapsulations.
But it only checks if the last encap is valid.

Fix it by setting the slow path flag when one of the encaps is invalid.

Fixes: f493f15534ec ("net/mlx5e: Move flow attr reformat action bit to per dest flags")
Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
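
The all-encaps-valid check described above can be sketched as follows (simplified flags and structures, not the driver's real ones):

    #include <stdbool.h>
    #include <stdint.h>

    #define DEST_ENCAP_VALID  (1u << 0)   /* stand-in for MLX5_ESW_DEST_ENCAP_VALID */
    #define MAX_DESTS         8

    struct flow_attr {
            int      num_dests;
            uint32_t dest_flags[MAX_DESTS];
    };

    /* The bug: only the last destination's flag was consulted. The fix: the
     * rule goes to the slow path if ANY of its encap destinations is invalid. */
    static bool all_encaps_valid(const struct flow_attr *attr)
    {
            for (int i = 0; i < attr->num_dests; i++)
                    if (!(attr->dest_flags[i] & DEST_ENCAP_VALID))
                            return false;
            return true;
    }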

# 6d942e40 16-Nov-2022 Roi Dayan <roid@nvidia.com>

net/mlx5: E-Switch, Set correctly vport destination

The cited commit moved from using a reformat_id integer to a packet_reformat
pointer, which introduced the possibility of a null pointer dereference.
When the packet reformat flag is set, the pkt_reformat pointer must also
exist, so checking MLX5_ESW_DEST_ENCAP is not enough; we need
to make sure the pkt_reformat is valid by also checking MLX5_ESW_DEST_ENCAP_VALID.
If the dest encap valid flag is not set then pkt_reformat can be
either an invalid address or null.
Also, to make sure we don't try to access an invalid pkt_reformat, set it to
null when invalidated and invalidate it before calling the add flow code, as
that is logically more correct and safer.

Fixes: 2b688ea5efde ("net/mlx5: Add flow steering actions to fs_cmd shim layer")
Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Chris Mi <cmi@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
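
The pointer-validity rule the commit spells out, as a hedged sketch (the flag names follow the commit text; everything else is a simplified, hypothetical stand-in for the driver structures):

    #include <stddef.h>
    #include <stdint.h>

    #define ESW_DEST_ENCAP        (1u << 0)  /* reformat requested */
    #define ESW_DEST_ENCAP_VALID  (1u << 1)  /* pkt_reformat pointer is usable */

    struct pkt_reformat;

    struct esw_attr {
            uint32_t             flags;
            struct pkt_reformat *pkt_reformat;
    };

    static struct pkt_reformat *dest_pkt_reformat(const struct esw_attr *attr)
    {
            /* ENCAP alone is not enough: only hand out pkt_reformat when it has
             * been marked valid, and keep it NULL while invalidated. */
            if ((attr->flags & ESW_DEST_ENCAP) &&
                (attr->flags & ESW_DEST_ENCAP_VALID))
                    return attr->pkt_reformat;
            return NULL;
    }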

Revision tags: v5.15.69, v5.15.68, v5.15.67, v5.15.66, v5.15.65, v5.15.64, v5.15.63, v5.15.62, v5.15.61, v5.15.60, v5.15.59, v5.19, v5.15.58, v5.15.57, v5.15.56, v5.15.55, v5.15.54, v5.15.53, v5.15.52, v5.15.51, v5.15.50, v5.15.49, v5.15.48, v5.15.47, v5.15.46, v5.15.45, v5.15.44, v5.15.43, v5.15.42, v5.18, v5.15.41, v5.15.40, v5.15.39, v5.15.38, v5.15.37, v5.15.36, v5.15.35, v5.15.34, v5.15.33, v5.15.32, v5.15.31, v5.17, v5.15.30, v5.15.29
# 697319b2 15-Mar-2022 Maor Dickman <maord@nvidia.com>

net/mlx5e: MPLSoUDP decap, use vlan push_eth instead of pedit

Currently action pedit of source and destination MACs is used
to fill the MACs in the L2 push step in MPLSoUDP decap offload.
This isn't aligned with tc SW, which uses the vlan eth_push action
to do this.

To fix that, offload support for vlan veth_push action is
added together with mpls pop action, and deprecate the use
of pedit of MACs.

Flow example:
filter protocol mpls_uc pref 1 flower chain 0
filter protocol mpls_uc pref 1 flower chain 0 handle 0x1
eth_type 8847
mpls_label 555
enc_dst_port 6635
in_hw in_hw_count 1
action order 1: tunnel_key unset pipe
index 2 ref 1 bind 1
used_hw_stats delayed

action order 2: mpls pop protocol ip pipe
index 2 ref 1 bind 1
used_hw_stats delayed

action order 3: vlan push_eth dst_mac de:a2:ec:d6:69:c8 src_mac de:a2:ec:d6:69:c8 pipe
index 2 ref 1 bind 1
used_hw_stats delayed

action order 4: mirred (Egress Redirect to device enp8s0f0_0) stolen
index 2 ref 1 bind 1
used_hw_stats delayed

Signed-off-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

Revision tags: v5.15.28, v5.15.27, v5.15.26, v5.15.25, v5.15.24, v5.15.23, v5.15.22, v5.15.21, v5.15.20, v5.15.19, v5.15.18, v5.15.17, v5.4.173, v5.15.16, v5.15.15, v5.16
# c63741b4 06-Jan-2022 Maor Dickman <maord@nvidia.com>

net/mlx5e: Fix MPLSoUDP encap to use MPLS action information

Currently the MPLSoUDP encap builds the MPLS header using encap action
information (tunnel id, ttl and tos) instead of the MPLS action
information (label, ttl, tc and bos) which is wrong.

Fix by storing the MPLS action information during the flow action
parse and later using it to create the encap MPLS header.

Fixes: f828ca6a2fb6 ("net/mlx5e: Add support for hw encapsulation of MPLS over UDP")
Signed-off-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

Revision tags: v5.15.10, v5.15.9, v5.15.8, v5.15.7, v5.15.6, v5.15.5, v5.15.4, v5.15.3, v5.15.2, v5.15.1, v5.15, v5.14.14, v5.14.13, v5.14.12, v5.14.11, v5.14.10, v5.14.9, v5.14.8, v5.14.7, v5.14.6, v5.10.67, v5.10.66, v5.14.5, v5.14.4, v5.10.65, v5.14.3, v5.10.64, v5.14.2, v5.10.63, v5.14.1, v5.10.62, v5.14, v5.10.61, v5.10.60
# 8300f225 08-Aug-2021 Roi Dayan <roid@nvidia.com>

net/mlx5e: Create new flow attr for multi table actions

Some TC actions use post actions for their implementation.
For example CT and sample actions.

Create a new flow attr after each multi table action and
create a post action rule for it.

The first flow attr is offloaded normally and linked to the next
attr (post action rule) by setting an id on reg_c.
Post action rules match the id on reg_c and continue to the next one.

The flow counter is allocated on the last rule.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

# ff993167 25-Nov-2021 Roi Dayan <roid@nvidia.com>

net/mlx5e: TC, Refactor mlx5e_tc_add_flow_mod_hdr() to get flow attr

In a later commit we are going to instantiate multiple attr instances
per flow instead of a single attr.
Make sure mlx5e_tc_add_flow_mod_hdr() uses the correct attr and not flow->attr.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

# c118ebc9 10-Oct-2021 Roi Dayan <roid@nvidia.com>

net/mlx5e: Pass attr arg for attaching/detaching encaps

In a later commit we will have multiple attr instances per flow,
and we would like to pass a specific attr instance to set encaps.

Currently the mlx5_flow object contains a single mlx5_attr instance.
However, multi table actions (e.g. CT) instantiate multiple attr instances.

Currently mlx5e_attach/detach_encap() reads the first attr instance
from the flow instance. Modify the functions to receive the attr
instance as a parameter which is set by the calling function.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

# 885751eb 29-Dec-2021 Maor Dickman <maord@nvidia.com>

net/mlx5e: Fix wrong usage of fib_info_nh when routes with nexthop objects are used

Creating routes with nexthop objects while in switchdev mode leads to access to
un-allocated memory and triggers the below call trace due to hitting a WARN_ON.
This is caused by illegal usage of fib_info_nh in the TC tunnel FIB event handling to
resolve the FIB device while the fib_info is built with a nexthop object.

Fixed by ignoring attempts to use nexthop objects with routes until support can be
properly added.

WARNING: CPU: 1 PID: 1724 at include/net/nexthop.h:468 mlx5e_tc_tun_fib_event+0x448/0x570 [mlx5_core]
CPU: 1 PID: 1724 Comm: ip Not tainted 5.15.0_for_upstream_min_debug_2021_11_09_02_04 #1
RIP: 0010:mlx5e_tc_tun_fib_event+0x448/0x570 [mlx5_core]
RSP: 0018:ffff8881349f7910 EFLAGS: 00010202
RAX: ffff8881492f1980 RBX: ffff8881349f79e8 RCX: 0000000000000000
RDX: ffff8881349f79e8 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffff8881349f7950 R08: 00000000000000fe R09: 0000000000000001
R10: 0000000000000000 R11: 0000000000000000 R12: ffff88811e9d0000
R13: ffff88810eb62000 R14: ffff888106710268 R15: 0000000000000018
FS: 00007f1d5ca6e800(0000) GS:ffff88852c880000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffedba44ff8 CR3: 0000000129808004 CR4: 0000000000370ea0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
atomic_notifier_call_chain+0x42/0x60
call_fib_notifiers+0x21/0x40
fib_table_insert+0x479/0x6d0
? try_charge_memcg+0x480/0x6d0
inet_rtm_newroute+0x65/0xb0
rtnetlink_rcv_msg+0x2af/0x360
? page_add_file_rmap+0x13/0x130
? do_set_pte+0xcd/0x120
? rtnl_calcit.isra.0+0x120/0x120
netlink_rcv_skb+0x4e/0xf0
netlink_unicast+0x1ee/0x2b0
netlink_sendmsg+0x22e/0x460
sock_sendmsg+0x33/0x40
____sys_sendmsg+0x1d1/0x1f0
___sys_sendmsg+0xab/0xf0
? __mod_memcg_lruvec_state+0x40/0x60
? __mod_lruvec_page_state+0x95/0xd0
? page_add_new_anon_rmap+0x4e/0xf0
? __handle_mm_fault+0xec6/0x1470
__sys_sendmsg+0x51/0x90
? internal_get_user_pages_fast+0x480/0xa10
do_syscall_64+0x3d/0x90
entry_SYSCALL_64_after_hwframe+0x44/0xae

Fixes: 8914add2c9e5 ("net/mlx5e: Handle FIB events to update tunnel endpoint device")
Signed-off-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

# 819c319c 26-Oct-2021 Chris Mi <cmi@nvidia.com>

net/mlx5e: Specify out ifindex when looking up decap route

There is a use case that the local and remote VTEPs are in the same
host. Currently, the out ifindex is not specified when looking up the
decap route for offloads. So in this case, a local route is returned
and the route dev is lo.

Actual tunnel interface can be created with a parameter "dev" [1],
which specifies the physical device to use for tunnel endpoint
communication. Pass this parameter to driver when looking up decap
route for offloads. So that a unicast route will be returned.

[1] ip link add name vxlan1 type vxlan id 100 dev enp4s0f0 remote 1.1.1.1 dstport 4789

Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

# 362980ea 21-Oct-2021 Vlad Buslov <vladbu@nvidia.com>

net/mlx5e: Wait for concurrent flow deletion during neigh/fib events

Function mlx5e_take_tmp_flow() skips flows with zero reference count. This
can cause syndrome 0x179e84 when called from neigh or route update code
and the skipped flow is not removed from the hardware by the time
underlying encap/decap resource is deleted. Add new completion
'del_hw_done' that is completed when flow is unoffloaded. This is safe to
do because flow with reference count zero needs to be detached from
encap/decap entry before its memory is deallocated, which requires taking
the encap_tbl_lock mutex that is held by the event handlers code.

Fixes: 8914add2c9e5 ("net/mlx5e: Handle FIB events to update tunnel endpoint device")
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
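
A standalone approximation of the added 'del_hw_done' completion (a pthread condition variable modelling the kernel's struct completion; the name follows the commit, everything else is a simplified sketch rather than the driver code):

    #include <pthread.h>
    #include <stdbool.h>

    struct completion {
            pthread_mutex_t lock;
            pthread_cond_t  cond;
            bool            done;
    };

    /* Flow teardown signals del_hw_done once the HW rule is removed ... */
    static void signal_del_hw_done(struct completion *c)
    {
            pthread_mutex_lock(&c->lock);
            c->done = true;
            pthread_cond_broadcast(&c->cond);
            pthread_mutex_unlock(&c->lock);
    }

    /* ... and the neigh/FIB event handler waits on it before the shared
     * encap/decap resource is deleted, even for flows whose reference count
     * already dropped to zero and were therefore skipped by
     * mlx5e_take_tmp_flow(). */
    static void wait_del_hw_done(struct completion *c)
    {
            pthread_mutex_lock(&c->lock);
            while (!c->done)
                    pthread_cond_wait(&c->cond, &c->lock);
            pthread_mutex_unlock(&c->lock);
    }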

Revision tags: v5.10.53, v5.10.52, v5.10.51, v5.10.50, v5.10.49, v5.13, v5.10.46, v5.10.43, v5.10.42, v5.10.41, v5.10.40, v5.10.39, v5.4.119, v5.10.36, v5.10.35, v5.10.34, v5.4.116, v5.10.33, v5.12, v5.10.32, v5.10.31, v5.10.30, v5.10.27, v5.10.26, v5.10.25, v5.10.24, v5.10.23, v5.10.22, v5.10.21, v5.10.20, v5.10.19, v5.4.101, v5.10.18, v5.10.17, v5.11, v5.10.16, v5.10.15, v5.10.14
# 100ad4e2 08-Jan-2021 Ariel Levkovich <lariel@nvidia.com>

net/mlx5e: Offload internal port as encap route device

When performing an encap action, a route lookup is performed
to find the routing device the packet should be forwarded
to after the encapsulation. This is the device that has the
local tunnel ip address.

This change adds support to offload an encap rule where the
route device ends up being an ovs internal port.
In such case, the driver will add a HW rule that will encapsulate
the packet with the tunnel header and will overwrite the vport
metadata in reg_c0 to the internal port metadata value.
Finally, the packet will be forwarded to the root table to be
processed again with the indication that it came from an internal
port.

Signed-off-by: Ariel Levkovich <lariel@nvidia.com>
Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

# 360cbb1c 29-Dec-2021 Maor Dickman <maord@nvidia.com>

net/mlx5e: Fix wrong usage of fib_info_nh when routes with nexthop objects are used

[ Upstream commit 885751eb1b01d276e38f57d78c583e4ce006c5ed ]

Creating routes with nexthop objects while in switchdev mode leads to access to
un-allocated memory and triggers the below call trace due to hitting a WARN_ON.
This is caused by illegal usage of fib_info_nh in the TC tunnel FIB event handling to
resolve the FIB device while the fib_info is built with a nexthop object.

Fixed by ignoring attempts to use nexthop objects with routes until support can be
properly added.

WARNING: CPU: 1 PID: 1724 at include/net/nexthop.h:468 mlx5e_tc_tun_fib_event+0x448/0x570 [mlx5_core]
CPU: 1 PID: 1724 Comm: ip Not tainted 5.15.0_for_upstream_min_debug_2021_11_09_02_04 #1
RIP: 0010:mlx5e_tc_tun_fib_event+0x448/0x570 [mlx5_core]
RSP: 0018:ffff8881349f7910 EFLAGS: 00010202
RAX: ffff8881492f1980 RBX: ffff8881349f79e8 RCX: 0000000000000000
RDX: ffff8881349f79e8 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffff8881349f7950 R08: 00000000000000fe R09: 0000000000000001
R10: 0000000000000000 R11: 0000000000000000 R12: ffff88811e9d0000
R13: ffff88810eb62000 R14: ffff888106710268 R15: 0000000000000018
FS: 00007f1d5ca6e800(0000) GS:ffff88852c880000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffedba44ff8 CR3: 0000000129808004 CR4: 0000000000370ea0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
atomic_notifier_call_chain+0x42/0x60
call_fib_notifiers+0x21/0x40
fib_table_insert+0x479/0x6d0
? try_charge_memcg+0x480/0x6d0
inet_rtm_newroute+0x65/0xb0
rtnetlink_rcv_msg+0x2af/0x360
? page_add_file_rmap+0x13/0x130
? do_set_pte+0xcd/0x120
? rtnl_calcit.isra.0+0x120/0x120
netlink_rcv_skb+0x4e/0xf0
netlink_unicast+0x1ee/0x2b0
netlink_sendmsg+0x22e/0x460
sock_sendmsg+0x33/0x40
____sys_sendmsg+0x1d1/0x1f0
___sys_sendmsg+0xab/0xf0
? __mod_memcg_lruvec_state+0x40/0x60
? __mod_lruvec_page_state+0x95/0xd0
? page_add_new_anon_rmap+0x4e/0xf0
? __handle_mm_fault+0xec6/0x1470
__sys_sendmsg+0x51/0x90
? internal_get_user_pages_fast+0x480/0xa10
do_syscall_64+0x3d/0x90
entry_SYSCALL_64_after_hwframe+0x44/0xae

Fixes: 8914add2c9e5 ("net/mlx5e: Handle FIB events to update tunnel endpoint device")
Signed-off-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>

# a49a9b92 21-Oct-2021 Vlad Buslov <vladbu@nvidia.com>

net/mlx5e: Wait for concurrent flow deletion during neigh/fib events

[ Upstream commit 362980eada85b5ea691e5e0d9257a991aa7ade47 ]

Function mlx5e_take_tmp_flow() skips flows with zero reference count. This
can cause syndrome 0x179e84 when called from neigh or route update code
and the skipped flow is not removed from the hardware by the time
underlying encap/decap resource is deleted. Add new completion
'del_hw_done' that is completed when flow is unoffloaded. This is safe to
do because flow with reference count zero needs to be detached from
encap/decap entry before its memory is deallocated, which requires taking
the encap_tbl_lock mutex that is held by the event handlers code.

Fixes: 8914add2c9e5 ("net/mlx5e: Handle FIB events to update tunnel endpoint device")
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>

# 9a5f9cc7 22-Aug-2021 Roi Dayan <roid@nvidia.com>

net/mlx5e: Fix possible use-after-free deleting fdb rule

After a neigh-update-add failure we are left with a slow path rule, but
the driver always assumes the rule is an fdb rule.
Fix neigh-update-del by checking the slow path tc flag on the flow.
Also fix neigh-update-add for the case when neigh-update-del fails in the same way.

Fixes: 5dbe906ff1d5 ("net/mlx5e: Use a slow path rule instead if vxlan neighbour isn't available")
Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Paul Blakey <paulb@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
