Searched hist:"6 f0689fdf19ed3aca3ee3910223ad27216640693" (Results 1 – 2 of 2) sorted by relevance
/openbmc/linux/drivers/infiniband/hw/mlx5/
umr.h | diff 6f0689fdf19ed3aca3ee3910223ad27216640693 Tue Apr 12 02:24:01 CDT 2022 Aharon Landau <aharonl@nvidia.com> RDMA/mlx5: Introduce mlx5_umr_post_send_wait()
Introduce mlx5_umr_post_send_wait(), which uses a UMR-adjusted flow for posting WQEs. The next patches will gradually move UMR operations to this flow; once that is done, mlx5_ib_post_send_wait() will be removed.
mlx5_umr_post_send_wait() receives already-written WQE segments and only memcpys them into the SQ. This way, we avoid packing all the data into a WR just to unpack it into the WQE.
Link: https://lore.kernel.org/r/f027dd592fde62402b2d49efded8d1d22229d22b.1649747695.git.leonro@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Reviewed-by: Michael Guralnik <michaelgur@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
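Conceptually, the flow the commit message describes can be pictured with the minimal userspace C sketch below. It is not the kernel's mlx5_umr_post_send_wait() implementation: the names (struct umr_wqe, struct fake_sq, post_wqe_and_wait, SQ_DEPTH, WQE_SIZE) and the simulated doorbell/completion are assumptions made only to illustrate the "build the WQE segments once, memcpy them into the SQ, then wait" pattern, with no WR packing and unpacking in between.

/*
 * Minimal userspace sketch of the "pre-built WQE segments, memcpy to SQ,
 * then wait" pattern from the commit message.  This is NOT the kernel's
 * mlx5_umr_post_send_wait(); all names here (umr_wqe, fake_sq,
 * post_wqe_and_wait, ...) are hypothetical and exist only to illustrate
 * the flow.
 */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <stdbool.h>

#define SQ_DEPTH   16          /* number of WQE slots in the toy SQ    */
#define WQE_SIZE   64          /* one WQE basic block, in bytes        */

/* Stand-ins for the hardware WQE segments (ctrl + UMR ctrl).          */
struct umr_wqe {
	uint8_t ctrl_seg[16];
	uint8_t umr_ctrl_seg[48];
} __attribute__((packed));

/* A toy send queue: a ring of fixed-size WQE slots plus a producer index. */
struct fake_sq {
	uint8_t      buf[SQ_DEPTH][WQE_SIZE];
	unsigned int pi;              /* producer index                   */
	bool         completed[SQ_DEPTH]; /* pretend completion status    */
};

/*
 * Copy an already-written WQE into the next SQ slot and "wait" for it.
 * In the real driver the wait is a completion signalled from the CQE
 * handler; here it is simulated so the sketch stays self-contained.
 */
static int post_wqe_and_wait(struct fake_sq *sq, const struct umr_wqe *wqe)
{
	unsigned int slot = sq->pi++ % SQ_DEPTH;

	/* The key idea: no WR packing/unpacking, just copy the segments. */
	memcpy(sq->buf[slot], wqe, sizeof(*wqe));

	/* Doorbell + completion wait would go here; simulate success.    */
	sq->completed[slot] = true;
	return sq->completed[slot] ? 0 : -1;
}

int main(void)
{
	struct fake_sq sq = { 0 };
	struct umr_wqe wqe = { 0 };

	/* Caller writes the segments once, directly in wire format ...   */
	memset(wqe.ctrl_seg, 0xa5, sizeof(wqe.ctrl_seg));

	/* ... and the post helper only copies them into the SQ.          */
	if (post_wqe_and_wait(&sq, &wqe))
		fprintf(stderr, "UMR post failed\n");
	else
		printf("WQE copied to SQ slot and completed\n");
	return 0;
}

The point of the pattern, as the commit describes it, is that the caller produces the hardware layout directly, so the posting path reduces to a copy into the SQ followed by a completion wait.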
umr.c | diff 6f0689fdf19ed3aca3ee3910223ad27216640693 Tue Apr 12 02:24:01 CDT 2022 Aharon Landau <aharonl@nvidia.com> RDMA/mlx5: Introduce mlx5_umr_post_send_wait()