2875a0ca | 30-Jun-2023 | Hawkins Jiawei <yin31149@gmail.com>
vdpa: Sort vdpa_feature_bits array alphabetically
This patch sorts the vdpa_feature_bits array alphabetically in ascending order to avoid future duplicates.
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
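As a hedged illustration of what the sorted table looks like (an excerpt only; the entries shown are a subset and the exact declaration in net/vhost-vdpa.c may differ):

```c
/* Illustrative excerpt, not the complete QEMU table.  Keeping the entries
 * sorted alphabetically makes accidental duplicates easy to spot. */
const int vdpa_feature_bits[] = {
    VIRTIO_F_ANY_LAYOUT,
    VIRTIO_F_IOMMU_PLATFORM,
    VIRTIO_F_NOTIFY_ON_EMPTY,
    VIRTIO_F_RING_PACKED,
    VIRTIO_F_RING_RESET,
    VIRTIO_F_VERSION_1,
    VIRTIO_NET_F_CSUM,
    VIRTIO_NET_F_CTRL_GUEST_OFFLOADS,
    VIRTIO_NET_F_CTRL_MAC_ADDR,
    VIRTIO_NET_F_CTRL_RX,
    VIRTIO_NET_F_CTRL_VQ,
    VIRTIO_NET_F_MQ,
    VHOST_INVALID_FEATURE_BIT
};
```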

b6aeee02 | 09-Jun-2023 | Laurent Vivier <lvivier@redhat.com>
net: socket: remove net_init_socket()
Move the file descriptor type checking before doing anything with it. If it's not usable, don't close it, as it could be in use by another part of QEMU; only fail and report an error.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
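A minimal sketch of the idea, assuming a hypothetical helper name (the real patches factor QEMU's existing check into its own function): validate the fd type first, and on failure only report an error, never closing an fd that may be owned elsewhere.

```c
/* Hypothetical helper name, for illustration only. */
static int net_socket_validate_fd(int fd, Error **errp)
{
    int so_type;
    socklen_t optlen = sizeof(so_type);

    /* Query the socket type without consuming or closing the fd. */
    if (getsockopt(fd, SOL_SOCKET, SO_TYPE, &so_type, &optlen) < 0) {
        error_setg_errno(errp, errno, "cannot get socket type of fd %d", fd);
        return -1;          /* do NOT close: the fd may belong to another user */
    }
    if (so_type != SOCK_STREAM && so_type != SOCK_DGRAM) {
        error_setg(errp, "socket fd %d must be SOCK_STREAM or SOCK_DGRAM", fd);
        return -1;
    }
    return so_type;
}
```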

23455ae3 | 09-Jun-2023 | Laurent Vivier <lvivier@redhat.com>
net: socket: move fd type checking to its own function
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

a0d7215e | 19-Jun-2023 | Ani Sinha <anisinha@redhat.com>
vhost-vdpa: do not cleanup the vdpa/vhost-net structures if peer nic is present
When a peer nic is still attached to the vdpa backend, it is too early to free up the vhost-net and vdpa structures. If these structures are freed here, then QEMU crashes when the guest is being shut down. The following call chain would result in an assertion failure since the pointer returned from vhost_vdpa_get_vhost_net() would be NULL:
do_vm_stop() -> vm_state_notify() -> virtio_set_status() -> virtio_net_vhost_status() -> get_vhost_net().
Therefore, we defer freeing up the structures until guest shutdown time, when qemu_cleanup() calls net_cleanup(), which then calls qemu_del_net_client(), which eventually calls vhost_vdpa_cleanup() again to free up the structures. This time, the loop in net_cleanup() ensures that vhost_vdpa_cleanup() will be called one last time when all the peer nics are detached and freed.
All unit tests pass with this change.
CC: imammedo@redhat.com
CC: jusual@redhat.com
CC: mst@redhat.com
Fixes: CVE-2023-3301
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2128929
Signed-off-by: Ani Sinha <anisinha@redhat.com>
Message-Id: <20230619065209.442185-1-anisinha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
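A hedged sketch of the guard described above (the surrounding cleanup code is elided and the exact shape of the real function may differ): vhost_vdpa_cleanup() returns early while a NIC peer is still attached, and the real teardown happens on the later call from net_cleanup().

```c
static void vhost_vdpa_cleanup(NetClientState *nc)
{
    VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);

    /*
     * If a peer NIC is still attached, it is too early to free anything.
     * Cleanup happens later, when qemu_cleanup() -> net_cleanup() ->
     * qemu_del_net_client() calls vhost_vdpa_cleanup() again after the
     * peer has been detached and freed.
     */
    if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_NIC) {
        return;
    }

    /* ... free the vhost-net and vdpa structures held by s, as before ... */
}
```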

d45243bc | 02-Jun-2023 | Eugenio Pérez <eperezma@redhat.com>
vdpa: fix not using CVQ buffer in case of error
Bug introduced when refactoring. Without this fix, the guest never received the used buffer.
Fixes: be4278b65fc1 ("vdpa: extract vhost_vdpa_net_cvq_add from vhost_vdpa_net_handle_ctrl_avail")
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <20230602173451.1917999-1-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>

51e84244 | 02-Jun-2023 | Eugenio Pérez <eperezma@redhat.com>
vdpa: mask _F_CTRL_GUEST_OFFLOADS for vhost vdpa devices
QEMU does not emulate it, so it must be disabled as long as the backend does not support it.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <20230602173328.1917385-1-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
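In effect (the exact mechanism and call site in net/vhost-vdpa.c are an assumption here; QEMU's vhost layer masks listed feature bits that the backend does not acknowledge), the change amounts to:

```c
/* Sketch: only keep VIRTIO_NET_F_CTRL_GUEST_OFFLOADS in the offered feature
 * set when the vdpa backend itself advertises it; QEMU does not emulate the
 * command, so it cannot be offered on the backend's behalf. */
if (!(backend_features & BIT_ULL(VIRTIO_NET_F_CTRL_GUEST_OFFLOADS))) {
    features &= ~BIT_ULL(VIRTIO_NET_F_CTRL_GUEST_OFFLOADS);
}
```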

4b4a1378 | 02-Jun-2023 | Hawkins Jiawei <yin31149@gmail.com>
vdpa: Allow VIRTIO_NET_F_CTRL_GUEST_OFFLOADS in SVQ
Enable SVQ with the VIRTIO_NET_F_CTRL_GUEST_OFFLOADS feature.
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <778d642ecae6deed8a218b0e6232e4d7bb96b439.1685704856.git.yin31149@gmail.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Eugenio Pérez <eperezma@redhat.com>
Tested-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
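A hedged excerpt of the kind of change involved, assuming the feature bit is added to vdpa_svq_device_features, the bitmap of device features that SVQ is allowed to negotiate (the other entries shown are a subset for illustration):

```c
/* Excerpt, not the full bitmap: the offloads-control feature joins the set
 * of device features the Shadow VirtQueue (SVQ) may negotiate. */
static const uint64_t vdpa_svq_device_features =
    BIT_ULL(VIRTIO_NET_F_CSUM) |
    BIT_ULL(VIRTIO_NET_F_GUEST_CSUM) |
    BIT_ULL(VIRTIO_NET_F_CTRL_GUEST_OFFLOADS) |   /* newly allowed */
    BIT_ULL(VIRTIO_NET_F_CTRL_VQ) |
    BIT_ULL(VIRTIO_NET_F_CTRL_MAC_ADDR) |
    BIT_ULL(VIRTIO_F_VERSION_1);
```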

0b58d368 | 02-Jun-2023 | Hawkins Jiawei <yin31149@gmail.com>
vdpa: Add vhost_vdpa_net_load_offloads()
This patch introduces vhost_vdpa_net_load_offloads() to restore offloads state at device's startup.
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Message-Id: <7e2b5cad9c48c917df53d80dec27dbfeb513e1a3.1685704856.git.yin31149@gmail.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Eugenio Pérez <eperezma@redhat.com>
Tested-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
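A minimal sketch of what such a restore helper can look like, assuming it reuses the existing vhost_vdpa_net_load_cmd() CVQ helper (the exact signatures and the status handling here are assumptions, not the verbatim implementation):

```c
static int vhost_vdpa_net_load_offloads(VhostVDPAState *s, const VirtIONet *n)
{
    uint64_t offloads;
    ssize_t dev_written;

    /* Nothing to restore if the guest did not negotiate the feature. */
    if (!virtio_vdev_has_feature(&n->parent_obj,
                                 VIRTIO_NET_F_CTRL_GUEST_OFFLOADS)) {
        return 0;
    }

    /* Replay the guest's current offloads through the control virtqueue. */
    offloads = cpu_to_le64(n->curr_guest_offloads);
    dev_written = vhost_vdpa_net_load_cmd(s, VIRTIO_NET_CTRL_GUEST_OFFLOADS,
                                          VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET,
                                          &offloads, sizeof(offloads));
    if (dev_written < 0) {
        return dev_written;
    }

    return *s->status != VIRTIO_NET_OK ? -EIO : 0;
}
```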

02d3bf09 | 02-Jun-2023 | Hawkins Jiawei <yin31149@gmail.com>
vdpa: reuse virtio_vdev_has_feature()
We can use virtio_vdev_has_feature() instead of manually accessing the features.
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <ff838d30206209fd865511b16ffb34cc0d5e8d8f.1685704856.git.yin31149@gmail.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Eugenio Pérez <eperezma@redhat.com>
Tested-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
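The change is essentially a before/after of the following shape (a sketch; the concrete call sites in net/vhost-vdpa.c are not shown):

```c
/* Before: open-coding the lookup against the negotiated feature bits. */
if (virtio_has_feature(vdev->guest_features, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS)) {
    /* ... */
}

/* After: let the existing helper do the same check on the VirtIODevice. */
if (virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS)) {
    /* ... */
}
```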

babf8b87 | 02-Jun-2023 | Eugenio Pérez <eperezma@redhat.com>
vdpa: map shadow vrings with MAP_SHARED
The vdpa devices that use va addresses need these maps to be shared. Otherwise, vhost_vdpa checks will refuse to accept the maps.
The mmap call will always return a page-aligned address, so the qemu_memalign call is removed. The ROUND_UP of the size is kept, as we still need to DMA-map the buffers in full.
No Fixes tag is applied, as this never worked with va devices.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <20230602143854.1879091-4-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
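A hedged sketch of the allocation described, assuming the CVQ command buffer in net/vhost-vdpa.c (field and helper names reused from elsewhere in this series):

```c
/* Sketch: anonymous, shared, page-aligned mapping for the shadow CVQ buffer.
 * mmap() already returns a page-aligned address, so qemu_memalign() is not
 * needed; the size stays rounded up because the buffer is still DMA-mapped
 * in full. */
s->cvq_cmd_out_buffer = mmap(NULL, vhost_vdpa_net_cvq_cmd_page_len(),
                             PROT_READ | PROT_WRITE,
                             MAP_SHARED | MAP_ANONYMOUS, -1, 0);
```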

915bf6cc | 02-Jun-2023 | Eugenio Pérez <eperezma@redhat.com>
vdpa: reorder vhost_vdpa_net_cvq_cmd_page_len function
We need to call it from resource cleanup context, as munmap needs the size of the mappings.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230602143854.1879091-3-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
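A short sketch of why the function must be visible from the cleanup path (buffer field names are assumptions reused from the rest of this series):

```c
/* Sketch: tearing the CVQ buffers down needs the mapping length, so
 * vhost_vdpa_net_cvq_cmd_page_len() has to be defined before the cleanup
 * code that calls munmap(). */
munmap(s->cvq_cmd_out_buffer, vhost_vdpa_net_cvq_cmd_page_len());
munmap(s->status, vhost_vdpa_net_cvq_cmd_page_len());
```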

8bc0049e | 02-Jun-2023 | Eugenio Pérez <eperezma@redhat.com>
vdpa: do not block migration if device has cvq and x-svq=on
It was a mistake to forbid migration in all cases, as SVQ is already able to send all the CVQ messages before it starts forwarding the data vqs. It actually caused a regression, making it impossible to migrate a device that was previously migratable.
Fixes: 36e4647247f2 ("vdpa: add vhost_vdpa_net_valid_svq_features")
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <20230602143854.1879091-2-eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>

152128d6 | 26-May-2023 | Eugenio Pérez <eperezma@redhat.com>
vdpa: move CVQ isolation check to net_init_vhost_vdpa
Evaluating it at start time instead of initialization time may make the guest capable of dynamically adding or removing migration blockers.
Also, moving to initialization reduces the number of ioctls in the migration, reducing failure possibilities.
As a drawback, we need to check for CVQ isolation twice: once with MQ not negotiated and once acking it, as long as the device supports it. This is because vring ASID / group management is based on vq indexes, but we don't know the index of the CVQ before negotiating MQ.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <20230526153143.470745-3-eperezma@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>