9cd67f0c | 04-Jan-2024 | Daniel P. Berrangé <berrange@redhat.com>
net: handle QIOTask completion to report useful error message
The network stream backend uses the async QIO socket APIs for listening and connecting sockets. It does not, however, check the task object completion status; it just looks at whether the socket FD is -1 or not.
By checking the task completion, we can set a useful error message for users instead of the non-actionable "connection error" string.
e.g. users will see:
(qemu) info network
net: index=0,type=stream,error: Failed to connect to '/foo.unix': No such file or directory
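A minimal sketch of the kind of completion handler this change describes, assuming a qemu_set_info_str() helper and the NetStreamState backend type (illustrative only, not the actual patch):

    #include "qemu/osdep.h"
    #include "io/task.h"
    #include "qapi/error.h"
    #include "net/net.h"

    static void net_stream_connect_completed(QIOTask *task, gpointer opaque)
    {
        NetStreamState *s = opaque;   /* assumed backend state type */
        Error *err = NULL;

        if (qio_task_propagate_error(task, &err)) {
            /* Keep the concrete reason, e.g. "No such file or directory",
             * instead of a generic "connection error" string. */
            qemu_set_info_str(&s->nc, "error: %s", error_get_pretty(err));
            error_free(err);
            return;
        }
        /* ... success path: mark the backend as connected ... */
    }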
Signed-off-by: "Daniel P. Berrangé" <berrange@redhat.com> Message-ID: <20240104162942.211458-6-berrange@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
6f03d9ef | 21-Dec-2023 | Eugenio Pérez <eperezma@redhat.com>
vdpa: use VhostVDPAShared in vdpa_dma_map and unmap
The callers only have the shared information by the end of this series. Start converting these functions.
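Roughly, the conversion means the map/unmap entry points take the shared state instead of the per-queue struct vhost_vdpa; a sketch of the resulting prototypes (parameter lists are my assumption, not copied from the patch):

    int vhost_vdpa_dma_map(VhostVDPAShared *s, uint32_t asid, hwaddr iova,
                           hwaddr size, void *vaddr, bool readonly);
    int vhost_vdpa_dma_unmap(VhostVDPAShared *s, uint32_t asid, hwaddr iova,
                             hwaddr size);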
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20231221174322.3130442-12-eperezma@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
f12b2498 | 21-Dec-2023 | Eugenio Pérez <eperezma@redhat.com>
vdpa: move file descriptor to vhost_vdpa_shared
Next patches will register the vhost_vdpa memory listener while the VM is migrating at the destination, so we can map the memory to the device before stopping the VM at the source. The main goal is to reduce the downtime.
However, the destination QEMU is unaware of which vhost_vdpa device will register its memory_listener. If the source guest has CVQ enabled, it will be the CVQ device. Otherwise, it will be the first one.
Move the file descriptor to VhostVDPAShared so all vhost_vdpa can use it, rather than always in the first / last vhost_vdpa.
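Together with the following entries in this series (shadow_data, iova_range, iova tree), the shared state then ends up looking roughly like this simplified sketch (field names are inferred from the commit titles, so treat them as assumptions):

    typedef struct vhost_vdpa_shared {
        int device_fd;                            /* moved here by this patch */
        struct vhost_vdpa_iova_range iova_range;  /* usable IOVA window */
        VhostIOVATree *iova_tree;                 /* IOVA mappings used by SVQ */
        bool shadow_data;                         /* SVQ enabled for data vqs */
    } VhostVDPAShared;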
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20231221174322.3130442-7-eperezma@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
a6e823d4 | 21-Dec-2023 | Eugenio Pérez <eperezma@redhat.com>
vdpa: move shadow_data to vhost_vdpa_shared
Next patches will register the vhost_vdpa memory listener while the VM is migrating at the destination, so we can map the memory to the device before stopping the VM at the source. The main goal is to reduce the downtime.
However, the destination QEMU is unaware of which vhost_vdpa device will register its memory_listener. If the source guest has CVQ enabled, it will be the CVQ device. Otherwise, it will be the first one.
Move the shadow_data member to VhostVDPAShared so all vhost_vdpa can use it, rather than always in the first or last vhost_vdpa.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20231221174322.3130442-5-eperezma@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
ae25ff41 | 21-Dec-2023 | Eugenio Pérez <eperezma@redhat.com>
vdpa: move iova_range to vhost_vdpa_shared
Next patches will register the vhost_vdpa memory listener while the VM is migrating at the destination, so we can map the memory to the device before stopping the VM at the source. The main goal is to reduce the downtime.
However, the destination QEMU is unaware of which vhost_vdpa device will register its memory_listener. If the source guest has CVQ enabled, it will be the CVQ device. Otherwise, it will be the first one.
Move the iova range to VhostVDPAShared so all vhost_vdpa can use it, rather than always in the first or last vhost_vdpa.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20231221174322.3130442-4-eperezma@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
5edb02e8 | 21-Dec-2023 | Eugenio Pérez <eperezma@redhat.com>
vdpa: move iova tree to the shared struct
Next patches will register the vhost_vdpa memory listener while the VM is migrating at the destination, so we can map the memory to the device before stopping the VM at the source. The main goal is to reduce the downtime.
However, the destination QEMU is unaware of which vhost_vdpa device will register its memory_listener. If the source guest has CVQ enabled, it will be the CVQ device. Otherwise, it will be the first one.
Move the iova tree to VhostVDPAShared so all vhost_vdpa can use it, rather than always in the first or last vhost_vdpa.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20231221174322.3130442-3-eperezma@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
84f85eb9 | 15-Nov-2023 | David Woodhouse <dwmw@amazon.co.uk>
net: do not delete nics in net_cleanup()
In net_cleanup() we only need to delete the netdevs, as those may have state which outlives Qemu when it exits, and thus may actually need to be cleaned up on exit.
The nics, on the other hand, are owned by the device which created them. Most devices don't bother to clean up on exit because they don't have any state which will outlive Qemu... but XenBus devices do need to clean up their nodes in XenStore, and do have an exit handler to delete them.
When the XenBus exit handler destroys the xen-net-device, it attempts to delete its nic after net_cleanup() has already done so, and crashes.
Fix this by only deleting netdevs as we walk the list. As the comment notes, we can't use QTAILQ_FOREACH_SAFE() as each deletion may remove *multiple* entries, including the "safely" saved 'next' pointer. But we can store the *previous* entry, since nics are safe.
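An illustrative sketch of that traversal, written only to show the previous-entry bookkeeping (not the actual patch):

    void net_cleanup(void)
    {
        NetClientState *nc, *nic = NULL;   /* 'nic' = last entry we kept */

        nc = QTAILQ_FIRST(&net_clients);
        while (nc) {
            if (nc->info->type == NET_CLIENT_DRIVER_NIC) {
                /* NICs are owned by their device; keep them, and remember
                 * this position because it cannot disappear under us. */
                nic = nc;
                nc = QTAILQ_NEXT(nc, next);
            } else {
                /* Deleting a netdev may remove several entries, so a saved
                 * 'next' pointer (QTAILQ_FOREACH_SAFE) is not trustworthy. */
                qemu_del_net_client(nc);
                nc = nic ? QTAILQ_NEXT(nic, next) : QTAILQ_FIRST(&net_clients);
            }
        }
    }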
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
Signed-off-by: Jason Wang <jasowang@redhat.com>
9050f976 | 31-May-2023 | Akihiko Odaki <akihiko.odaki@daynix.com>
net: Update MemReentrancyGuard for NIC
Recently MemReentrancyGuard was added to DeviceState to record that the device is engaging in I/O. The network device backend needs to update it when delivering a packet to a device.
This implementation follows what the bottom half does, but it does not add a tracepoint for the case where the network device backend starts delivering a packet to a device that is already engaging in I/O, because such reentrancy frequently happens for qemu_flush_queued_packets() and is insignificant.
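The pattern being described, modelled on what aio_bh_call() does for bottom halves; the guard location on the NetClientState and the helper shown here are assumptions for illustration, not the actual patch:

    static ssize_t nic_deliver_sketch(NetClientState *nc,
                                      const uint8_t *buf, size_t size)
    {
        MemReentrancyGuard *guard = nc->reentrancy_guard;  /* assumed member */
        bool was_engaged = guard->engaged_in_io;
        ssize_t ret;

        guard->engaged_in_io = true;          /* device is now doing I/O */
        ret = nc->info->receive(nc, buf, size);
        guard->engaged_in_io = was_engaged;   /* restore the previous state */
        return ret;
    }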
Fixes: CVE-2023-3019
Reported-by: Alexander Bulekov <alxndr@bu.edu>
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Acked-by: Alexander Bulekov <alxndr@bu.edu>
Signed-off-by: Jason Wang <jasowang@redhat.com>
07eba949 | 24-Oct-2023 | Hawkins Jiawei <yin31149@gmail.com>
vdpa: Allow VIRTIO_NET_F_RSS in SVQ
Enable SVQ with VIRTIO_NET_F_RSS feature.
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Message-Id: <626449eb303207de408126b3dc7c155cd72b028b.1698195059.git.yin31149@gmail.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
b3c09106 | 24-Oct-2023 | Hawkins Jiawei <yin31149@gmail.com>
vdpa: Restore receive-side scaling state
This patch reuses vhost_vdpa_net_load_rss(), with some refactoring, to restore the receive-side scaling state at device startup.
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Message-Id: <cf5b78a16ed0318982ceffb195f2227f6aad4ac1.1698195059.git.yin31149@gmail.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
d1fd2d31 | 24-Oct-2023 | Hawkins Jiawei <yin31149@gmail.com>
vdpa: Add SetSteeringEBPF method for NetClientState
At present, to enable the VIRTIO_NET_F_RSS feature, eBPF must be loaded for the vhost backend.
Given that vhost-vdpa is one of the vhost backends, we need to implement the SetSteeringEBPF method to support RSS for vhost-vdpa, even though vhost-vdpa calculates the RSS hash in the hardware device rather than in the kernel via eBPF.
Although this requires QEMU to be compiled with the `--enable-bpf` configuration even when the vdpa device does not use eBPF to calculate the RSS hash, it avoids adding vDPA-specific conditional statements to enable the VIRTIO_NET_F_RSS feature, which keeps the code more maintainable.
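A sketch of what such a method can boil down to for vhost-vdpa, where the device computes the hash itself and the callback only has to report success (the hook and function names are assumptions, not copied from the patch):

    static bool vhost_vdpa_set_steering_ebpf(NetClientState *nc, int prog_fd)
    {
        /* The RSS hash is calculated by the vDPA hardware, so there is no
         * eBPF program to attach; accepting the request is enough to let
         * virtio-net keep VIRTIO_NET_F_RSS enabled. */
        return true;
    }

    /* wired into the backend's NetClientInfo, e.g.:
     *     .set_steering_ebpf = vhost_vdpa_set_steering_ebpf,
     */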
Suggested-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Message-Id: <280e20ddce55b6de60f1552ba0865bffffe909b2.1698195059.git.yin31149@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
556b67d4 | 24-Oct-2023 | Hawkins Jiawei <yin31149@gmail.com>
vdpa: Allow VIRTIO_NET_F_HASH_REPORT in SVQ
Enable SVQ with VIRTIO_NET_F_HASH_REPORT feature.
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Message-Id: <d66b0aee501cdad7954231900c35a11cad1e13db.1698194366.git.yin31149@gmail.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>