a81d8d4a | 07-Apr-2022 |
Kevin Wolf <kwolf@redhat.com> |
vhost-user: Don't pass file descriptor for VHOST_USER_REM_MEM_REG
The spec now clarifies that QEMU should not send a file descriptor in a request to remove a memory region. Change it accordingly.
For libvhost-user, this is a bug fix that makes it compatible with rust-vmm's implementation, which doesn't send a file descriptor. Keep accepting, but ignoring, a file descriptor for compatibility with older QEMU versions.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20220407133657.155281-4-kwolf@redhat.com>
Reviewed-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
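Below is a minimal sketch of the accept-but-ignore behaviour described above; the message struct and function names are illustrative, not the actual libvhost-user API:

    #include <stdbool.h>
    #include <unistd.h>

    /* Illustrative message layout: ancillary fds received with the payload. */
    typedef struct {
        int fds[8];
        int fd_num;
    } ExampleMsg;

    /* Handle a "remove memory region" request: no fd is needed to unmap the
     * region, so close and ignore anything an older sender attached. */
    static bool example_rem_mem_reg(ExampleMsg *msg)
    {
        for (int i = 0; i < msg->fd_num; i++) {
            close(msg->fds[i]);     /* accept, but ignore, for compatibility */
        }
        msg->fd_num = 0;

        /* ... look up and unmap the region based on the payload alone ... */
        return false;               /* no reply required */
    }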
|
eb99baa9 | 11-Jan-2022 |
David Hildenbrand <david@redhat.com> |
libvhost-user: Map shared RAM with MAP_NORESERVE to support virtio-mem with hugetlb
For fd-based shared memory, MAP_NORESERVE is only effective for hugetlb, otherwise it's ignored. Older Linux versions that didn't support reservation of huge pages ignored MAP_NORESERVE completely.
The first client to mmap a hugetlb fd without MAP_NORESERVE will trigger reservation of huge pages for the whole mmapped range. There are two cases to consider:
1) QEMU mapped RAM without MAP_NORESERVE
We're not dealing with a sparse mapping, huge pages for the whole range have already been reserved by QEMU. An additional mmap() without MAP_NORESERVE won't have any effect on the reservation.
2) QEMU mapped RAM with MAP_NORESERVE
We're dealing with a sparse mapping; no huge pages should be reserved. Further mappings without MAP_NORESERVE should be avoided.
For 1), it doesn't matter if we set MAP_NORESERVE or not, so we can simply set it. For 2), we'd be overriding QEMU's decision and trigger reservation of huge pages, which might just fail if there are not sufficient huge pages around. We must map with MAP_NORESERVE.
This change is required to support virtio-mem with hugetlb: a virtio-mem device mapped into the guest physical memory corresponds to a sparse memory mapping and QEMU maps this memory with MAP_NORESERVE. Whenever memory in that sparse region is going to be accessed by the VM, QEMU populates huge pages for the affected range by preallocating memory and handling any preallocation errors gracefully.
So let's map shared RAM with MAP_NORESERVE. As libvhost-user only supports Linux, there is nothing to take care of regarding other OS support.
Without this change, libvhost-user will fail to map the region if there are currently not enough huge pages to perform the reservation:
    fv_panic: libvhost-user: region mmap error: Cannot allocate memory
Cc: "Marc-André Lureau" <marcandre.lureau@redhat.com> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Raphael Norwitz <raphael.norwitz@nutanix.com> Cc: Stefan Hajnoczi <stefanha@redhat.com> Cc: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20220111123939.132659-1-david@redhat.com> Acked-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
|
4fafedc9 | 16-Jan-2022 |
Raphael Norwitz <raphael.norwitz@nutanix.com> |
libvhost-user: handle removal of identical regions
Today if QEMU (or any other VMM) has sent multiple copies of the same region to a libvhost-user based backend and then attempts to remove the region, only one instance of the region will be removed, leaving stale copies of the region in dev->regions[].
This change resolves this by having vu_rem_mem_reg() iterate through all regions in dev->regions[] and delete all matching regions.
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
Message-Id: <20220117041050.19718-7-raphael.norwitz@nutanix.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
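A sketch of the "remove every matching copy" loop described above, using illustrative structures rather than the real dev->regions[] layout:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint64_t gpa;     /* guest physical address    */
        uint64_t size;
        uint64_t qva;     /* front-end virtual address */
    } ExampleRegion;

    typedef struct {
        ExampleRegion regions[32];
        unsigned nregions;
    } ExampleDev;

    /* Remove *all* copies of a region, not just the first match, by
     * compacting the array in place while scanning it once. */
    static void example_rem_all_matching(ExampleDev *dev, const ExampleRegion *key)
    {
        unsigned kept = 0;

        for (unsigned i = 0; i < dev->nregions; i++) {
            ExampleRegion *r = &dev->regions[i];
            bool match = r->gpa == key->gpa && r->size == key->size &&
                         r->qva == key->qva;
            if (!match) {
                dev->regions[kept++] = *r;   /* keep non-matching entries */
            }
            /* matching entries are simply not copied forward */
        }
        dev->nregions = kept;
    }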
|
b906a23c | 16-Jan-2022 |
Raphael Norwitz <raphael.norwitz@nutanix.com> |
libvhost-user: prevent over-running max RAM slots
When VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS support was added to libvhost-user, no guardrails were added to protect against QEMU attempting to hot-add too many RAM slots to a VM with a libvhost-user based backend attached.
This change adds the missing error handling by introducing a check on the number of RAM slots the device has available before proceeding to process the VHOST_USER_ADD_MEM_REG message.
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
Message-Id: <20220117041050.19718-6-raphael.norwitz@nutanix.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
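A sketch of the kind of guard this commit describes: refuse VHOST_USER_ADD_MEM_REG once every available RAM slot is in use (the limit and names are illustrative):

    #include <stdbool.h>
    #include <stdio.h>

    #define EXAMPLE_MAX_MEM_SLOTS 32   /* illustrative slot limit */

    typedef struct {
        unsigned nregions;
    } ExampleDev;

    /* Check the slot count before processing an "add memory region"
     * request instead of silently overrunning the regions array. */
    static bool example_add_mem_reg_checked(ExampleDev *dev)
    {
        if (dev->nregions == EXAMPLE_MAX_MEM_SLOTS) {
            fprintf(stderr, "No free slots to add a new memory region\n");
            return false;              /* fail the request gracefully */
        }

        /* ... map the new region and store it as dev->regions[nregions++] ... */
        return true;
    }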
|
fa3d5483 | 16-Jan-2022 |
David Hildenbrand <david@redhat.com> |
libvhost-user: fix VHOST_USER_REM_MEM_REG not closing the fd
We end up not closing the file descriptor, resulting in leaking one file descriptor for each VHOST_USER_REM_MEM_REG message.
Fixes: 875b9fd97b34 ("Support individual region unmap in libvhost-user")
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Raphael Norwitz <raphael.norwitz@nutanix.com>
Cc: "Marc-André Lureau" <marcandre.lureau@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Coiby Xu <coiby.xu@gmail.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
Message-Id: <20220117041050.19718-5-raphael.norwitz@nutanix.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
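A minimal sketch of the fix described above: once the region named in the message has been unmapped, the received descriptor must be closed so it no longer leaks (names illustrative):

    #include <unistd.h>

    /* After the region referenced by the message has been unmapped, the
     * ancillary fd is no longer needed; closing it here is what stops one
     * descriptor leaking per VHOST_USER_REM_MEM_REG message. */
    static void example_finish_rem_mem_reg(int region_fd)
    {
        /* ... munmap() of the region happens before this point ... */
        close(region_fd);
    }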
|
4fd5ca82 | 16-Jan-2022 |
David Hildenbrand <david@redhat.com> |
libvhost-user: Simplify VHOST_USER_REM_MEM_REG
Let's avoid having to manually copy all elements. Copy only the ones necessary to close the hole and perform the operation in-place without a second array.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
Message-Id: <20220117041050.19718-4-raphael.norwitz@nutanix.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
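A sketch of the in-place "close the hole" operation described above: shift only the trailing elements down instead of rebuilding the array in a second copy (structures illustrative):

    #include <stdint.h>
    #include <string.h>

    typedef struct {
        uint64_t gpa, size, qva;
    } ExampleRegion;

    typedef struct {
        ExampleRegion regions[32];
        unsigned nregions;
    } ExampleDev;

    /* Drop the region at index `idx` by moving the elements behind it one
     * slot forward; no temporary second array is needed. */
    static void example_remove_at(ExampleDev *dev, unsigned idx)
    {
        memmove(&dev->regions[idx], &dev->regions[idx + 1],
                (dev->nregions - idx - 1) * sizeof(dev->regions[0]));
        dev->nregions--;
    }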
|
9f4e6349 | 16-Jan-2022 |
Raphael Norwitz <raphael.norwitz@nutanix.com> |
libvhost-user: Add vu_add_mem_reg input validation
Today, if multiple FDs are sent from the VMM to the backend in a VHOST_USER_ADD_MEM_REG message, one FD will be mapped and the remaining FDs will be leaked. Therefore, if multiple FDs are sent, we report an error and fail the operation, closing all FDs in the message.
Likewise, if the VMM sends a message with a size smaller than that of a memory region descriptor, we add a check to gracefully report an error and fail the operation rather than crashing.
Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
Message-Id: <20220117041050.19718-3-raphael.norwitz@nutanix.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
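A sketch of the two validations described: reject messages that carry more than one fd (closing them all so none leaks) and messages smaller than a region descriptor (types and sizes are illustrative):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    typedef struct {
        uint64_t gpa, size, qva, mmap_offset;
    } ExampleMemRegDesc;

    typedef struct {
        size_t payload_size;
        int fds[8];
        int fd_num;
    } ExampleMsg;

    /* Validate an "add memory region" message before touching any mapping. */
    static bool example_add_mem_reg_validate(ExampleMsg *msg)
    {
        if (msg->fd_num != 1) {
            fprintf(stderr, "Expected exactly one fd, got %d\n", msg->fd_num);
            for (int i = 0; i < msg->fd_num; i++) {
                close(msg->fds[i]);    /* close everything so nothing leaks */
            }
            msg->fd_num = 0;
            return false;
        }

        if (msg->payload_size < sizeof(ExampleMemRegDesc)) {
            fprintf(stderr, "Payload too small for a memory region descriptor\n");
            close(msg->fds[0]);
            msg->fd_num = 0;
            return false;
        }

        return true;    /* safe to parse the descriptor and mmap the fd */
    }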
|
4fe29344 | 05-May-2021 |
Marc-André Lureau <marcandre.lureau@redhat.com> |
libvhost-user: fix -Werror=format= warnings with __u64 fields
    ../subprojects/libvhost-user/libvhost-user.c:1070:12: error: format ‘%lx’ expects argument of type ‘long unsigned int’, but argument 3 has type ‘__u64’ {aka ‘long long unsigned int’} [-Werror=format=]
     1070 | DPRINT(" desc_user_addr: 0x%016" PRIx64 "\n", vra->desc_user_addr);
          |        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~
          |                                       |
          |                                       __u64 {aka long long unsigned int}
Rather than using %llx, which may fail if __u64 is declared differently elsewhere, let's just cast the values. Feel free to propose a better solution!
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20210505151313.203258-2-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
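A sketch of the casting approach the message settles on: cast the kernel-typed fields to uint64_t so the PRIx64 specifier always matches, however __u64 happens to be defined (the struct here is a stand-in, not the real vhost_vring_addr):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for a kernel struct whose fields are __u64, which may be
     * 'unsigned long' or 'unsigned long long' depending on the platform. */
    struct example_vring_addr {
        unsigned long long desc_user_addr;
    };

    static void example_print(const struct example_vring_addr *vra)
    {
        /* The cast keeps the PRIx64 format specifier correct regardless
         * of how the kernel header defines __u64. */
        printf("desc_user_addr: 0x%016" PRIx64 "\n",
               (uint64_t)vra->desc_user_addr);
    }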
|
e0193568 | 25-Nov-2020 |
Marc-André Lureau <marcandre.lureau@redhat.com> |
libvhost-user: add a simple link test without glib
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20201125100640.366523-8-marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|