b9b72cb3 | 08-Aug-2024 | Eric Blake <eblake@redhat.com>
nbd/server: CVE-2024-7409: Drop non-negotiating clients
A client that opens a socket but does not negotiate is merely hogging qemu's resources (an open fd and a small amount of memory); and a malicious client that can access the port where NBD is listening can attempt a denial of service attack by intentionally opening and abandoning lots of unfinished connections. The previous patch put a default bound on the number of such ongoing connections, but once that limit is hit, no more clients can connect (including legitimate ones). The solution is to insist that clients complete the handshake within a reasonable time limit, defaulting to 10 seconds. A client that has not successfully completed NBD_OPT_GO by then (including the case where the client didn't know the TLS credentials to even reach the point of NBD_OPT_GO) is wasting our time and does not deserve to stay connected. Later patches will allow fine-tuning the limit away from the default value (including disabling it for doing integration testing of the handshake process itself).
Note that this patch in isolation actually makes it more likely to see qemu SEGV after nbd-server-stop, as any client socket still connected when the server shuts down will now be closed after 10 seconds rather than at the client's whims. That will be addressed in the next patch.
For a demo of this patch in action:

$ qemu-nbd -f raw -r -t -e 10 file &
$ nbdsh --opt-mode -c '
H = list()
for i in range(20):
  print(i)
  H.insert(i, nbd.NBD())
  H[i].set_opt_mode(True)
  H[i].connect_uri("nbd://localhost")
'
$ kill $!
where later connections get to start progressing once earlier ones are forcefully dropped for taking too long, rather than hanging.
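The mechanism can be sketched outside qemu with plain POSIX primitives (an illustrative model only — the patch itself arms a per-client QEMU timer rather than polling a table; the names and the polling approach below are assumptions of the sketch):

    #include <stddef.h>
    #include <time.h>
    #include <unistd.h>

    #define HANDSHAKE_MAX_SECS 10      /* mirrors the patch's 10-second default */

    struct pending_client {
        int fd;                        /* -1 when the slot is unused */
        time_t connected_at;           /* when the socket was accepted */
        int negotiated;                /* set once NBD_OPT_GO succeeds */
    };

    /* Called periodically from the server loop: reap any client that is
     * still mid-handshake after the deadline, freeing its fd for the
     * next (possibly legitimate) connection attempt. */
    static void drop_stale_clients(struct pending_client *c, size_t n)
    {
        time_t now = time(NULL);

        for (size_t i = 0; i < n; i++) {
            if (c[i].fd >= 0 && !c[i].negotiated &&
                now - c[i].connected_at >= HANDSHAKE_MAX_SECS) {
                close(c[i].fd);
                c[i].fd = -1;
            }
        }
    }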
Suggested-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-ID: <20240807174943.771624-13-eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
[eblake: rebase to changes earlier in series, reduce scope of timer]
Signed-off-by: Eric Blake <eblake@redhat.com>

fb1c2aaa | 07-Aug-2024 | Eric Blake <eblake@redhat.com>
nbd/server: Plumb in new args to nbd_client_add()
Upcoming patches to fix a CVE need to track an opaque pointer passed in by the owner of a client object, as well as a request for a time limit on how fast negotiation must complete. Prepare for that by changing the signature of nbd_client_new() and adding an accessor to get at the opaque pointer, although for now the two servers (qemu-nbd.c and blockdev-nbd.c) do not change behavior even though they pass in a new default timeout value.
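In sketch form, the new shape of the API looks like this (stand-in forward declarations; parameter order is an assumption of the sketch — consult include/block/nbd.h for the real prototypes):

    #include <stdbool.h>

    #define NBD_DEFAULT_HANDSHAKE_MAX_SECS 10      /* per the commit's default */

    typedef struct NBDClient NBDClient;
    typedef struct QIOChannelSocket QIOChannelSocket;
    typedef struct QCryptoTLSCreds QCryptoTLSCreds;

    void nbd_client_new(QIOChannelSocket *sioc,
                        unsigned handshake_max_secs, /* new: negotiation deadline */
                        QCryptoTLSCreds *tlscreds,
                        const char *tlsauthz,
                        void (*close_fn)(NBDClient *, bool),
                        void *owner);                /* new: opaque owner pointer */

    void *nbd_client_owner(NBDClient *client);       /* new accessor */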
Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-ID: <20240807174943.771624-11-eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
[eblake: s/LIMIT/MAX_SECS/ as suggested by Dan]
Signed-off-by: Eric Blake <eblake@redhat.com>

4fa333e0 | 08-Apr-2024 | Eric Blake <eblake@redhat.com>
nbd/server: Mark negotiation functions as coroutine_fn
nbd_negotiate() is already marked coroutine_fn. And given the fix in the previous patch to have nbd_negotiate_handle_starttls not create and wait on a g_main_loop (as that would violate coroutine constraints), it is worth marking the rest of the related static functions reachable only during option negotiation as also being coroutine_fn.
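As an illustration of the pattern being applied (the marker is stubbed here so the snippet stands alone; in qemu it comes from the coroutine headers and is enforced by static analysis, and the helper name is hypothetical):

    #define coroutine_fn  /* may yield; callable only from coroutine context */

    typedef struct NBDClient NBDClient;

    /* The kind of annotation this patch adds to the static negotiation
     * helpers reachable only from nbd_negotiate(): */
    static coroutine_fn int nbd_negotiate_example(NBDClient *client)
    {
        (void)client;
        return 0;         /* real helpers do channel I/O and may yield */
    }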
Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-ID: <20240408160214.1200629-6-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
[eblake: drop one spurious coroutine_fn marking]
Signed-off-by: Eric Blake <eblake@redhat.com>

7075d235 | 21-Dec-2023 | Stefan Hajnoczi <stefanha@redhat.com>
nbd/server: introduce NBDClient->lock to protect fields
NBDClient has a number of fields that are accessed by both the export AioContext and the main loop thread. When the AioContext lock is removed these fields will need another form of protection.
Add NBDClient->lock and protect fields that are accessed by both threads. Also add assertions where possible, and otherwise add doc comments stating the assumptions about which thread a field is accessed from and which lock must be held.
Note this patch moves the client->recv_coroutine assertion from nbd_co_receive_request() to nbd_trip() where client->lock is held.
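Sketched with plain pthreads (qemu itself uses QemuMutex and lock guards; the field set shown here is illustrative, not the full NBDClient):

    #include <pthread.h>
    #include <stdbool.h>

    typedef struct NBDClientSketch {
        int refcount;              /* atomic; touched by either thread */

        pthread_mutex_t lock;

        /* Fields accessed by both the export AioContext and the main
         * loop thread; only touch them with lock held. */
        bool closing;
        void *recv_coroutine;
    } NBDClientSketch;

    static bool client_is_closing(NBDClientSketch *client)
    {
        pthread_mutex_lock(&client->lock);
        bool closing = client->closing;    /* read under lock */
        pthread_mutex_unlock(&client->lock);
        return closing;
    }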
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20231221192452.1785567-7-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

f816310d | 21-Dec-2023 | Stefan Hajnoczi <stefanha@redhat.com>
nbd/server: only traverse NBDExport->clients from main loop thread
The NBD clients list is currently accessed from both the export AioContext and the main loop thread. When the AioContext lock is removed there will be nothing protecting the clients list.
Adding a lock around the clients list is tricky because NBDClient structs are refcounted and may be freed from the export AioContext or the main loop thread. nbd_export_request_shutdown() -> client_close() -> nbd_client_put() is also tricky because the list lock would be held while indirectly dropping references to NBDClients.
A simpler approach is to only allow nbd_client_put() and client_close() calls from the main loop thread. Then the NBD clients list is only accessed from the main loop thread and no fancy locking is needed.
nbd_trip() just needs to reschedule itself in the main loop AioContext before calling nbd_client_put() and client_close(). This costs more CPU cycles per NBD request so add nbd_client_put_nonzero() to optimize the common case where more references to NBDClient remain.
Note that nbd_client_get() can still be called from either thread, so make NBDClient->refcount atomic.
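The fast path can be sketched with C11 atomics (the function name follows the commit; the surrounding slow path that reschedules into the main loop is omitted, and the real function takes an NBDClient pointer):

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Drop a reference only if the count provably stays nonzero; if the
     * count would hit zero, return false so the caller can bounce to the
     * main loop thread and run the full nbd_client_put() there. */
    static bool nbd_client_put_nonzero(atomic_int *refcount)
    {
        int old = atomic_load(refcount);

        do {
            if (old == 1) {
                return false;          /* would free: take the slow path */
            }
        } while (!atomic_compare_exchange_weak(refcount, &old, old - 1));

        return true;                   /* dropped; object still alive */
    }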
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20231221192452.1785567-6-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

2dcbb11b | 25-Sep-2023 | Eric Blake <eblake@redhat.com>
nbd/server: Add FLAG_PAYLOAD support to CMD_BLOCK_STATUS
Allow a client to request a subset of negotiated meta contexts. For example, a client may ask to use a single connection to learn about both block status and dirty bitmaps, but where the dirty bitmap queries only need to be performed on a subset of the disk; forcing the server to compute that information on block status queries in the rest of the disk is wasted effort (both at the server, and on the amount of traffic sent over the wire to be parsed and ignored by the client).
Qemu as an NBD client never requests to use more than one meta context, so it has no need to use block status payloads. Testing this instead requires support from libnbd, which CAN access multiple meta contexts in parallel from a single NBD connection; an interop test submitted to the libnbd project at the same time as this patch demonstrates the feature working, as well as testing some corner cases (for example, when the payload length is longer than the export length), although other corner cases (like passing the same id duplicated) require a protocol fuzzer because libnbd is not wired up to break the protocol that badly.
This also includes tweaks to 'qemu-nbd --list' to show when a server is advertising the capability, and to the testsuite to reflect the addition to that output.
Of note: qemu will always advertise the new feature bit during NBD_OPT_INFO if extended headers have already been negotiated (regardless of whether any NBD_OPT_SET_META_CONTEXT negotiation has occurred); but for NBD_OPT_GO, qemu only advertises the feature if block status is also enabled (that is, if the client does not negotiate any contexts, then NBD_CMD_BLOCK_STATUS cannot be used, so the feature is not advertised).
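A sketch of the payload-length validation, assuming the layout from the extended-headers extension (a 64-bit effect length followed by one 32-bit id per requested context); the exact bounds qemu enforces may differ:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    static bool block_status_payload_len_ok(size_t payload_len,
                                            size_t negotiated_contexts)
    {
        size_t hdr = sizeof(uint64_t);                 /* effect length */
        size_t max = hdr + negotiated_contexts * sizeof(uint32_t);

        return payload_len >= hdr + sizeof(uint32_t)   /* at least one id */
            && payload_len <= max
            && (payload_len - hdr) % sizeof(uint32_t) == 0;
    }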
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-ID: <20230925192229.3186470-26-eblake@redhat.com>
[eblake: fix logic to reject unnegotiated contexts]
Signed-off-by: Eric Blake <eblake@redhat.com>

1dec4643 | 25-Sep-2023 | Eric Blake <eblake@redhat.com>
nbd/server: Prepare for per-request filtering of BLOCK_STATUS
The next commit will add support for the optional extension NBD_CMD_FLAG_PAYLOAD during NBD_CMD_BLOCK_STATUS, where the client can request that the server only return a subset of negotiated contexts, rather than all contexts. To make that task easier, this patch populates the list of contexts to return on a per-command basis (for now, identical to the full set of negotiated contexts).
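In outline (types and names are stand-ins for the sketch), the change moves the context list from per-client state into each request:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct MetaContexts {
        size_t count;          /* number of contexts to report */
        bool *bitmaps;         /* which negotiated contexts are active */
    } MetaContexts;

    typedef struct BlockStatusRequest {
        uint64_t from, len;
        MetaContexts contexts; /* per command; for now a copy of the
                                * client's full negotiated set */
    } BlockStatusRequest;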
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-ID: <20230925192229.3186470-25-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

fd358d83 | 25-Sep-2023 | Eric Blake <eblake@redhat.com>
nbd/server: Refactor list of negotiated meta contexts
Perform several minor refactorings of how the list of negotiated meta contexts is managed, to make upcoming patches easier: Promote the internal type NBDExportMetaContexts to the public opaque type NBDMetaContexts, and mark exp const. Use a shorter member name in NBDClient. Hoist calls to nbd_check_meta_context() earlier in their callers, as the number of negotiated contexts may impact the flags exposed for an export, which in turn requires a new parameter. Drop a redundant parameter to nbd_negotiate_meta_queries. No semantic change intended on the success path; on the failure path, dropping context in nbd_check_meta_export even when reporting an error is safer.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-ID: <20230925192229.3186470-24-eblake@redhat.com>

56cf9d04 | 25-Sep-2023 | Eric Blake <eblake@redhat.com>
nbd/client: Request extended headers during negotiation
All the pieces are in place for a client to finally request extended headers. Note that we must not request extended headers when qemu-nbd is used to connect to the kernel module (as nbd.ko does not expect them, but expects us to do the negotiation in userspace before handing the socket over to the kernel), but there is no harm in all other clients requesting them.
Extended headers are not essential to the information collected during 'qemu-nbd --list', but probing for it gives us one more piece of information in that output. Update the iotests affected by the new line of output.
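The decision reduces to a sketch like this (enum and names are illustrative; qemu's negotiation code keys off whether the socket will be handed to nbd.ko):

    #include <stdbool.h>

    typedef enum { MODE_SIMPLE, MODE_STRUCTURED, MODE_EXTENDED } MaxMode;

    static MaxMode max_mode_to_request(bool handing_socket_to_kernel)
    {
        /* nbd.ko does not understand extended headers, so when qemu-nbd
         * only performs userspace negotiation before handing the socket
         * to the kernel, stop at structured replies. */
        return handing_socket_to_kernel ? MODE_STRUCTURED : MODE_EXTENDED;
    }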
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-ID: <20230925192229.3186470-23-eblake@redhat.com>

4fc55bf3 | 25-Sep-2023 | Eric Blake <eblake@redhat.com>
nbd/client: Initial support for extended headers
Update the client code to be able to send an extended request, and parse an extended header from the server. Note that since we reject any structured reply with a too-large payload, we can always normalize a valid header back into the compact form, so that the caller need not deal with two branches of a union. Still, until a later patch lets the client negotiate extended headers, the code added here should not be reached. Note that because of the different magic numbers, it is just as easy to trace and then tolerate a non-compliant server sending the wrong header reply as it would be to insist that the server is compliant.
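The normalization argument can be made concrete with a sketch (field layouts simplified; qemu's real structs and magic-number handling differ in detail):

    #include <stdint.h>

    struct ExtendedHdr { uint16_t flags; uint16_t type; uint64_t cookie;
                         uint64_t offset; uint64_t length; };
    struct CompactHdr  { uint16_t flags; uint16_t type; uint64_t cookie;
                         uint32_t length; };

    /* Because any reply with a too-large payload is rejected before this
     * point, narrowing length can never truncate a value we keep. */
    static struct CompactHdr normalize(const struct ExtendedHdr *ext)
    {
        struct CompactHdr c = { ext->flags, ext->type, ext->cookie,
                                (uint32_t)ext->length };
        return c;
    }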
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-ID: <20230925192229.3186470-21-eblake@redhat.com>
[eblake: fix trace format]
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

9c1d2614 | 25-Sep-2023 | Eric Blake <eblake@redhat.com>
nbd/server: Enable initial support for extended headers
Time to start supporting clients that request extended headers. Now we can finally reach the code added across several previous patches.
Even though the NBD spec has been altered to allow us to accept NBD_CMD_READ larger than the max payload size (provided our response is a hole or broken up over more than one data chunk), we are not planning to take advantage of that, and continue to cap NBD_CMD_READ to 32M regardless of header size.
For NBD_CMD_WRITE_ZEROES and NBD_CMD_TRIM, the block layer already supports 64-bit operations without any effort on our part. For NBD_CMD_BLOCK_STATUS, the client's length is a hint, and the previous patch took care of implementing the required NBD_REPLY_TYPE_BLOCK_STATUS_EXT.
We do not yet support clients that want to do request payload filtering of NBD_CMD_BLOCK_STATUS; that will be added in later patches, but is not essential for qemu as a client since qemu only requests the single context base:allocation.
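The resulting policy, in sketch form (command values per the NBD spec; the helper itself is illustrative, not qemu's actual check):

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_BUFFER_SIZE (32 * 1024 * 1024)   /* qemu's 32M payload cap */

    static bool effect_len_ok(uint16_t cmd, uint64_t len, bool extended)
    {
        switch (cmd) {
        case 0: case 1:           /* NBD_CMD_READ, NBD_CMD_WRITE */
            return len <= MAX_BUFFER_SIZE;       /* capped in every mode */
        case 4: case 6: case 7:   /* TRIM, WRITE_ZEROES, BLOCK_STATUS */
            /* 64-bit lengths only once extended headers are negotiated */
            return extended || len <= UINT32_MAX;
        default:
            return len <= UINT32_MAX;
        }
    }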
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-ID: <20230925192229.3186470-19-eblake@redhat.com>

bcc16cc1 | 25-Sep-2023 | Eric Blake <eblake@redhat.com>
nbd/server: Support 64-bit block status
The NBD spec states that if the client negotiates extended headers, the server must avoid NBD_REPLY_TYPE_BLOCK_STATUS and instead use NBD_REPLY_TYPE_BLOCK_STATUS_EXT which supports 64-bit lengths, even if the reply does not need more than 32 bits. As of this patch, client->mode is still never NBD_MODE_EXTENDED, so the code added here does not take effect until the next patch enables negotiation.
For now, all metacontexts that we know how to export never populate more than 32 bits of information, so we don't have to worry about NBD_REP_ERR_EXT_HEADER_REQD or filtering during handshake, and we always send all zeroes for the upper 32 bits of status during NBD_CMD_BLOCK_STATUS.
Note that we previously had some interesting size-juggling on call chains, such as:
nbd_co_send_block_status(uint32_t length)
-> blockstatus_to_extents(uint32_t bytes)
-> bdrv_block_status_above(bytes, &uint64_t num)
-> nbd_extent_array_add(uint64_t num)
-> store num in 32-bit length
But we were lucky that it never overflowed: bdrv_block_status_above never sets num larger than bytes, and we had previously been capping 'bytes' at 32 bits (since the protocol does not allow sending a larger request without extended headers). This patch adds some assertions that ensure we continue to avoid overflowing 32 bits for a narrow client, while fully utilizing 64 bits all the way through when the client understands that. Even in 64-bit math, overflow is not an issue, because all lengths are coming from the block layer, and we know that the block layer does not support images larger than off_t (if lengths were coming from the network, the story would be different).
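The two extent encodings and the narrow-client assertion, sketched (qemu names the structs NBDExtent32 and NBDExtent64; packing attributes and the real helper names are omitted):

    #include <assert.h>
    #include <stdint.h>

    struct Extent32 { uint32_t length; uint32_t flags; };
    struct Extent64 { uint64_t length; uint64_t flags; }; /* BLOCK_STATUS_EXT */

    /* For a client without extended headers, assert what used to be
     * implicit: nothing the block layer hands us overflows 32 bits. */
    static struct Extent32 extent_narrow(struct Extent64 e)
    {
        assert(e.length <= UINT32_MAX && e.flags <= UINT32_MAX);
        return (struct Extent32){ (uint32_t)e.length, (uint32_t)e.flags };
    }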
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-ID: <20230925192229.3186470-18-eblake@redhat.com>

11d3355f | 25-Sep-2023 | Eric Blake <eblake@redhat.com>
nbd/server: Prepare to send extended header replies
Although extended mode is not yet enabled, once we do turn it on, we need to reply with extended headers to all messages. Update the low level entry points necessary so that all other callers automatically get the right header based on the current mode.
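In outline, the low-level senders now key the header format off the negotiated mode (magic values and header sizes per the NBD spec; the helper itself is an illustration, not qemu's code):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define STRUCTURED_REPLY_MAGIC 0x668e33efU /* 20-byte header, 32-bit length */
    #define EXTENDED_REPLY_MAGIC   0x6e8a278cU /* 32-byte header, 64-bit length */

    static void reply_header_for_mode(bool extended,
                                      uint32_t *magic, size_t *hdr_len)
    {
        *magic   = extended ? EXTENDED_REPLY_MAGIC : STRUCTURED_REPLY_MAGIC;
        *hdr_len = extended ? 32 : 20;
    }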
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-ID: <20230925192229.3186470-17-eblake@redhat.com>

c8720ca0 | 25-Sep-2023 | Eric Blake <eblake@redhat.com>
nbd/server: Prepare to receive extended header requests
Although extended mode is not yet enabled, once we do turn it on, we need to accept extended requests for all messages. Previous patches have already taken care of supporting 64-bit lengths; now we just need to read them off the wire.
Note that this implementation will block indefinitely on a buggy client that sends a non-extended payload (that is, we try to read a full packet before we ever check the magic number, but a client that mistakenly sends a simple request after negotiating extended headers doesn't send us enough bytes), but it's no different from any other client that stops talking to us partway through a packet, and is thus not worth coding around.
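The two request layouts, per the NBD spec (the in-memory struct is a sketch; the server reads a fixed-size header chosen by the negotiated mode and only then validates the magic, which is why a client sending the wrong format simply stalls the read):

    #include <stdint.h>

    #define REQUEST_MAGIC          0x25609513U /* compact: 28 bytes, 32-bit length */
    #define EXTENDED_REQUEST_MAGIC 0x21e41c71U /* extended: 32 bytes, 64-bit length */

    struct RequestOnWire {      /* normalized in-memory form */
        uint16_t flags;
        uint16_t type;
        uint64_t cookie;
        uint64_t from;
        uint64_t len;           /* read as 32 or 64 bits depending on mode */
    };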
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-ID: <20230925192229.3186470-16-eblake@redhat.com>

8db7e2d6 | 29-Aug-2023 | Eric Blake <eblake@redhat.com>
nbd/server: Refactor handling of command sanity checks
Upcoming additions to support NBD 64-bit effect lengths will add a new command flag NBD_CMD_FLAG_PAYLOAD_LEN that needs to be considered in our sanity checks of the client's messages (that is, more than just CMD_WRITE have the potential to carry a client payload when extended headers are in effect). But before we can start to support that, it is easier to first refactor the existing set of various if statements over open-coded combinations of request->type into a single switch statement over all command types that sets witness variables, followed by straight-line processing based on those witnesses. No semantic change is intended.
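The shape of the refactor (command values per the NBD spec; the witness names are invented for this sketch):

    #include <stdbool.h>
    #include <stdint.h>

    static int request_sanity_check(uint16_t type, bool export_writable)
    {
        bool payload_expected = false; /* client payload follows header? */
        bool needs_write      = false; /* export must be writable? */

        switch (type) {
        case 1:                        /* NBD_CMD_WRITE */
            payload_expected = true;
            needs_write = true;
            break;
        case 4: case 6:                /* NBD_CMD_TRIM, NBD_CMD_WRITE_ZEROES */
            needs_write = true;
            break;
        default:
            break;
        }

        /* Straight-line checks based on the witnesses: */
        if (needs_write && !export_writable) {
            return -1;                 /* the real server answers NBD_EPERM */
        }
        (void)payload_expected;        /* feeds later payload-length checks */
        return 0;
    }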
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-ID: <20230829175826.377251-24-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

b2578459 | 29-Aug-2023 | Eric Blake <eblake@redhat.com>
nbd: Prepare for 64-bit request effect lengths
Widen the length field of NBDRequest to 64-bits, although we can assert that all current uses are still under 32 bits: either because of NBD_MAX_BUFFER_SIZE which is even smaller (and where size_t can still be appropriate, even on 32-bit platforms), or because nothing ever puts us into NBD_MODE_EXTENDED yet (and while future patches will allow larger transactions, the lengths in play here are still capped at 32-bit). There are no semantic changes, other than a typo fix in a couple of error messages.
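The invariant the new assertions capture, in sketch form (struct and helper are illustrative stand-ins for NBDRequest and the scattered asserts):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct ReqSketch {
        uint64_t from;
        uint64_t len;      /* widened by this patch; formerly uint32_t */
    } ReqSketch;

    static void assert_still_compact(const ReqSketch *req, bool extended_mode)
    {
        if (!extended_mode) {
            /* Nothing negotiates NBD_MODE_EXTENDED yet, so every length
             * in play must still fit the old 32-bit field. */
            assert(req->len <= UINT32_MAX);
        }
    }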
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-ID: <20230829175826.377251-23-eblake@redhat.com>
[eblake: fix assertion bug in nbd_co_send_simple_reply]
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

d95ffb6f | 29-Aug-2023 | Eric Blake <eblake@redhat.com>
nbd: Add types for extended headers
Add the constants and structs necessary for later patches to start implementing the NBD_OPT_EXTENDED_HEADERS extension in both the client and server, matching recent upstream nbd.git (through commit e6f3b94a934). This patch does not change any existing behavior, but merely sets the stage for upcoming patches.
This patch does not change the status quo that neither the client nor server use a packed-struct representation for the request header. While most of the patch adds new types, there is also some churn for renaming the existing NBDExtent to NBDExtent32 to contrast it with NBDExtent64, which I thought was a nicer name than NBDExtentExt.
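The renamed pair, sketched (this matches the shape of qemu's NBDExtent32/NBDExtent64; the packed attribute on the real structs is omitted here):

    #include <stdint.h>

    typedef struct NBDExtent32 {    /* the old NBDExtent, renamed */
        uint32_t length;
        uint32_t flags;             /* NBD_STATE_* bits */
    } NBDExtent32;

    typedef struct NBDExtent64 {    /* new, for 64-bit replies */
        uint64_t length;
        uint64_t flags;
    } NBDExtent64;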
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-ID: <20230829175826.377251-22-eblake@redhat.com>

297365b4 | 29-Aug-2023 | Eric Blake <eblake@redhat.com>
nbd/client: Pass mode through to nbd_send_request
Once the 64-bit headers extension is enabled, the data layout we send over the wire for a client request depends on the mode negotiated with the server. Rather than adding a parameter to nbd_send_request, we can add a member to struct NBDRequest, since it already does not reflect on-wire format. Some callers initialize it directly; many others rely on a common initialization point during nbd_co_send_request(). At this point, there is no semantic change.
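Schematically (the enum and struct are stand-ins; only the new mode member is the point):

    #include <stdint.h>

    typedef enum { MODE_OLDSTYLE, MODE_SIMPLE,
                   MODE_STRUCTURED, MODE_EXTENDED } Mode;

    typedef struct NBDRequestSketch {
        uint64_t from;
        uint64_t len;
        uint16_t flags;
        uint16_t type;
        Mode mode;    /* new: set once in nbd_co_send_request() (or by the
                       * few direct callers); selects the on-wire layout */
    } NBDRequestSketch;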
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-ID: <20230829175826.377251-21-eblake@redhat.com>