#
fd5e9fcc |
| 23-Feb-2025 |
Andrew Jeffery <andrew@codeconstruct.com.au> |
Merge tag 'v6.6.79' into for/openbmc/dev-6.6
This is the 6.6.79 stable release
|
Revision tags: v6.6.79, v6.6.78 |
|
#
146a185f |
| 12-Feb-2025 |
Pavel Begunkov <asml.silence@gmail.com> |
io_uring/kbuf: reallocate buf lists on upgrade
commit 8802766324e1f5d414a81ac43365c20142e85603 upstream.
IORING_REGISTER_PBUF_RING can reuse an old struct io_buffer_list if it was created for legacy selected buffers and has since been emptied. That violates the requirement that most of its fields stay stable once published. Always reallocate it instead.
Cc: stable@vger.kernel.org
Reported-by: Pumpkin Chang <pumpkin@devco.re>
Fixes: 2fcabce2d7d34 ("io_uring: disallow mixed provided buffer group registrations")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
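To illustrate the "always reallocate" rule described above, here is a minimal, hedged C sketch; the struct layout and helper names are illustrative, not io_uring's actual internals:

    #include <linux/slab.h>
    #include <linux/err.h>
    #include <linux/types.h>

    /* Illustrative stand-in for the published buffer-list object. */
    struct buf_list {
        unsigned int bgid;
        bool is_mapped;     /* readers expect published fields to stay stable */
    };

    /* Never recycle 'old', even if it has been emptied: readers may already
     * hold a pointer to it and rely on its fields never changing. */
    static struct buf_list *upgrade_to_ring(struct buf_list *old, unsigned int bgid)
    {
        struct buf_list *bl = kzalloc(sizeof(*bl), GFP_KERNEL);

        if (!bl)
            return ERR_PTR(-ENOMEM);
        bl->bgid = bgid;
        bl->is_mapped = true;
        return bl;      /* caller publishes bl and retires 'old' separately */
    }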
|
Revision tags: v6.6.77, v6.6.76, v6.6.75, v6.6.74, v6.6.73, v6.6.72, v6.6.71, v6.12.9, v6.6.70, v6.12.8, v6.6.69, v6.12.7, v6.6.68, v6.12.6, v6.6.67, v6.12.5, v6.6.66, v6.6.65, v6.12.4, v6.6.64, v6.12.3, v6.12.2, v6.6.63, v6.12.1, v6.12, v6.6.62, v6.6.61, v6.6.60, v6.6.59, v6.6.58, v6.6.57, v6.6.56, v6.6.55, v6.6.54, v6.6.53, v6.6.52, v6.6.51, v6.6.50, v6.6.49, v6.6.48, v6.6.47, v6.6.46, v6.6.45, v6.6.44, v6.6.43, v6.6.42, v6.6.41, v6.6.40, v6.6.39, v6.6.38, v6.6.37, v6.6.36 |
|
#
6c71a057 |
| 23-Jun-2024 |
Andrew Jeffery <andrew@codeconstruct.com.au> |
Merge tag 'v6.6.35' into dev-6.6
This is the 6.6.35 stable release
|
Revision tags: v6.6.35, v6.6.34, v6.6.33 |
|
#
43cfac7b |
| 01-Jun-2024 |
Jens Axboe <axboe@kernel.dk> |
io_uring: check for non-NULL file pointer in io_file_can_poll()
commit 5fc16fa5f13b3c06fdb959ef262050bd810416a2 upstream.
In earlier kernels, it was possible to trigger a NULL pointer dereference off the forced async preparation path, if no file had been assigned. The trace leading to that looks as follows:
BUG: kernel NULL pointer dereference, address: 00000000000000b0
PGD 0 P4D 0
Oops: 0000 [#1] PREEMPT SMP
CPU: 67 PID: 1633 Comm: buf-ring-invali Not tainted 6.8.0-rc3+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS unknown 2/2/2022
RIP: 0010:io_buffer_select+0xc3/0x210
Code: 00 00 48 39 d1 0f 82 ae 00 00 00 48 81 4b 48 00 00 01 00 48 89 73 70 0f b7 50 0c 66 89 53 42 85 ed 0f 85 d2 00 00 00 48 8b 13 <48> 8b 92 b0 00 00 00 48 83 7a 40 00 0f 84 21 01 00 00 4c 8b 20 5b
RSP: 0018:ffffb7bec38c7d88 EFLAGS: 00010246
RAX: ffff97af2be61000 RBX: ffff97af234f1700 RCX: 0000000000000040
RDX: 0000000000000000 RSI: ffff97aecfb04820 RDI: ffff97af234f1700
RBP: 0000000000000000 R08: 0000000000200030 R09: 0000000000000020
R10: ffffb7bec38c7dc8 R11: 000000000000c000 R12: ffffb7bec38c7db8
R13: ffff97aecfb05800 R14: ffff97aecfb05800 R15: ffff97af2be5e000
FS: 00007f852f74b740(0000) GS:ffff97b1eeec0000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000000000b0 CR3: 000000016deab005 CR4: 0000000000370ef0
Call Trace:
 <TASK>
 ? __die+0x1f/0x60
 ? page_fault_oops+0x14d/0x420
 ? do_user_addr_fault+0x61/0x6a0
 ? exc_page_fault+0x6c/0x150
 ? asm_exc_page_fault+0x22/0x30
 ? io_buffer_select+0xc3/0x210
 __io_import_iovec+0xb5/0x120
 io_readv_prep_async+0x36/0x70
 io_queue_sqe_fallback+0x20/0x260
 io_submit_sqes+0x314/0x630
 __do_sys_io_uring_enter+0x339/0xbc0
 ? __do_sys_io_uring_register+0x11b/0xc50
 ? vm_mmap_pgoff+0xce/0x160
 do_syscall_64+0x5f/0x180
 entry_SYSCALL_64_after_hwframe+0x46/0x4e
RIP: 0033:0x55e0a110a67e
Code: ba cc 00 00 00 45 31 c0 44 0f b6 92 d0 00 00 00 31 d2 41 b9 08 00 00 00 41 83 e2 01 41 c1 e2 04 41 09 c2 b8 aa 01 00 00 0f 05 <c3> 90 89 30 eb a9 0f 1f 40 00 48 8b 42 20 8b 00 a8 06 75 af 85 f6
because the request is marked forced ASYNC and has a bad file fd, and hence takes the forced async prep path.
Current kernels with the request async prep cleaned up can no longer hit this issue, but for ease of backporting, let's add this safety check in here too as it really doesn't hurt. For both cases, this will inevitably end with a CQE posted with -EBADF.
Cc: stable@vger.kernel.org
Fixes: a76c0b31eef5 ("io_uring: commit non-pollable provided mapped buffers upfront")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
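A minimal sketch of the kind of guard described above, assuming a helper that wraps file_can_poll(); the exact io_uring helper body differs across kernel versions:

    #include <linux/poll.h>     /* file_can_poll() */

    /* Hedged sketch: bail out on a NULL file instead of dereferencing it. */
    static inline bool io_file_can_poll_safe(struct file *file)
    {
        return file && file_can_poll(file);
    }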
|
Revision tags: v6.6.32, v6.6.31, v6.6.30, v6.6.29, v6.6.28, v6.6.27 |
|
#
86aa961b |
| 10-Apr-2024 |
Andrew Jeffery <andrew@codeconstruct.com.au> |
Merge tag 'v6.6.26' into dev-6.6
This is the 6.6.26 stable release
|
Revision tags: v6.6.26, v6.6.25, v6.6.24 |
|
#
65938e81 |
| 02-Apr-2024 |
Jens Axboe <axboe@kernel.dk> |
io_uring/kbuf: hold io_buffer_list reference over mmap
commit 561e4f9451d65fc2f7eef564e0064373e3019793 upstream.
If we look up the kbuf, ensure that it doesn't get unregistered until after we're done with it. Since we're inside mmap, we cannot safely take the io_uring lock. Rely on the fact that we can now look up the buffer list under RCU and grab a reference to it, preventing it from being unregistered until we're done with it. The lookup returns the io_buffer_list directly, already referenced.
Cc: stable@vger.kernel.org # v6.4+
Fixes: 5cf4f52e6d8a ("io_uring: free io_buffer_list entries via RCU")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
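A hedged sketch of the look-up-under-RCU-and-take-a-reference pattern the commit describes; struct, field, and helper names are illustrative:

    #include <linux/xarray.h>
    #include <linux/rcupdate.h>
    #include <linux/atomic.h>
    #include <linux/slab.h>

    struct bl_entry {
        struct rcu_head rcu;
        atomic_t refs;
        void *buf_ring;
    };

    static struct bl_entry *bl_get(struct xarray *xa, unsigned int bgid)
    {
        struct bl_entry *bl;

        rcu_read_lock();
        bl = xa_load(xa, bgid);
        /* only succeed if the entry is not already being torn down */
        if (bl && !atomic_inc_not_zero(&bl->refs))
            bl = NULL;
        rcu_read_unlock();
        return bl;              /* caller must pair this with bl_put() */
    }

    static void bl_put(struct bl_entry *bl)
    {
        if (bl && atomic_dec_and_test(&bl->refs))
            kfree_rcu(bl, rcu);
    }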
|
Revision tags: v6.6.23 |
|
#
b392402d |
| 15-Mar-2024 |
Jens Axboe <axboe@kernel.dk> |
io_uring/kbuf: protect io_buffer_list teardown with a reference
commit 6b69c4ab4f685327d9e10caf0d84217ba23a8c4b upstream.
No functional changes in this patch, just in preparation for being able to keep the buffer list alive outside of the ctx->uring_lock.
Cc: stable@vger.kernel.org # v6.4+
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
#
4c0a5da0 |
| 14-Mar-2024 |
Jens Axboe <axboe@kernel.dk> |
io_uring/kbuf: get rid of bl->is_ready
commit 3b80cff5a4d117c53d38ce805823084eaeffbde6 upstream.
Now that xarray is being exclusively used for the buffer_list lookup, this check is no longer needed. Get rid of it and the is_ready member.
Cc: stable@vger.kernel.org # v6.4+
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
#
d6e03f6d |
| 14-Mar-2024 |
Jens Axboe <axboe@kernel.dk> |
io_uring/kbuf: get rid of lower BGID lists
commit 09ab7eff38202159271534d2f5ad45526168f2a5 upstream.
Just rely on the xarray for any kind of bgid. This simplifies things, and the special handling of lower bgids really doesn't buy us much, if anything.
Cc: stable@vger.kernel.org # v6.4+
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
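A short, hedged sketch of keying everything off one xarray indexed by bgid (helper names illustrative):

    #include <linux/xarray.h>
    #include <linux/gfp.h>

    struct io_buffer_list;      /* opaque here */

    static struct io_buffer_list *bl_lookup(struct xarray *xa, unsigned int bgid)
    {
        return xa_load(xa, bgid);   /* NULL if the group was never registered */
    }

    static int bl_publish(struct xarray *xa, unsigned int bgid,
                          struct io_buffer_list *bl)
    {
        /* xa_store() returns the old entry or an xarray-encoded error */
        return xa_err(xa_store(xa, bgid, bl, GFP_KERNEL));
    }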
|
#
d0c44de2 |
| 10-Feb-2024 |
Andrew Jeffery <andrew@codeconstruct.com.au> |
Merge tag 'v6.6.7' into dev-6.6
This is the 6.6.7 stable release
|
Revision tags: v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8 |
|
#
b97d6790 |
| 13-Dec-2023 |
Joel Stanley <joel@jms.id.au> |
Merge tag 'v6.6.6' into dev-6.6
This is the 6.6.6 stable release
Signed-off-by: Joel Stanley <joel@jms.id.au>
|
Revision tags: v6.6.7, v6.6.6, v6.6.5 |
|
#
7e6621b9 |
| 05-Dec-2023 |
Jens Axboe <axboe@kernel.dk> |
io_uring/kbuf: check for buffer list readiness after NULL check
[ Upstream commit 9865346b7e8374b57f1c3ccacdc77846c6352ff4 ]
Move the buffer list 'is_ready' check below the validity check for the buffer list for a given group.
Fixes: 5cf4f52e6d8a ("io_uring: free io_buffer_list entries via RCU")
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
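A tiny hedged sketch of the reordered checks (names illustrative): validate the pointer first, and only then read any of its fields:

    struct buf_list {
        bool is_ready;
    };

    static struct buf_list *get_ready_list(struct buf_list *bl)
    {
        if (!bl)                /* validity check first ... */
            return NULL;
        if (!bl->is_ready)      /* ... only then dereference for readiness */
            return NULL;
        return bl;
    }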
|
#
b2173a8b |
| 05-Dec-2023 |
Dan Carpenter <dan.carpenter@linaro.org> |
io_uring/kbuf: Fix a NULL vs IS_ERR() bug in io_alloc_pbuf_ring()
[ Upstream commit e53f7b54b1fdecae897f25002ff0cff04faab228 ]
The io_mem_alloc() function returns error pointers, not NULL. Update the check accordingly.
Fixes: b10b73c102a2 ("io_uring/kbuf: recycle freed mapped buffer ring entries")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Link: https://lore.kernel.org/r/5ed268d3-a997-4f64-bd71-47faa92101ab@moroto.mountain
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
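A hedged sketch of the corrected error check, assuming an allocator that reports failure via ERR_PTR() rather than NULL (the call site and names are illustrative):

    #include <linux/err.h>
    #include <linux/types.h>

    static int setup_pbuf_ring(void *(*ring_alloc)(size_t), size_t size, void **out)
    {
        void *mem = ring_alloc(size);

        if (IS_ERR(mem))            /* was effectively: if (!mem) return -ENOMEM; */
            return PTR_ERR(mem);
        *out = mem;
        return 0;
    }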
|
Revision tags: v6.6.4 |
|
#
9e1152a6 |
| 28-Nov-2023 |
Jens Axboe <axboe@kernel.dk> |
io_uring/kbuf: recycle freed mapped buffer ring entries
commit b10b73c102a2eab91e1cd62a03d6446f1dfecc64 upstream.
Right now we stash any potentially mmap'ed provided ring buffer range for freeing at release time, regardless of when they get unregistered. Since we're keeping track of these ranges anyway, keep track of their registration state as well, and use that to recycle ranges when appropriate rather than always allocate new ones.
The lookup is a basic scan of entries, checking for the best matching free entry.
Fixes: c392cbecd8ec ("io_uring/kbuf: defer release of mapped buffer rings")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
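A hedged sketch of the recycling scan described above: walk the stashed ranges and reuse a free one that fits before allocating a new range (struct layout illustrative):

    #include <linux/list.h>
    #include <linux/types.h>

    struct ring_range {
        struct list_head list;
        void *mem;
        size_t size;
        bool inuse;
    };

    static struct ring_range *reuse_range(struct list_head *ranges, size_t size)
    {
        struct ring_range *r, *best = NULL;

        list_for_each_entry(r, ranges, list) {
            if (r->inuse || r->size < size)
                continue;
            /* prefer the smallest range that is still big enough */
            if (!best || r->size < best->size)
                best = r;
        }
        if (best)
            best->inuse = true;
        return best;    /* NULL tells the caller to allocate a fresh range */
    }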
|
Revision tags: v6.6.3 |
|
#
7138ebbe |
| 27-Nov-2023 |
Jens Axboe <axboe@kernel.dk> |
io_uring/kbuf: defer release of mapped buffer rings
commit c392cbecd8eca4c53f2bf508731257d9d0a21c2d upstream.
If a provided buffer ring is setup with IOU_PBUF_RING_MMAP, then the kernel allocates the memory for it and the application is expected to mmap(2) this memory. However, io_uring uses remap_pfn_range() for this operation, so we cannot rely on normal munmap/release on freeing them for us.
Stash an io_buf_free entry away for each of these, if any, and provide a helper to free them post ->release().
Cc: stable@vger.kernel.org
Fixes: c56e022c0a27 ("io_uring: add support for user mapped provided buffer ring")
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
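A hedged sketch of the stash-and-free-later idea; io_buf_free mirrors the name used in the commit message, everything else (list head, free callback) is illustrative:

    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/errno.h>

    struct io_buf_free {
        struct hlist_node list;
        void *mem;
    };

    static int stash_for_late_free(struct hlist_head *head, void *mem)
    {
        struct io_buf_free *ibf = kmalloc(sizeof(*ibf), GFP_KERNEL);

        if (!ibf)
            return -ENOMEM;
        ibf->mem = mem;
        hlist_add_head(&ibf->list, head);
        return 0;
    }

    /* Run once ->release() has completed and no mmap can still race. */
    static void free_stashed(struct hlist_head *head, void (*ring_free)(void *))
    {
        struct io_buf_free *ibf;
        struct hlist_node *tmp;

        hlist_for_each_entry_safe(ibf, tmp, head, list) {
            ring_free(ibf->mem);        /* free with the matching allocator */
            hlist_del(&ibf->list);
            kfree(ibf);
        }
    }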
|
#
09f75200 |
| 27-Nov-2023 |
Jens Axboe <axboe@kernel.dk> |
io_uring: free io_buffer_list entries via RCU
commit 5cf4f52e6d8aa2d3b7728f568abbf9d42a3af252 upstream.
mmap_lock nests under uring_lock out of necessity, as we may be doing user copies with uring_lock held. However, for mmap of provided buffer rings, we attempt to grab uring_lock with mmap_lock already held from do_mmap(). This makes lockdep, rightfully, complain:
WARNING: possible circular locking dependency detected
6.7.0-rc1-00009-gff3337ebaf94-dirty #4438 Not tainted
------------------------------------------------------
buf-ring.t/442 is trying to acquire lock:
ffff00020e1480a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_uring_validate_mmap_request.isra.0+0x4c/0x140

but task is already holding lock:
ffff0000dc226190 (&mm->mmap_lock){++++}-{3:3}, at: vm_mmap_pgoff+0x124/0x264

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&mm->mmap_lock){++++}-{3:3}:
       __might_fault+0x90/0xbc
       io_register_pbuf_ring+0x94/0x488
       __arm64_sys_io_uring_register+0x8dc/0x1318
       invoke_syscall+0x5c/0x17c
       el0_svc_common.constprop.0+0x108/0x130
       do_el0_svc+0x2c/0x38
       el0_svc+0x4c/0x94
       el0t_64_sync_handler+0x118/0x124
       el0t_64_sync+0x168/0x16c

-> #0 (&ctx->uring_lock){+.+.}-{3:3}:
       __lock_acquire+0x19a0/0x2d14
       lock_acquire+0x2e0/0x44c
       __mutex_lock+0x118/0x564
       mutex_lock_nested+0x20/0x28
       io_uring_validate_mmap_request.isra.0+0x4c/0x140
       io_uring_mmu_get_unmapped_area+0x3c/0x98
       get_unmapped_area+0xa4/0x158
       do_mmap+0xec/0x5b4
       vm_mmap_pgoff+0x158/0x264
       ksys_mmap_pgoff+0x1d4/0x254
       __arm64_sys_mmap+0x80/0x9c
       invoke_syscall+0x5c/0x17c
       el0_svc_common.constprop.0+0x108/0x130
       do_el0_svc+0x2c/0x38
       el0_svc+0x4c/0x94
       el0t_64_sync_handler+0x118/0x124
       el0t_64_sync+0x168/0x16c
From that mmap(2) path, we really just need to ensure that the buffer list doesn't go away from underneath us. For the lower indexed entries, they never go away until the ring is freed and we can always sanely reference those as long as the caller has a file reference. For the higher indexed ones in our xarray, we just need to ensure that the buffer list remains valid while we return the address of it.
Free the higher indexed io_buffer_list entries via RCU. With that we can avoid needing ->uring_lock inside mmap(2), and simply hold the RCU read lock around the buffer list lookup and address check.
To ensure that the arrayed lookup only ever returns a valid, fully formulated entry via RCU lookup, add an 'is_ready' flag that we access with store and release memory ordering. This isn't needed for the xarray lookups, but it doesn't hurt either. Since this isn't a fast path, retain it across both types. Similarly, for the allocated array inside the ctx, ensure we use the proper load/acquire, as setup could in theory be running in parallel with mmap.
While in there, add a few lockdep checks for documentation purposes.
Cc: stable@vger.kernel.org
Fixes: c56e022c0a27 ("io_uring: add support for user mapped provided buffer ring")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
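A hedged sketch of the mmap-side lookup the commit describes: resolve the buffer list under the RCU read lock, pair a load-acquire of 'is_ready' with the publisher's store-release, and free xarray entries only after a grace period (struct and field names illustrative):

    #include <linux/xarray.h>
    #include <linux/rcupdate.h>
    #include <linux/atomic.h>
    #include <linux/slab.h>

    struct bl_entry {
        struct rcu_head rcu;
        unsigned int bgid;
        void *buf_ring;
        bool is_ready;          /* published with smp_store_release() */
    };

    static void *mmap_ring_addr(struct xarray *xa, unsigned int bgid)
    {
        struct bl_entry *bl;
        void *addr = NULL;

        rcu_read_lock();
        bl = xa_load(xa, bgid);
        /* pairs with smp_store_release() on the registration side */
        if (bl && smp_load_acquire(&bl->is_ready))
            addr = bl->buf_ring;
        rcu_read_unlock();
        return addr;
    }

    static void bl_unregister(struct xarray *xa, struct bl_entry *bl)
    {
        xa_erase(xa, bl->bgid);
        kfree_rcu(bl, rcu);     /* readers may still hold the old pointer */
    }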
|
Revision tags: v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7, v6.5.6 |
|
#
46484864 |
| 04-Oct-2023 |
Gabriel Krisman Bertazi <krisman@suse.de> |
io_uring/kbuf: Allow the full buffer id space for provided buffers
[ Upstream commit f74c746e476b9dad51448b9a9421aae72b60e25f ]
nbufs tracks the number of buffers, not the last bgid. With 16-bit buffer ids there are 2^16 valid buffers, but the check mistakenly rejects the last bid. Fix it to make the interface consistent with the documentation.
Fixes: ddf0322db79c ("io_uring: add IORING_OP_PROVIDE_BUFFERS")
Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de>
Link: https://lore.kernel.org/r/20231005000531.30800-3-krisman@suse.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
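A hedged sketch of the relaxed bound: a 16-bit bid space holds 1 << 16 buffers (bids 0..65535), so a count check has to compare against 65536, not 65535. The macro name is modelled on the patch and not guaranteed to match it:

    #include <linux/errno.h>

    #define MAX_BIDS_PER_BGID   (1 << 16)

    static int check_nbufs(unsigned long nbufs)
    {
        /* the old bound rejected a request for the full 65536-buffer group */
        if (!nbufs || nbufs > MAX_BIDS_PER_BGID)
            return -E2BIG;
        return 0;
    }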
|
#
de92bc45 |
| 04-Oct-2023 |
Gabriel Krisman Bertazi <krisman@suse.de> |
io_uring/kbuf: Fix check of BID wrapping in provided buffers
[ Upstream commit ab69838e7c75b0edb699c1a8f42752b30333c46f ]
Commit 3851d25c75ed0 ("io_uring: check for rollover of buffer ID when providing buffers") introduced a check to prevent wrapping the BID counter when sqe->off is provided, but it is too restrictive by one, rejecting the last possible BID (65534).
i.e., the following fails with -EINVAL.
io_uring_prep_provide_buffers(sqe, addr, size, 0xFFFF, 0, 0);
Fixes: 3851d25c75ed ("io_uring: check for rollover of buffer ID when providing buffers")
Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de>
Link: https://lore.kernel.org/r/20231005000531.30800-2-krisman@suse.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
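A matching hedged sketch of the wrap check when a starting bid is provided: the last bid used is off + nbufs - 1, so the sum may equal 65536 but must not exceed it. This is pure illustration of the boundary, not the kernel's exact expression; the quoted liburing call asks for 0xFFFF buffers starting at bid 0 and should therefore pass:

    #include <linux/errno.h>

    static int check_bid_wrap(unsigned long off, unsigned long nbufs)
    {
        /* allow off + nbufs == 65536 (last bid 65535); reject anything past it */
        if (off + nbufs > (1UL << 16))
            return -EINVAL;
        return 0;
    }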
|
#
a88c3869 |
| 06-Oct-2023 |
Linus Torvalds <torvalds@linux-foundation.org> |
Merge tag 'io_uring-6.6-2023-10-06' of git://git.kernel.dk/linux
Pull io_uring fixes from Jens Axboe:
- Fix for a syzbot report of a crash on 32-bit arm with highmem; digging for potentially similar issues turned up one more (me)
- Fix a syzbot report with PROVE_LOCKING=y and setting up the ring in a disabled state (me)
- Fix for a race between CPU hotplug and io-wq init (Jeff)
* tag 'io_uring-6.6-2023-10-06' of git://git.kernel.dk/linux:
  io-wq: fully initialize wqe before calling cpuhp_state_add_instance_nocalls()
  io_uring: don't allow IORING_SETUP_NO_MMAP rings on highmem pages
  io_uring: ensure io_lockdep_assert_cq_locked() handles disabled rings
  io_uring/kbuf: don't allow registered buffer rings on highmem pages
|
#
f8024f1f |
| 02-Oct-2023 |
Jens Axboe <axboe@kernel.dk> |
io_uring/kbuf: don't allow registered buffer rings on highmem pages
syzbot reports that registering a mapped buffer ring on arm32 can trigger an OOPS. Registered buffer rings have two modes, one of them is the application passing in the memory that the buffer ring should reside in. Once those pages are mapped, we use page_address() to get a virtual address. This will obviously fail on highmem pages, which aren't mapped.
Add a check if we have any highmem pages after mapping, and fail the attempt to register a provided buffer ring if we do. This will return the same error as kernels that don't support provided buffer rings to begin with.
Link: https://lore.kernel.org/io-uring/000000000000af635c0606bcb889@google.com/
Fixes: c56e022c0a27 ("io_uring: add support for user mapped provided buffer ring")
Cc: stable@vger.kernel.org
Reported-by: syzbot+2113e61b8848fa7951d8@syzkaller.appspotmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
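A hedged sketch of the post-mapping validation described above: page_address() is only meaningful for lowmem pages, so any highmem page in the pinned range fails the registration (helper name illustrative):

    #include <linux/mm.h>
    #include <linux/highmem.h>
    #include <linux/errno.h>

    static int check_pages_lowmem(struct page **pages, int nr_pages)
    {
        int i;

        for (i = 0; i < nr_pages; i++) {
            /* highmem pages have no permanent kernel mapping */
            if (PageHighMem(pages[i]))
                return -EINVAL;
        }
        return 0;
    }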
|
Revision tags: v6.5.5, v6.5.4, v6.5.3 |
|
#
c900529f |
| 12-Sep-2023 |
Thomas Zimmermann <tzimmermann@suse.de> |
Merge drm/drm-fixes into drm-misc-fixes
Forwarding to v6.6-rc1.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
|
Revision tags: v6.5.2, v6.1.51, v6.5.1 |
|
#
1ac731c5 |
| 30-Aug-2023 |
Dmitry Torokhov <dmitry.torokhov@gmail.com> |
Merge branch 'next' into for-linus
Prepare input updates for 6.6 merge window.
|
Revision tags: v6.1.50 |
|
#
b96a3e91 |
| 29-Aug-2023 |
Linus Torvalds <torvalds@linux-foundation.org> |
Merge tag 'mm-stable-2023-08-28-18-26' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull MM updates from Andrew Morton:
- Some swap cleanups from Ma Wupeng ("fix WARN_ON in add_to_avail_list")
- Peter Xu has a series ("mm/gup: Unify hugetlb, speed up thp") which reduces the special-case code for handling hugetlb pages in GUP. It also speeds up GUP handling of transparent hugepages.
- Peng Zhang provides some maple tree speedups ("Optimize the fast path of mas_store()").
- Sergey Senozhatsky has improved the performance of zsmalloc during compaction ("zsmalloc: small compaction improvements").
- Domenico Cerasuolo has developed additional selftest code for zswap ("selftests: cgroup: add zswap test program").
- xu xin has done some work on KSM's handling of zero pages. These changes are mainly to enable the user to better understand the effectiveness of KSM's treatment of zero pages ("ksm: support tracking KSM-placed zero-pages").
- Jeff Xu has fixed the behaviour of memfd's MEMFD_NOEXEC_SCOPE_NOEXEC_ENFORCED sysctl ("mm/memfd: fix sysctl MEMFD_NOEXEC_SCOPE_NOEXEC_ENFORCED").
- David Howells has fixed an fscache optimization ("mm, netfs, fscache: Stop read optimisation when folio removed from pagecache").
- Axel Rasmussen has given userfaultfd the ability to simulate memory poisoning ("add UFFDIO_POISON to simulate memory poisoning with UFFD").
- Miaohe Lin has contributed some routine maintenance work on the memory-failure code ("mm: memory-failure: remove unneeded PageHuge() check").
- Peng Zhang has contributed some maintenance work on the maple tree code ("Improve the validation for maple tree and some cleanup").
- Hugh Dickins has optimized the collapsing of shmem or file pages into THPs ("mm: free retracted page table by RCU").
- Jiaqi Yan has a patch series which permits us to use the healthy subpages within a hardware poisoned huge page for general purposes ("Improve hugetlbfs read on HWPOISON hugepages").
- Kemeng Shi has done some maintenance work on the pagetable-check code ("Remove unused parameters in page_table_check").
- More folioification work from Matthew Wilcox ("More filesystem folio conversions for 6.6"), ("Followup folio conversions for zswap"). And from ZhangPeng ("Convert several functions in page_io.c to use a folio").
- page_ext cleanups from Kemeng Shi ("minor cleanups for page_ext").
- Baoquan He has converted some architectures to use the GENERIC_IOREMAP ioremap()/iounmap() code ("mm: ioremap: Convert architectures to take GENERIC_IOREMAP way").
- Anshuman Khandual has optimized arm64 tlb shootdown ("arm64: support batched/deferred tlb shootdown during page reclamation/migration").
- Better maple tree lockdep checking from Liam Howlett ("More strict maple tree lockdep"). Liam also developed some efficiency improvements ("Reduce preallocations for maple tree").
- Cleanup and optimization to the secondary IOMMU TLB invalidation, from Alistair Popple ("Invalidate secondary IOMMU TLB on permission upgrade").
- Ryan Roberts fixes some arm64 MM selftest issues ("selftests/mm fixes for arm64").
- Kemeng Shi provides some maintenance work on the compaction code ("Two minor cleanups for compaction").
- Some reduction in mmap_lock pressure from Matthew Wilcox ("Handle most file-backed faults under the VMA lock").
- Aneesh Kumar contributes code to use the vmemmap optimization for DAX on ppc64, under some circumstances ("Add support for DAX vmemmap optimization for ppc64").
- page-ext cleanups from Kemeng Shi ("add page_ext_data to get client data in page_ext"), ("minor cleanups to page_ext header").
- Some zswap cleanups from Johannes Weiner ("mm: zswap: three cleanups").
- kmsan cleanups from ZhangPeng ("minor cleanups for kmsan").
- VMA handling cleanups from Kefeng Wang ("mm: convert to vma_is_initial_heap/stack()").
- DAMON feature work from SeongJae Park ("mm/damon/sysfs-schemes: implement DAMOS tried total bytes file"), ("Extend DAMOS filters for address ranges and DAMON monitoring targets").
- Compaction work from Kemeng Shi ("Fixes and cleanups to compaction").
- Liam Howlett has improved the maple tree node replacement code ("maple_tree: Change replacement strategy").
- ZhangPeng has a general code cleanup - use the K() macro more widely ("cleanup with helper macro K()").
- Aneesh Kumar brings memmap-on-memory to ppc64 ("Add support for memmap on memory feature on ppc64").
- pagealloc cleanups from Kemeng Shi ("Two minor cleanups for pcp list in page_alloc"), ("Two minor cleanups for get pageblock migratetype").
- Vishal Moola introduces a memory descriptor for page table tracking, "struct ptdesc" ("Split ptdesc from struct page").
- memfd selftest maintenance work from Aleksa Sarai ("memfd: cleanups for vm.memfd_noexec").
- MM include file rationalization from Hugh Dickins ("arch: include asm/cacheflush.h in asm/hugetlb.h").
- THP debug output fixes from Hugh Dickins ("mm,thp: fix sloppy text output").
- kmemleak improvements from Xiaolei Wang ("mm/kmemleak: use object_cache instead of kmemleak_initialized").
- More folio-related cleanups from Matthew Wilcox ("Remove _folio_dtor and _folio_order").
- A VMA locking scalability improvement from Suren Baghdasaryan ("Per-VMA lock support for swap and userfaults").
- pagetable handling cleanups from Matthew Wilcox ("New page table range API").
- A batch of swap/thp cleanups from David Hildenbrand ("mm/swap: stop using page->private on tail pages for THP_SWAP + cleanups").
- Cleanups and speedups to the hugetlb fault handling from Matthew Wilcox ("Change calling convention for ->huge_fault").
- Matthew Wilcox has also done some maintenance work on the MM subsystem documentation ("Improve mm documentation").
* tag 'mm-stable-2023-08-28-18-26' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (489 commits)
  maple_tree: shrink struct maple_tree
  maple_tree: clean up mas_wr_append()
  secretmem: convert page_is_secretmem() to folio_is_secretmem()
  nios2: fix flush_dcache_page() for usage from irq context
  hugetlb: add documentation for vma_kernel_pagesize()
  mm: add orphaned kernel-doc to the rst files.
  mm: fix clean_record_shared_mapping_range kernel-doc
  mm: fix get_mctgt_type() kernel-doc
  mm: fix kernel-doc warning from tlb_flush_rmaps()
  mm: remove enum page_entry_size
  mm: allow ->huge_fault() to be called without the mmap_lock held
  mm: move PMD_ORDER to pgtable.h
  mm: remove checks for pte_index
  memcg: remove duplication detection for mem_cgroup_uncharge_swap
  mm/huge_memory: work on folio->swap instead of page->private when splitting folio
  mm/swap: inline folio_set_swap_entry() and folio_swap_entry()
  mm/swap: use dedicated entry for swap in folio
  mm/swap: stop using page->private on tail pages for THP_SWAP
  selftests/mm: fix WARNING comparing pointer to 0
  selftests: cgroup: fix test_kmem_memcg_deletion kernel mem check
  ...
|
Revision tags: v6.5, v6.1.49, v6.1.48, v6.1.46 |
|
#
99a9e0b8 |
| 16-Aug-2023 |
Matthew Wilcox (Oracle) <willy@infradead.org> |
io_uring: stop calling free_compound_page()
Patch series "Remove _folio_dtor and _folio_order", v2.
This patch (of 13):
folio_put() is the standard way to write this, and it's not appreciably slower. This is an enabling patch for removing free_compound_page() entirely.
Link: https://lkml.kernel.org/r/20230816151201.3655946-1-willy@infradead.org
Link: https://lkml.kernel.org/r/20230816151201.3655946-2-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Jens Axboe <axboe@kernel.dk>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Yanteng Si <siyanteng@loongson.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
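A hedged sketch of the conversion this patch describes: drop the page reference through the folio API instead of calling the compound-page destructor directly (function name illustrative):

    #include <linux/mm.h>

    static void release_ring_page(struct page *page)
    {
        /* previously this path ended up in free_compound_page() */
        folio_put(page_folio(page));
    }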
|
Revision tags: v6.1.45, v6.1.44, v6.1.43, v6.1.42, v6.1.41, v6.1.40, v6.1.39 |
|
#
50501936 |
| 17-Jul-2023 |
Dmitry Torokhov <dmitry.torokhov@gmail.com> |
Merge tag 'v6.4' into next
Sync up with mainline to bring in updates to shared infrastructure.
|