Revision tags: v6.6.67, v6.6.66, v6.6.65, v6.6.64, v6.6.63, v6.6.62, v6.6.61, v6.6.60, v6.6.59, v6.6.58, v6.6.57, v6.6.56, v6.6.55, v6.6.54, v6.6.53, v6.6.52, v6.6.51, v6.6.50, v6.6.49, v6.6.48, v6.6.47, v6.6.46, v6.6.45, v6.6.44, v6.6.43, v6.6.42, v6.6.41, v6.6.40, v6.6.39, v6.6.38, v6.6.37, v6.6.36 |
|
#
6c71a057 |
| 23-Jun-2024 |
Andrew Jeffery <andrew@codeconstruct.com.au> |
Merge tag 'v6.6.35' into dev-6.6
This is the 6.6.35 stable release
|
Revision tags: v6.6.35 |
|
#
f8474caf |
| 17-Jun-2024 |
Petr Tesarik <petr.tesarik1@huawei-partners.com> |
swiotlb: extend buffer pre-padding to alloc_align_mask if necessary
commit af133562d5aff41fcdbe51f1a504ae04788b5fc0 upstream.
Allow a buffer pre-padding of up to alloc_align_mask, even if it requires allocating additional IO TLB slots.
If the allocation alignment is bigger than IO_TLB_SIZE and min_align_mask covers any non-zero bits in the original address between IO_TLB_SIZE and alloc_align_mask, these bits are not preserved in the swiotlb buffer address.
To fix this case, increase the allocation size and use a larger offset within the allocated buffer. As a result, extra padding slots may be allocated before the mapping start address.
Leave orig_addr in these padding slots initialized to INVALID_PHYS_ADDR. These slots do not correspond to any CPU buffer, so attempts to sync the data should be ignored.
The padding slots should be automatically released when the buffer is unmapped. However, swiotlb_tbl_unmap_single() takes only the address of the DMA buffer slot, not the first padding slot. Save the number of padding slots in struct io_tlb_slot and use it to adjust the slot index in swiotlb_release_slots(), so all allocated slots are properly freed.
Cc: stable@vger.kernel.org # v6.6+
Fixes: 2fd4fa5d3fb5 ("swiotlb: Fix alignment checks when both allocation and DMA masks are present")
Link: https://lore.kernel.org/linux-iommu/20240311210507.217daf8b@meshulam.tesarici.cz/
Signed-off-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Fabio Estevam <festevam@denx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
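To illustrate the padding accounting described above, here is a minimal, hedged sketch (IO_TLB_SHIFT/IO_TLB_SIZE as in the kernel; the helper names tlb_align_offset() and tlb_pad_slots() are invented for the example and are not the patch itself):

    #define IO_TLB_SHIFT    11
    #define IO_TLB_SIZE     (1UL << IO_TLB_SHIFT)

    /* Low bits of the original address that must survive in the bounce
     * buffer: bits covered by the device's min_align_mask, limited to the
     * span of the allocation alignment or one slot. */
    static unsigned long tlb_align_offset(unsigned long orig_addr,
                                          unsigned long min_align_mask,
                                          unsigned long alloc_align_mask)
    {
            return orig_addr & min_align_mask &
                   (alloc_align_mask | (IO_TLB_SIZE - 1));
    }

    /* Whole slots consumed purely by pre-padding; recording this per
     * mapping lets the unmap path free the padding slots as well. */
    static unsigned int tlb_pad_slots(unsigned long offset)
    {
            return offset >> IO_TLB_SHIFT;
    }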
|
#
6c385c1f |
| 17-Jun-2024 |
Will Deacon <will@kernel.org> |
swiotlb: Reinstate page-alignment for mappings >= PAGE_SIZE
commit 14cebf689a78e8a1c041138af221ef6eac6bc7da upstream.
For swiotlb allocations >= PAGE_SIZE, the slab search historically adjusted the stride to avoid checking unaligned slots. This had the side-effect of aligning large mapping requests to PAGE_SIZE, but that was broken by 0eee5ae10256 ("swiotlb: fix slot alignment checks").
Since this alignment could be relied upon by drivers, reinstate PAGE_SIZE alignment for swiotlb mappings >= PAGE_SIZE.
Cc: stable@vger.kernel.org # v6.6+
Reported-by: Michael Kelley <mhklinux@outlook.com>
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Fabio Estevam <festevam@denx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
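A minimal sketch of the reinstated rule (helper name and the PAGE_SIZE constant are assumed for illustration): mappings of at least a page fold page alignment back into the allocation alignment mask that drives the slot-search stride.

    #define PAGE_SIZE 4096UL        /* assumed for the example */

    static unsigned long map_align_mask(unsigned long alloc_align_mask,
                                        unsigned long alloc_size)
    {
            /* Mappings >= PAGE_SIZE get page alignment again. */
            if (alloc_size >= PAGE_SIZE)
                    alloc_align_mask |= PAGE_SIZE - 1;
            return alloc_align_mask;
    }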
|
#
6033fc95 |
| 17-Jun-2024 |
Will Deacon <will@kernel.org> |
swiotlb: Enforce page alignment in swiotlb_alloc()
commit 823353b7cf0ea9dfb09f5181d5fb2825d727200b upstream.
When allocating pages from a restricted DMA pool in swiotlb_alloc(), the buffer address is blindly converted to a 'struct page *' that is returned to the caller. In the unlikely event of an allocation bug, page-unaligned addresses are not detected and slots can silently be double-allocated.
Add a simple check of the buffer alignment in swiotlb_alloc() to make debugging a little easier if something has gone wonky.
Cc: stable@vger.kernel.org # v6.6+
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Fabio Estevam <festevam@denx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
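The shape of the added check, sketched in kernel-style C (the helper name is hypothetical): warn about and reject a page-unaligned bounce address instead of blindly converting it to a struct page.

    static struct page *tlb_addr_to_page(phys_addr_t tlb_addr)
    {
            /* A page-unaligned address here means the slot search went
             * wrong; fail loudly rather than return the wrong page. */
            if (WARN_ON_ONCE(tlb_addr & (PAGE_SIZE - 1)))
                    return NULL;
            return pfn_to_page(PFN_DOWN(tlb_addr));
    }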
|
Revision tags: v6.6.34, v6.6.33, v6.6.32 |
|
#
e0d77d0f |
| 19-May-2024 |
Andrew Jeffery <andrew@codeconstruct.com.au> |
Merge tag 'v6.6.31' into dev-6.6
This is the 6.6.31 stable release
|
Revision tags: v6.6.31, v6.6.30 |
|
#
f2a6b3ed |
| 02-May-2024 |
Will Deacon <will@kernel.org> |
swiotlb: initialise restricted pool list_head when SWIOTLB_DYNAMIC=y
[ Upstream commit 75961ffb5cb3e5196f19cae7683f35cc88b50800 ]
Using restricted DMA pools (CONFIG_DMA_RESTRICTED_POOL=y) in conjunction with dynamic SWIOTLB (CONFIG_SWIOTLB_DYNAMIC=y) leads to the following crash when initialising the restricted pools at boot-time:
 | Unable to handle kernel NULL pointer dereference at virtual address 0000000000000008
 | Internal error: Oops: 0000000096000005 [#1] PREEMPT SMP
 | pc : rmem_swiotlb_device_init+0xfc/0x1ec
 | lr : rmem_swiotlb_device_init+0xf0/0x1ec
 | Call trace:
 |  rmem_swiotlb_device_init+0xfc/0x1ec
 |  of_reserved_mem_device_init_by_idx+0x18c/0x238
 |  of_dma_configure_id+0x31c/0x33c
 |  platform_dma_configure+0x34/0x80
faddr2line reveals that the crash is in the list validation code:
  include/linux/list.h:83
  include/linux/rculist.h:79
  include/linux/rculist.h:106
  kernel/dma/swiotlb.c:306
  kernel/dma/swiotlb.c:1695
because add_mem_pool() is trying to list_add_rcu() to a NULL 'mem->pools'.
Fix the crash by initialising the 'mem->pools' list_head in rmem_swiotlb_device_init() before calling add_mem_pool().
Reported-by: Nikita Ioffe <ioffe@google.com>
Tested-by: Nikita Ioffe <ioffe@google.com>
Fixes: 1aaa736815eb ("swiotlb: allocate a new memory pool when existing pools are full")
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
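Sketched in kernel-style C (surrounding initialisation elided), the fix amounts to creating the RCU list head before the first add:

    /* rmem_swiotlb_device_init(), roughly: */
    INIT_LIST_HEAD_RCU(&mem->pools);        /* was left NULL before the fix */
    add_mem_pool(mem, pool);                /* list_add_rcu() is now safe   */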
|
Revision tags: v6.6.29, v6.6.28, v6.6.27, v6.6.26, v6.6.25 |
|
#
46eeaa11 |
| 03-Apr-2024 |
Andrew Jeffery <andrew@codeconstruct.com.au> |
Merge tag 'v6.6.24' into dev-6.6
This is the 6.6.24 stable release
|
Revision tags: v6.6.24, v6.6.23 |
|
#
c803069d |
| 08-Mar-2024 |
Will Deacon <will@kernel.org> |
swiotlb: Fix alignment checks when both allocation and DMA masks are present
[ Upstream commit 51b30ecb73b481d5fac6ccf2ecb4a309c9ee3310 ]
Nicolin reports that swiotlb buffer allocations fail for an NVME device behind an IOMMU using 64KiB pages. This is because we end up with a minimum allocation alignment of 64KiB (for the IOMMU to map the buffer safely) but a minimum DMA alignment mask corresponding to a 4KiB NVME page (i.e. preserving the 4KiB page offset from the original allocation). If the original address is not 4KiB-aligned, the allocation will fail because swiotlb_search_pool_area() erroneously compares these unmasked bits with the 64KiB-aligned candidate allocation.
Tweak swiotlb_search_pool_area() so that the DMA alignment mask is reduced based on the required alignment of the allocation.
Fixes: 82612d66d51d ("iommu: Allow the dma-iommu api to use bounce buffers")
Link: https://lore.kernel.org/r/cover.1707851466.git.nicolinc@nvidia.com
Reported-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
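A hedged sketch of the idea, using the names from the discussion above: bits that the allocation alignment already forces to zero must not be compared against the original address.

    /* swiotlb_search_pool_area(), roughly: keep only the low bits that the
     * allocation alignment does not already guarantee to be zero. */
    unsigned long iotlb_align_mask =
            dma_get_min_align_mask(dev) & ~alloc_align_mask;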
|
#
ae2f8dbe |
| 08-Mar-2024 |
Will Deacon <will@kernel.org> |
swiotlb: Honour dma_alloc_coherent() alignment in swiotlb_alloc()
[ Upstream commit cbf53074a528191df82b4dba1e3d21191102255e ]
core-api/dma-api-howto.rst states the following properties of dma_alloc_coherent():
 | The CPU virtual address and the DMA address are both guaranteed to
 | be aligned to the smallest PAGE_SIZE order which is greater than or
 | equal to the requested size.
However, swiotlb_alloc() passes zero for the 'alloc_align_mask' parameter of swiotlb_find_slots() and so this property is not upheld. Instead, allocations larger than a page are aligned only to PAGE_SIZE.
Calculate the mask corresponding to the page order suitable for holding the allocation and pass that to swiotlb_find_slots().
Fixes: e81e99bacc9f ("swiotlb: Support aligned swiotlb buffers")
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
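A sketch of the calculation described above (parameter order of swiotlb_find_slots() taken from the text; the exact code may differ): derive the alignment mask from the page order covering the requested size and pass it instead of zero.

    /* swiotlb_alloc(), roughly: */
    struct io_tlb_pool *pool;
    unsigned long align_mask = (1UL << (get_order(size) + PAGE_SHIFT)) - 1;
    int index = swiotlb_find_slots(dev, 0, size, align_mask, &pool);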
|
#
3e7acd6e |
| 08-Mar-2024 |
Will Deacon <will@kernel.org> |
swiotlb: Fix double-allocation of slots due to broken alignment handling
[ Upstream commit 04867a7a33324c9c562ee7949dbcaab7aaad1fb4 ]
Commit bbb73a103fbb ("swiotlb: fix a braino in the alignment check fix"), which was a fix for commit 0eee5ae10256 ("swiotlb: fix slot alignment checks"), causes a functional regression with vsock in a virtual machine using bouncing via a restricted DMA SWIOTLB pool.
When virtio allocates the virtqueues for the vsock device using dma_alloc_coherent(), the SWIOTLB search can return page-unaligned allocations if 'area->index' was left unaligned by a previous allocation from the buffer:
  # Final address in brackets is the SWIOTLB address returned to the caller
  | virtio-pci 0000:00:07.0: orig_addr 0x0 alloc_size 0x2000, iotlb_align_mask 0x800 stride 0x2: got slot 1645-1649/7168 (0x98326800)
  | virtio-pci 0000:00:07.0: orig_addr 0x0 alloc_size 0x2000, iotlb_align_mask 0x800 stride 0x2: got slot 1649-1653/7168 (0x98328800)
  | virtio-pci 0000:00:07.0: orig_addr 0x0 alloc_size 0x2000, iotlb_align_mask 0x800 stride 0x2: got slot 1653-1657/7168 (0x9832a800)
This ends badly (typically buffer corruption and/or a hang) because swiotlb_alloc() is expecting a page-aligned allocation and so blindly returns a pointer to the 'struct page' corresponding to the allocation, therefore double-allocating the first half (2KiB slot) of the 4KiB page.
Fix the problem by treating the allocation alignment separately to any additional alignment requirements from the device, using the maximum of the two as the stride to search the buffer slots and taking care to ensure a minimum of page-alignment for buffers larger than a page.
This also resolves swiotlb allocation failures occurring due to the inclusion of ~PAGE_MASK in 'iotlb_align_mask' for large allocations and resulting in alignment requirements exceeding swiotlb_max_mapping_size().
Fixes: bbb73a103fbb ("swiotlb: fix a braino in the alignment check fix")
Fixes: 0eee5ae10256 ("swiotlb: fix slot alignment checks")
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
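In outline (a sketch using the variable names above, not the verbatim patch), the search stride is now derived from the stricter of the two alignment requirements instead of from a mask polluted with ~PAGE_MASK:

    /* Stride in slots: honour whichever of the allocation alignment and
     * the device's DMA alignment is stricter. */
    unsigned long stride_mask = max(alloc_align_mask,
                                    (unsigned long)iotlb_align_mask);
    unsigned int stride = (stride_mask >> IO_TLB_SHIFT) + 1;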
|
Revision tags: v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8 |
|
#
b97d6790 |
| 13-Dec-2023 |
Joel Stanley <joel@jms.id.au> |
Merge tag 'v6.6.6' into dev-6.6
This is the 6.6.6 stable release
Signed-off-by: Joel Stanley <joel@jms.id.au>
|
Revision tags: v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3, v6.6.2, v6.5.11 |
|
#
ce761249 |
| 08-Nov-2023 |
Petr Tesarik <petr.tesarik1@huawei-partners.com> |
swiotlb: fix out-of-bounds TLB allocations with CONFIG_SWIOTLB_DYNAMIC
commit 53c87e846e335e3c18044c397cc35178163d7827 upstream.
Limit the free list length to the size of the IO TLB. Transient pool can be smaller than IO_TLB_SEGSIZE, but the free list is initialized with the assumption that the total number of slots is a multiple of IO_TLB_SEGSIZE. As a result, swiotlb_area_find_slots() may allocate slots past the end of a transient IO TLB buffer.
Reported-by: Niklas Schnelle <schnelle@linux.ibm.com>
Closes: https://lore.kernel.org/linux-iommu/104a8c8fedffd1ff8a2890983e2ec1c26bff6810.camel@linux.ibm.com/
Fixes: 79636caad361 ("swiotlb: if swiotlb is full, fall back to a transient memory pool")
Cc: stable@vger.kernel.org
Signed-off-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Reviewed-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
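The fix, sketched with the field and helper names used in kernel/dma/swiotlb.c (details simplified): clamp each slot's free-list count so a short transient pool never advertises slots beyond its own end.

    /* Pool initialisation, roughly: */
    for (i = 0; i < mem->nslabs; i++)
            mem->slots[i].list = min(IO_TLB_SEGSIZE - io_tlb_offset(i),
                                     mem->nslabs - i);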
|
Revision tags: v6.6.1 |
|
#
6d6ab317 |
| 02-Nov-2023 |
Petr Tesarik <petrtesarik@huaweicloud.com> |
swiotlb: do not free decrypted pages if dynamic
commit a5e3b127455d073f146a2a4ea3e7117635d34c5c upstream.
Fix these two error paths:
1. When set_memory_decrypted() fails, pages may be left fully or partially decrypted.
2. Decrypted pages may be freed if swiotlb_alloc_tlb() determines that the physical address is too high.
To fix the first issue, call set_memory_encrypted() on the allocated region after a failed decryption attempt. If that also fails, leak the pages.
To fix the second issue, check that the TLB physical address is below the requested limit before decrypting.
Let the caller differentiate between unsuitable physical address (=> retry from a lower zone) and allocation failures (=> no point in retrying).
Cc: stable@vger.kernel.org
Fixes: 79636caad361 ("swiotlb: if swiotlb is full, fall back to a transient memory pool")
Signed-off-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
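A simplified sketch of the corrected error paths (control flow condensed; the real function differs in detail):

    /* Transient pool page allocation, roughly: */
    paddr = page_to_phys(page);
    if (paddr + bytes - 1 > phys_limit) {
            /* Still encrypted, so freeing is safe; let the caller retry
             * from a lower zone. */
            __free_pages(page, order);
            return ERR_PTR(-EAGAIN);
    }
    vaddr = phys_to_virt(paddr);
    if (set_memory_decrypted((unsigned long)vaddr, PFN_UP(bytes))) {
            /* Possibly partially decrypted: re-encrypt, and leak the
             * pages if even that fails. */
            if (!set_memory_encrypted((unsigned long)vaddr, PFN_UP(bytes)))
                    __free_pages(page, order);
            return NULL;
    }
    return page;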
|
Revision tags: v6.5.10, v6.6 |
|
#
bd80d2ef |
| 28-Oct-2023 |
Linus Torvalds <torvalds@linux-foundation.org> |
Merge tag 'dma-mapping-6.6-2023-10-28' of git://git.infradead.org/users/hch/dma-mapping
Pull dma-mapping fix from Christoph Hellwig:
- reduce the initial dynamic swiotlb size to remove an annoying but harmless warning from the page allocator (Petr Tesarik)
* tag 'dma-mapping-6.6-2023-10-28' of git://git.infradead.org/users/hch/dma-mapping:
  swiotlb: do not try to allocate a TLB bigger than MAX_ORDER pages
|
Revision tags: v6.5.9 |
|
#
d5090484 |
| 25-Oct-2023 |
Petr Tesarik <petr.tesarik1@huawei-partners.com> |
swiotlb: do not try to allocate a TLB bigger than MAX_ORDER pages
When allocating a new pool at runtime, reduce the number of slabs so that the allocation order is at most MAX_ORDER. This avoids a kernel warning in __alloc_pages().
The warning is relatively benign, because the pool size is subsequently reduced when allocation fails, but it is silly to start with a request that is known to fail, especially since this is the default behavior if the kernel is built with CONFIG_SWIOTLB_DYNAMIC=y and booted without any swiotlb= parameter.
Reported-by: Ben Greear <greearb@candelatech.com>
Closes: https://lore.kernel.org/netdev/4f173dd2-324a-0240-ff8d-abf5c191be18@candelatech.com/
Fixes: 1aaa736815eb ("swiotlb: allocate a new memory pool when existing pools are full")
Signed-off-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
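A sketch of the clamp (constant names as in the kernel; the exact expression is assumed): cap the slab count so the backing allocation never exceeds one MAX_ORDER block.

    /* Dynamic pool growth, roughly: */
    nslabs = min_t(unsigned long, nslabs,
                   (PAGE_SIZE << MAX_ORDER) >> IO_TLB_SHIFT);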
|
Revision tags: v6.5.8, v6.5.7, v6.5.6 |
|
#
3b517966 |
| 30-Sep-2023 |
Linus Torvalds <torvalds@linux-foundation.org> |
Merge tag 'dma-mapping-6.6-2023-09-30' of git://git.infradead.org/users/hch/dma-mapping
Pull dma-mapping fixes from Christoph Hellwig:
- fix the narea calculation in swiotlb initialization (Ross Lagerwall)
- fix the check whether a device has used swiotlb (Petr Tesarik)
* tag 'dma-mapping-6.6-2023-09-30' of git://git.infradead.org/users/hch/dma-mapping:
  swiotlb: fix the check whether a device has used software IO TLB
  swiotlb: use the calculated number of areas
|
#
2d5780bb |
| 26-Sep-2023 |
Petr Tesarik <petr@tesarici.cz> |
swiotlb: fix the check whether a device has used software IO TLB
When CONFIG_SWIOTLB_DYNAMIC=y, devices which do not use the software IO TLB can avoid swiotlb lookup. A flag was added by commit 1395706a1490 ("swiotlb: search the software IO TLB only if the device makes use of it"); the flag is correctly set, but it is then never checked. Add the actual check here.
Note that this code is an alternative to the default pool check, not an additional check, because:
1. swiotlb_find_pool() also searches the default pool;
2. if dma_uses_io_tlb is false, the default swiotlb pool is not used.
Tested in a KVM guest against a QEMU RAM-backed SATA disk over virtio and *not* using software IO TLB, this patch increases IOPS by approx 2% for 4-way parallel I/O.
The write memory barrier in swiotlb_dyn_alloc() is not needed, because a newly allocated pool must always be observed by swiotlb_find_slots() before an address from that pool is passed to is_swiotlb_buffer().
Correctness was verified using the following litmus test:
C swiotlb-new-pool
(*
 * Result: Never
 *
 * Check that a newly allocated pool is always visible when the
 * corresponding swiotlb buffer is visible.
 *)

{
	mem_pools = default;
}

P0(int **mem_pools, int *pool)
{
	/* add_mem_pool() */
	WRITE_ONCE(*pool, 999);
	rcu_assign_pointer(*mem_pools, pool);
}

P1(int **mem_pools, int *flag, int *buf)
{
	/* swiotlb_find_slots() */
	int *r0;
	int r1;

	rcu_read_lock();
	r0 = READ_ONCE(*mem_pools);
	r1 = READ_ONCE(*r0);
	rcu_read_unlock();

	if (r1) {
		WRITE_ONCE(*flag, 1);
		smp_mb();
	}

	/* device driver (presumed) */
	WRITE_ONCE(*buf, r1);
}

P2(int **mem_pools, int *flag, int *buf)
{
	/* device driver (presumed) */
	int r0 = READ_ONCE(*buf);

	/* is_swiotlb_buffer() */
	int r1;
	int *r2;
	int r3;

	smp_rmb();
	r1 = READ_ONCE(*flag);
	if (r1) {
		/* swiotlb_find_pool() */
		rcu_read_lock();
		r2 = READ_ONCE(*mem_pools);
		r3 = READ_ONCE(*r2);
		rcu_read_unlock();
	}
}

exists (2:r0<>0 /\ 2:r3=0) (* Not found. *)
Fixes: 1395706a1490 ("swiotlb: search the software IO TLB only if the device makes use of it")
Reported-by: Jonathan Corbet <corbet@lwn.net>
Closes: https://lore.kernel.org/linux-iommu/87a5uz3ob8.fsf@meer.lwn.net/
Signed-off-by: Petr Tesarik <petr@tesarici.cz>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
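The added check, sketched (simplified relative to the real is_swiotlb_buffer(), which also covers the !CONFIG_SWIOTLB_DYNAMIC layout): consult the per-device flag before walking any pool list.

    static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
    {
            /* Devices that never bounced skip the RCU pool walk. */
            return READ_ONCE(dev->dma_uses_io_tlb) &&
                   swiotlb_find_pool(dev, paddr);
    }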
|
Revision tags: v6.5.5, v6.5.4, v6.5.3 |
|
#
a6a24176 |
| 11-Sep-2023 |
Ross Lagerwall <ross.lagerwall@citrix.com> |
swiotlb: use the calculated number of areas
Commit 8ac04063354a ("swiotlb: reduce the number of areas to match actual memory pool size") calculated the reduced number of areas in swiotlb_init_remap() but didn't actually use the value. Replace usage of default_nareas accordingly.
Fixes: 8ac04063354a ("swiotlb: reduce the number of areas to match actual memory pool size")
Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
#
c900529f |
| 12-Sep-2023 |
Thomas Zimmermann <tzimmermann@suse.de> |
Merge drm/drm-fixes into drm-misc-fixes
Forwarding to v6.6-rc1.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
|
Revision tags: v6.5.2, v6.1.51, v6.5.1 |
|
#
1ac731c5 |
| 30-Aug-2023 |
Dmitry Torokhov <dmitry.torokhov@gmail.com> |
Merge branch 'next' into for-linus
Prepare input updates for 6.6 merge window.
|
Revision tags: v6.1.50 |
|
#
6c1b980a |
| 29-Aug-2023 |
Linus Torvalds <torvalds@linux-foundation.org> |
Merge tag 'dma-mapping-6.6-2023-08-29' of git://git.infradead.org/users/hch/dma-mapping
Pull dma-mapping updates from Christoph Hellwig:
- allow dynamic sizing of the swiotlb buffer, to cater for secure virtualization workloads that require all I/O to be bounce buffered (Petr Tesarik)
- move a declaration to a header (Arnd Bergmann)
- check for memory region overlap in dma-contiguous (Binglei Wang)
- remove the somewhat dangerous runtime swiotlb-xen enablement and unexport is_swiotlb_active (Christoph Hellwig, Juergen Gross)
- per-node CMA improvements (Yajun Deng)
* tag 'dma-mapping-6.6-2023-08-29' of git://git.infradead.org/users/hch/dma-mapping:
  swiotlb: optimize get_max_slots()
  swiotlb: move slot allocation explanation comment where it belongs
  swiotlb: search the software IO TLB only if the device makes use of it
  swiotlb: allocate a new memory pool when existing pools are full
  swiotlb: determine potential physical address limit
  swiotlb: if swiotlb is full, fall back to a transient memory pool
  swiotlb: add a flag whether SWIOTLB is allowed to grow
  swiotlb: separate memory pool data from other allocator data
  swiotlb: add documentation and rename swiotlb_do_find_slots()
  swiotlb: make io_tlb_default_mem local to swiotlb.c
  swiotlb: bail out of swiotlb_init_late() if swiotlb is already allocated
  dma-contiguous: check for memory region overlap
  dma-contiguous: support numa CMA for specified node
  dma-contiguous: support per-numa CMA for all architectures
  dma-mapping: move arch_dma_set_mask() declaration to header
  swiotlb: unexport is_swiotlb_active
  x86: always initialize xen-swiotlb when xen-pcifront is enabling
  xen/pci: add flag for PCI passthrough being possible
|
Revision tags: v6.5, v6.1.49, v6.1.48, v6.1.46, v6.1.45, v6.1.44 |
|
#
d069ed28 |
| 03-Aug-2023 |
Petr Tesarik <petr.tesarik.ext@huawei.com> |
swiotlb: optimize get_max_slots()
Use a simple logical shift and increment to calculate the number of slots taken by the DMA segment boundary.
At least GCC-13 is not able to optimize the expression, producing this horrible assembly code on x86:
	cmpq	$-1, %rcx
	je	.L364
	addq	$2048, %rcx
	shrq	$11, %rcx
	movq	%rcx, %r13
.L331:
	// rest of the function here...

	// after function epilogue and return:
.L364:
	movabsq	$9007199254740992, %r13
	jmp	.L331
After the optimization, the code looks more reasonable:
	shrq	$11, %r11
	leaq	1(%r11), %rbx
Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
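The optimized expression as a stand-alone sketch (IO_TLB_SHIFT as in the kernel): the slot count spanned by the segment boundary becomes a shift plus an increment, and boundary_mask == ~0UL now needs no special case; on x86-64 it yields 2^53, the movabsq constant seen above.

    #define IO_TLB_SHIFT 11

    static unsigned long get_max_slots(unsigned long boundary_mask)
    {
            return (boundary_mask >> IO_TLB_SHIFT) + 1;
    }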
|
Revision tags: v6.1.43 |
|
#
f94cb36e |
| 02-Aug-2023 |
Petr Tesarik <petr.tesarik.ext@huawei.com> |
swiotlb: move slot allocation explanation comment where it belongs
Move the comment down in front of the loop that actually sets the list member of struct io_tlb_slot to zero.
Fixes: 26a7e094783d ("swiotlb: refactor swiotlb_tbl_map_single")
Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
#
2612e3bb |
| 07-Aug-2023 |
Rodrigo Vivi <rodrigo.vivi@intel.com> |
Merge drm/drm-next into drm-intel-next
Catching-up with drm-next and drm-intel-gt-next. It will unblock a code refactor around the platform definitions (names vs acronyms).
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
|
#
9f771739 |
| 07-Aug-2023 |
Joonas Lahtinen <joonas.lahtinen@linux.intel.com> |
Merge drm/drm-next into drm-intel-gt-next
Need to pull in b3e4aae612ec ("drm/i915/hdcp: Modify hdcp_gsc_message msg sending mechanism") as a dependency for https://patchwork.freedesktop.org/series/121735/
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
|