Revision tags: v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3, v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7, v6.5.6, v6.5.5, v6.5.4, v6.5.3, v6.5.2, v6.1.51, v6.5.1, v6.1.50, v6.5, v6.1.49, v6.1.48, v6.1.46, v6.1.45, v6.1.44, v6.1.43, v6.1.42, v6.1.41, v6.1.40, v6.1.39, v6.1.38, v6.1.37, v6.1.36, v6.4, v6.1.35, v6.1.34, v6.1.33, v6.1.32, v6.1.31, v6.1.30, v6.1.29, v6.1.28 |
|
#
3006b15b |
| 10-May-2023 |
Jason Gunthorpe <jgg@nvidia.com> |
iommu: Add for_each_group_device()
Convenience macro to iterate over every struct group_device in the group.
Replace all open-coded list_for_each_entry() calls with this macro.
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/2-v5-1b99ae392328+44574-iommu_err_unwind_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
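For reference, such a convenience macro is a thin wrapper around list_for_each_entry(); a minimal sketch of its shape and use (field names assumed from context rather than checked against the tree):

    /* sketch: walk every struct group_device in an iommu_group */
    #define for_each_group_device(group, pos) \
            list_for_each_entry(pos, &(group)->devices, list)

    /* usage, with the group's mutex held by the caller */
    struct group_device *gdev;

    for_each_group_device(group, gdev)
            handle_device(gdev->dev);       /* handle_device() is hypothetical */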
|
Revision tags: v6.1.27, v6.1.26, v6.3, v6.1.25, v6.1.24, v6.1.23, v6.1.22, v6.1.21, v6.1.20, v6.1.19 |
|
#
74e491e5 |
| 11-Mar-2023 |
Lukas Wunner <lukas@wunner.de> |
PCI/DOE: Make mailbox creation API private
The PCI core has just been amended to create a pci_doe_mb struct for every DOE instance on device enumeration. CXL (the only in-tree DOE user so far) has been migrated to use those mailboxes instead of creating its own.
That leaves pcim_doe_create_mb() and pci_doe_for_each_off() without any callers, so drop them.
pci_doe_supports_prot() is now only used internally, so declare it static.
pci_doe_destroy_mb() is no longer used as callback for devm_add_action(), so refactor it to accept a struct pci_doe_mb pointer instead of a generic void pointer.
Because pci_doe_create_mb() is only called on device enumeration, i.e. before driver binding, the workqueue name never contains a driver name. So replace dev_driver_string() with dev_bus_name() when generating the workqueue name.
Tested-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Ming Li <ming4.li@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/64f614b6584982986c55d2c6229b4ee2b276dd59.1678543498.git.lukas@wunner.de
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
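The callback change follows a common devres refactoring pattern; roughly (a sketch of the shape, not the verbatim diff):

    /* before: devm_add_action() forces a generic void * argument */
    static void pci_doe_destroy_mb(void *mb)
    {
            struct pci_doe_mb *doe_mb = mb;

            /* flush and destroy the workqueue, release the mailbox */
    }

    /* after: called directly by the PCI core, so it takes the real type */
    static void pci_doe_destroy_mb(struct pci_doe_mb *doe_mb)
    {
            /* flush and destroy the workqueue, release the mailbox */
    }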
|
#
09cc9006 |
| 30-Mar-2023 |
Mika Westerberg <mika.westerberg@linux.intel.com> |
PCI: Introduce pci_dev_for_each_resource()
Instead of open-coding it everywhere, introduce a tiny helper that can be used to iterate over each resource of a PCI device, and convert the most obvious users to it.
While at it, drop a doubled empty line before pdev_sort_resources().
No functional changes intended.
Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20230330162434.35055-4-andriy.shevchenko@linux.intel.com
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Krzysztof Wilczyński <kw@linux.com>
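A hedged sketch of the helper's shape and a typical conversion (the in-tree macro may differ in detail):

    /* sketch: iterate over the fixed resource array of a pci_dev */
    #define pci_dev_for_each_resource(pdev, res, i)                       \
            for (i = 0;                                                   \
                 i < PCI_NUM_RESOURCES && ((res) = &(pdev)->resource[i]); \
                 i++)

    /* usage */
    struct resource *res;
    unsigned int i;

    pci_dev_for_each_resource(pdev, res, i) {
            if (resource_type(res) == IORESOURCE_MEM)
                    total += resource_size(res);
    }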
|
Revision tags: v6.1.18, v6.1.17, v6.1.16 |
|
#
596ff4a0 |
| 04-Mar-2023 |
Linus Torvalds <torvalds@linux-foundation.org> |
cpumask: re-introduce constant-sized cpumask optimizations
Commit aa47a7c215e7 ("lib/cpumask: deprecate nr_cpumask_bits") resulted in the cpumask operations potentially becoming hugely less efficient, because suddenly the cpumask was always considered to be variable-sized.
The optimization was then later added back in a limited form by commit 6f9c07be9d02 ("lib/cpumask: add FORCE_NR_CPUS config option"), but that FORCE_NR_CPUS option is not useful in a generic kernel and is more of a special case for embedded situations with fixed hardware.
Instead, just re-introduce the optimization, with some changes.
Instead of depending on CPUMASK_OFFSTACK being false, and then always using the full constant cpumask width, this introduces three different cpumask "sizes":
- the exact size (nr_cpumask_bits) remains identical to nr_cpu_ids.
This is used for situations where we should use the exact size.
- the "small" size (small_cpumask_bits) is the NR_CPUS constant if it fits in a single word and the bitmap operations thus end up able to trigger the "small_const_nbits()" optimizations.
This is used for the operations that have optimized single-word cases that get inlined, notably the bit find and scanning functions.
- the "large" size (large_cpumask_bits) is the NR_CPUS constant if it is an sufficiently small constant that makes simple "copy" and "clear" operations more efficient.
This is arbitrarily set at four words or less.
As an example of this situation, without this fixed-size optimization, cpumask_clear() will generate code like

    movl    nr_cpu_ids(%rip), %edx
    addq    $63, %rdx
    shrq    $3, %rdx
    andl    $-8, %edx
    callq   memset@PLT

on x86-64, because it would calculate the "exact" number of longwords that need to be cleared.
In contrast, with this patch, using a MAX_CPU of 64 (which is quite a reasonable value to use), the above becomes a single

    movq $0,cpumask

instruction instead, because instead of caring to figure out exactly how many CPUs the system has, it just knows that the cpumask will be a single word and can just clear it all.
Note that this does end up tightening the rules a bit from the original version in another way: operations that set bits in the cpumask are now limited to the actual nr_cpu_ids limit, whereas we used to do the nr_cpumask_bits thing almost everywhere in the cpumask code.
But if you just clear bits, or scan for bits, we can use the simpler compile-time constants.
In the process, remove 'cpumask_complement()' and 'for_each_cpu_not()' which were not useful, and which fundamentally have to be limited to 'nr_cpu_ids'. Better remove them now than have somebody introduce use of them later.
Of course, on x86-64 with MAXSMP there is no sane small compile-time constant for the cpumask sizes, and we end up using the actual CPU bits, and will generate the above kind of horrors regardless. Please don't use MAXSMP unless you really expect to have machines with thousands of cores.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
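Following the commit's description, the three sizes end up roughly like this (a sketch; the in-tree guards also interact with CONFIG_CPUMASK_OFFSTACK):

    #if NR_CPUS <= BITS_PER_LONG
      /* single word: copies, clears and scans all optimize fully */
      #define small_cpumask_bits ((unsigned int)NR_CPUS)
      #define large_cpumask_bits ((unsigned int)NR_CPUS)
    #elif NR_CPUS <= 4 * BITS_PER_LONG
      /* small constant: copy/clear stay simple, scans use the exact count */
      #define small_cpumask_bits nr_cpu_ids
      #define large_cpumask_bits ((unsigned int)NR_CPUS)
    #else
      /* MAXSMP-style configs: everything uses the runtime CPU count */
      #define small_cpumask_bits nr_cpu_ids
      #define large_cpumask_bits nr_cpu_ids
    #endif
    #define nr_cpumask_bits nr_cpu_ids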
|
Revision tags: v6.1.15, v6.1.14, v6.1.13, v6.2, v6.1.12, v6.1.11, v6.1.10, v6.1.9, v6.1.8, v6.1.7, v6.1.6, v6.1.5, v6.0.19, v6.0.18, v6.1.4, v6.1.3, v6.0.17, v6.1.2, v6.0.16, v6.1.1, v6.0.15, v6.0.14, v6.0.13, v6.1, v6.0.12, v6.0.11, v6.0.10, v5.15.80, v6.0.9, v5.15.79, v6.0.8, v5.15.78, v6.0.7, v5.15.77, v5.15.76, v6.0.6, v6.0.5, v5.15.75, v6.0.4, v6.0.3, v6.0.2, v5.15.74, v5.15.73, v6.0.1, v5.15.72, v6.0, v5.15.71, v5.15.70, v5.15.69, v5.15.68, v5.15.67, v5.15.66, v5.15.65, v5.15.64, v5.15.63, v5.15.62, v5.15.61, v5.15.60, v5.15.59, v5.19, v5.15.58, v5.15.57, v5.15.56, v5.15.55, v5.15.54, v5.15.53, v5.15.52, v5.15.51, v5.15.50, v5.15.49, v5.15.48, v5.15.47, v5.15.46, v5.15.45, v5.15.44, v5.15.43, v5.15.42, v5.18, v5.15.41, v5.15.40, v5.15.39, v5.15.38, v5.15.37, v5.15.36, v5.15.35, v5.15.34, v5.15.33, v5.15.32, v5.15.31, v5.17, v5.15.30, v5.15.29, v5.15.28, v5.15.27, v5.15.26, v5.15.25, v5.15.24, v5.15.23, v5.15.22, v5.15.21, v5.15.20, v5.15.19, v5.15.18, v5.15.17, v5.4.173, v5.15.16, v5.15.15, v5.16, v5.15.10, v5.15.9, v5.15.8, v5.15.7, v5.15.6, v5.15.5, v5.15.4, v5.15.3, v5.15.2, v5.15.1, v5.15, v5.14.14 |
|
#
837f92f0 |
| 17-Oct-2021 |
Jacopo Mondi <jacopo+renesas@jmondi.org> |
media: subdev: Add for_each_active_route() macro
Add a for_each_active_route() macro to replace the repeated pattern of iterating on the active routes of a routing table.
Signed-off-by: Jacopo Mondi <jacopo+renesas@jmondi.org>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@kernel.org>
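The macro wraps a helper that skips inactive entries; roughly (a sketch following the merged implementation, shown for illustration):

    /* sketch: visit only routes with V4L2_SUBDEV_ROUTE_FL_ACTIVE set */
    #define for_each_active_route(routing, route)                      \
            for ((route) = NULL;                                       \
                 ((route) = __v4l2_subdev_next_active_route((routing), \
                                                            (route)));)

    /* usage */
    struct v4l2_subdev_route *route;

    for_each_active_route(&state->routing, route)
            setup_route(route);     /* setup_route() is hypothetical */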
|
#
6c4afa79 |
| 16-Nov-2022 |
John Ogness <john.ogness@linutronix.de> |
printk: Prepare for SRCU console list protection
Provide an NMI-safe SRCU protected variant to walk the console list.
Note that all console fields are now set before adding the console to the list to avoid the console becoming visible to SRCU readers before being fully initialized.
This is a preparatory change for a new console infrastructure which operates independently of the console BKL.
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Ogness <john.ogness@linutronix.de>
Acked-by: Miguel Ojeda <ojeda@kernel.org>
Reviewed-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>
Link: https://lore.kernel.org/r/20221116162152.193147-4-john.ogness@linutronix.de
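The read-side pattern this enables looks roughly like the following (a sketch using the accessors the series introduces):

    /* sketch: NMI-safe walk of the console list under SRCU */
    struct console *con;
    int cookie;

    cookie = console_srcu_read_lock();
    for_each_console_srcu(con) {
            short flags = console_srcu_read_flags(con);

            if (flags & CON_ENABLED)
                    emit_to(con);   /* emit_to() is hypothetical */
    }
    console_srcu_read_unlock(cookie);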
|
#
c25b7a7a |
| 29-Nov-2022 |
Florian Westphal <fw@strlen.de> |
inet: ping: use hlist_nulls rcu iterator during lookup
ping_lookup() does not acquire the table spinlock, so iteration should use hlist_nulls_for_each_entry_rcu().
Spotted during code review.
Fixes: dbca1596bbb0 ("ping: convert to RCU lookups, get rid of rwlock")
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Link: https://lore.kernel.org/r/20221129140644.28525-1-fw@strlen.de
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
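Schematically, the fix swaps the plain walk for the RCU-safe nulls iterator (a sketch, not the exact hunk):

    /* sketch: lockless lookup must use the _rcu nulls variant */
    struct sock *sk;
    struct hlist_nulls_node *hnode;

    /* hslot: the hash bucket head; the matching helper is hypothetical */
    hlist_nulls_for_each_entry_rcu(sk, hnode, hslot, sk_nulls_node) {
            if (ping_sk_matches(sk))
                    return sk;
    }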
|
#
51fe6141 |
| 29-Nov-2022 |
Jason Gunthorpe <jgg@nvidia.com> |
iommufd: Data structure to provide IOVA to PFN mapping
This is the remainder of the IOAS data structure. Provide an object called an io_pagetable that is composed of iopt_areas pointing at iopt_pages, along with a list of iommu_domains that mirror the IOVA to PFN map.
At the top this is a simple interval tree of iopt_areas indicating the map of IOVA to iopt_pages. An xarray keeps track of a list of domains. Based on the attached domains there is a minimum alignment for areas (which may be smaller than PAGE_SIZE), an interval tree of reserved IOVA that can't be mapped, and an interval tree of allowed IOVA that can always be mapped.
The concept of an 'access' refers to something like a VFIO mdev that is accessing the IOVA and using a 'struct page *' for CPU based access.
Externally an API is provided that matches the requirements of the IOCTL interface for map/unmap and domain attachment.
The API provides a 'copy' primitive to establish a new IOVA map in a different IOAS from an existing mapping by re-using the iopt_pages. This is the basic mechanism to provide single pinning.
This is designed to support a pre-registration flow where userspace would set up a dummy IOAS with no domains, map in memory and then establish an access to pin all PFNs into the xarray.
Copy can then be used to create new IOVA mappings in a different IOAS, with iommu_domains attached. Upon copy the PFNs will be read out of the xarray and mapped into the iommu_domains, avoiding any pin_user_pages() overheads.
Link: https://lore.kernel.org/r/10-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Lixiao Yang <lixiao.yang@intel.com>
Tested-by: Matthew Rosato <mjrosato@linux.ibm.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
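An illustrative sketch of the layout described above (field set abridged, names per the description):

    struct io_pagetable {
            struct rw_semaphore iova_rwsem;
            struct rb_root_cached area_itree;     /* iopt_area: IOVA -> iopt_pages */
            struct rb_root_cached reserved_itree; /* IOVA that can't be mapped */
            struct rb_root_cached allowed_itree;  /* IOVA that is always mappable */
            struct xarray domains;                /* attached iommu_domains */
            unsigned long iova_alignment;         /* minimum alignment for areas */
    };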
|
#
f394576e |
| 29-Nov-2022 |
Jason Gunthorpe <jgg@nvidia.com> |
iommufd: PFN handling for iopt_pages
The top of the data structure provides an IO Address Space (IOAS) that is similar to a VFIO container. The IOAS allows map/unmap of memory into ranges of IOVA called iopt_areas. Multiple IOMMU domains (IO page tables) and in-kernel accesses (like VFIO mdevs) can be attached to the IOAS to access the PFNs that those IOVA areas cover.
The IO Address Space (IOAS) data structure is composed of:
- struct io_pagetable holding the IOVA map
- struct iopt_areas representing populated portions of IOVA
- struct iopt_pages representing the storage of PFNs
- struct iommu_domain representing each IO page table in the system IOMMU
- struct iopt_pages_access representing in-kernel accesses of PFNs (ie VFIO mdevs)
- struct xarray pinned_pfns holding a list of pages pinned by in-kernel accesses
This patch introduces the lowest part of the data structure - the movement of PFNs in a tiered storage scheme:
1) iopt_pages::pinned_pfns xarray
2) Multiple iommu_domains
3) The origin of the PFNs, i.e. the userspace pointer
PFNs have to be copied between all combinations of tiers, depending on the configuration.
The interface is an iterator called a 'pfn_reader', which determines in which tier each PFN is stored and loads it into a list of PFNs held in a struct pfn_batch.
Each step of the iterator will fill up the pfn_batch, then the caller can use the pfn_batch to send the PFNs to the required destination. Repeating this loop will read all the PFNs in an IOVA range.
The pfn_reader and pfn_batch also keep track of the pinned page accounting.
While PFNs are always stored and accessed as full PAGE_SIZE units the iommu_domain tier can store with a sub-page offset/length to support IOMMUs with a smaller IOPTE size than PAGE_SIZE.
Link: https://lore.kernel.org/r/8-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Lixiao Yang <lixiao.yang@intel.com>
Tested-by: Matthew Rosato <mjrosato@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
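The described iteration reads roughly like this (internal names, illustrative only):

    struct pfn_reader pfns;
    int rc;

    for (rc = pfn_reader_first(&pfns, pages, start_index, last_index);
         !rc && !pfn_reader_done(&pfns); rc = pfn_reader_next(&pfns)) {
            /* pfns.batch now holds a filled run of PFNs from one tier */
            rc = send_batch(&pfns.batch);   /* send_batch() is hypothetical */
            if (rc)
                    break;
    }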
|
#
5fe93786 |
| 29-Nov-2022 |
Jason Gunthorpe <jgg@nvidia.com> |
interval-tree: Add a utility to iterate over spans in an interval tree
The span iterator travels over the indexes of the interval_tree, not the nodes, and classifies spans of indexes as either 'used' or 'hole'.
'used' spans are fully covered by nodes in the tree and 'hole' spans have no node intersecting the span.
This is done greedily such that spans are maximally sized and every iteration step switches between used/hole.
As an example, a trivial allocator can be written as:

    for (interval_tree_span_iter_first(&span, itree, 0, ULONG_MAX);
         !interval_tree_span_iter_done(&span);
         interval_tree_span_iter_next(&span))
            if (span.is_hole &&
                span.last_hole - span.start_hole >= allocation_size - 1)
                    return span.start_hole;

with all the tricky boundary conditions handled by the library code.
The following iommufd patches have several algorithms for its overlapping node interval trees that are significantly simplified with this kind of iteration primitive. As it seems generally useful, put it into lib/.
Link: https://lore.kernel.org/r/3-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Lixiao Yang <lixiao.yang@intel.com>
Tested-by: Matthew Rosato <mjrosato@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
#
9d24322e |
| 19-Jul-2022 |
Jonathan Cameron <Jonathan.Cameron@huawei.com> |
PCI/DOE: Add DOE mailbox support functions
Introduced in PCIe r6.0, sec 6.30, DOE provides a config space based mailbox with standard protocol discovery. Each mailbox is accessed through a DOE Extended Capability.
Each DOE mailbox must support the DOE discovery protocol in addition to any number of additional protocols.
Define core PCIe functionality to manage a single PCIe DOE mailbox at a defined config space offset. Functionality includes iterating, creating, querying of supported protocols, and task submission. Destruction of the mailboxes is device managed.
Cc: "Li, Ming" <ming4.li@intel.com> Cc: Bjorn Helgaas <helgaas@kernel.org> Cc: Matthew Wilcox <willy@infradead.org> Acked-by: Bjorn Helgaas <helgaas@kernel.org> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Co-developed-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Ira Weiny <ira.weiny@intel.com> Link: https://lore.kernel.org/r/20220719205249.566684-4-ira.weiny@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
#
781121a7 |
| 06-May-2022 |
Brian Norris <briannorris@chromium.org> |
clang-format: Fix space after for_each macros
Set SpaceBeforeParens to ControlStatementsExceptForEachMacros to not add space between a for_each macro and the following parenthesis. This option is available since clang-format-11 [1] and is in line with the checkpatch.pl rules [2].
I found that this patch has also been sent by Brian Norris some weeks ago [3].
Link: https://clang.llvm.org/docs/ClangFormatStyleOptions.html [1]
Link: https://lore.kernel.org/r/8b6b252b-47a6-9d52-f0bd-10d3bc4ad244@digikod.net [2]
Link: https://lore.kernel.org/lkml/YmHuZjmP9MxkgJ0R@google.com/ [3]
Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: Tom Rix <trix@redhat.com>
Signed-off-by: Brian Norris <briannorris@chromium.org>
Co-developed-by: Mickaël Salaün <mic@digikod.net>
Signed-off-by: Mickaël Salaün <mic@digikod.net>
Link: https://lore.kernel.org/r/20220506160106.522341-6-mic@digikod.net
[Adjusted authorship as agreed]
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
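Illustratively, the option keeps the space after control-flow keywords but drops it for for_each macros:

    /* SpaceBeforeParens: ControlStatementsExceptForEachMacros */
    if (cond)                               /* space after the keyword */
            do_thing();

    list_for_each_entry(pos, &head, list)   /* no space before the paren */
            do_other_thing(pos);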
|
#
d7f66043 |
| 06-May-2022 |
Mickaël Salaün <mic@digikod.net> |
clang-format: Fix goto labels indentation
Thanks to IndentGotoLabels introduced with clang-format-10 [1], we can avoid indenting goto labels. This follows the current coding style and is in line with the checkpatch.pl rules [2].
Link: https://clang.llvm.org/docs/ClangFormatStyleOptions.html [1]
Link: https://lore.kernel.org/r/8b6b252b-47a6-9d52-f0bd-10d3bc4ad244@digikod.net [2]
Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: Tom Rix <trix@redhat.com>
Signed-off-by: Mickaël Salaün <mic@digikod.net>
Link: https://lore.kernel.org/r/20220506160106.522341-4-mic@digikod.net
[Updated header comment to >= 10]
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
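Illustratively, with IndentGotoLabels disabled the label stays at column 0, matching kernel style (helpers hypothetical):

    int example(void)
    {
            int ret;

            ret = claim_resource();
            if (ret)
                    return ret;

            ret = use_resource();
            if (ret)
                    goto out;

            ret = finish_work();
    out:                    /* label not indented */
            release_resource();
            return ret;
    }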
|
#
96232c7d |
| 06-May-2022 |
Mickaël Salaün <mic@digikod.net> |
clang-format: Update to clang-format >= 6
We get new interesting formatting with clang-format greater than or equal to 6, as stated in the removed comments. Miguel Ojeda suggested to even move the minimal clang-format version to 11, which is the minimum LLVM supported at the moment [1].
Automatically updated with:

    sed -i 's/^\(\s*\)#\(\S*\s\+\S*\) # Unknown to clang-format.*/\1\2/' .clang-format
Link: https://lore.kernel.org/r/CANiq72nLOfmEt-CZBmm2ouEB_x6Jm9ggDVFCVJxYxKw7O0LTzQ@mail.gmail.com [1]
Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: Tom Rix <trix@redhat.com>
Signed-off-by: Mickaël Salaün <mic@digikod.net>
Link: https://lore.kernel.org/r/20220506160106.522341-3-mic@digikod.net
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
|
#
49bb63a2 |
| 06-May-2022 |
Mickaël Salaün <mic@digikod.net> |
clang-format: Extend the for_each list with tools/
Add tools/ to the shell fragment generating the for_each list and update it. This is useful to format files in the tools directory (e.g. selftests) with the same coding style as the kernel.
Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: Tom Rix <trix@redhat.com>
Signed-off-by: Mickaël Salaün <mic@digikod.net>
Link: https://lore.kernel.org/r/20220506160106.522341-2-mic@digikod.net
[Reworded and rebased on top of previous commits]
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
|
#
72e14aa9 |
| 20-May-2022 |
Miguel Ojeda <ojeda@kernel.org> |
clang-format: Simplify command with `sort -u`
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
|
#
43120879 |
| 20-May-2022 |
Miguel Ojeda <ojeda@kernel.org> |
clang-format: Use POSIX locale for `sort`
This avoids differences when different people run the command, which is relevant for our use case, e.g.:
    $ LC_ALL=en_US.UTF-8 sort test
    ata_for_each_link
    __ata_qc_for_each
    ata_qc_for_each

    $ LC_ALL=C sort test
    __ata_qc_for_each
    ata_for_each_link
    ata_qc_for_each
Link: https://lore.kernel.org/lkml/CANiq72=7=ZpAObWRmposOmnyZ8XR_eNHCBtA3bu3fusmcPUwDA@mail.gmail.com/
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
|
#
88217894 |
| 20-May-2022 |
Miguel Ojeda <ojeda@kernel.org> |
clang-format: Update with v5.18-rc7's `for_each` macro list
Re-run the shell fragment that generated the original list.
This brings it up to date, so that the next patches that tweak it further are clearer about what they change.
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
|
#
ef8dd015 |
| 06-Dec-2021 |
Thomas Gleixner <tglx@linutronix.de> |
genirq/msi: Make interrupt allocation less convoluted
There is no real reason to do several loops over the MSI descriptors instead of just doing one loop. In case of an error everything is undone anyway so it does not matter whether it's a partial or a full rollback.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Tested-by: Nishanth Menon <nm@ti.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20211206210749.010234767@linutronix.de
|
Revision tags: v5.14.13, v5.14.12, v5.14.11, v5.14.10, v5.14.9, v5.14.8, v5.14.7, v5.14.6, v5.10.67, v5.10.66, v5.14.5, v5.14.4, v5.10.65, v5.14.3, v5.10.64, v5.14.2, v5.10.63, v5.14.1, v5.10.62, v5.14, v5.10.61, v5.10.60, v5.10.53, v5.10.52, v5.10.51, v5.10.50, v5.10.49, v5.13, v5.10.46, v5.10.43, v5.10.42, v5.10.41, v5.10.40, v5.10.39, v5.4.119 |
|
#
4792f9dd |
| 12-May-2021 |
Miguel Ojeda <ojeda@kernel.org> |
clang-format: Update with the latest for_each macro list
Re-run the shell fragment that generated the original list.
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
|
Revision tags: v5.10.36, v5.10.35, v5.10.34, v5.4.116, v5.10.33, v5.12, v5.10.32, v5.10.31, v5.10.30, v5.10.27, v5.10.26, v5.10.25, v5.10.24, v5.10.23, v5.10.22, v5.10.21, v5.10.20, v5.10.19, v5.4.101, v5.10.18, v5.10.17 |
|
#
583fa5e7 |
| 16-Feb-2021 |
Ben Widawsky <ben.widawsky@intel.com> |
cxl/mem: Add basic IOCTL interface
Add a straightforward IOCTL that provides a mechanism for userspace to query the supported memory device commands. CXL commands as they appear to userspace are described as part of the UAPI kerneldoc. The command list returned via this IOCTL will contain the full set of commands that the driver supports, however, some of those commands may not be available for use by userspace.
Memory device commands first appear in the CXL 2.0 specification. They are submitted through a mailbox mechanism specified in the CXL 2.0 specification.
The send command allows userspace to issue mailbox commands directly to the hardware. The list of available commands to send is the output of the query command. The driver verifies basic properties of the command and possibly inspects the input (or output) payload to determine whether or not the command is allowed (or might taint the kernel).
Reported-by: kernel test robot <lkp@intel.com> # bug in earlier revision
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com> (v2)
Cc: Al Viro <viro@zeniv.linux.org.uk>
Link: https://lore.kernel.org/r/20210217040958.1354670-5-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
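Userspace consumes the query roughly as follows (a sketch assuming the two-call convention of the merged UAPI, where n_commands == 0 asks for the count; error handling omitted):

    struct cxl_mem_query_commands *query;
    uint32_t n;

    query = calloc(1, sizeof(*query));
    ioctl(fd, CXL_MEM_QUERY_COMMANDS, query);       /* returns the count */

    n = query->n_commands;
    query = realloc(query, sizeof(*query) + n * sizeof(query->commands[0]));
    query->n_commands = n;
    ioctl(fd, CXL_MEM_QUERY_COMMANDS, query);       /* fills the array */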
|
Revision tags: v5.11, v5.10.16, v5.10.15, v5.10.14 |
|
#
1074f8ec |
| 29-Jan-2021 |
Miguel Ojeda <ojeda@kernel.org> |
clang-format: Update with the latest for_each macro list
Re-run the shell fragment that generated the original list.
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
|
Revision tags: v5.10, v5.8.17, v5.8.16, v5.8.15 |
|
#
cc6de168 |
| 13-Oct-2020 |
Mike Rapoport <rppt@linux.ibm.com> |
memblock: use separate iterators for memory and reserved regions
for_each_memblock() is used to iterate over memblock.memory in a few places that use data from memblock_region rather than the memory ranges.
Introduce separate for_each_mem_region() and for_each_reserved_mem_region() to improve encapsulation of memblock internals from its users.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Baoquan He <bhe@redhat.com>
Acked-by: Ingo Molnar <mingo@kernel.org> [x86]
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> [MIPS]
Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> [.clang-format]
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Emil Renner Berthing <kernel@esmil.dk>
Cc: Hari Bathini <hbathini@linux.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: https://lkml.kernel.org/r/20200818151634.14343-18-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
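The new iterators are plain pointer walks over the two region arrays; roughly (a sketch matching the described intent):

    /* sketch: encapsulated walk over memblock.memory */
    #define for_each_mem_region(region)                                    \
            for (region = memblock.memory.regions;                         \
                 region < (memblock.memory.regions + memblock.memory.cnt); \
                 region++)

    /* usage; for_each_reserved_mem_region() mirrors this over .reserved */
    struct memblock_region *r;

    for_each_mem_region(r)
            pr_debug("region: base %pa size %pa\n", &r->base, &r->size);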
|
#
9f3d5eaa |
| 13-Oct-2020 |
Mike Rapoport <rppt@linux.ibm.com> |
memblock: implement for_each_reserved_mem_region() using __next_mem_region()
Iteration over memblock.reserved with for_each_reserved_mem_region() used __next_reserved_mem_region() that implemented a subset of __next_mem_region().
Use __for_each_mem_range() and, essentially, __next_mem_region() with appropriate parameters to reduce code duplication.
While at it, rename for_each_reserved_mem_region() to for_each_reserved_mem_range() for consistency.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> [.clang-format]
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Emil Renner Berthing <kernel@esmil.dk>
Cc: Hari Bathini <hbathini@linux.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: https://lkml.kernel.org/r/20200818151634.14343-17-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
#
6e245ad4 |
| 13-Oct-2020 |
Mike Rapoport <rppt@linux.ibm.com> |
memblock: reduce number of parameters in for_each_mem_range()
Currently for_each_mem_range() and for_each_mem_range_rev() iterators are the most generic way to traverse memblock regions. As such, they have 8 parameters and they are hardly convenient to users. Most users choose to utilize one of their wrappers and the only user that actually needs most of the parameters is memblock itself.
To avoid yet another naming for memblock iterators, rename the existing for_each_mem_range[_rev]() to __for_each_mem_range[_rev]() and add new for_each_mem_range[_rev]() wrappers with only index, start and end parameters.
The new wrapper nicely fits into init_unavailable_mem() and will be used in upcoming changes to simplify memblock traversals.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> [MIPS]
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Emil Renner Berthing <kernel@esmil.dk>
Cc: Hari Bathini <hbathini@linux.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: https://lkml.kernel.org/r/20200818151634.14343-11-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
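With the change, a typical traversal needs only the reduced form; roughly (sketch):

    /* sketch: the slimmed-down wrapper in use */
    phys_addr_t start, end;
    u64 i;

    for_each_mem_range(i, &start, &end)
            pr_debug("range %llu: %pa..%pa\n",
                     (unsigned long long)i, &start, &end);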
|