History log of /openbmc/linux/drivers/of/of_reserved_mem.c (Results 1 – 25 of 108)
Revision    Date    Author    Comments
Revision tags: v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3, v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7, v6.5.6, v6.5.5, v6.5.4, v6.5.3, v6.5.2, v6.1.51, v6.5.1, v6.1.50, v6.5, v6.1.49, v6.1.48, v6.1.46, v6.1.45, v6.1.44, v6.1.43, v6.1.42, v6.1.41, v6.1.40, v6.1.39, v6.1.38, v6.1.37, v6.1.36, v6.4, v6.1.35
# 4cea2821 14-Jun-2023 Stephan Gerhold <stephan@gerhold.net>

of: reserved_mem: Use stable allocation order

sort() in Linux is based on heapsort which is not a stable sort
algorithm - equal elements are being reordered. For reserved memory in
the device tree this happens mainly for dynamic allocations: They do not
have an address to sort with, so they are reordered somewhat randomly
when adding/removing other unrelated reserved memory nodes.

Functionally this is not a big problem, but it's confusing during
development when all the addresses change after adding unrelated
reserved memory nodes.

Make the order stable by sorting dynamic allocations according to
the node order in the device tree. Static allocations are not affected
by this because they are still sorted by their (fixed) address.

Signed-off-by: Stephan Gerhold <stephan@gerhold.net>
Link: https://lore.kernel.org/r/20230510-dt-resv-bottom-up-v2-2-aeb2afc8ac25@gerhold.net
Signed-off-by: Rob Herring <robh@kernel.org>
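
For illustration, a minimal sketch of such a tie-breaking comparator (the name rmem_cmp_sketch is made up here; of_reserved_mem.c keeps its regions in a static reserved_mem[] table and sorts it with sort()): regions are still ordered by base address, and entries that compare equal fall back to the flat-DT node offset, so the unstable heapsort can no longer shuffle them.

#include <linux/of_reserved_mem.h>
#include <linux/sort.h>

/* Sketch: order by base address, then by flat-DT node offset, so that
 * dynamic allocations (no address yet) keep the device tree order. */
static int __init rmem_cmp_sketch(const void *a, const void *b)
{
        const struct reserved_mem *ra = a, *rb = b;

        if (ra->base != rb->base)
                return ra->base < rb->base ? -1 : 1;

        if (ra->fdt_node != rb->fdt_node)
                return ra->fdt_node < rb->fdt_node ? -1 : 1;

        return 0;
}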


# 83ba7361 14-Jun-2023 Stephan Gerhold <stephan@gerhold.net>

of: reserved_mem: Try to keep range allocations contiguous

Right now dynamic reserved memory regions are allocated either
bottom-up or top-down, depending on the memblock setting of the
architecture. This is fine when the address is arbitrary. However,
when using "alloc-ranges" the regions are often placed somewhere
in the middle of (free) RAM, even if the range starts or ends next
to another (static) reservation.

Try to detect this situation, and choose explicitly between bottom-up
or top-down to allocate the memory close to the other reservations:

1. If the "alloc-range" starts at the end or inside an existing
reservation, use bottom-up.
2. If the "alloc-range" ends at the start or inside an existing
reservation, use top-down.
3. If both or none is the case, keep the current
(architecture-specific) behavior.

There are plenty of edge cases where only a more complex algorithm
would help, but even this simple approach helps in many cases to keep
the reserved memory (and therefore also the free memory) contiguous.

Signed-off-by: Stephan Gerhold <stephan@gerhold.net>
Link: https://lore.kernel.org/r/20230510-dt-resv-bottom-up-v2-1-aeb2afc8ac25@gerhold.net
Signed-off-by: Rob Herring <robh@kernel.org>
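
A rough sketch of that decision, assuming the existing memblock helpers memblock_is_region_reserved() and memblock_set_bottom_up() are used for the checks (the helper name and the exact checks are illustrative; the real code also restores the original direction after the allocation):

#include <linux/memblock.h>
#include <linux/types.h>

/* Sketch: if the "alloc-ranges" window begins right after an existing
 * reservation, grow upward from it; if it ends right before one, grow
 * downward toward it; otherwise keep the architecture default. */
static void __init pick_alloc_direction(phys_addr_t start, phys_addr_t end)
{
        bool start_taken = start && memblock_is_region_reserved(start - 1, 1);
        bool end_taken = memblock_is_region_reserved(end, 1);

        if (start_taken && !end_taken)
                memblock_set_bottom_up(true);
        else if (end_taken && !start_taken)
                memblock_set_bottom_up(false);
}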


Revision tags: v6.1.34, v6.1.33, v6.1.32, v6.1.31, v6.1.30, v6.1.29, v6.1.28, v6.1.27, v6.1.26, v6.3, v6.1.25, v6.1.24, v6.1.23, v6.1.22, v6.1.21, v6.1.20, v6.1.19, v6.1.18, v6.1.17, v6.1.16, v6.1.15, v6.1.14, v6.1.13, v6.2
# 6ee7afba 16-Feb-2023 Geert Uytterhoeven <geert+renesas@glider.be>

of: reserved_mem: Use proper binary prefix

The printed reserved memory information uses the non-standard "K"
prefix, while all other printed values use proper binary prefixes.
Fix this by using "Ki" instead.

While at it, drop the superfluous spaces inside the parentheses, to
reduce printed line length.

Fixes: aeb9267eb6b1df99 ("of: reserved-mem: print out reserved-mem details during boot")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20230216083725.1244817-1-geert+renesas@glider.be
Signed-off-by: Rob Herring <robh@kernel.org>


Revision tags: v6.1.12
# aeb9267e 09-Feb-2023 Martin Liu <liumartin@google.com>

of: reserved-mem: print out reserved-mem details during boot

It's important to know reserved-mem information in the mobile world,
since reserved memory set up via the device tree keeps increasing on
these platforms (e.g., 45% on our platform). Therefore, it's crucial to
know the reserved memory size breakdown for memory accounting.

This patch prints out reserved memory details during boot to make
them visible.

Below is an example output:

[ 0.000000] OF: reserved mem: 0x00000009f9400000..0x00000009fb3fffff ( 32768 KB ) map reusable test1
[ 0.000000] OF: reserved mem: 0x00000000ffdf0000..0x00000000ffffffff ( 2112 KB ) map non-reusable test2
[ 0.000000] OF: reserved mem: 0x0000000091000000..0x00000000912fffff ( 3072 KB ) nomap non-reusable test3

Signed-off-by: Martin Liu <liumartin@google.com>
Link: https://lore.kernel.org/r/20230209160954.1471909-1-liumartin@google.com
Signed-off-by: Rob Herring <robh@kernel.org>
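
A rough sketch of a per-region print that produces lines of the shape above (the helper and its nomap/reusable parameters are illustrative; the real code derives those flags from the device tree properties):

#include <linux/of_reserved_mem.h>
#include <linux/printk.h>
#include <linux/sizes.h>
#include <linux/types.h>

static void __init print_reserved_region(const struct reserved_mem *rmem,
                                         bool nomap, bool reusable)
{
        phys_addr_t end = rmem->base + rmem->size - 1;

        pr_info("OF: reserved mem: %pa..%pa ( %lu KB ) %s %s %s\n",
                &rmem->base, &end,
                (unsigned long)(rmem->size / SZ_1K),
                nomap ? "nomap" : "map",
                reusable ? "reusable" : "non-reusable",
                rmem->name ? rmem->name : "unknown");
}

The "( ... KB )" spacing and unit here match this commit's example output; the 6ee7afba entry above later switches the prefix to "Ki" and drops the extra spaces.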


Revision tags: v6.1.11
# ce4d9a1e 08-Feb-2023 Isaac J. Manjarres <isaacmanjarres@google.com>

of: reserved_mem: Have kmemleak ignore dynamically allocated reserved mem

Patch series "Fix kmemleak crashes when scanning CMA regions", v2.

When trying to boot a device with an ARM64 kernel with t

of: reserved_mem: Have kmemleak ignore dynamically allocated reserved mem

Patch series "Fix kmemleak crashes when scanning CMA regions", v2.

When trying to boot a device with an ARM64 kernel with the following
config options enabled:

CONFIG_DEBUG_PAGEALLOC=y
CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT=y
CONFIG_DEBUG_KMEMLEAK=y

a crash is encountered when kmemleak starts to scan the list of gray
or allocated objects that it maintains. Upon closer inspection, it was
observed that these page-faults always occurred when kmemleak attempted
to scan a CMA region.

At the moment, kmemleak is made aware of CMA regions that are specified
through the devicetree to be dynamically allocated within a range of
addresses. However, kmemleak should not need to scan CMA regions or any
reserved memory region, as those regions can be used for DMA transfers
between drivers and peripherals, and thus wouldn't contain anything
useful for kmemleak.

Additionally, since CMA regions are unmapped from the kernel's address
space when they are freed to the buddy allocator at boot when
CONFIG_DEBUG_PAGEALLOC is enabled, kmemleak shouldn't attempt to access
those memory regions, as that will trigger a crash. Thus, kmemleak
should ignore all dynamically allocated reserved memory regions.


This patch (of 1):

Currently, kmemleak ignores dynamically allocated reserved memory regions
that don't have a kernel mapping. However, regions that do retain a
kernel mapping (e.g. CMA regions) do get scanned by kmemleak.

This is not ideal for two reasons:

1. kmemleak works by scanning memory regions for pointers to allocated
objects to determine if those objects have been leaked or not.
However, reserved memory regions can be used between drivers and
peripherals for DMA transfers, and thus, would not contain pointers to
allocated objects, making it unnecessary for kmemleak to scan these
reserved memory regions.

2. When CONFIG_DEBUG_PAGEALLOC is enabled, along with kmemleak, the
CMA reserved memory regions are unmapped from the kernel's address
space when they are freed to buddy at boot. These CMA reserved regions
are still tracked by kmemleak, however, and when kmemleak attempts to
scan them, a crash will happen, as accessing the CMA region will result
in a page-fault, since the regions are unmapped.

Thus, use kmemleak_ignore_phys() for all dynamically allocated reserved
memory regions, instead of those that do not have a kernel mapping
associated with them.

Link: https://lkml.kernel.org/r/20230208232001.2052777-1-isaacmanjarres@google.com
Link: https://lkml.kernel.org/r/20230208232001.2052777-2-isaacmanjarres@google.com
Fixes: a7259df76702 ("memblock: make memblock_find_in_range method private")
Signed-off-by: Isaac J. Manjarres <isaacmanjarres@google.com>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Frank Rowand <frowand.list@gmail.com>
Cc: Kirill A. Shutemov <kirill.shtuemov@linux.intel.com>
Cc: Nick Kossifidis <mick@ics.forth.gr>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Cc: Saravana Kannan <saravanak@google.com>
Cc: <stable@vger.kernel.org> [5.15+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
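
A minimal sketch of the resulting allocation path, assuming the usual memblock_phys_alloc_range() allocator (the helper name is illustrative):

#include <linux/errno.h>
#include <linux/kmemleak.h>
#include <linux/memblock.h>

static int __init alloc_reserved_region(phys_addr_t size, phys_addr_t align,
                                        phys_addr_t start, phys_addr_t end,
                                        phys_addr_t *res_base)
{
        phys_addr_t base = memblock_phys_alloc_range(size, align, start, end);

        if (!base)
                return -ENOMEM;

        /* Reserved regions never hold kernel pointers and may be unmapped
         * later, so tell kmemleak to skip them unconditionally. */
        kmemleak_ignore_phys(base);

        *res_base = base;
        return 0;
}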


Revision tags: v6.1.10, v6.1.9, v6.1.8, v6.1.7, v6.1.6, v6.1.5, v6.0.19, v6.0.18, v6.1.4, v6.1.3, v6.0.17, v6.1.2, v6.0.16, v6.1.1, v6.0.15, v6.0.14, v6.0.13, v6.1, v6.0.12, v6.0.11, v6.0.10, v5.15.80, v6.0.9, v5.15.79, v6.0.8, v5.15.78, v6.0.7, v5.15.77, v5.15.76, v6.0.6, v6.0.5, v5.15.75, v6.0.4, v6.0.3, v6.0.2, v5.15.74, v5.15.73, v6.0.1, v5.15.72, v6.0, v5.15.71, v5.15.70, v5.15.69, v5.15.68, v5.15.67, v5.15.66, v5.15.65, v5.15.64, v5.15.63, v5.15.62, v5.15.61, v5.15.60, v5.15.59, v5.19, v5.15.58, v5.15.57, v5.15.56, v5.15.55, v5.15.54, v5.15.53, v5.15.52, v5.15.51
# 6991cd74 28-Jun-2022 Vincent Whitchurch <vincent.whitchurch@axis.com>

of: reserved-memory: Print allocation/reservation failures as error

If the allocation/reservation of reserved-memory fails, it is normally
an error, so print it as an error so that it doesn't get hidden from the
console due to the loglevel. Also make the allocation failure include
the size just like the reservation failure.

Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20220628113540.2790835-1-vincent.whitchurch@axis.com


Revision tags: v5.15.50, v5.15.49, v5.15.48, v5.15.47, v5.15.46, v5.15.45, v5.15.44, v5.15.43, v5.15.42, v5.18, v5.15.41, v5.15.40, v5.15.39, v5.15.38, v5.15.37, v5.15.36, v5.15.35, v5.15.34, v5.15.33, v5.15.32, v5.15.31
# e16faf26 22-Mar-2022 David Hildenbrand <david@redhat.com>

cma: factor out minimum alignment requirement

Patch series "mm: enforce pageblock_order < MAX_ORDER".

Having pageblock_order >= MAX_ORDER seems to be able to happen in corner
cases and some parts o

cma: factor out minimum alignment requirement

Patch series "mm: enforce pageblock_order < MAX_ORDER".

Having pageblock_order >= MAX_ORDER can happen in corner cases, and some
parts of the kernel are not prepared for it.

For example, Aneesh has shown [1] that such kernels can be compiled on
ppc64 with 64k base pages by setting FORCE_MAX_ZONEORDER=8, which will
run into a WARN_ON_ONCE(order >= MAX_ORDER) in compaction code right
during boot.

We can get pageblock_order >= MAX_ORDER when the default hugetlb size is
bigger than the maximum allocation granularity of the buddy, in which
case we are no longer talking about huge pages but instead gigantic
pages.

Having pageblock_order >= MAX_ORDER can only make alloc_contig_range()
of such gigantic pages more likely to succeed.

Reliable use of gigantic pages either requires boot time allocation or
CMA, no need to overcomplicate some places in the kernel to optimize for
corner cases that are broken in other areas of the kernel.

This patch (of 2):

Let's enforce pageblock_order < MAX_ORDER and simplify.

Especially patch #1 can be regarded as a cleanup before:
[PATCH v5 0/6] Use pageblock_order for cma and alloc_contig_range
alignment. [2]

[1] https://lkml.kernel.org/r/87r189a2ks.fsf@linux.ibm.com
[2] https://lkml.kernel.org/r/20220211164135.1803616-1-zi.yan@sent.com

Link: https://lkml.kernel.org/r/20220214174132.219303-2-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Acked-by: Rob Herring <robh@kernel.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Frank Rowand <frowand.list@gmail.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: John Garry via iommu <iommu@lists.linux-foundation.org>

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
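
As a sketch of what the factored-out minimum buys of_reserved_mem.c (CMA_MIN_ALIGNMENT_BYTES is the constant this series introduces for it; treat the exact spelling here as an assumption), a CMA-destined region can be validated against a single shared alignment instead of open-coding the pageblock/MAX_ORDER arithmetic at every call site:

#include <linux/cma.h>
#include <linux/errno.h>
#include <linux/kernel.h>

/* Sketch: reject a reserved-memory region for CMA use unless both its
 * base and size honour the minimum CMA alignment. */
static int __init cma_region_aligned(phys_addr_t base, phys_addr_t size)
{
        if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
                return -EINVAL;

        return 0;
}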


Revision tags: v5.17, v5.15.30, v5.15.29, v5.15.28, v5.15.27, v5.15.26, v5.15.25, v5.15.24, v5.15.23, v5.15.22, v5.15.21, v5.15.20, v5.15.19, v5.15.18, v5.15.17, v5.4.173, v5.15.16, v5.15.15, v5.16, v5.15.10, v5.15.9, v5.15.8, v5.15.7, v5.15.6, v5.15.5, v5.15.4, v5.15.3, v5.15.2, v5.15.1
# 3ecc6834 05-Nov-2021 Mike Rapoport <rppt@linux.ibm.com>

memblock: rename memblock_free to memblock_phys_free

Since memblock_free() operates on a physical range, make its name
reflect it and rename it to memblock_phys_free(), so it will be a
logical counterpart to memblock_phys_alloc().

The callers are updated with the below semantic patch:

@@
expression addr;
expression size;
@@
- memblock_free(addr, size);
+ memblock_phys_free(addr, size);

Link: https://lkml.kernel.org/r/20210930185031.18648-6-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Juergen Gross <jgross@suse.com>
Cc: Shahab Vahedi <Shahab.Vahedi@synopsys.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


Revision tags: v5.15
# 658aafc8 21-Oct-2021 Mike Rapoport <rppt@linux.ibm.com>

memblock: exclude MEMBLOCK_NOMAP regions from kmemleak

Vladimir Zapolskiy reports:

Commit a7259df76702 ("memblock: make memblock_find_in_range method
private") invokes a kernel panic while running kmemleak on OF platforms
with nomap regions:

Unable to handle kernel paging request at virtual address fff000021e00000
[...]
scan_block+0x64/0x170
scan_gray_list+0xe8/0x17c
kmemleak_scan+0x270/0x514
kmemleak_write+0x34c/0x4ac

The memory allocated from memblock is registered with kmemleak, but if
it is marked MEMBLOCK_NOMAP it won't have linear map entries so an
attempt to scan such areas will fault.

Ideally, memblock_mark_nomap() would inform kmemleak to ignore
MEMBLOCK_NOMAP memory, but it can be called before kmemleak interfaces
operating on physical addresses can use __va() conversion.

Make sure that functions that mark allocated memory as MEMBLOCK_NOMAP
take care of informing kmemleak to ignore such memory.

Link: https://lore.kernel.org/all/8ade5174-b143-d621-8c8e-dc6a1898c6fb@linaro.org
Link: https://lore.kernel.org/all/c30ff0a2-d196-c50d-22f0-bd50696b1205@quicinc.com
Fixes: a7259df76702 ("memblock: make memblock_find_in_range method private")
Reported-by: Vladimir Zapolskiy <vladimir.zapolskiy@linaro.org>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Vladimir Zapolskiy <vladimir.zapolskiy@linaro.org>
Tested-by: Qian Cai <quic_qiancai@quicinc.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


Revision tags: v5.14.14, v5.14.13, v5.14.12, v5.14.11, v5.14.10, v5.14.9, v5.14.8, v5.14.7, v5.14.6, v5.10.67, v5.10.66, v5.14.5, v5.14.4, v5.10.65, v5.14.3, v5.10.64, v5.14.2, v5.10.63, v5.14.1, v5.10.62
# a7259df7 02-Sep-2021 Mike Rapoport <rppt@linux.ibm.com>

memblock: make memblock_find_in_range method private

There are a lot of uses of memblock_find_in_range() along with
memblock_reserve() from the times memblock allocation APIs did not exist.

memblock_find_in_range() is the very core of memblock allocations, so any
future changes to its internal behaviour would mandate updates of all the
users outside memblock.

Replace the calls to memblock_find_in_range() with equivalent calls to
memblock_phys_alloc() and memblock_phys_alloc_range(), and make
memblock_find_in_range() a private method of memblock.

This simplifies the callers, ensures that (unlikely) errors in
memblock_reserve() are handled and improves maintainability of
memblock_find_in_range().

Link: https://lkml.kernel.org/r/20210816122622.30279-1-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> [arm64]
Acked-by: Kirill A. Shutemov <kirill.shtuemov@linux.intel.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> [ACPI]
Acked-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Acked-by: Nick Kossifidis <mick@ics.forth.gr> [riscv]
Tested-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
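
The shape of the conversion, as a sketch (grab_region() is a made-up name):

#include <linux/memblock.h>

static phys_addr_t __init grab_region(phys_addr_t size, phys_addr_t align,
                                      phys_addr_t start, phys_addr_t end)
{
        /*
         * Old pattern, now private to memblock:
         *
         *      base = memblock_find_in_range(start, end, size, align);
         *      if (base)
         *              memblock_reserve(base, size);
         *
         * New pattern: one helper that both finds and reserves,
         * returning 0 on failure.
         */
        return memblock_phys_alloc_range(size, align, start, end);
}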


Revision tags: v5.14, v5.10.61, v5.10.60, v5.10.53, v5.10.52, v5.10.51, v5.10.50, v5.10.49, v5.13, v5.10.46
# 7b25995f 11-Jun-2021 Dong Aisheng <aisheng.dong@nxp.com>

of: of_reserved_mem: mark nomap memory instead of removing

Since commit 86588296acbf ("fdt: Properly handle "no-map" field in the memory region"),
nomap memory is changed to call memblock_mark_nomap() instead of
memblock_remove(). But it only changed the reserved memory with fixed
addr and size case in early_init_dt_reserve_memory_arch(), not
including the dynamic allocation-by-size case in
early_init_dt_alloc_reserved_memory_arch().

Cc: Rob Herring <robh+dt@kernel.org>
Cc: devicetree@vger.kernel.org
Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com>
Reviewed-by: Quentin Perret <qperret@google.com>
Link: https://lore.kernel.org/r/20210611131153.3731147-2-aisheng.dong@nxp.com
Signed-off-by: Rob Herring <robh@kernel.org>
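
In sketch form (the wrapper is illustrative, not an upstream function), the dynamic-allocation path changes like this:

#include <linux/memblock.h>

static void __init handle_dynamic_nomap(phys_addr_t base, phys_addr_t size)
{
        /* Previously: memblock_remove(base, size); the region vanished
         * from memblock entirely. */

        /* Now: keep the region but mark it so it gets no linear mapping,
         * matching what the fixed-address case already does. */
        memblock_mark_nomap(base, size);
}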


# 3c6867a1 11-Jun-2021 Dong Aisheng <aisheng.dong@nxp.com>

of: of_reserved_mem: only call memblock_free for normal reserved memory

For the nomap case, the memory block is removed by memblock_remove() in
early_init_dt_alloc_reserved_memory_arch(), so it's meaningless to call
memblock_free() on the error path.

Cc: Rob Herring <robh+dt@kernel.org>
Cc: devicetree@vger.kernel.org
Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com>
Link: https://lore.kernel.org/r/20210611131153.3731147-1-aisheng.dong@nxp.com
Signed-off-by: Rob Herring <robh@kernel.org>


# 2892d8a0 16-Jun-2021 Geert Uytterhoeven <geert+renesas@glider.be>

of: Fix truncation of memory sizes on 32-bit platforms

Variable "size" has type "phys_addr_t", which can be either 32-bit or
64-bit on 32-bit systems, while "unsigned long" is always 32-bit on
32-bi

of: Fix truncation of memory sizes on 32-bit platforms

Variable "size" has type "phys_addr_t", which can be either 32-bit or
64-bit on 32-bit systems, while "unsigned long" is always 32-bit on
32-bit systems. Hence the cast in

(unsigned long)size / SZ_1M

may truncate a 64-bit size to 32-bit, as casts have a higher operator
precedence than divisions.

Fix this by inverting the order of the cast and division, which should
be safe for memory blocks smaller than 4 PiB. Note that the division is
actually a shift, as SZ_1M is a power-of-two constant, hence there is no
need to use div_u64().

While at it, use "%lu" to format "unsigned long".

Fixes: e8d9d1f5485b52ec ("drivers: of: add initialization code for static reserved memory")
Fixes: 3f0c8206644836e4 ("drivers: of: add initialization code for dynamic reserved memory")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Link: https://lore.kernel.org/r/4a1117e72d13d26126f57be034c20dac02f1e915.1623835273.git.geert+renesas@glider.be
Signed-off-by: Rob Herring <robh@kernel.org>
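
The two formulations side by side, as a sketch (helper name illustrative):

#include <linux/printk.h>
#include <linux/sizes.h>
#include <linux/types.h>

static void __init report_region_size(phys_addr_t size)
{
        /* Buggy: the cast binds tighter than '/', so a >4 GiB size is
         * truncated to 32 bits before the division on 32-bit systems:
         *
         *      pr_info("size %lu MiB\n", (unsigned long)size / SZ_1M);
         */

        /* Fixed: divide first (a shift, since SZ_1M is a power of two),
         * then cast the now-small quotient for printing with %lu. */
        pr_info("size %lu MiB\n", (unsigned long)(size / SZ_1M));
}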


Revision tags: v5.10.43, v5.10.42, v5.10.41
# 12d55d3b 27-May-2021 Rob Herring <robh@kernel.org>

of: Move reserved memory private function declarations

fdt_init_reserved_mem() and fdt_reserved_mem_save_node() are private to
the DT code, so move their declarations to of_private.h. There's no need
for the dummy functions as CONFIG_OF_RESERVED_MEM is always enabled for
CONFIG_OF_EARLY_FLATTREE.

Cc: Frank Rowand <frowand.list@gmail.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20210527193841.1284169-1-robh@kernel.org


Revision tags: v5.10.40, v5.10.39, v5.4.119, v5.10.36, v5.10.35, v5.10.34, v5.4.116, v5.10.33, v5.12, v5.10.32, v5.10.31, v5.10.30, v5.10.27, v5.10.26, v5.10.25
# ad1ce1ab 18-Mar-2021 Lee Jones <lee.jones@linaro.org>

of: of_reserved_mem: Demote kernel-doc abuses

Fixes the following W=1 kernel build warning(s):

drivers/of/of_reserved_mem.c:53: warning: Function parameter or member 'node' not described in 'fdt_reserved_mem_save_node'
drivers/of/of_reserved_mem.c:53: warning: Function parameter or member 'uname' not described in 'fdt_reserved_mem_save_node'
drivers/of/of_reserved_mem.c:53: warning: Function parameter or member 'base' not described in 'fdt_reserved_mem_save_node'
drivers/of/of_reserved_mem.c:53: warning: Function parameter or member 'size' not described in 'fdt_reserved_mem_save_node'
drivers/of/of_reserved_mem.c:76: warning: Function parameter or member 'node' not described in '__reserved_mem_alloc_size'
drivers/of/of_reserved_mem.c:76: warning: Function parameter or member 'uname' not described in '__reserved_mem_alloc_size'
drivers/of/of_reserved_mem.c:76: warning: Function parameter or member 'res_base' not described in '__reserved_mem_alloc_size'
drivers/of/of_reserved_mem.c:76: warning: Function parameter or member 'res_size' not described in '__reserved_mem_alloc_size'
drivers/of/of_reserved_mem.c:171: warning: Function parameter or member 'rmem' not described in '__reserved_mem_init_node'

Cc: Rob Herring <robh+dt@kernel.org>
Cc: Frank Rowand <frowand.list@gmail.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Josh Cartwright <joshc@codeaurora.org>
Cc: devicetree@vger.kernel.org
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20210318104036.3175910-11-lee.jones@linaro.org
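
The demotion itself is a one-character change per comment; roughly (comment text abridged and illustrative):

Before (kernel-doc marker, no parameter descriptions, hence the W=1 warnings):

/**
 * fdt_reserved_mem_save_node() - save fdt node for second pass
 */

After (demoted to a plain block comment with the same text, which the kernel-doc checks no longer parse):

/*
 * fdt_reserved_mem_save_node() - save fdt node for second pass
 */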


# f929d21a 16-Jun-2021 Geert Uytterhoeven <geert+renesas@glider.be>

of: Fix truncation of memory sizes on 32-bit platforms

[ Upstream commit 2892d8a00d23d511a0591ac4b2ff3f050ae1f004 ]

Variable "size" has type "phys_addr_t", which can be either 32-bit or
64-bit on 32-bit systems, while "unsigned long" is always 32-bit on
32-bit systems. Hence the cast in

(unsigned long)size / SZ_1M

may truncate a 64-bit size to 32-bit, as casts have a higher operator
precedence than divisions.

Fix this by inverting the order of the cast and division, which should
be safe for memory blocks smaller than 4 PiB. Note that the division is
actually a shift, as SZ_1M is a power-of-two constant, hence there is no
need to use div_u64().

While at it, use "%lu" to format "unsigned long".

Fixes: e8d9d1f5485b52ec ("drivers: of: add initialization code for static reserved memory")
Fixes: 3f0c8206644836e4 ("drivers: of: add initialization code for dynamic reserved memory")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Link: https://lore.kernel.org/r/4a1117e72d13d26126f57be034c20dac02f1e915.1623835273.git.geert+renesas@glider.be
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>


Revision tags: v5.10.24, v5.10.23, v5.10.22, v5.10.21, v5.10.20, v5.10.19, v5.4.101, v5.10.18, v5.10.17, v5.11, v5.10.16, v5.10.15, v5.10.14, v5.10, v5.8.17
# ca05f333 21-Oct-2020 Vincent Whitchurch <vincent.whitchurch@axis.com>

of: Fix reserved-memory overlap detection

The reserved-memory overlap detection code fails to detect overlaps if
either of the regions starts at address 0x0. The code explicitly checks
for and ignores such regions, apparently in order to ignore dynamically
allocated regions which have an address of 0x0 at this point. These
dynamically allocated regions also have a size of 0x0 at this point, so
fix this by removing the check and sorting the dynamically allocated
regions ahead of any static regions at address 0x0.

For example, there are two overlaps in this case but they are not
currently reported:

foo@0 {
	reg = <0x0 0x2000>;
};

bar@0 {
	reg = <0x0 0x1000>;
};

baz@1000 {
	reg = <0x1000 0x1000>;
};

quux {
	size = <0x1000>;
};

but they are after this patch:

OF: reserved mem: OVERLAP DETECTED!
bar@0 (0x00000000--0x00001000) overlaps with foo@0 (0x00000000--0x00002000)
OF: reserved mem: OVERLAP DETECTED!
foo@0 (0x00000000--0x00002000) overlaps with baz@1000 (0x00001000--0x00002000)

Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com>
Link: https://lore.kernel.org/r/ded6fd6b47b58741aabdcc6967f73eca6a3f311e.1603273666.git-series.vincent.whitchurch@axis.com
Signed-off-by: Rob Herring <robh@kernel.org>
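
For reference, a sketch of the overlap walk this feeds (close to, but not necessarily identical to, the upstream helper): once the array is sorted by base address, with the size-0 dynamic entries ordered ahead of static regions at 0x0, two neighbours overlap exactly when the first ends past the start of the second.

#include <linux/of_reserved_mem.h>
#include <linux/printk.h>

static void __init check_overlaps(struct reserved_mem *rmem, int count)
{
        int i;

        for (i = 0; i < count - 1; i++) {
                struct reserved_mem *this = &rmem[i];
                struct reserved_mem *next = &rmem[i + 1];

                if (this->base + this->size > next->base)
                        pr_err("OF: reserved mem: OVERLAP DETECTED! %s overlaps with %s\n",
                               this->name, next->name);
        }
}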


# 33def849 21-Oct-2020 Joe Perches <joe@perches.com>

treewide: Convert macro and uses of __section(foo) to __section("foo")

Use a more generic form for __section that requires quotes to avoid
complications with clang and gcc differences.

Remove the quote operator # from compiler_attributes.h __section macro.

Convert all unquoted __section(foo) uses to quoted __section("foo").
Also convert __attribute__((section("foo"))) uses to __section("foo")
even if the __attribute__ has multiple list entry forms.

Conversion done using the script at:

https://lore.kernel.org/lkml/75393e5ddc272dc7403de74d645e6c6e0f4e70eb.camel@perches.com/2-convert_section.pl

Signed-off-by: Joe Perches <joe@perches.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@gooogle.com>
Reviewed-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
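
The shape of the conversion (variable and section names are illustrative):

#include <linux/compiler.h>

/* Before this change the macro stringified its argument, so callers wrote
 * the section name bare:
 *
 *      static int example_flag __section(.init.data);
 *
 * After the treewide conversion the name is a string literal: */
static int example_flag __section(".init.data");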


Revision tags: v5.8.16, v5.8.15, v5.9, v5.8.14, v5.8.13, v5.8.12, v5.8.11, v5.8.10, v5.8.9, v5.8.8, v5.8.7, v5.8.6, v5.4.62, v5.8.5, v5.8.4, v5.4.61, v5.8.3, v5.4.60, v5.8.2, v5.4.59, v5.8.1, v5.4.58, v5.4.57, v5.4.56, v5.8, v5.7.12, v5.4.55
# 6f1188b4 30-Jul-2020 Yue Hu <huyue2@yulong.com>

of: reserved-memory: remove duplicated call to of_get_flat_dt_prop() for no-map node

Just use nomap instead of a second call to of_get_flat_dt_prop(), and
change nomap to a bool type, since it comes from a != NULL comparison.
Also correct the comment that refers to the node's 'align' property,
which is actually named 'alignment'.

Signed-off-by: Yue Hu <huyue2@yulong.com>
Link: https://lore.kernel.org/r/20200730092353.15644-1-zbestahu@gmail.com
Signed-off-by: Rob Herring <robh@kernel.org>


Revision tags: v5.7.11, v5.4.54, v5.7.10, v5.4.53, v5.4.52, v5.7.9, v5.7.8, v5.4.51, v5.4.50, v5.7.7, v5.4.49, v5.7.6, v5.7.5, v5.4.48, v5.7.4, v5.7.3, v5.4.47, v5.4.46, v5.7.2, v5.4.45, v5.7.1
# 418370ff 04-Jun-2020 Danny Lin <danny@kdrag0n.dev>

of: reserved_mem: Fix typo in the too-many-regions message

Minor fix for a missing preposition in the error message that appears
when there are too many reserved memory regions for the allocated array
to store.

Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Link: https://lore.kernel.org/r/20200604054900.200317-1-danny@kdrag0n.dev
Signed-off-by: Rob Herring <robh@kernel.org>


Revision tags: v5.4.44, v5.7, v5.4.43, v5.4.42, v5.4.41
# c8813f7e 11-May-2020 chenqiwu <chenqiwu@xiaomi.com>

drivers/of: keep description of function consistent with function name

Currently, some function descriptions are not consistent with the
function names; fixing them makes the code more readable.

Signed-off-by: chenqiwu <chenqiwu@xiaomi.com>
Signed-off-by: Rob Herring <robh@kernel.org>


Revision tags: v5.4.40, v5.4.39, v5.4.38, v5.4.37, v5.4.36, v5.4.35, v5.4.34, v5.4.33, v5.4.32, v5.4.31
# 081df76a 03-Apr-2020 Thierry Reding <treding@nvidia.com>

of: reserved-memory: Support multiple regions per device

While the lookup/initialization code already supports multiple memory
regions per device, the release code will only ever release the first
matching memory region.

Enhance the code to release all matching regions. Each attachment of
a region to a device is uniquely identifiable using a struct device
pointer and a pointer to the memory region's struct reserved_mem.

Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Thierry Reding <treding@nvidia.com>


# 0da0e316 03-Apr-2020 Thierry Reding <treding@nvidia.com>

of: reserved-memory: Support lookup of regions by name

Add support for looking up memory regions by name. This looks up the
given name in the newly introduced memory-region-names property and
returns the memory region at the corresponding index in the memory-
region(s) property.

Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Thierry Reding <treding@nvidia.com>


Revision tags: v5.4.30, v5.4.29, v5.6, v5.4.28, v5.4.27, v5.4.26, v5.4.25, v5.4.24, v5.4.23
# 632c9908 24-Feb-2020 Patrick Daly <pdaly@codeaurora.org>

of: of_reserved_mem: Increase limit on number of reserved regions

Certain SoCs need to support a large amount of reserved memory
regions. For example, Qualcomm's SM8150 SoC requires that 20
regions of memory be reserved for a variety of reasons (e.g.
loading a peripheral subsystem's firmware image into a
particular space).

When adding more reserved memory regions to cater to different
use cases, the remaining number of reserved memory regions--12
to be exact--becomes too small. Thus, double the existing
limit of reserved memory regions.

Signed-off-by: Patrick Daly <pdaly@codeaurora.org>
Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>
Signed-off-by: Rob Herring <robh@kernel.org>


Revision tags: v5.4.22, v5.4.21, v5.4.20, v5.4.19, v5.4.18, v5.4.17, v5.4.16, v5.5, v5.4.15, v5.4.14, v5.4.13, v5.4.12, v5.4.11, v5.4.10, v5.4.9, v5.4.8, v5.4.7, v5.4.6, v5.4.5, v5.4.4, v5.4.3, v5.3.15, v5.4.2, v5.4.1, v5.3.14, v5.4, v5.3.13, v5.3.12, v5.3.11, v5.3.10, v5.3.9, v5.3.8
# 5dba5175 19-Oct-2019 Chris Goldsworthy <cgoldswo@codeaurora.org>

of: reserved_mem: add missing of_node_put() for proper ref-counting

Commit d698a388146c ("of: reserved-memory: ignore disabled memory-region
nodes") added an early return in of_reserved_mem_device_init_by_idx(), but
didn't call of_node_put() on a device_node whose ref-count was incremented
in the call to of_parse_phandle() preceding the early exit.

Fixes: d698a388146c ("of: reserved-memory: ignore disabled memory-region nodes")
Signed-off-by: Chris Goldsworthy <cgoldswo@codeaurora.org>
Cc: stable@vger.kernel.org
Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Rob Herring <robh@kernel.org>
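
A trimmed sketch of the pattern being fixed (the function here is illustrative, not the upstream of_reserved_mem_device_init_by_idx() itself): of_parse_phandle() returns the node with its ref-count raised, so the disabled-node early return must drop that reference.

#include <linux/errno.h>
#include <linux/of.h>

static int init_region_by_idx(struct device_node *np, int idx)
{
        struct device_node *target;

        target = of_parse_phandle(np, "memory-region", idx);
        if (!target)
                return -ENODEV;

        if (!of_device_is_available(target)) {
                /* The fix: drop the reference taken by of_parse_phandle()
                 * before taking the early exit for a disabled node. */
                of_node_put(target);
                return 0;
        }

        /* ... normal initialization, which already ends in of_node_put() ... */
        of_node_put(target);
        return 0;
}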
