History log of /openbmc/linux/drivers/firmware/efi/libstub/arm64-stub.c (Results 1 – 25 of 89)
Revision Date Author Comments
Revision tags: v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3, v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7, v6.5.6, v6.5.5, v6.5.4, v6.5.3, v6.5.2, v6.1.51, v6.5.1, v6.1.50, v6.5, v6.1.49, v6.1.48, v6.1.46, v6.1.45, v6.1.44, v6.1.43, v6.1.42, v6.1.41, v6.1.40
# 6b56beb5 22-Jul-2023 Alexandre Ghiti <alexghiti@rivosinc.com>

arm64: libstub: Move KASLR handling functions to kaslr.c

This prepares for riscv to use the same functions to handle the physical
kernel move when KASLR is enabled.

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Conor Dooley <conor.dooley@microchip.com>
Tested-by: Song Shuai <songshuaishuai@tinylab.org>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Tested-by: Sami Tolvanen <samitolvanen@google.com>
Link: https://lore.kernel.org/r/20230722123850.634544-4-alexghiti@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
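
A rough sketch of the interface such a shared kaslr.c could expose; the
prototypes below are reconstructed from the description above rather than
copied from the tree, so treat the names and signatures as illustrative:

  /* Illustrative only; the real declarations live in the libstub headers. */
  #include <linux/efi.h>

  /* Obtain a seed for randomizing the physical placement of the kernel. */
  u32 efi_kaslr_get_phys_seed(efi_handle_t image_handle);

  /* Move the loaded image to a (possibly randomized) physical address. */
  efi_status_t efi_kaslr_relocate_kernel(unsigned long *image_addr,
                                         unsigned long *reserve_addr,
                                         unsigned long *reserve_size,
                                         unsigned long kernel_size,
                                         unsigned long kernel_codesize,
                                         unsigned long kernel_memsize,
                                         u32 phys_seed);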



# bc5ddcef 07-Aug-2023 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: Add limit argument to efi_random_alloc()

x86 will need to limit the kernel memory allocation to the lowest 512
MiB of memory, to match the behavior of the existing bare metal KASLR
physical randomization logic. So in preparation for that, add a limit
parameter to efi_random_alloc() and wire it up.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20230807162720.545787-22-ardb@kernel.org
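
A minimal sketch of the resulting interface and of a caller that caps the
placement at 512 MiB; the parameter names and the wrapper are illustrative,
not verbatim stub code:

  #include <linux/efi.h>
  #include <linux/sizes.h>

  /* Randomized allocator with the new allocation ceiling; callers that do
   * not care pass ULONG_MAX.  Sketched from the description above. */
  efi_status_t efi_random_alloc(unsigned long size, unsigned long align,
                                unsigned long *addr, unsigned long random_seed,
                                int memory_type, unsigned long alloc_limit);

  /* Hypothetical caller matching the x86 use case: stay below 512 MiB. */
  static efi_status_t place_kernel_below_512m(unsigned long size, u32 seed,
                                              unsigned long *addr)
  {
          return efi_random_alloc(size, SZ_2M, addr, seed,
                                  EFI_LOADER_CODE, SZ_512M - 1);
  }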



Revision tags: v6.1.39, v6.1.38, v6.1.37, v6.1.36, v6.4, v6.1.35, v6.1.34, v6.1.33, v6.1.32, v6.1.31, v6.1.30, v6.1.29, v6.1.28, v6.1.27, v6.1.26, v6.3, v6.1.25, v6.1.24, v6.1.23, v6.1.22, v6.1.21
# fc3608aa 21-Mar-2023 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: Use relocated version of kernel's struct screen_info

In some cases, we expose the kernel's struct screen_info to the EFI stub
directly, so it gets populated before even entering the kernel. This
means the early console is available as soon as the early param parsing
happens, which is nice. It also means we need two different ways to pass
this information, as this trick only works if the EFI stub is baked into
the core kernel image, which is not always the case.

Huacai reports that the preparatory refactoring needed to implement this
alternative method for zboot resulted in a non-functional efifb earlycon
in other cases as well, due to the reordering of the kernel image
relocation with the population of the screen_info struct: the latter now
takes place after copying the image to its new location, which means we
copy the old, uninitialized state.

So let's ensure that the same-image version of alloc_screen_info()
produces the correct screen_info pointer, by taking the displacement of
the loaded image into account.

Reported-by: Huacai Chen <chenhuacai@loongson.cn>
Tested-by: Huacai Chen <chenhuacai@loongson.cn>
Link: https://lore.kernel.org/linux-efi/20230310021749.921041-1-chenhuacai@loongson.cn/
Fixes: 42c8ea3dca094ab8 ("efi: libstub: Factor out EFI stub entrypoint into separate file")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>



Revision tags: v6.1.20, v6.1.19, v6.1.18, v6.1.17
# 3c60f67b 10-Mar-2023 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: arm64: Remap relocated image with strict permissions

After relocating the executable image, use the EFI memory attributes
protocol to remap the code and data regions with the appropriate
permissions.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
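
The remapping relies on the EFI memory attribute protocol defined by the
UEFI spec; a hedged sketch of the idea follows, with the helper and
variable names being illustrative rather than the stub's actual code:

  /* Sketch: split the copied image into text and data and ask the firmware
   * to enforce W^X on the 1:1 mapping via EFI_MEMORY_ATTRIBUTE_PROTOCOL. */
  static void remap_relocated_image(efi_memory_attribute_protocol_t *memattr,
                                    unsigned long base,
                                    unsigned long code_size,
                                    unsigned long total_size)
  {
          /* Code: read-only, executable. */
          efi_call_proto(memattr, set_memory_attributes,
                         base, code_size, EFI_MEMORY_RO);
          /* Data/BSS: writable, never executable. */
          efi_call_proto(memattr, set_memory_attributes,
                         base + code_size, total_size - code_size,
                         EFI_MEMORY_XP);
  }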



Revision tags: v6.1.16, v6.1.15, v6.1.14, v6.1.13, v6.2, v6.1.12, v6.1.11, v6.1.10, v6.1.9, v6.1.8, v6.1.7, v6.1.6, v6.1.5, v6.0.19
# 61786170 11-Jan-2023 Ard Biesheuvel <ardb@kernel.org>

efi: arm64: enter with MMU and caches enabled

Instead of cleaning the entire loaded kernel image to the PoC and
disabling the MMU and caches before branching to the kernel's bare metal
entry point, we can leave the MMU and caches enabled, and rely on EFI's
cacheable 1:1 mapping of all of system RAM (which is mandated by the
spec) to populate the initial page tables.

This removes the need for managing coherency in software, which is
tedious and error prone.

Note that we still need to clean the executable region of the image to
the PoU if this is required for I/D coherency, but only if we actually
decided to move the image in memory, as otherwise, this will have been
taken care of by the loader.

This change affects both the builtin EFI stub as well as the zboot
decompressor, which now carries the entire EFI stub along with the
decompression code and the compressed image.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20230111102236.1430401-7-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>



Revision tags: v6.0.18, v6.1.4, v6.1.3, v6.0.17, v6.1.2, v6.0.16, v6.1.1, v6.0.15, v6.0.14, v6.0.13, v6.1, v6.0.12
# a37dac5c 05-Dec-2022 Ard Biesheuvel <ardb@kernel.org>

arm64: efi: Limit allocations to 48-bit addressable physical region

The UEFI spec does not mention or reason about the configured size of
the virtual address space at all, but it does mention that all memory
should be identity mapped using a page size of 4 KiB.

This means that an LPA2-capable system that has any system memory outside
of the 48-bit addressable physical range and follows the spec to the
letter may serve page allocation requests from regions of memory that
the kernel cannot access unless it was built with LPA2 support and
enables it at runtime.

So let's ensure that all page allocations are limited to the 48-bit
range.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
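
Conceptually this adds an arch-defined ceiling that the stub's allocation
helpers clamp against; a minimal sketch, with the macro name reflecting my
reading of the change and therefore to be treated as illustrative:

  /* arm64 side: never allocate above the 48-bit physical range. */
  #define EFI_ALLOC_LIMIT         ((1UL << 48) - 1)

  /* Generic side (sketch): clamp the caller's ceiling before invoking the
   * EFI_ALLOCATE_MAX_ADDRESS boot service. */
  static unsigned long clamp_alloc_ceiling(unsigned long max)
  {
          return max < EFI_ALLOC_LIMIT ? max : EFI_ALLOC_LIMIT;
  }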



Revision tags: v6.0.11, v6.0.10, v5.15.80, v6.0.9, v5.15.79, v6.0.8, v5.15.78, v6.0.7, v5.15.77, v5.15.76, v6.0.6, v6.0.5, v5.15.75, v6.0.4, v6.0.3, v6.0.2, v5.15.74, v5.15.73, v6.0.1, v5.15.72, v6.0, v5.15.71, v5.15.70, v5.15.69, v5.15.68, v5.15.67, v5.15.66, v5.15.65, v5.15.64, v5.15.63, v5.15.62, v5.15.61, v5.15.60, v5.15.59
# 9cf42bca 02-Aug-2022 Ard Biesheuvel <ardb@kernel.org>

efi: libstub: use EFI_LOADER_CODE region when moving the kernel in memory

The EFI spec is not very clear about which permissions are being given
when allocating pages of a certain type. However, it is quite obvious
that EFI_LOADER_CODE is more likely to permit execution than
EFI_LOADER_DATA, which becomes relevant once we permit booting the
kernel proper with the firmware's 1:1 mapping still active.

Ostensibly, recent systems such as the Surface Pro X grant executable
permissions to EFI_LOADER_CODE regions but not EFI_LOADER_DATA regions.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
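
In practice the change boils down to requesting EFI_LOADER_CODE as the
memory type when allocating the relocation target; a hedged sketch, with
the wrapper and exact parameter order being illustrative:

  /* Sketch: the destination of the copied kernel image is allocated as
   * EFI_LOADER_CODE, so firmware that maps loader data non-executable
   * (e.g. the Surface Pro X) still lets the kernel run via the 1:1 map. */
  static efi_status_t alloc_kernel_destination(unsigned long memsize,
                                               unsigned long align,
                                               unsigned long *image_addr)
  {
          return efi_allocate_pages_aligned(memsize, image_addr, ULONG_MAX,
                                            align, EFI_LOADER_CODE);
  }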



# 550b33cf 10-Nov-2022 Ard Biesheuvel <ardb@kernel.org>

arm64: efi: Force the use of SetVirtualAddressMap() on Altra machines

Ampere Altra machines are reported to misbehave when the SetTime() EFI
runtime service is called after ExitBootServices() but before calling
SetVirtualAddressMap(). Given that the latter is horrid, pointless and
explicitly documented as optional by the EFI spec, we no longer invoke
it at boot if the configured size of the VA space guarantees that the
EFI runtime memory regions can remain mapped 1:1 like they are at boot
time.

On Ampere Altra machines, this results in SetTime() calls issued by the
rtc-efi driver triggering synchronous exceptions during boot. We can
now recover from those without bringing down the system entirely, due to
commit 23715a26c8d81291 ("arm64: efi: Recover from synchronous
exceptions occurring in firmware"). However, it would be better to avoid
the issue entirely, given that the firmware appears to remain in a funny
state after this.

So attempt to identify these machines based on the 'family' field in the
type #1 SMBIOS record, and call SetVirtualAddressMap() unconditionally
in that case.

Tested-by: Alexandru Elisei <alexandru.elisei@gmail.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
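
A hedged sketch of the quirk detection; only the SMBIOS type-1 (System
Information) 'family' field comes from the description above, and the
accessor below is a hypothetical stand-in for the stub's SMBIOS plumbing:

  /* Sketch: decide whether to force SetVirtualAddressMap() based on the
   * SMBIOS type-1 'family' string.  smbios_get_family_string() is a
   * hypothetical helper, not the stub's real accessor. */
  static bool quirk_needs_svam(void)
  {
          const char *family = smbios_get_family_string();   /* hypothetical */

          return family && strncmp(family, "Altra", 5) == 0;
  }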



# d9ffe524 13-Oct-2022 Ard Biesheuvel <ardb@kernel.org>

efi/arm64: libstub: Split off kernel image relocation for builtin stub

The arm64 build of the EFI stub is part of the core kernel image, and
therefore accesses section markers directly when it needs to figure out
the sizes of the various sections.

The zboot decompressor does not have access to those symbols, but
doesn't really need that either. So let's move handle_kernel_image()
into a separate file (or rather, move everything else into a separate
file) so that the zboot build does not pull in unused code that links to
symbols that it does not define.

While at it, introduce a helper routine that the generic zboot loader
will need to invoke after decompressing the image but before invoking
it, to ensure that the I-side view of memory is consistent.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>



# 895bc3a1 12-Oct-2022 Ard Biesheuvel <ardb@kernel.org>

efi: libstub: Factor out min alignment and preferred kernel load address

Factor out the expressions that describe the preferred placement of the
loaded image as well as the minimum alignment so we can reuse them in
the decompressor.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
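
The factored-out pieces amount to a small arch hook; a sketch of the arm64
flavour, assuming the helper name used in the series (illustrative):

  /* Sketch (arm64): minimum alignment the stub must provide for the image.
   * A companion macro for the preferred load address is factored out the
   * same way so the zboot decompressor can reuse it. */
  static inline unsigned long efi_get_kimg_min_align(void)
  {
          /* A relocatable kernel only needs 64 KiB alignment; retain 2 MiB
           * when KASLR was explicitly disabled on the command line. */
          return efi_nokaslr ? MIN_KIMG_ALIGN : EFI_KIMG_ALIGN;
  }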



# aaeb3fc6 17-Oct-2022 Ard Biesheuvel <ardb@kernel.org>

arm64: efi: Move dcache cleaning of loaded image out of efi_enter_kernel()

The efi_enter_kernel() routine will be shared between the existing EFI
stub and the zboot decompressor, and the version of
dcache_clean_to_poc() that the core kernel exports to the stub will not
be available in the latter case.

So move the handling into the .c file which will remain part of the stub
build that integrates directly with the kernel proper.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>



# d3549a93 16-Sep-2022 Ard Biesheuvel <ardb@kernel.org>

efi/arm64: libstub: avoid SetVirtualAddressMap() when possible

EFI's SetVirtualAddressMap() runtime service is a horrid hack that we'd
like to avoid using, if possible. For 64-bit architectures such as
arm64, the user and kernel mappings are entirely disjoint, and given
that we use the user region for mapping the UEFI runtime regions when
running under the OS, we don't rely on SetVirtualAddressMap() in the
conventional way, i.e., to permit kernel mappings of the OS to coexist
with kernel region mappings of the firmware regions. This means that, in
principle, we should be able to avoid SetVirtualAddressMap() altogether,
and simply use the 1:1 mapping that UEFI uses at boot time. (Note that
omitting SetVirtualAddressMap() is explicitly permitted by the UEFI
spec).

However, there is a corner case on arm64: a kernel configured for
3-level paging (or 2-level paging when using 64k pages) may not be able
to cover the entire range of firmware mappings (which might contain both
memory and MMIO peripheral mappings).

So let's avoid SetVirtualAddressMap() on arm64, but only if the VA space
is guaranteed to be of sufficient size.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>



# 171539f5 15-Sep-2022 Ard Biesheuvel <ardb@kernel.org>

efi: libstub: install boot-time memory map as config table

Expose the EFI boot time memory map to the kernel via a configuration
table. This is arch agnostic and enables future changes that remove the
dependency on DT on architectures that don't otherwise rely on it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>



Revision tags: v5.19, v5.15.58, v5.15.57, v5.15.56, v5.15.55, v5.15.54, v5.15.53, v5.15.52, v5.15.51, v5.15.50, v5.15.49, v5.15.48, v5.15.47, v5.15.46, v5.15.45
# eab31265 03-Jun-2022 Ard Biesheuvel <ardb@kernel.org>

efi: libstub: simplify efi_get_memory_map() and struct efi_boot_memmap

Currently, struct efi_boot_memmap is a struct that is passed around
between callers of efi_get_memory_map() and the users of the resulting
data, and which carries pointers to various variables whose values are
provided by the EFI GetMemoryMap() boot service.

This is overly complex, and it is much easier to carry these values in
the struct itself. So turn the struct into one that carries these data
items directly, including a flex array for the variable number of EFI
memory descriptors that the boot service may return.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
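
A sketch of the simplified structure, carrying the GetMemoryMap() outputs
by value and the descriptors in a flexible array member; the field names
approximate the definition in efistub.h rather than quoting it:

  struct efi_boot_memmap {
          unsigned long           map_size;    /* bytes used in map[]        */
          unsigned long           desc_size;   /* size of one descriptor     */
          u32                     desc_ver;    /* descriptor format version  */
          unsigned long           map_key;     /* needed by ExitBootServices */
          unsigned long           buff_size;   /* bytes allocated for map[]  */
          efi_memory_desc_t       map[];       /* the descriptors themselves */
  };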



# 2d987e64 05-Sep-2022 Mark Brown <broonie@kernel.org>

arm64/sysreg: Add _EL1 into ID_AA64MMFR0_EL1 definition names

Normally we include the full register name in the defines for fields within
registers but this has not been followed for ID registers. In preparation
for automatic generation of defines add the _EL1s into the defines for
ID_AA64MMFR0_EL1 to follow the convention. No functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-5-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
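
For instance, illustrating only the naming convention (the field layout is
unchanged):

  /* Before: the register's _EL1 suffix was dropped from field defines. */
  #define ID_AA64MMFR0_TGRAN4_SHIFT       28

  /* After: the full register name is spelled out. */
  #define ID_AA64MMFR0_EL1_TGRAN4_SHIFT   28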



Revision tags: v5.15.44, v5.15.43, v5.15.42, v5.18, v5.15.41, v5.15.40, v5.15.39, v5.15.38, v5.15.37, v5.15.36, v5.15.35, v5.15.34, v5.15.33, v5.15.32, v5.15.31, v5.17
# 07768c55 19-Mar-2022 Ard Biesheuvel <ardb@kernel.org>

efi/arm64: libstub: run image in place if randomized by the loader

If the loader has already placed the EFI kernel image randomly in
physical memory, and indicates having done so by installing the 'fixed
placement' protocol onto the image handle, don't bother randomizing the
placement again in the EFI stub.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>



# 416a9f84 19-Mar-2022 Ard Biesheuvel <ardb@kernel.org>

efi: libstub: pass image handle to handle_kernel_image()

In a future patch, arm64's implementation of handle_kernel_image() will
omit randomizing the placement of the kernel if the load address was
chosen randomly by the loader. In order to do this, it needs to locate a
protocol on the image handle, so pass it to handle_kernel_image().

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
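
The resulting prototype, sketched with illustrative parameter names:

  /* handle_kernel_image() gains the EFI image handle so the arm64 version
   * can probe it for a 'fixed placement' protocol before randomizing. */
  efi_status_t handle_kernel_image(unsigned long *image_addr,
                                   unsigned long *image_size,
                                   unsigned long *reserve_addr,
                                   unsigned long *reserve_size,
                                   efi_loaded_image_t *image,
                                   efi_handle_t image_handle);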



Revision tags: v5.15.30, v5.15.29, v5.15.28, v5.15.27, v5.15.26, v5.15.25, v5.15.24, v5.15.23, v5.15.22, v5.15.21, v5.15.20, v5.15.19, v5.15.18, v5.15.17, v5.4.173, v5.15.16
# e9b7c3a4 19-Jan-2022 Mihai Carabas <mihai.carabas@oracle.com>

efi/libstub: arm64: Fix image check alignment at entry

The kernel is aligned at SEGMENT_ALIGN and this is the alignment populated in the PE
headers:

arch/arm64/kernel/efi-header.S: .long SEGMENT_ALIGN // SectionAlignment

EFI_KIMG_ALIGN is defined as: (SEGMENT_ALIGN > THREAD_ALIGN ? SEGMENT_ALIGN :
THREAD_ALIGN)

So it depends on THREAD_ALIGN. On newer builds this warning started to
appear even though the loader does take the PE header (which states
SEGMENT_ALIGN) into account.

Fixes: c32ac11da3f8 ("efi/libstub: arm64: Double check image alignment at entry")
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
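
In effect the fix validates the load address against the alignment the PE
header actually advertises; a hedged sketch of the corrected check, not a
verbatim copy of the patch:

  /* Sketch: warn based on SEGMENT_ALIGN (the PE SectionAlignment that the
   * loader honours), not EFI_KIMG_ALIGN, which THREAD_ALIGN can inflate. */
  if (!IS_ALIGNED((u64)_text, SEGMENT_ALIGN))
          efi_err("kernel image not aligned on %ldk boundary\n",
                  (long)SEGMENT_ALIGN >> 10);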



# fa005a5c 19-Jan-2022 Mihai Carabas <mihai.carabas@oracle.com>

efi/libstub: arm64: Fix image check alignment at entry

[ Upstream commit e9b7c3a4263bdcfd31bc3d03d48ce0ded7a94635 ]

The kernel is aligned at SEGMENT_ALIGN and this is the alignment populated in the PE
headers:

arch/arm64/kernel/efi-header.S: .long SEGMENT_ALIGN // SectionAlignment

EFI_KIMG_ALIGN is defined as: (SEGMENT_ALIGN > THREAD_ALIGN ? SEGMENT_ALIGN :
THREAD_ALIGN)

So it depends on THREAD_ALIGN. On newer builds this warning started to
appear even though the loader does take the PE header (which states
SEGMENT_ALIGN) into account.

Fixes: c32ac11da3f8 ("efi/libstub: arm64: Double check image alignment at entry")
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>



Revision tags: v5.15.15, v5.16, v5.15.10, v5.15.9, v5.15.8, v5.15.7, v5.15.6, v5.15.5, v5.15.4, v5.15.3, v5.15.2, v5.15.1, v5.15, v5.14.14, v5.14.13, v5.14.12, v5.14.11, v5.14.10, v5.14.9, v5.14.8, v5.14.7, v5.14.6, v5.10.67, v5.10.66, v5.14.5, v5.14.4, v5.10.65, v5.14.3, v5.10.64, v5.14.2, v5.10.63, v5.14.1, v5.10.62, v5.14, v5.10.61, v5.10.60
# c32ac11d 26-Jul-2021 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: arm64: Double check image alignment at entry

On arm64, the stub only moves the kernel image around in memory if
needed, which is typically only for KASLR, given that relocatable
kernels (which is the default) can run from any 64k aligned address,
which is also the minimum alignment communicated to EFI via the PE/COFF
header.

Unfortunately, some loaders appear to ignore this header, and load the
kernel at some arbitrary offset in memory. We can deal with this, but
let's check for this condition anyway, so non-compliant code can be
spotted and fixed.

Cc: <stable@vger.kernel.org> # v5.10+
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>



# ff80ef5b 26-Jul-2021 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: arm64: Warn when efi_random_alloc() fails

Randomization of the physical load address of the kernel image relies on
efi_random_alloc() returning successfully, and currently, we ignore any
failures and just carry on, using the ordinary, non-randomized page
allocator routine. This means we never find out if a failure occurs,
which could harm security, so let's at least warn about this condition.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
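
The flow after the change, sketched with the argument list of that era and
illustrative variable names:

  /* Sketch: attempt the randomized placement, complain audibly on failure,
   * then fall back to the ordinary page allocator as before. */
  status = efi_random_alloc(*reserve_size, min_kimg_align, reserve_addr,
                            phys_seed);
  if (status != EFI_SUCCESS)
          efi_warn("efi_random_alloc() failed\n");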



Revision tags: v5.10.53
# 3a262423 22-Jul-2021 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: arm64: Relax 2M alignment again for relocatable kernels

Commit 82046702e288 ("efi/libstub/arm64: Replace 'preferred' offset with
alignment check") simplified the way the stub moves the kernel image
around in memory before booting it, given that a relocatable image does
not need to be copied to a 2M aligned offset if it was loaded on a 64k
boundary by EFI.

Commit d32de9130f6c ("efi/arm64: libstub: Deal gracefully with
EFI_RNG_PROTOCOL failure") inadvertently defeated this logic by
overriding the value of efi_nokaslr if EFI_RNG_PROTOCOL is not
available, which was mistaken by the loader logic as an explicit request
on the part of the user to disable KASLR and any associated relocation
of an Image not loaded on a 2M boundary.

So let's reinstate this functionality, by capturing the value of
efi_nokaslr at function entry to choose the minimum alignment.

Fixes: d32de9130f6c ("efi/arm64: libstub: Deal gracefully with EFI_RNG_PROTOCOL failure")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
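
The reinstated logic amounts to latching the user's choice once, at
function entry; a sketch:

  /* Sketch: pick the minimum alignment from the explicit nokaslr request
   * only, before any EFI_RNG_PROTOCOL probing can flip efi_nokaslr. */
  u64 min_kimg_align = efi_nokaslr ? MIN_KIMG_ALIGN : EFI_KIMG_ALIGN;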



# 5b94046e 26-Jul-2021 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: arm64: Force Image reallocation if BSS was not reserved

Distro versions of GRUB replace the usual LoadImage/StartImage calls
used to load the kernel image with some local code that fails to honor
the allocation requirements described in the PE/COFF header, as it
does not account for the image's BSS section at all: it fails to
allocate space for it, and fails to zero initialize it.

Since the EFI stub itself is allocated in the .init segment, which is
in the middle of the image, its BSS section is not impacted by this,
and the main consequence of this omission is that the BSS section may
overlap with memory regions that are already used by the firmware.

So let's warn about this condition, and force image reallocation to
occur in this case, which works around the problem.

Fixes: 82046702e288 ("efi/libstub/arm64: Replace 'preferred' offset with alignment check")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>



Revision tags: v5.10.52, v5.10.51, v5.10.50, v5.10.49, v5.13, v5.10.46, v5.10.43, v5.10.42, v5.10.41, v5.10.40, v5.10.39, v5.4.119, v5.10.36, v5.10.35, v5.10.34, v5.4.116, v5.10.33, v5.12, v5.10.32, v5.10.31, v5.10.30, v5.10.27, v5.10.26, v5.10.25, v5.10.24, v5.10.23
# 26f55386 09-Mar-2021 James Morse <james.morse@arm.com>

arm64/mm: Fix __enable_mmu() for new TGRAN range values

As per ARM ARM DDI 0487G.a, when FEAT_LPA2 is implemented, ID_AA64MMFR0_EL1
might contain a range of values to describe supported translation granules
(4K and 16K page sizes in particular) instead of just enabled or disabled
values. This changes the __enable_mmu() function to handle the complete
acceptable range of values (depending on whether the field is signed or
unsigned), now represented with the ID_AA64MMFR0_TGRAN_SUPPORTED_[MIN..MAX]
pair. While here, also fix similar situations in the EFI stub and KVM.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-efi@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/1615355590-21102-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
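
For the EFI stub this means the granule check treats the TGRAN field as a
range; a hedged sketch of the shape of the check, not the patch verbatim:

  /* Sketch: accept any value inside the architected supported range
   * rather than comparing against a single 'enabled' encoding. */
  u64 tg = (read_cpuid(ID_AA64MMFR0_EL1) >> ID_AA64MMFR0_TGRAN_SHIFT) & 0xf;

  if (tg < ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ||
      tg > ID_AA64MMFR0_TGRAN_SUPPORTED_MAX)
          return EFI_UNSUPPORTED;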



Revision tags: v5.10.22, v5.10.21, v5.10.20, v5.10.19, v5.4.101, v5.10.18, v5.10.17, v5.11, v5.10.16, v5.10.15, v5.10.14
# 1c761ee9 20-Jan-2021 Mark Brown <broonie@kernel.org>

efi/arm64: Update debug prints to reflect other entropy sources

Currently the EFI stub prints a diagnostic on boot saying that KASLR will
be disabled if it is unable to use the EFI RNG protocol to obtain a seed
for KASLR. With the addition of support for v8.5-RNG and the SMCCC RNG
protocol it is now possible for KASLR to obtain entropy even if the EFI
RNG protocol is unsupported in the system, and the main kernel now
explicitly says whether KASLR is active. This can result in a boot log
where the stub says KASLR has been disabled while the main kernel says it
is enabled, which is confusing for users.

Remove the explicit reference to KASLR from the diagnostics; the warnings
are still useful, as EFI is the only source of entropy the stub uses when
randomizing the physical address of the kernel and the other sources may
not be available.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210120163810.14973-1-broonie@kernel.org
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


