Revision tags: openbmc-20151217-1, openbmc-20151210-1, openbmc-20151202-1, openbmc-20151123-1, openbmc-20151118-1, openbmc-20151104-1, v4.3, openbmc-20151102-1, openbmc-20151028-1

# 09414d00 | 01-Oct-2015 | Ard Biesheuvel <ard.biesheuvel@linaro.org>

ARM: only consider memblocks with NOMAP cleared for linear mapping
Take the new memblock attribute MEMBLOCK_NOMAP into account when deciding whether a certain region is or should be covered by the kernel direct mapping.
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
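
As a rough illustration of the attribute in use (a minimal sketch; the helper names are hypothetical, while memblock_mark_nomap() and memblock_is_nomap() are the memblock helpers this series relies on):

```c
#include <linux/memblock.h>

/* Reserve a firmware-owned region and keep it out of the linear map. */
static void __init reserve_firmware_region(phys_addr_t base, phys_addr_t size)
{
	memblock_reserve(base, size);
	memblock_mark_nomap(base, size);	/* sets MEMBLOCK_NOMAP */
}

/* The lowmem mapping loop can then simply skip such regions. */
static bool __init covered_by_linear_map(struct memblock_region *reg)
{
	return !memblock_is_nomap(reg);
}
```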

Revision tags: v4.3-rc1, v4.2, v4.2-rc8, v4.2-rc7, v4.2-rc6, v4.2-rc5, v4.2-rc4, v4.2-rc3, v4.2-rc2, v4.2-rc1, v4.1, v4.1-rc8, v4.1-rc7, v4.1-rc6, v4.1-rc5, v4.1-rc4, v4.1-rc3, v4.1-rc2

# c7936206 | 29-Apr-2015 | Ard Biesheuvel <ard.biesheuvel@linaro.org>

ARM: implement create_mapping_late() for EFI use
This implements create_mapping_late(), which we will use to populate the UEFI Runtime Services page tables.
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
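
A sketch of how this entry point might be used by the EFI code (hypothetical caller; efi_map_region, the use of MT_DEVICE, and the three-argument signature with an ng flag are assumptions drawn from this and the surrounding commits):

```c
/* Map one UEFI runtime region into a private mm's page tables. */
static void __init efi_map_region(struct mm_struct *mm, phys_addr_t pa,
				  unsigned long va, unsigned long size)
{
	struct map_desc md = {
		.virtual = va,
		.pfn     = __phys_to_pfn(pa),
		.length  = size,
		.type    = MT_DEVICE,
	};

	create_mapping_late(mm, &md, true);	/* true: non-global mapping */
}
```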

# b430e55b | 17-Nov-2015 | Ard Biesheuvel <ard.biesheuvel@linaro.org>

ARM: add support for non-global kernel mappings
Add support to the kernel translation table population routines for creating non-global mappings. This will be used by the UEFI runtime services, which will use temporary mappings in the userland range.
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
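
In essence, a non-global mapping only requires ORing the nG bit into the descriptors; a simplified section-level sketch (function name hypothetical; the real routines also handle page-level mappings, LPAE, and TLB maintenance):

```c
/* When 'ng' is set, tag each section descriptor with PMD_SECT_nG so
 * the resulting TLB entries are ASID-specific rather than global. */
static void map_sections(pmd_t *pmdp, unsigned long addr, unsigned long end,
			 phys_addr_t phys, pmdval_t prot_sect, bool ng)
{
	pmdval_t prot = prot_sect | (ng ? PMD_SECT_nG : 0);

	do {
		*pmdp++ = __pmd(phys | prot);
		phys += SECTION_SIZE;
		addr += SECTION_SIZE;
	} while (addr != end);
}
```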

# f579b2b1 | 15-Sep-2015 | Ard Biesheuvel <ard.biesheuvel@linaro.org>

ARM: factor out allocation routine from __create_mapping()
To allow __create_mapping() to be used for populating UEFI Runtime Services page tables, factor out the allocation routine 'early_alloc' and pass it down as a function pointer into alloc_init_[pud|pmd|pte]. This way, new users of __create_mapping() can supply another allocation function.
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
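
The shape of the plumbing described above, compressed into a sketch (the early_alloc body matches the boot-time allocator of that era; arm_pte_alloc is an assumed name for the pte-level helper):

```c
/* Default boot-time allocator. */
static void *__init early_alloc(unsigned long sz)
{
	void *ptr = __va(memblock_alloc(sz, sz));

	memset(ptr, 0, sz);
	return ptr;
}

/* The populate routine now receives the allocator as an argument, so a
 * later caller can substitute one backed by the page allocator. */
static pte_t *__init arm_pte_alloc(pmd_t *pmd, unsigned long addr,
				   unsigned long prot,
				   void *(*alloc)(unsigned long sz))
{
	if (pmd_none(*pmd)) {
		pte_t *pte = alloc(PTE_HWTABLE_OFF + PTE_HWTABLE_SIZE);
		__pmd_populate(pmd, __pa(pte), prot);
	}
	return pte_offset_kernel(pmd, addr);
}
```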

# 1bdb2d4e | 15-Sep-2015 | Ard Biesheuvel <ard.biesheuvel@linaro.org>

ARM: split off core mapping logic from create_mapping
In order to be able to reuse the core mapping logic of create_mapping for mapping the UEFI Runtime Services into a private set of page tables, split it off from create_mapping() into a separate function __create_mapping which we will wire up in a subsequent patch.
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

# 2937367b | 01-Sep-2015 | Ard Biesheuvel <ard.biesheuvel@linaro.org>

ARM: add support for generic early_ioremap/early_memremap
This enables the generic early_ioremap implementation for ARM.
It uses the fixmap region reserved for kmap. Since early_ioremap is only supported before paging_init(), and kmap is only supported afterwards, this is guaranteed not to cause any clashes.
Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
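
A typical use, as a sketch (the probe function and register layout are hypothetical; early_ioremap()/early_iounmap() are the generic helpers this commit enables):

```c
#include <linux/io.h>
#include <linux/sizes.h>

/* Peek at a device register before paging_init(); on ARM this is the
 * only window in which early_ioremap() is valid, per the commit above. */
static void __init peek_chip_id(phys_addr_t regs)
{
	void __iomem *base = early_ioremap(regs, SZ_4K);

	if (base) {
		pr_info("chip id: 0x%08x\n", readl(base));
		early_iounmap(base, SZ_4K);
	}
}
```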

Revision tags: v4.1-rc1, v4.0, v4.0-rc7, v4.0-rc6, v4.0-rc5, v4.0-rc4, v4.0-rc3, v4.0-rc2, v4.0-rc1, v3.19, v3.19-rc7, v3.19-rc6, v3.19-rc5, v3.19-rc4, v3.19-rc3, v3.19-rc2, v3.19-rc1, v3.18, v3.18-rc7, v3.18-rc6, v3.18-rc5, v3.18-rc4, v3.18-rc3, v3.18-rc2, v3.18-rc1, v3.17, v3.17-rc7, v3.17-rc6, v3.17-rc5, v3.17-rc4, v3.17-rc3, v3.17-rc2, v3.17-rc1, v3.16, v3.16-rc7, v3.16-rc6, v3.16-rc5, v3.16-rc4, v3.16-rc3, v3.16-rc2, v3.16-rc1, v3.15, v3.15-rc8, v3.15-rc7, v3.15-rc6, v3.15-rc5, v3.15-rc4, v3.15-rc3, v3.15-rc2

# d33c43ac | 15-Apr-2014 | Arnd Bergmann <arnd@arndb.de>

ARM: make xscale iwmmxt code multiplatform aware
In a multiplatform configuration, we may end up building a kernel for both Marvell PJ1 and an ARMv4 CPU implementation. In that case, the xscale-cp0 code is built with gcc -march=armv4{,t}, which results in a build error from the coprocessor instructions.
Since we know this code will only have to run on an actual xscale processor, we can simply build the entire file for ARMv5TE.
Related to this, we need to handle the iWMMXT initialization sequence differently during boot, to ensure we don't try to touch xscale specific registers on other CPUs from the xscale_cp0_init initcall. cpu_is_xscale() used to be hardcoded to '1' in any configuration that enables any XScale-compatible core, but this breaks once we can have a combined kernel with MMP1 and something else.
In this patch, I replace the existing cpu_is_xscale() macro with a new cpu_is_xscale_family() macro that evaluates true for xscale, xsc3 and mohawk, which makes the behavior more deterministic.
The two existing users of cpu_is_xscale() are modified accordingly, but slightly change behavior for kernels that enable CPU_MOHAWK without also enabling CPU_XSCALE or CPU_XSC3. Previously, these would leave PMD_BIT4 in the page tables untouched; now they clear it, as we've always done for kernels that enable both MOHAWK and support for the older CPU types.
Since the previous behavior was inconsistent, I assume it was unintentional.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>

# 9254970c | 19-Oct-2015 | Lucas Stach <l.stach@pengutronix.de>

ARM: 8447/1: catch pending imprecise abort on unmask
Install a non-faulting handler just before unmasking imprecise aborts and switch back to the regular one after unmasking is done.
This catches any pending imprecise abort that the firmware/bootloader may have left behind and that would otherwise crash the kernel at this point. As there are apparently a lot of bootloaders out there that do such a thing, it makes sense to handle it in the common startup code.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Tested-by: Tyler Baker <tyler.baker@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

# bbeb9209 | 25-Aug-2015 | Lucas Stach <l.stach@pengutronix.de>

ARM: 8422/1: enable imprecise aborts during early kernel startup
This patch adds imprecise abort enable/disable macros and uses them to enable imprecise aborts early when starting the kernel.
This helps in tracking down the real cause of such imprecise aborts, as they are handled as soon as they occur. Until now these aborts would only be enabled when entering userspace, and as a consequence they would crash the first userspace process if any abort had been raised during kernel startup.
Signed-off-by: Fabrice Gasnier <fabrice.gasnier@st.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
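
A plausible shape for the macro pair (names and exact constraints assumed): CPSR.A masks asynchronous (imprecise) aborts, and the ARMv6+ CPS instruction can clear or set that bit directly.

```c
/* Unmask / mask asynchronous (imprecise) aborts via CPSR.A. */
#define local_abt_enable()	__asm__("cpsie a" : : : "memory", "cc")
#define local_abt_disable()	__asm__("cpsid a" : : : "memory", "cc")
```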

# a02d8dfd | 21-Aug-2015 | Russell King <rmk+kernel@arm.linux.org.uk>

ARM: domains: keep vectors in separate domain
Keep the machine vectors in their own domain to prevent software-based user access control from making the vector code inaccessible, and thereby deadlocking the machine.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

# a5f4c561 | 12-Aug-2015 | Stefan Agner <stefan@agner.ch>

ARM: 8415/1: early fixmap support for earlycon
Add early fixmap support, initially to provide a permanent, fixed mapping for the early console. A temporary early pte is created and then migrated to a permanent mapping in paging_init. This is also needed because the attributes may change as the memory types are initialized. The 3MiB fixmap range spans two pte tables, but currently only one pte table is created for early fixmap support.
Re-add FIX_KMAP_BEGIN to the index calculation in highmem.c since the index for kmap does not start at zero anymore. This reverts 4221e2e6b316 ("ARM: 8031/1: fixmap: remove FIX_KMAP_BEGIN and FIX_KMAP_END") to some extent.
Cc: Mark Salter <msalter@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

# eeb3fee8 | 25-Jun-2015 | Russell King <rmk+kernel@arm.linux.org.uk>

ARM: add helpful message when truncating physical memory
Add a message suggesting that HIGHMEM be enabled when physical memory is truncated due to a lack of virtual address space to map it in the low memory mapping.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

# 3de1f52a | 24-Jun-2015 | Laura Abbott <labbott@redhat.com>

ARM: 8394/1: update memblock limit after mapping lowmem
The memblock limit is currently used in find_limits to find the bounds for ZONE_NORMAL. The memblock limit may need to be rounded down by a PMD size, though, to ensure allocations are fully mapped. This has the side effect of reducing the amount of memory in ZONE_NORMAL. Once all lowmem is mapped, it is safe to move the memblock limit back up to include the unaligned section, so adjust the memblock limit after lowmem mapping is complete.
Before:

    # cat /proc/zoneinfo | grep managed
            managed  62907
            managed  424

After:

    # cat /proc/zoneinfo | grep managed
            managed  63331
Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
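
The sequencing, compressed into a sketch (the function name is illustrative; the real paging_init() does considerably more):

```c
void __init paging_init_excerpt(void)
{
	/* conservative limit so boot-time allocations stay mapped */
	memblock_set_current_limit(round_down(arm_lowmem_limit, PMD_SIZE));

	map_lowmem();	/* section-map all of lowmem */

	/* all of lowmem is mapped: hand back the unaligned tail */
	memblock_set_current_limit(arm_lowmem_limit);
}
```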

# 1221ed10 | 04-Apr-2015 | Russell King <rmk+kernel@arm.linux.org.uk>

ARM: cleanup early_paging_init() calling
Eliminate the needless nommu version of this function, and get rid of the proc_info_list structure argument - we no longer need this in order to fix up the page table entries.
Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

# d8dc7fbd | 04-Apr-2015 | Russell King <rmk+kernel@arm.linux.org.uk>

ARM: re-implement physical address space switching
Re-implement the physical address space switching to be architecturally compliant. This involves flushing the caches, disabling the MMU, and only then updating the page tables. Once that is complete, the system can be brought back up again.
Since we disable the MMU, we need to do the update in assembly code. Luckily, the entries which need updating are fairly trivial, and are all setup by the early assembly code. We can merely adjust each entry by the delta required.
Not only does this fix the code to be architecturally compliant, but it fixes a couple of bugs too:
1. The original code would only ever update the first L2 entry covering a fraction of the kernel; the remainder were left untouched.
2. The L2 entries covering the DTB blob were likewise untouched.
This solution fixes up all entries.
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

# c0b759d8 | 04-Apr-2015 | Russell King <rmk+kernel@arm.linux.org.uk>

ARM: keystone2: rename init_meminfo to pv_fixup
The init_meminfo() method is not about initialising meminfo - it's about fixing up the physical to virtual translation so that we use a different physical address space, possibly above the 4GB physical address space. Therefore, the name "init_meminfo()" is confusing.
Rename it to pv_fixup() instead.
Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

# 39b74fe8 | 04-Apr-2015 | Russell King <rmk+kernel@arm.linux.org.uk>

ARM: keystone2: move address space switch printk into generic code
There is no point in platform code doing this; let's move it into the generic code so it doesn't get duplicated.
Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

# c8ca2b4b | 04-Apr-2015 | Russell King <rmk+kernel@arm.linux.org.uk>

ARM: keystone2: move update of the phys-to-virt constants into generic code
Make the init_meminfo function return the offset to be applied to the phys-to-virt translation constants. This allows us to move the update into generic code, along with the requirements for this update.
This avoids platforms having to know the details of the phys-to-virt translation support.
Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

# 965278dc | 13-May-2015 | Mark Rutland <mark.rutland@arm.com>

ARM: 8356/1: mm: handle non-pmd-aligned end of RAM
At boot time we round the memblock limit down to section size in an attempt to ensure that we will have mapped this RAM with section mappings prior to allocating from it. When mapping RAM we iterate over PMD-sized chunks, creating these section mappings.
Section mappings are only created when the end of a chunk is aligned to section size. Unfortunately, with classic page tables (where PMD_SIZE is 2 * SECTION_SIZE) this means that if a chunk is between 1M and 2M in size the first 1M will not be mapped despite having been accounted for in the memblock limit. This has been observed to result in page tables being allocated from unmapped memory, causing boot-time hangs.
This patch modifies the memblock limit rounding to always round down to PMD_SIZE instead of SECTION_SIZE. For classic MMU this means that we will round the memblock limit down to a 2M boundary, matching the limits on section mappings, and preventing allocations from unmapped memory. For LPAE there should be no change as PMD_SIZE == SECTION_SIZE.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Stefan Agner <stefan@agner.ch>
Tested-by: Stefan Agner <stefan@agner.ch>
Acked-by: Laura Abbott <labbott@redhat.com>
Tested-by: Hans de Goede <hdegoede@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
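
The essence of the fix, as a sketch (helper name hypothetical):

```c
static void __init clamp_memblock_limit(phys_addr_t memblock_limit)
{
	/* With classic 2-level tables PMD_SIZE == 2 * SECTION_SIZE, so
	 * rounding to SECTION_SIZE could leave the first 1M of the last
	 * chunk unmapped while still offering it to the allocator.
	 * Round down to PMD_SIZE instead. */
	memblock_set_current_limit(round_down(memblock_limit, PMD_SIZE));
}
```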

# ac084688 | 23-Dec-2014 | Grygorii Strashko <grygorii.strashko@linaro.org>

ARM: 8253/1: mm: use phys_addr_t type in map_lowmem() for kernel mem region
Currently the local variables kernel_x_start and kernel_x_end are defined using the 'unsigned long' type, which is wrong because they represent a physical memory range and will be calculated incorrectly if LPAE is enabled. As a result, all following code in map_lowmem() will not work correctly.
For example, Keystone 2 boot is broken because

    kernel_x_start == 0x0000 0000
    kernel_x_end   == 0x0080 0000

instead of

    kernel_x_start == 0x0000 0008 0000 0000
    kernel_x_end   == 0x0000 0008 0080 0000

and as a result the whole of low memory will be mapped with MT_MEMORY_RW permissions by this code (start > kernel_x_end):

    } else if (start >= kernel_x_end) {
            map.pfn = __phys_to_pfn(start);
            map.virtual = __phys_to_virt(start);
            map.length = end - start;
            map.type = MT_MEMORY_RW;

            create_mapping(&map);
    }
Hence, fix it by using phys_addr_t type for variables kernel_x_start and kernel_x_end.
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Grygorii Strashko <grygorii.strashko@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
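
The fix in a nutshell (expressions assumed to match map_lowmem()'s shape): under LPAE, physical addresses are 64-bit, so an 'unsigned long' silently truncates 0x8_0000_0000 to 0.

```c
/* phys_addr_t is 64-bit under LPAE; unsigned long is always 32-bit */
phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
phys_addr_t kernel_x_end   = round_up(__pa(__init_end), SECTION_SIZE);
```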

# 4e802cfd | 28-Nov-2014 | Jungseung Lee <js07.lee@gmail.com>

ARM: 8238/1: mm: Refine set_memory_* functions
The set_memory_* functions have the same implementation except for the memory attribute.
This patch makes them use a common function and pulls the functions out into arch/arm/mm/pageattr.c, as arm64 did. This reduces code size and improves readability.
Signed-off-by: Jungseung Lee <js07.lee@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
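
A sketch of the refactor's shape (assumed, mirroring arm64's pageattr.c; change_memory_common would apply the set/clear masks across the range and flush the TLB):

```c
static int change_memory_common(unsigned long addr, int numpages,
				pgprot_t set_mask, pgprot_t clear_mask);

int set_memory_ro(unsigned long addr, int numpages)
{
	return change_memory_common(addr, numpages,
				    __pgprot(L_PTE_RDONLY), __pgprot(0));
}

int set_memory_rw(unsigned long addr, int numpages)
{
	return change_memory_common(addr, numpages,
				    __pgprot(0), __pgprot(L_PTE_RDONLY));
}
```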

# 1d4d3715 | 28-Nov-2014 | Jungseung Lee <js07.lee@gmail.com>

ARM: 8235/1: Support for the PXN CPU feature on ARMv7
Modern ARMv7-A/R cores optionally implement the following new hardware feature:

- PXN: Privileged execute-never (PXN) is a security feature. The PXN bit determines whether the processor can execute software from the region, making it an effective defence against ret2usr attacks. On implementations that do not include LPAE, PXN support is optional.

This patch sets the PXN bit in user page tables to prevent user code from being executed in privileged mode.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jungseung Lee <js07.lee@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
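
A hypothetical sketch of the detection logic (names assumed; architecturally, ID_MMFR0.VMSA >= 4 advertises PXN support on a non-LPAE ARMv7 implementation):

```c
static pmdval_t __init user_pgtable_flags(void)
{
	pmdval_t flags = _PAGE_USER_TABLE;

	/* ID_MMFR0.VMSA >= 4: VMSAv7 with the PXN extension */
	if (cpu_architecture() >= CPU_ARCH_ARMv7 &&
	    (read_cpuid_ext(CPUID_EXT_MMFR0) & 0xf) >= 4)
		flags |= PMD_PXNTABLE;	/* privileged execute-never */

	return flags;
}
```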

# 4ed89f22 | 28-Oct-2014 | Russell King <rmk+kernel@arm.linux.org.uk>

ARM: convert printk(KERN_* to pr_*
Convert many (but not all) printk(KERN_* to pr_* to simplify the code. We take the opportunity to join some printk lines together so we don't split the message across several lines, and we also add a few levels to some messages which were previously missing them.
Tested-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
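
A representative conversion (illustrative message text):

```c
/* before: level and format string are easy to get out of sync */
printk(KERN_ERR "CPU%u: processor failed to boot\n", cpu);

/* after: shorter, and the message stays greppable on one line */
pr_err("CPU%u: processor failed to boot\n", cpu);
```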

Revision tags: v3.15-rc1

# 1e6b4811 | 03-Apr-2014 | Kees Cook <keescook@chromium.org>

ARM: mm: allow non-text sections to be non-executable
Adds CONFIG_ARM_KERNMEM_PERMS to separate the kernel memory regions into section-sized areas that can have different permissions. Performs the NX permission changes during free_initmem, so that init memory can be reclaimed.
This uses section size instead of PMD size to reduce memory lost to padding on non-LPAE systems.
Based on work by Brad Spengler, Larry Bassel, and Laura Abbott.
Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Nicolas Pitre <nico@linaro.org>

# 99b4ac9a | 04-Apr-2014 | Kees Cook <keescook@chromium.org>

arm: fixmap: implement __set_fixmap()
This is used from set_fixmap() and clear_fixmap() via asm-generic/fixmap.h. Also makes sure that the fixmap allocation fits into the expected range.
Based on patch by Rabin Vincent.
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Rabin Vincent <rabin@rab.in>
Acked-by: Nicolas Pitre <nico@linaro.org>
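
A minimal sketch of the interface described above (details assumed, not the verbatim patch):

```c
void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
{
	unsigned long vaddr = __fix_to_virt(idx);
	pte_t *pte = pte_offset_kernel(pmd_off_k(vaddr), vaddr);

	/* catch callers that fall outside the reserved fixmap slots */
	BUG_ON(idx >= __end_of_fixed_addresses);

	if (pgprot_val(prot))
		set_pte_at(NULL, vaddr, pte,
			   pfn_pte(phys >> PAGE_SHIFT, prot));
	else
		pte_clear(NULL, vaddr, pte);

	local_flush_tlb_kernel_page(vaddr);
}
```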