
Searched hist:"5aec715d7d3122f77cabaa7578d9d25a0c1ed20e" (Results 1 – 7 of 7) sorted by relevance

/openbmc/linux/arch/arm64/include/asm/
thread_info.h diff 5aec715d7d3122f77cabaa7578d9d25a0c1ed20e Tue Oct 06 12:46:24 CDT 2015 Will Deacon <will.deacon@arm.com> arm64: mm: rewrite ASID allocator and MM context-switching code

Our current switch_mm implementation suffers from a number of problems:

(1) The ASID allocator relies on IPIs to synchronise the CPUs on a
rollover event

(2) Because of (1), we cannot allocate ASIDs with interrupts disabled
and therefore make use of a TIF_SWITCH_MM flag to postpone the
actual switch to finish_arch_post_lock_switch

(3) We run context switch with a reserved (invalid) TTBR0 value, even
though the ASID and pgd are updated atomically

(4) We take a global spinlock (cpu_asid_lock) during context-switch

(5) We use h/w broadcast TLB operations when they are not required
(e.g. in flush_context)

This patch addresses these problems by rewriting the ASID algorithm to
match the bitmap-based arch/arm/ implementation more closely. This in
turn allows us to remove much of the complications surrounding switch_mm,
including the ugly thread flag.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
mmu.h diff 5aec715d7d3122f77cabaa7578d9d25a0c1ed20e Tue Oct 06 12:46:24 CDT 2015 Will Deacon <will.deacon@arm.com> arm64: mm: rewrite ASID allocator and MM context-switching code
mmu_context.h diff 5aec715d7d3122f77cabaa7578d9d25a0c1ed20e Tue Oct 06 12:46:24 CDT 2015 Will Deacon <will.deacon@arm.com> arm64: mm: rewrite ASID allocator and MM context-switching code
/openbmc/linux/arch/arm64/mm/
context.c diff 5aec715d7d3122f77cabaa7578d9d25a0c1ed20e Tue Oct 06 12:46:24 CDT 2015 Will Deacon <will.deacon@arm.com> arm64: mm: rewrite ASID allocator and MM context-switching code
proc.S diff 5aec715d7d3122f77cabaa7578d9d25a0c1ed20e Tue Oct 06 12:46:24 CDT 2015 Will Deacon <will.deacon@arm.com> arm64: mm: rewrite ASID allocator and MM context-switching code
/openbmc/linux/arch/arm64/kernel/
efi.c diff 5aec715d7d3122f77cabaa7578d9d25a0c1ed20e Tue Oct 06 12:46:24 CDT 2015 Will Deacon <will.deacon@arm.com> arm64: mm: rewrite ASID allocator and MM context-switching code
asm-offsets.c diff 5aec715d7d3122f77cabaa7578d9d25a0c1ed20e Tue Oct 06 12:46:24 CDT 2015 Will Deacon <will.deacon@arm.com> arm64: mm: rewrite ASID allocator and MM context-switching code