History log of /openbmc/linux/arch/arm64/include/asm/kvm_host.h (Results 1 – 25 of 649)
Revision Date Author Comments
Revision tags: v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3, v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7, v6.5.6, v6.5.5, v6.5.4, v6.5.3, v6.5.2, v6.1.51, v6.5.1, v6.1.50, v6.5, v6.1.49, v6.1.48
# b1f778a2 20-Aug-2023 Marc Zyngier <maz@kernel.org>

KVM: arm64: pmu: Resync EL0 state on counter rotation

Huang Shijie reports that, when profiling a guest from the host
with a number of events that exceeds the number of available
counters, the reported counts are wildly inaccurate. Without
the counter oversubscription, the reported counts are correct.

Their investigation indicates that upon counter rotation (which
takes place on the back of a timer interrupt), we fail to
re-apply the guest EL0 enabling, leading to the counting of host
events instead of guest events.

In order to solve this, add yet another hook between the host PMU
driver and KVM, re-applying the guest EL0 configuration if the
right conditions apply (the host is VHE, we are in interrupt
context, and we interrupted a running vcpu). This triggers a new
vcpu request which will apply the correct configuration on guest
reentry.

With this, we have the correct counts, even when the counters are
oversubscribed.

Reported-by: Huang Shijie <shijie@os.amperecomputing.com>
Suggested-by: Oliver Upton <oliver.upton@linux.dev>
Tested-by: Huang Shijie <shijie@os.amperecomputing.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20230809013953.7692-1-shijie@os.amperecomputing.com
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20230820090108.177817-1-maz@kernel.org
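
As a rough userspace model of the control flow described above (all names here are illustrative, not the kernel's actual hook), the idea is a per-CPU "loaded vcpu" pointer plus a request bit that is serviced on guest reentry:

    /* Model of the resync hook: counter rotation in IRQ context
     * posts a request against the currently loaded vCPU; the
     * request is serviced on guest reentry. */
    #include <stdbool.h>
    #include <stdio.h>

    struct vcpu { unsigned long requests; };
    #define REQ_RELOAD_PMU (1UL << 0)

    static struct vcpu *loaded_vcpu;   /* per-CPU in the real thing */

    static void pmu_counters_rotated(bool in_irq)
    {
        /* Conditions from the commit text: interrupt context and
         * we interrupted a running vCPU. */
        if (in_irq && loaded_vcpu)
            loaded_vcpu->requests |= REQ_RELOAD_PMU;
    }

    static void guest_reentry(struct vcpu *v)
    {
        if (v->requests & REQ_RELOAD_PMU) {
            v->requests &= ~REQ_RELOAD_PMU;
            printf("re-applying guest EL0 counter configuration\n");
        }
    }

    int main(void)
    {
        struct vcpu v = { 0 };
        loaded_vcpu = &v;
        pmu_counters_rotated(true);  /* timer-driven rotation */
        guest_reentry(&v);           /* request serviced on entry */
        return 0;
    }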

Revision tags: v6.1.46
# 03fb54d0 15-Aug-2023 Marc Zyngier <maz@kernel.org>

KVM: arm64: nv: Add support for HCRX_EL2

HCRX_EL2 has an interesting effect on HFGITR_EL2, as it conditions
the traps of TLBI*nXS.

Expand the FGT support to add a new Fine Grained Filter that will
get checked when the instruction gets trapped, allowing the shadow
register to override the trap as needed.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Jing Zhang <jingzhangos@google.com>
Link: https://lore.kernel.org/r/20230815183903.2735724-29-maz@kernel.org

# e58ec47b 15-Aug-2023 Marc Zyngier <maz@kernel.org>

KVM: arm64: nv: Add trap forwarding infrastructure

A significant part of what a NV hypervisor needs to do is to decide
whether a trap from an L2+ guest has to be forwarded to an L1 guest
or handled locally. This is done by checking for the trap bits that
the guest hypervisor has set and acting accordingly, as described by
the architecture.

A previous approach was to sprinkle a bunch of checks in all the
system register accessors, but this is pretty error prone and doesn't
help getting an overview of what is happening.

Instead, implement a set of global tables that describe a trap bit,
combinations of trap bits, behaviours on trap, and what bits must
be evaluated on a system register trap.

Although this is painful to describe, it allows each and every
control bit to be specified in a static manner. To make it efficient,
the table is inserted in an xarray that is global to the system,
and checked each time we trap a system register while running
an L2 guest.

Add the basic infrastructure for now, while additional patches will
implement configuration registers.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Jing Zhang <jingzhangos@google.com>
Reviewed-by: Miguel Luis <miguel.luis@oracle.com>
Link: https://lore.kernel.org/r/20230815183903.2735724-15-maz@kernel.org
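
A userspace sketch of the table-driven idea (the kernel keys its lookups through an xarray; this model just scans a static array, and every encoding and bit below is made up for illustration):

    /* Static trap descriptors: which control bit governs which
     * trapped register, and what to do when that bit is set. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    enum behaviour { FORWARD_TO_L1, HANDLE_LOCALLY };

    struct trap_desc {
        uint32_t sysreg;    /* encoding of the trapped register */
        uint64_t ctrl_bit;  /* trap bit the L1 hypervisor may set */
        enum behaviour if_set;
    };

    static const struct trap_desc trap_table[] = {
        { .sysreg = 0x1234, .ctrl_bit = 1ULL << 3, .if_set = FORWARD_TO_L1 },
        { .sysreg = 0x5678, .ctrl_bit = 1ULL << 9, .if_set = FORWARD_TO_L1 },
    };

    static enum behaviour handle_trap(uint32_t sysreg, uint64_t l1_ctrl)
    {
        for (size_t i = 0; i < sizeof(trap_table) / sizeof(trap_table[0]); i++)
            if (trap_table[i].sysreg == sysreg &&
                (l1_ctrl & trap_table[i].ctrl_bit))
                return trap_table[i].if_set;  /* L1 asked for this trap */
        return HANDLE_LOCALLY;
    }

    int main(void)
    {
        printf("%s\n", handle_trap(0x1234, 1ULL << 3) == FORWARD_TO_L1
                       ? "forward to L1" : "handle locally");
        return 0;
    }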

# 50d2fe46 15-Aug-2023 Marc Zyngier <maz@kernel.org>

KVM: arm64: nv: Add FGT registers

Add the 5 registers covering FEAT_FGT. The AMU-related registers
are currently left out as we don't have a plan for them. Yet.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Miguel Luis <miguel.luis@oracle.com>
Reviewed-by: Jing Zhang <jingzhangos@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230815183903.2735724-13-maz@kernel.org

Revision tags: v6.1.45
# c42b6f0b 10-Aug-2023 Raghavendra Rao Ananta <rananta@google.com>

KVM: arm64: Implement kvm_arch_flush_remote_tlbs_range()

Implement kvm_arch_flush_remote_tlbs_range() for arm64
to invalidate the given range in the TLB.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Shaoqin Huang <shahuang@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230811045127.3308641-12-rananta@google.com

# 32121c81 10-Aug-2023 Raghavendra Rao Ananta <rananta@google.com>

KVM: arm64: Use kvm_arch_flush_remote_tlbs()

Stop depending on CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL and opt to
standardize on kvm_arch_flush_remote_tlbs() since it avoids
duplicating the generic TLB stats across architectures that implement
their own remote TLB flush.

This adds an extra function call to the ARM64 kvm_flush_remote_tlbs()
path, but that is a small cost in comparison to flushing remote TLBs.

In addition, instead of just incrementing the remote_tlb_flush_requests
stat, the generic interface also increments the remote_tlb_flush stat.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
Reviewed-by: Shaoqin Huang <shahuang@redhat.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230811045127.3308641-4-rananta@google.com

# a6b33d00 14-Aug-2023 Yue Haibing <yuehaibing@huawei.com>

KVM: arm64: Remove unused declarations

Commit 53692908b0f5 ("KVM: arm/arm64: vgic: Fix source vcpu issues for GICv2 SGI")
removed vgic_v2_set_npie()/vgic_v3_set_npie() but not the declarations.
Commit 29eb5a3c57f7 ("KVM: arm64: Handle PtrAuth traps early") left behind
kvm_arm_vcpu_ptrauth_trap(), remove it.
Commit 2a0c343386ae ("KVM: arm64: Initialize trap registers for protected VMs")
declared but never implemented kvm_init_protected_traps() and
commit cf5d318865e2 ("arm/arm64: KVM: Turn off vcpus on PSCI shutdown/reboot")
declared but never implemented force_vm_exit().

Signed-off-by: Yue Haibing <yuehaibing@huawei.com>
Reviewed-by: Zenghui Yu <zenghui.yu@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230814140636.45988-1-yuehaibing@huawei.com

Revision tags: v6.1.44, v6.1.43, v6.1.42, v6.1.41, v6.1.40, v6.1.39
# b321c31c 13-Jul-2023 Marc Zyngier <maz@kernel.org>

KVM: arm64: vgic-v4: Make the doorbell request robust w.r.t preemption

Xiang reports that VMs occasionally fail to boot on GICv4.1 systems when
running a preemptible kernel, as it is possible that a vCPU is blocked
without requesting a doorbell interrupt.

The issue is that any preemption that occurs between vgic_v4_put() and
schedule() on the block path will mark the vPE as nonresident and *not*
request a doorbell irq. This occurs because when the vcpu thread is
resumed on its way to block, vcpu_load() will make the vPE resident
again. Once the vcpu actually blocks, we don't request a doorbell
anymore, and the vcpu won't be woken up on interrupt delivery.

Fix it by tracking that we're entering WFI, and key the doorbell
request on that flag. This allows us to avoid making the vPE resident
when going through a preempt/schedule cycle, meaning we don't lose
any state.

Cc: stable@vger.kernel.org
Fixes: 8e01d9a396e6 ("KVM: arm64: vgic-v4: Move the GICv4 residency flow to be driven by vcpu_load/put")
Reported-by: Xiang Chen <chenxiang66@hisilicon.com>
Suggested-by: Zenghui Yu <yuzenghui@huawei.com>
Tested-by: Xiang Chen <chenxiang66@hisilicon.com>
Co-developed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Zenghui Yu <yuzenghui@huawei.com>
Link: https://lore.kernel.org/r/20230713070657.3873244-1-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
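
Loosely modelled in userspace, the fix keys the put/load behaviour on the WFI flag so that a preempt/schedule cycle neither makes the vPE resident again nor drops the doorbell request (function names echo the commit text; the bodies are purely illustrative):

    /* Keying the doorbell on an "entering WFI" flag: an ordinary
     * preemption between put and schedule() no longer loses it. */
    #include <stdbool.h>
    #include <stdio.h>

    struct vcpu { bool in_wfi; bool vpe_resident; bool doorbell_requested; };

    static void vgic_put(struct vcpu *v)
    {
        v->vpe_resident = false;
        if (v->in_wfi)                 /* block path, not preemption */
            v->doorbell_requested = true;
    }

    static void vgic_load(struct vcpu *v)
    {
        if (v->in_wfi)                 /* don't undo the WFI state */
            return;
        v->vpe_resident = true;
        v->doorbell_requested = false;
    }

    int main(void)
    {
        struct vcpu v = { .in_wfi = true };
        vgic_put(&v);                  /* heading into WFI */
        vgic_load(&v);                 /* preempt/schedule cycle */
        vgic_put(&v);
        printf("doorbell requested: %d\n", v.doorbell_requested);
        return 0;
    }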

# 5346f7e1 10-Jul-2023 Oliver Upton <oliver.upton@linux.dev>

KVM: arm64: Always return generic v8 as the preferred target

Userspace selecting an implementation-specific vCPU target has been
completely useless for a very long time. Let's go whole hog and start
returning the generic v8 target across all implementations as the
preferred target.

Uphold the pre-existing behavior by tolerating either the generic target
or an implementation-specific target if the vCPU happens to be running
on one of the lucky few parts.

Acked-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230710193140.1706399-5-oliver.upton@linux.dev

# ef984060 10-Jul-2023 Oliver Upton <oliver.upton@linux.dev>

KVM: arm64: Replace vCPU target with a configuration flag

The value of kvm_vcpu_arch::target has been used to determine if a vCPU
has actually been initialized. Storing this as an integer is needless at
this point, as KVM doesn't do any microarch-specific emulation in the
first place. Instead, all we care about is whether or not the vCPU has
been initialized.

Delete the field in favor of a vCPU configuration flag indicating if
KVM_ARM_VCPU_INIT has completed for the vCPU.

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230710193140.1706399-4-oliver.upton@linux.dev
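
The replacement boils down to a one-bit predicate; a minimal sketch, with the flag name taken from the commit text as an assumption:

    /* One configuration bit replaces the integer target. */
    #include <stdbool.h>

    struct vcpu { unsigned long cflags; };
    #define VCPU_INITIALIZED (1UL << 0)   /* assumed flag name */

    static inline bool kvm_vcpu_initialized(const struct vcpu *v)
    {
        return v->cflags & VCPU_INITIALIZED; /* set when VCPU_INIT completes */
    }

    int main(void)
    {
        struct vcpu v = { .cflags = VCPU_INITIALIZED };
        return kvm_vcpu_initialized(&v) ? 0 : 1;
    }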

Revision tags: v6.1.38, v6.1.37, v6.1.36, v6.4, v6.1.35, v6.1.34
# 68667240 09-Jun-2023 Oliver Upton <oliver.upton@linux.dev>

KVM: arm64: Rip out the vestiges of the 'old' ID register scheme

There's no longer a need for the baggage of the old scheme for handling
configurable ID register fields. Rip it all out in favor of the
generalized infrastructure.

Link: https://lore.kernel.org/r/20230609190054.1542113-12-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

# 47334146 09-Jun-2023 Jing Zhang <jingzhangos@google.com>

KVM: arm64: Save ID registers' sanitized value per guest

Initialize the default ID register values upon the first call to
KVM_ARM_VCPU_INIT. The vCPU feature flags are finalized at that point,
so it is possible to determine the maximum feature set supported by a
particular VM configuration. Do nothing with these values for now, as we
need to rework the plumbing of what's already writable to be compatible
with the generic infrastructure.

Co-developed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Jing Zhang <jingzhangos@google.com>
Link: https://lore.kernel.org/r/20230609190054.1542113-7-oliver.upton@linux.dev
[Oliver: Hoist everything into KVM_ARM_VCPU_INIT time, so the features
are final]
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

# f90f9360 09-Jun-2023 Oliver Upton <oliver.upton@linux.dev>

KVM: arm64: Rewrite IMPDEF PMU version as NI

KVM allows userspace to write an IMPDEF PMU version to the corresponding
32bit and 64bit ID register fields for the sake of backwards
compatibility with kernels that lacked commit 3d0dba5764b9 ("KVM: arm64:
PMU: Move the ID_AA64DFR0_EL1.PMUver limit to VM creation"). Plumbing
that IMPDEF PMU version through to the guest is getting in the way of
progress, and really doesn't make any sense in the first place.

Bite the bullet and reinterpret the IMPDEF PMU version as NI (0) for
userspace writes. Additionally, spill the dirty details into a comment.

Link: https://lore.kernel.org/r/20230609190054.1542113-5-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
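
A sketch of the write-side reinterpretation: ID_AA64DFR0_EL1.PMUVer is a 4-bit field (bits [11:8]) where 0xF means IMP_DEF and 0x0 means NI, so a userspace write of IMP_DEF is stored as NI:

    /* PMUVer is ID_AA64DFR0_EL1 bits [11:8]: 0xF = IMP_DEF, 0x0 = NI. */
    #include <stdint.h>
    #include <stdio.h>

    #define PMUVER_SHIFT  8
    #define PMUVER_MASK   (0xFULL << PMUVER_SHIFT)
    #define PMUVER_IMPDEF 0xFULL
    #define PMUVER_NI     0x0ULL

    static uint64_t sanitise_pmuver(uint64_t val)
    {
        uint64_t pmuver = (val & PMUVER_MASK) >> PMUVER_SHIFT;

        if (pmuver == PMUVER_IMPDEF)   /* backwards-compat writes */
            val = (val & ~PMUVER_MASK) | (PMUVER_NI << PMUVER_SHIFT);
        return val;
    }

    int main(void)
    {
        /* 0xF00 carries an IMPDEF PMUVer; it is stored as 0. */
        printf("%llx\n", (unsigned long long)sanitise_pmuver(0xF00));
        return 0;
    }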

# 2251e9ff 09-Jun-2023 Oliver Upton <oliver.upton@linux.dev>

KVM: arm64: Make vCPU feature flags consistent VM-wide

To date KVM has allowed userspace to construct asymmetric VMs where
particular features may only be supported on a subset of vCPUs. This
wasn't really the intended usage pattern, and it is a total pain in the
ass to keep working in the kernel. What's more, this is at odds with CPU
features in host userspace, where asymmetric features are largely hidden
or disabled.

It's time to put an end to the whole game. Require all vCPUs in the VM
to have the same feature set, rejecting deviants in the
KVM_ARM_VCPU_INIT ioctl. Preserve some of the vestiges of per-vCPU
feature flags in case we need to reinstate the old behavior for some
limited configurations. Yes, this is a sign of cowardice around a
user-visible change.

Hoist all of the 32-bit limitations into kvm_vcpu_init_check_features()
to avoid nested attempts to acquire the config_lock, which won't end
well.

Link: https://lore.kernel.org/r/20230609190054.1542113-4-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
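
A userspace model of the VM-wide rule: the first vCPU's feature set is locked in, and any later KVM_ARM_VCPU_INIT with a different set is rejected (names and the single-word bitmap are illustrative):

    /* First vCPU locks in the VM-wide feature set; deviants get
     * -EINVAL from the model's KVM_ARM_VCPU_INIT path. */
    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct vm {
        bool features_locked;
        unsigned long features;   /* one bit per vCPU feature */
    };

    static int vcpu_init(struct vm *vm, unsigned long requested)
    {
        if (!vm->features_locked) {
            vm->features = requested;   /* first vCPU sets the set */
            vm->features_locked = true;
            return 0;
        }
        return requested == vm->features ? 0 : -EINVAL;
    }

    int main(void)
    {
        struct vm vm = { 0 };
        vcpu_init(&vm, 0x3);
        printf("deviant -> %d\n", vcpu_init(&vm, 0x1)); /* -EINVAL */
        return 0;
    }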

# a7a2c72a 09-Jun-2023 Oliver Upton <oliver.upton@linux.dev>

KVM: arm64: Separate out feature sanitisation and initialisation

kvm_vcpu_set_target() iteratively sanitises and copies feature flags in
one go. This is rather odd, especially considering the fact that bitmap
accessors can do the heavy lifting. A subsequent change will make vCPU
features VM-wide, and fitting that into the present implementation is
just a chore.

Rework the whole thing to use bitmap accessors to sanitise and copy
flags.

Link: https://lore.kernel.org/r/20230609190054.1542113-2-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
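
A model of the bitmap-accessor approach: one mask test to reject unsupported flags, one AND-and-copy to install them, instead of an iterative per-flag loop (the kernel operates on real bitmaps; a single word stands in here):

    /* Sanitise and copy in two whole-bitmap operations rather than
     * a per-flag loop; a single word stands in for the bitmap. */
    #include <stdio.h>

    static int vcpu_set_features(unsigned long *dst,
                                 unsigned long requested,
                                 unsigned long supported)
    {
        if (requested & ~supported)    /* any unsupported flag? */
            return -1;
        *dst = requested & supported;  /* bitmap_and + bitmap_copy */
        return 0;
    }

    int main(void)
    {
        unsigned long features;
        printf("%d\n", vcpu_set_features(&features, 0x5, 0x7)); /* ok */
        printf("%d\n", vcpu_set_features(&features, 0x9, 0x7)); /* reject */
        return 0;
    }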

Revision tags: v6.1.33, v6.1.32, v6.1.31, v6.1.30
# 187de7c2 23-May-2023 Mark Brown <broonie@kernel.org>

arm64/sysreg: Standardise naming of bitfield constants in OSL[AS]R_EL1

Our standard scheme for naming the constants for bitfields in system
registers includes _ELx in the name but not the SYS_, update the
constants for OSL[AS]R_EL1 to follow this convention.

Reviewed-by: Shaoqin Huang <shahuang@redhat.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20230419-arm64-syreg-gen-v2-3-4c6add1f6257@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

# 86f9de9d 06-Jun-2023 Joey Gouly <joey.gouly@arm.com>

KVM: arm64: Save/restore PIE registers

Define the new system registers that PIE introduces and context switch them.
The PIE feature is still hidden from the ID register, and not exposed to a VM.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Oliver Upton <oliver.upton@linux.dev>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Zenghui Yu <yuzenghui@huawei.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230606145859.697944-10-joey.gouly@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

# fbff5606 06-Jun-2023 Joey Gouly <joey.gouly@arm.com>

KVM: arm64: Save/restore TCR2_EL1

Define the new system register TCR2_EL1 and context switch it.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Oliver Upton <oliver.upton@linux.dev>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Zenghui Yu <yuzenghui@huawei.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230606145859.697944-9-joey.gouly@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

# 0c2f9acf 02-Jun-2023 Reiji Watanabe <reijiw@google.com>

KVM: arm64: PMU: Don't overwrite PMUSERENR with vcpu loaded

Currently, with VHE, KVM sets ER, CR, SW and EN bits of
PMUSERENR_EL0 to 1 on vcpu_load(), and saves and restores
the register value for the host on vcpu_load() and vcpu_put().
If those bits are cleared on a pCPU with a vCPU
loaded (armv8pmu_start() would do that when PMU counters are
programmed for the guest), PMU access from the guest EL0 might
be trapped to the guest EL1 directly regardless of the current
PMUSERENR_EL0 value of the vCPU.

Fix this by not letting armv8pmu_start() overwrite PMUSERENR_EL0
on the pCPU where PMUSERENR_EL0 for the guest is loaded, and
instead updating the saved shadow register value for the host
so that the value can be restored on vcpu_put() later.
While vcpu_{put,load}() are manipulating PMUSERENR_EL0, disable
IRQs to prevent a race condition between these processes and IPIs
that attempt to update PMUSERENR_EL0 for the host EL0.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Suggested-by: Marc Zyngier <maz@kernel.org>
Fixes: 83a7a4d643d3 ("arm64: perf: Enable PMU counter userspace access for perf event")
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230603025035.3781797-3-reijiw@google.com
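
A userspace model of the ownership rule described above: while the guest value is loaded, armv8pmu_start()-style updates land in the host's shadow copy, which vcpu_put() restores (all names are illustrative, and the IRQ masking is stubbed out as comments):

    /* While the guest value is loaded, host-side updates go to the
     * shadow copy and take effect at vcpu_put(). */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t hw_pmuserenr;           /* stands in for the register */
    static uint64_t host_pmuserenr_shadow;
    static bool guest_owns_pmuserenr;

    static void armv8pmu_start_model(uint64_t host_val)
    {
        if (guest_owns_pmuserenr)
            host_pmuserenr_shadow = host_val; /* restored at put */
        else
            hw_pmuserenr = host_val;
    }

    static void vcpu_load_model(uint64_t guest_val)
    {
        /* local_irq_save() here in the real code */
        host_pmuserenr_shadow = hw_pmuserenr;
        hw_pmuserenr = guest_val;
        guest_owns_pmuserenr = true;
        /* local_irq_restore() */
    }

    static void vcpu_put_model(void)
    {
        /* local_irq_save() */
        hw_pmuserenr = host_pmuserenr_shadow;
        guest_owns_pmuserenr = false;
        /* local_irq_restore() */
    }

    int main(void)
    {
        vcpu_load_model(0xf);        /* ER|CR|SW|EN for the guest */
        armv8pmu_start_model(0x0);   /* host clears EL0 access bits */
        vcpu_put_model();
        printf("host view: %llx\n",  /* 0, not the guest's 0xf */
               (unsigned long long)hw_pmuserenr);
        return 0;
    }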

# 12bdce4f 23-May-2023 Will Deacon <will@kernel.org>

KVM: arm64: Probe FF-A version and host/hyp partition ID during init

Probe FF-A during pKVM initialisation so that we can detect any
inconsistencies in the version or partition ID early on.

Signed-off-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20230523101828.7328-3-will@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

Revision tags: v6.1.29, v6.1.28, v6.1.27
# 2f440b72 26-Apr-2023 Ricardo Koller <ricarkol@google.com>

KVM: arm64: Add KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE

Add a capability for userspace to specify the eager split chunk size.
The chunk size specifies how many pages to break at a time, using a
single allocation. The bigger the chunk size, the more pages need to be
allocated ahead of time.

Suggested-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Link: https://lore.kernel.org/r/20230426172330.1439644-6-ricarkol@google.com
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
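
The trade-off the capability exposes is simple arithmetic; a model, with hypothetical names:

    /* Bigger chunks mean fewer split passes but a larger up-front
     * allocation per pass. */
    #include <stdio.h>

    static unsigned long eager_split_passes(unsigned long total_pages,
                                            unsigned long chunk_pages)
    {
        return (total_pages + chunk_pages - 1) / chunk_pages;
    }

    int main(void)
    {
        /* Splitting a 512-page block 64 pages at a time: 8 passes,
         * each backed by a single up-front allocation. */
        printf("%lu\n", eager_split_passes(512, 64));
        return 0;
    }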

Revision tags: v6.1.26, v6.3, v6.1.25
# 35dcb3ac 18-Apr-2023 Marc Zyngier <maz@kernel.org>

KVM: arm64: Make vcpu flag updates non-preemptible

Per-vcpu flags are updated using a non-atomic RMW operation.
Which means it is possible to get preempted between the read and
write operations.

Another interesting thing to note is that preemption also updates
flags, as we have some flag manipulation in both the load and put
operations.

It is thus possible to lose information communicated by either
load or put, as the preempted flag update will overwrite the flags
when the thread is resumed. This is specially critical if either
load or put has stored information which depends on the physical
CPU the vcpu runs on.

This results in really elusive bugs, and kudos must be given to
Mostafa for the long hours of debugging, and finally spotting
the problem.

Fix it by disabling preemption during the RMW operation, which
ensures that the state stays consistent. Also upgrade the vcpu_get_flag
path to use READ_ONCE() to make sure the field is always atomically
accessed.

Fixes: e87abb73e594 ("KVM: arm64: Add helpers to manipulate vcpu flags among a set")
Reported-by: Mostafa Saleh <smostafa@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20230418125737.2327972-1-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
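
A userspace model of the two-part fix: the read-modify-write runs with preemption disabled (stubbed here), and the read side goes through READ_ONCE() so the flags word is fetched with a single access:

    /* RMW under preempt_disable(); reads via a single access. */
    #include <stdio.h>

    #define preempt_disable() do { } while (0)  /* stand-in */
    #define preempt_enable()  do { } while (0)
    #define READ_ONCE(x) (*(volatile unsigned long *)&(x))

    static unsigned long flags;

    static void vcpu_set_flag(unsigned long bit)
    {
        preempt_disable();   /* no load/put can interleave the RMW */
        flags |= bit;
        preempt_enable();
    }

    static int vcpu_get_flag(unsigned long bit)
    {
        return !!(READ_ONCE(flags) & bit);  /* one atomic access */
    }

    int main(void)
    {
        vcpu_set_flag(1UL << 2);
        printf("%d\n", vcpu_get_flag(1UL << 2));
        return 0;
    }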

Revision tags: v6.1.24, v6.1.23
# fb88707d 04-Apr-2023 Oliver Upton <oliver.upton@linux.dev>

KVM: arm64: Use a maple tree to represent the SMCCC filter

Maple tree is an efficient B-tree implementation that is intended for
storing non-overlapping intervals. Such a data structure is a good fit
for the SMCCC filter as it is desirable to sparsely allocate the 32 bit
function ID space.

To that end, add a maple tree to kvm_arch and correctly init/teardown
along with the VM. Wire in a test against the hypercall filter for HVCs
which does nothing until the controls are exposed to userspace.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230404154050.2270077-8-oliver.upton@linux.dev
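
A userspace stand-in for the interval lookup the maple tree provides: non-overlapping function-ID ranges mapped to an action, with in-kernel handling as the default (the ranges and action names are made up; the kernel does the lookup through the maple tree rather than a linear scan):

    /* Non-overlapping function-ID ranges mapped to an action. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    enum action { ACTION_HANDLE, ACTION_DENY, ACTION_FWD_TO_USER };

    struct filter_range { uint32_t first, last; enum action act; };

    static const struct filter_range filter[] = {
        { 0x84000000, 0x8400001f, ACTION_DENY },
        { 0xc6000000, 0xc600ffff, ACTION_FWD_TO_USER },
    };

    static enum action smccc_filter(uint32_t func_id)
    {
        for (size_t i = 0; i < sizeof(filter) / sizeof(filter[0]); i++)
            if (func_id >= filter[i].first && func_id <= filter[i].last)
                return filter[i].act;
        return ACTION_HANDLE;   /* default: handle in-kernel */
    }

    int main(void)
    {
        printf("%d\n", smccc_filter(0xc6000010)); /* FWD_TO_USER */
        return 0;
    }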

# de40bb8a 04-Apr-2023 Oliver Upton <oliver.upton@linux.dev>

KVM: arm64: Add a helper to check if a VM has ran once

The test_bit(...) pattern is quite a lot of keystrokes. Replace
existing callsites with a helper.

No functional change intended.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230404154050.2270077-3-oliver.upton@linux.dev
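
The helper is essentially a one-line predicate; a sketch, with the flag name assumed rather than checked against the tree:

    #include <stdbool.h>

    struct kvm_arch { unsigned long flags; };
    struct kvm { struct kvm_arch arch; };
    #define KVM_ARCH_FLAG_HAS_RAN_ONCE (1UL << 0)  /* assumed name */

    static inline bool kvm_vm_has_ran_once(struct kvm *kvm)
    {
        /* test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, ...) in kernel terms */
        return kvm->arch.flags & KVM_ARCH_FLAG_HAS_RAN_ONCE;
    }

    int main(void)
    {
        struct kvm kvm = { .arch.flags = KVM_ARCH_FLAG_HAS_RAN_ONCE };
        return kvm_vm_has_ran_once(&kvm) ? 0 : 1;
    }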

# 81dc9504 30-Mar-2023 Marc Zyngier <maz@kernel.org>

KVM: arm64: nv: timers: Support hyp timer emulation

Emulating EL2 also means emulating the EL2 timers. To do so, we expand
our timer framework to deal with at most 4 timers. At any given time,
two of them are backed by the HW timers, and the two others are purely
emulated.

The role of deciding which is which at any given time is left to a
mapping function which is called every time we need to make such a
decision.

Reviewed-by: Colton Lewis <coltonlewis@google.com>
Co-developed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230330174800.2677007-18-maz@kernel.org
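
One plausible reading of the mapping function, modelled in userspace: whichever exception level the guest currently runs at gets its pair of timers backed by HW, and the other pair is emulated (this split is an assumption drawn from the commit text, not the kernel's actual policy):

    /* Four timers, two HW-backed at any time; the mapping follows
     * the context the guest is running in. */
    #include <stdbool.h>
    #include <stdio.h>

    enum timer { VTIMER, PTIMER, HVTIMER, HPTIMER, NR_TIMERS };

    static void map_timers(bool guest_in_vel2, bool hw_backed[NR_TIMERS])
    {
        hw_backed[VTIMER]  = !guest_in_vel2;
        hw_backed[PTIMER]  = !guest_in_vel2;
        hw_backed[HVTIMER] = guest_in_vel2;  /* emulated otherwise */
        hw_backed[HPTIMER] = guest_in_vel2;
    }

    int main(void)
    {
        bool hw[NR_TIMERS];
        map_timers(true, hw);   /* vEL2 context */
        printf("hvtimer HW-backed: %d\n", hw[HVTIMER]);
        return 0;
    }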
