Merge remote-tracking branch 'kvmarm/kvm-arm64/stolen-time' into kvmarm-master/next
Marc Zyngier committed Oct 24, 2019
2 parents da34517 + c7892db commit a4b28f5
Showing 36 changed files with 774 additions and 207 deletions.
6 changes: 3 additions & 3 deletions Documentation/admin-guide/kernel-parameters.txt
@@ -3083,9 +3083,9 @@
[X86,PV_OPS] Disable paravirtualized VMware scheduler
clock and use the default one.

-no-steal-acc [X86,KVM] Disable paravirtualized steal time accounting.
-steal time is computed, but won't influence scheduler
-behaviour
+no-steal-acc [X86,KVM,ARM64] Disable paravirtualized steal time
+accounting. steal time is computed, but won't
+influence scheduler behaviour

nolapic [X86-32,APIC] Do not enable or use the local APIC.

80 changes: 80 additions & 0 deletions Documentation/virt/kvm/arm/pvtime.rst
@@ -0,0 +1,80 @@
.. SPDX-License-Identifier: GPL-2.0

Paravirtualized time support for arm64
======================================

Arm specification DEN0057/A defines a standard for paravirtualised time
support for AArch64 guests:

https://developer.arm.com/docs/den0057/a

KVM/arm64 implements the stolen time part of this specification by providing
some hypervisor service calls to support a paravirtualized guest obtaining a
view of the amount of time stolen from its execution.

Two new SMCCC compatible hypercalls are defined:

* PV_TIME_FEATURES: 0xC5000020
* PV_TIME_ST: 0xC5000021

These are only available in the SMC64/HVC64 calling convention as
paravirtualized time is not available to 32 bit Arm guests. The existence of
the PV_TIME_FEATURES hypercall should be probed using the SMCCC 1.1 ARCH_FEATURES
mechanism before calling it.

PV_TIME_FEATURES

============= ======== ==========
Function ID:  (uint32) 0xC5000020
PV_call_id:   (uint32) The function to query for support.
                       Currently only PV_TIME_ST is supported.
Return value: (int64)  NOT_SUPPORTED (-1) or SUCCESS (0) if the relevant
                       PV-time feature is supported by the hypervisor.
============= ======== ==========

PV_TIME_ST

============= ======== ==========
Function ID:  (uint32) 0xC5000021
Return value: (int64)  IPA of the stolen time data structure for this
                       VCPU. On failure:
                       NOT_SUPPORTED (-1)
============= ======== ==========

The IPA returned by PV_TIME_ST should be mapped by the guest as normal memory
with inner and outer write back caching attributes, in the inner shareable
domain. A total of 16 bytes from the IPA returned are guaranteed to be
meaningfully filled by the hypervisor (see structure below).

PV_TIME_ST returns the structure for the calling VCPU.

Stolen Time
-----------

The structure pointed to by the PV_TIME_ST hypercall is as follows:

+-------------+-------------+-------------+----------------------------+
| Field | Byte Length | Byte Offset | Description |
+=============+=============+=============+============================+
| Revision | 4 | 0 | Must be 0 for version 1.0 |
+-------------+-------------+-------------+----------------------------+
| Attributes | 4 | 4 | Must be 0 |
+-------------+-------------+-------------+----------------------------+
| Stolen time | 8 | 8 | Stolen time in unsigned |
| | | | nanoseconds indicating how |
| | | | much time this VCPU thread |
| | | | was involuntarily not |
| | | | running on a physical CPU. |
+-------------+-------------+-------------+----------------------------+

All values in the structure are stored little-endian.

The structure will be updated by the hypervisor prior to scheduling a VCPU. It
will be present within a reserved region of the normal memory given to the
guest. The guest should not attempt to write into this memory. There is a
structure per VCPU of the guest.

It is advisable that one or more 64k pages are set aside for the purpose of
these structures and not used for other purposes; this enables the guest to map
the region using 64k pages and avoids conflicting attributes with other memory.

For the user space interface see Documentation/virt/kvm/devices/vcpu.txt
section "3. GROUP: KVM_ARM_VCPU_PVTIME_CTRL".
14 changes: 14 additions & 0 deletions Documentation/virt/kvm/devices/vcpu.txt
@@ -60,3 +60,17 @@ time to use the number provided for a given timer, overwriting any previously
configured values on other VCPUs. Userspace should configure the interrupt
numbers on at least one VCPU after creating all VCPUs and before running any
VCPUs.

3. GROUP: KVM_ARM_VCPU_PVTIME_CTRL
Architectures: ARM64

3.1 ATTRIBUTE: KVM_ARM_VCPU_PVTIME_IPA
Parameters: 64-bit base address
Returns: -ENXIO: Stolen time not implemented
         -EEXIST: Base address already set for this VCPU
         -EINVAL: Base address not 64 byte aligned

Specifies the base address of the stolen time structure for this VCPU. The
base address must be 64 byte aligned and exist within a valid guest memory
region. See Documentation/virt/kvm/arm/pvtime.rst for more information
including the layout of the stolen time structure.
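
From userspace, the attribute is set with the standard KVM_SET_DEVICE_ATTR
ioctl on the VCPU file descriptor. A hedged sketch follows; the helper name,
error handling and the pvtime_ipa variable are placeholders, and the VMM is
assumed to have reserved a 64-byte aligned slot for each VCPU inside an
existing guest memory region.

#include <errno.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: point KVM at this VCPU's stolen time structure. */
static int set_pvtime_base(int vcpu_fd, uint64_t pvtime_ipa)
{
        struct kvm_device_attr attr = {
                .group = KVM_ARM_VCPU_PVTIME_CTRL,
                .attr  = KVM_ARM_VCPU_PVTIME_IPA,
                /* addr is a pointer to the 64-bit base address */
                .addr  = (uint64_t)&pvtime_ipa,
        };

        return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr) ? -errno : 0;
}

Before setting it, KVM_HAS_DEVICE_ATTR with the same group/attribute pair can
be used to check whether the running kernel implements stolen time; it fails
with -ENXIO otherwise.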
25 changes: 25 additions & 0 deletions arch/arm/include/asm/kvm_host.h
@@ -7,6 +7,7 @@
#ifndef __ARM_KVM_HOST_H__
#define __ARM_KVM_HOST_H__

#include <linux/arm-smccc.h>
#include <linux/errno.h>
#include <linux/types.h>
#include <linux/kvm_types.h>
@@ -38,6 +39,7 @@
KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
#define KVM_REQ_IRQ_PENDING KVM_ARCH_REQ(1)
#define KVM_REQ_VCPU_RESET KVM_ARCH_REQ(2)
#define KVM_REQ_RECORD_STEAL KVM_ARCH_REQ(3)

DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);

@@ -331,6 +333,29 @@ static inline int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
int kvm_perf_init(void);
int kvm_perf_teardown(void);

static inline long kvm_hypercall_pv_features(struct kvm_vcpu *vcpu)
{
return SMCCC_RET_NOT_SUPPORTED;
}

static inline gpa_t kvm_init_stolen_time(struct kvm_vcpu *vcpu)
{
return GPA_INVALID;
}

static inline void kvm_update_stolen_time(struct kvm_vcpu *vcpu)
{
}

static inline void kvm_arm_pvtime_vcpu_init(struct kvm_vcpu_arch *vcpu_arch)
{
}

static inline bool kvm_arm_is_pvtime_enabled(struct kvm_vcpu_arch *vcpu_arch)
{
return false;
}

void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);

struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
2 changes: 1 addition & 1 deletion arch/arm/kvm/Makefile
@@ -24,7 +24,7 @@ obj-y += kvm-arm.o init.o interrupts.o
obj-y += handle_exit.o guest.o emulate.o reset.o
obj-y += coproc.o coproc_a15.o coproc_a7.o vgic-v3-coproc.o
obj-y += $(KVM)/arm/arm.o $(KVM)/arm/mmu.o $(KVM)/arm/mmio.o
-obj-y += $(KVM)/arm/psci.o $(KVM)/arm/perf.o
+obj-y += $(KVM)/arm/psci.o $(KVM)/arm/perf.o $(KVM)/arm/hypercalls.o
obj-y += $(KVM)/arm/aarch32.o

obj-y += $(KVM)/arm/vgic/vgic.o
2 changes: 1 addition & 1 deletion arch/arm/kvm/handle_exit.c
@@ -9,7 +9,7 @@
#include <asm/kvm_emulate.h>
#include <asm/kvm_coproc.h>
#include <asm/kvm_mmu.h>
-#include <kvm/arm_psci.h>
+#include <kvm/arm_hypercalls.h>
#include <trace/events/kvm.h>

#include "trace.h"
21 changes: 7 additions & 14 deletions arch/arm/mm/proc-v7-bugs.c
@@ -1,7 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/arm-smccc.h>
#include <linux/kernel.h>
-#include <linux/psci.h>
#include <linux/smp.h>

#include <asm/cp15.h>
@@ -75,26 +74,20 @@ static void cpu_v7_spectre_init(void)
case ARM_CPU_PART_CORTEX_A72: {
struct arm_smccc_res res;

-if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
-break;
+arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+ARM_SMCCC_ARCH_WORKAROUND_1, &res);
+if ((int)res.a0 != 0)
+return;

-switch (psci_ops.conduit) {
-case PSCI_CONDUIT_HVC:
-arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
-ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-if ((int)res.a0 != 0)
-break;
+switch (arm_smccc_1_1_get_conduit()) {
+case SMCCC_CONDUIT_HVC:
per_cpu(harden_branch_predictor_fn, cpu) =
call_hvc_arch_workaround_1;
cpu_do_switch_mm = cpu_v7_hvc_switch_mm;
spectre_v2_method = "hypervisor";
break;

-case PSCI_CONDUIT_SMC:
-arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
-ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-if ((int)res.a0 != 0)
-break;
+case SMCCC_CONDUIT_SMC:
per_cpu(harden_branch_predictor_fn, cpu) =
call_smc_arch_workaround_1;
cpu_do_switch_mm = cpu_v7_smc_switch_mm;
29 changes: 29 additions & 0 deletions arch/arm64/include/asm/kvm_host.h
@@ -44,6 +44,7 @@
KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
#define KVM_REQ_IRQ_PENDING KVM_ARCH_REQ(1)
#define KVM_REQ_VCPU_RESET KVM_ARCH_REQ(2)
#define KVM_REQ_RECORD_STEAL KVM_ARCH_REQ(3)

DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);

@@ -346,6 +347,13 @@ struct kvm_vcpu_arch {
/* True when deferrable sysregs are loaded on the physical CPU,
* see kvm_vcpu_load_sysregs and kvm_vcpu_put_sysregs. */
bool sysregs_loaded_on_cpu;

/* Guest PV state */
struct {
u64 steal;
u64 last_steal;
gpa_t base;
} steal;
};

/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
@@ -486,6 +494,27 @@ void handle_exit_early(struct kvm_vcpu *vcpu, struct kvm_run *run,
int kvm_perf_init(void);
int kvm_perf_teardown(void);

long kvm_hypercall_pv_features(struct kvm_vcpu *vcpu);
gpa_t kvm_init_stolen_time(struct kvm_vcpu *vcpu);
void kvm_update_stolen_time(struct kvm_vcpu *vcpu);

int kvm_arm_pvtime_set_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr);
int kvm_arm_pvtime_get_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr);
int kvm_arm_pvtime_has_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr);

static inline void kvm_arm_pvtime_vcpu_init(struct kvm_vcpu_arch *vcpu_arch)
{
vcpu_arch->steal.base = GPA_INVALID;
}

static inline bool kvm_arm_is_pvtime_enabled(struct kvm_vcpu_arch *vcpu_arch)
{
return (vcpu_arch->steal.base != GPA_INVALID);
}

void kvm_set_sei_esr(struct kvm_vcpu *vcpu, u64 syndrome);

struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
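
The new steal/last_steal/base fields and the KVM_REQ_RECORD_STEAL request back
the hypervisor-side accounting, which lives elsewhere in this series
(virt/kvm/arm/pvtime.c) and is not shown in this excerpt. As a rough,
simplified sketch of the idea rather than the code from the series: on each
update, the growth of the host scheduler's run_delay for the VCPU thread is
added to a running total, which is then written little-endian into the guest's
structure.

#include <linux/kvm_host.h>
#include <linux/sched.h>
#include <asm/pvclock-abi.h>

/*
 * Simplified sketch of the stolen time update; the in-tree version
 * additionally takes the SRCU read lock around the guest access.
 */
static void record_steal_time(struct kvm_vcpu *vcpu)
{
        u64 base = vcpu->arch.steal.base;
        u64 last = vcpu->arch.steal.last_steal;
        u64 offset = offsetof(struct pvclock_vcpu_stolen_time, stolen_time);
        __le64 steal_le;

        if (base == GPA_INVALID)        /* pvtime not enabled for this VCPU */
                return;

        /* run_delay: time this task spent runnable but not running */
        vcpu->arch.steal.last_steal = current->sched_info.run_delay;
        vcpu->arch.steal.steal += vcpu->arch.steal.last_steal - last;

        steal_le = cpu_to_le64(vcpu->arch.steal.steal);
        kvm_write_guest(vcpu->kvm, base + offset, &steal_le, sizeof(steal_le));
}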
9 changes: 8 additions & 1 deletion arch/arm64/include/asm/paravirt.h
@@ -21,6 +21,13 @@ static inline u64 paravirt_steal_clock(int cpu)
{
return pv_ops.time.steal_clock(cpu);
}
-#endif
+
+int __init pv_time_init(void);
+
+#else
+
+#define pv_time_init() do {} while (0)
+
+#endif // CONFIG_PARAVIRT

#endif
17 changes: 17 additions & 0 deletions arch/arm64/include/asm/pvclock-abi.h
@@ -0,0 +1,17 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (C) 2019 Arm Ltd. */

#ifndef __ASM_PVCLOCK_ABI_H
#define __ASM_PVCLOCK_ABI_H

/* The below structure is defined in ARM DEN0057A */

struct pvclock_vcpu_stolen_time {
__le32 revision;
__le32 attributes;
__le64 stolen_time;
/* Structure must be 64 byte aligned, pad to that size */
u8 padding[48];
} __packed;

#endif
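
On the guest side, the IPA returned by PV_TIME_ST is mapped as normal
write-back memory and the stolen_time field is read and converted from
little-endian. A minimal sketch, remapping on every read to stay
self-contained:

#include <linux/io.h>
#include <linux/types.h>
#include <asm/pvclock-abi.h>

/* Sketch: read the stolen time (in ns) published at pv_time_ipa. */
static u64 read_stolen_ns(phys_addr_t pv_time_ipa)
{
        struct pvclock_vcpu_stolen_time *st;
        u64 stolen;

        /* Normal memory, write-back cacheable, as the spec requires. */
        st = memremap(pv_time_ipa, sizeof(*st), MEMREMAP_WB);
        if (!st)
                return 0;

        stolen = le64_to_cpu(READ_ONCE(st->stolen_time));
        memunmap(st);

        return stolen;
}

Roughly speaking, the in-tree guest instead maps each CPU's structure once at
boot and exposes the value through the paravirt steal clock hook declared in
asm/paravirt.h above.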
2 changes: 2 additions & 0 deletions arch/arm64/include/uapi/asm/kvm.h
@@ -324,6 +324,8 @@ struct kvm_vcpu_events {
#define KVM_ARM_VCPU_TIMER_CTRL 1
#define KVM_ARM_VCPU_TIMER_IRQ_VTIMER 0
#define KVM_ARM_VCPU_TIMER_IRQ_PTIMER 1
#define KVM_ARM_VCPU_PVTIME_CTRL 2
#define KVM_ARM_VCPU_PVTIME_IPA 0

/* KVM_IRQ_LINE irq field index values */
#define KVM_ARM_IRQ_VCPU2_SHIFT 28