├── README.md ├── backports └── pmu │ ├── patch0.txt │ ├── patch1.txt │ ├── patch10.txt │ ├── patch11.txt │ ├── patch2.txt │ ├── patch3.txt │ ├── patch4.txt │ ├── patch5.txt │ ├── patch6.txt │ ├── patch7.txt │ ├── patch8.txt │ └── patch9.txt ├── edk2.patch └── kvm.patch /README.md: -------------------------------------------------------------------------------- 1 | # KVM patch for Samsung SM-A600FN (exynos7870) 2 | 3 | This repository contains a patch for Samsung's official kernel sources that provides KVM support on exynos boards. It can probably be applied to other Samsung exynos kernels with minor changes. 4 | 5 | ## How to use 6 | 7 | 1. Download the official kernel sources from [opensource.samsung.com](https://opensource.samsung.com/uploadList?menuItem=mobile&classification1=mobile_phone). 8 | 2. Apply the patch. 9 | 3. Copy `/proc/config.gz` from your device, and decompress it to `.config`. 10 | 4. Install and configure a cross-compiler (not documented here). 11 | 5. In `make menuconfig`, disable everything related to TIMA (?) and RKP under "Boot Options" (they are incompatible with KVM), and enable KVM under "Virtualization". 12 | 6. Build and flash your shiny KVM-enabled kernel! 13 | 14 | ## Known bugs 15 | 16 | * Older versions of Samsung's TrustZone do not play well with this. They probably implemented big.LITTLE scheduling outside of the kernel, which breaks KVM's assumption that VMs do not suddenly jump from one core to another. This seems to have been fixed in the firmware update based on Android 10; open an issue if it persists. The symptom is that the phone **sometimes** instantly reboots when a VM is launched. 17 | * The register `cntfrq_el0`, which the bootloader should have configured, is set to zero. ARM documentation states that it is only writable from the highest privilege level (i.e. TrustZone) but readable from any privilege level (so accesses cannot be trapped), and guest OSes will expect it to hold the architected timer frequency (26.0 MHz).
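For Linux guests there is a workaround consistent with the "custom DTB" approach mentioned under the timer-handling bug: state the frequency explicitly in the guest device tree, since the generic ARM architected-timer binding accepts a `clock-frequency` property for exactly this firmware-did-not-program-`cntfrq` case. The fragment below is illustrative only; the node's placement and interrupt properties depend on the actual DTB being modified.

```dts
/* Illustrative guest-DTB fragment: pin the architected timer
 * frequency because cntfrq_el0 was left at zero by the bootloader. */
timer {
	compatible = "arm,armv8-timer";
	/* interrupt properties as in the original DTB ... */
	clock-frequency = <26000000>;	/* 26 MHz */
};
```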
There also does not seem to be an `smc` call to set this value. This means that guest OSes will need to be patched until a TrustZone exploit is found to initialize the register properly. 18 | * Timer handling is somehow broken. Linux boots (with a custom DTB that specifies the timer frequency explicitly), but OVMF hangs on the first sleep due to interrupts not arriving. Update: this was a KVM bug, [fixed upstream](https://github.com/torvalds/linux/commit/f120cd6533d21075ab103ae6c225b1697853660d). 19 | 20 | ## Technical details 21 | 22 | Normally, Linux needs to be booted in EL2 (HYP mode in ARM terminology) to be able to use the virtualization extensions. SBOOT boots the Linux kernel in EL1, but fortunately for us Samsung implemented a backdoor in TrustZone to load and execute custom code in EL2. This interface is used by `init/vmm.c` in Samsung's kernel to load the proprietary RKP hypervisor, and looks as follows: 23 | 24 | ``` 25 | #define VMM_64BIT_SMC_CALL_MAGIC 0xC2000400 26 | #define VMM_STACK_OFFSET 4096 27 | #define VMM_MODE_AARCH64 1 28 | status = _vmm_goto_EL2(VMM_64BIT_SMC_CALL_MAGIC, entry_point, VMM_STACK_OFFSET, VMM_MODE_AARCH64, vmm, vmm_size); 29 | ``` 30 | 31 | Here `_vmm_goto_EL2` is a simple wrapper around `smc #0`, `entry_point` is the physical address of the initialization routine, and the last two parameters are passed to it in the `x0` and `x1` registers. To return, the initialization routine calls `smc #0` with `x0=0xc2000401, x1=status` (the only piece of information here that had to be obtained by disassembling the proprietary hypervisor). 32 | 33 | The semantics of this interface are as follows: 34 | * `x1` (`status`) is passed to the kernel as the return value of `_vmm_goto_EL2`. If it is nonzero, the firmware resets the HYP state to its default, and further `hvc` calls result in an exception. 35 | * EL2 initialization code only runs on the boot CPU.
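The call-and-return protocol can be sketched as a small host-side C model. This is illustrative only: `struct hyp_state`, `firmware_handle_exit_smc`, and the `hvc_works` field are hypothetical names that model the firmware's observed behaviour, not real interfaces.

```c
#include <stdint.h>
#include <assert.h>

#define VMM_64BIT_SMC_CALL_MAGIC 0xC2000400u /* enter EL2 init routine */
#define VMM_EXIT_MAGIC           0xC2000401u /* x0 when EL2 init returns */

/* Hypothetical model of the firmware-side state of the interface. */
struct hyp_state {
    int hvc_works; /* nonzero: hvc calls reach the installed EL2 vector */
};

/* The EL2 init routine exits via smc #0 with x0=0xC2000401, x1=status.
 * The returned value becomes the result of _vmm_goto_EL2 in the kernel;
 * a nonzero status makes the firmware reset HYP state to its default,
 * after which further hvc calls raise an exception. */
static uint64_t firmware_handle_exit_smc(struct hyp_state *st,
                                         uint64_t x0, uint64_t x1)
{
    if (x0 != VMM_EXIT_MAGIC)
        return (uint64_t)-1; /* not the exit call modelled here */

    st->hvc_works = (x1 == 0);
    return x1; /* propagated as _vmm_goto_EL2's return value */
}
```

In other words, returning `status = 0` from the initialization routine is what keeps the freshly installed EL2 state alive for later `hvc` calls.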
When further CPUs are brought up, their EL2 state is copied from the one established by the initialization routine, with one exception: `sp` is set to `bootcore.sp + VMM_STACK_OFFSET * core_index`. The core numbering used is the same as in the Linux kernel. 36 | * The firmware saves the value of `vbar_el2` at exit from the initialization routine, and restores it to this value at seemingly random (unknown) points. This means that the address of the exception vector cannot be changed later by EL2 code. 37 | 38 | Normal KVM/ARM bootstrap process: 39 | * The code in `head.S` detects being booted in EL2, sets the EL2 exception vector to a so-called "HYP stub" (basically a backdoor), and drops to EL1 to continue booting. 40 | * When the KVM subsystem begins initialization, it calls the backdoor to run its initialization code, and changes `vbar_el2` to point to the real exception vector. 41 | 42 | KVM/ARM bootstrap process with this patch: 43 | * After `mm_init()` is called from `start_kernel`, a new function `preinit_hyp_mode()` is called, which runs the KVM initialization code via the TrustZone backdoor (the initialization code itself was also changed to exit via `smc #0` instead of `eret`). This point is chosen because earlier the code would fail on a memory allocation, and later other cores could already be running. 44 | * When the normal ("late") KVM initialization routine starts, it initializes **everything except** the EL2 state. 45 | * The check for EL2 boot in `arch/arm64/include/asm/virt.h` is replaced with a simple `return 1;`. 46 | 47 | **What did not work:** 48 | * In the early bootup code, use the backdoor to enter EL2 and continue booting from there, imitating a normal EL2 boot. This probably fails later when secondary cores boot up, causing a sanity check in `arch/arm64/include/asm/virt.h` to fail (boot CPU booted in EL2, others in EL1). 49 | * Use the backdoor early to bootstrap a valid-looking HYP stub, then let KVM boot normally.
Does not work due to the custom `vbar_el2` handling, see above. 50 | -------------------------------------------------------------------------------- /backports/pmu/patch0.txt: -------------------------------------------------------------------------------- 1 | From b8cfadfcefdc8c306ca2c0b1bdbdd4e01f0155e3 Mon Sep 17 00:00:00 2001 2 | From: Shannon Zhao 3 | Date: Thu, 24 Mar 2016 16:01:16 +0000 4 | Subject: [PATCH] arm64: perf: Move PMU register related defines to 5 | asm/perf_event.h 6 | 7 | To use the ARMv8 PMU related register defines from the KVM code, we move 8 | the relevant definitions to asm/perf_event.h header file and rename them 9 | with prefix ARMV8_PMU_. This allows us to get rid of kvm_perf_event.h. 10 | 11 | Signed-off-by: Anup Patel 12 | Signed-off-by: Shannon Zhao 13 | Acked-by: Marc Zyngier 14 | Reviewed-by: Andrew Jones 15 | Signed-off-by: Marc Zyngier 16 | Signed-off-by: Will Deacon 17 | --- 18 | arch/arm64/include/asm/kvm_host.h | 1 - 19 | arch/arm64/include/asm/kvm_hyp.h | 1 - 20 | arch/arm64/include/asm/kvm_perf_event.h | 68 ----------------------- 21 | arch/arm64/include/asm/perf_event.h | 47 ++++++++++++++++ 22 | arch/arm64/kernel/perf_event.c | 72 +++++++------------------ 23 | 5 files changed, 66 insertions(+), 123 deletions(-) 24 | delete mode 100644 arch/arm64/include/asm/kvm_perf_event.h 25 | 26 | diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h 27 | index 7bd3cdb533ea80..2065f46fa7407d 100644 28 | --- a/arch/arm64/include/asm/perf_event.h 29 | +++ b/arch/arm64/include/asm/perf_event.h 30 | @@ -17,6 +17,53 @@ 31 | #ifndef __ASM_PERF_EVENT_H 32 | #define __ASM_PERF_EVENT_H 33 | 34 | +#define ARMV8_PMU_MAX_COUNTERS 32 35 | +#define ARMV8_PMU_COUNTER_MASK (ARMV8_PMU_MAX_COUNTERS - 1) 36 | + 37 | +/* 38 | + * Per-CPU PMCR: config reg 39 | + */ 40 | +#define ARMV8_PMU_PMCR_E (1 << 0) /* Enable all counters */ 41 | +#define ARMV8_PMU_PMCR_P (1 << 1) /* Reset all counters */ 42 | +#define ARMV8_PMU_PMCR_C 
(1 << 2) /* Cycle counter reset */ 43 | +#define ARMV8_PMU_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */ 44 | +#define ARMV8_PMU_PMCR_X (1 << 4) /* Export to ETM */ 45 | +#define ARMV8_PMU_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/ 46 | +#define ARMV8_PMU_PMCR_LC (1 << 6) /* Overflow on 64 bit cycle counter */ 47 | +#define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */ 48 | +#define ARMV8_PMU_PMCR_N_MASK 0x1f 49 | +#define ARMV8_PMU_PMCR_MASK 0x7f /* Mask for writable bits */ 50 | + 51 | +/* 52 | + * PMOVSR: counters overflow flag status reg 53 | + */ 54 | +#define ARMV8_PMU_OVSR_MASK 0xffffffff /* Mask for writable bits */ 55 | +#define ARMV8_PMU_OVERFLOWED_MASK ARMV8_PMU_OVSR_MASK 56 | + 57 | +/* 58 | + * PMXEVTYPER: Event selection reg 59 | + */ 60 | +#define ARMV8_PMU_EVTYPE_MASK 0xc800ffff /* Mask for writable bits */ 61 | +#define ARMV8_PMU_EVTYPE_EVENT 0xffff /* Mask for EVENT bits */ 62 | + 63 | +#define ARMV8_PMU_EVTYPE_EVENT_SW_INCR 0 /* Software increment event */ 64 | + 65 | +/* 66 | + * Event filters for PMUv3 67 | + */ 68 | +#define ARMV8_PMU_EXCLUDE_EL1 (1 << 31) 69 | +#define ARMV8_PMU_EXCLUDE_EL0 (1 << 30) 70 | +#define ARMV8_PMU_INCLUDE_EL2 (1 << 27) 71 | + 72 | +/* 73 | + * PMUSERENR: user enable reg 74 | + */ 75 | +#define ARMV8_PMU_USERENR_MASK 0xf /* Mask for writable bits */ 76 | +#define ARMV8_PMU_USERENR_EN (1 << 0) /* PMU regs can be accessed at EL0 */ 77 | +#define ARMV8_PMU_USERENR_SW (1 << 1) /* PMSWINC can be written at EL0 */ 78 | +#define ARMV8_PMU_USERENR_CR (1 << 2) /* Cycle counter can be read at EL0 */ 79 | +#define ARMV8_PMU_USERENR_ER (1 << 3) /* Event counter can be read at EL0 */ 80 | + 81 | #ifdef CONFIG_HW_PERF_EVENTS 82 | struct pt_regs; 83 | extern unsigned long perf_instruction_pointer(struct pt_regs *regs); 84 | -------------------------------------------------------------------------------- /backports/pmu/patch1.txt: 
-------------------------------------------------------------------------------- 1 | From 04fe472615d0216ec0bdd66d9f3f1812b642ada6 Mon Sep 17 00:00:00 2001 2 | From: Shannon Zhao 3 | Date: Fri, 11 Sep 2015 09:38:32 +0800 4 | Subject: [PATCH] arm64: KVM: Define PMU data structure for each vcpu 5 | 6 | Here we plan to support virtual PMU for guest by full software 7 | emulation, so define some basic structs and functions preparing for 8 | futher steps. Define struct kvm_pmc for performance monitor counter and 9 | struct kvm_pmu for performance monitor unit for each vcpu. According to 10 | ARMv8 spec, the PMU contains at most 32(ARMV8_PMU_MAX_COUNTERS) 11 | counters. 12 | 13 | Since this only supports ARM64 (or PMUv3), add a separate config symbol 14 | for it. 15 | 16 | Signed-off-by: Shannon Zhao 17 | Acked-by: Marc Zyngier 18 | Reviewed-by: Andrew Jones 19 | Signed-off-by: Marc Zyngier 20 | --- 21 | arch/arm64/include/asm/kvm_host.h | 2 ++ 22 | arch/arm64/kvm/Kconfig | 7 ++++++ 23 | include/kvm/arm_pmu.h | 42 +++++++++++++++++++++++++++++++ 24 | 3 files changed, 51 insertions(+) 25 | create mode 100644 include/kvm/arm_pmu.h 26 | 27 | diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h 28 | index 15851f52096b3a..fb57fdc6a433d3 100644 29 | --- a/arch/arm64/include/asm/kvm_host.h 30 | +++ b/arch/arm64/include/asm/kvm_host.h 31 | @@ -38,6 +38,7 @@ 32 | 33 | #include 34 | #include 35 | +#include 36 | 37 | #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS 38 | 39 | @@ -213,6 +214,7 @@ struct kvm_vcpu_arch { 40 | /* VGIC state */ 41 | struct vgic_cpu vgic_cpu; 42 | struct arch_timer_cpu timer_cpu; 43 | + struct kvm_pmu pmu; 44 | 45 | /* 46 | * Anything that is not used directly from assembly code goes 47 | diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig 48 | index a5272c07d1cbf3..de7450df762996 100644 49 | --- a/arch/arm64/kvm/Kconfig 50 | +++ b/arch/arm64/kvm/Kconfig 51 | @@ -26,6 +26,7 @@ 52 | select KVM_ARM_HOST 53 | select KVM_ARM_VGIC 
54 | select KVM_ARM_TIMER 55 | + select KVM_ARM_PMU if HW_PERF_EVENTS 56 | ---help--- 57 | Support hosting virtualized guest machines. 58 | 59 | @@ -36,6 +37,12 @@ 60 | ---help--- 61 | Provides host support for ARM processors. 62 | 63 | +config KVM_ARM_PMU 64 | + bool 65 | + ---help--- 66 | + Adds support for a virtual Performance Monitoring Unit (PMU) in 67 | + virtual machines. 68 | + 69 | config KVM_ARM_MAX_VCPUS 70 | int "Number maximum supported virtual CPUs per VM" 71 | depends on KVM_ARM_HOST 72 | diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h 73 | new file mode 100644 74 | index 00000000000000..3c2fd568e0a811 75 | --- /dev/null 76 | +++ b/include/kvm/arm_pmu.h 77 | @@ -0,0 +1,42 @@ 78 | +/* 79 | + * Copyright (C) 2015 Linaro Ltd. 80 | + * Author: Shannon Zhao 81 | + * 82 | + * This program is free software; you can redistribute it and/or modify 83 | + * it under the terms of the GNU General Public License version 2 as 84 | + * published by the Free Software Foundation. 85 | + * 86 | + * This program is distributed in the hope that it will be useful, 87 | + * but WITHOUT ANY WARRANTY; without even the implied warranty of 88 | + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 89 | + * GNU General Public License for more details. 90 | + * 91 | + * You should have received a copy of the GNU General Public License 92 | + * along with this program. If not, see . 
93 | + */ 94 | + 95 | +#ifndef __ASM_ARM_KVM_PMU_H 96 | +#define __ASM_ARM_KVM_PMU_H 97 | + 98 | +#ifdef CONFIG_KVM_ARM_PMU 99 | + 100 | +#include 101 | +#include 102 | + 103 | +struct kvm_pmc { 104 | + u8 idx; /* index into the pmu->pmc array */ 105 | + struct perf_event *perf_event; 106 | + u64 bitmask; 107 | +}; 108 | + 109 | +struct kvm_pmu { 110 | + int irq_num; 111 | + struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS]; 112 | + bool ready; 113 | +}; 114 | +#else 115 | +struct kvm_pmu { 116 | +}; 117 | +#endif 118 | + 119 | +#endif 120 | -------------------------------------------------------------------------------- /backports/pmu/patch10.txt: -------------------------------------------------------------------------------- 1 | diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c 2 | index f34745cb3d236f..dfbce781d284d6 100644 3 | --- a/arch/arm64/kvm/reset.c 4 | +++ b/arch/arm64/kvm/reset.c 5 | @@ -120,6 +120,9 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu) 6 | /* Reset system registers */ 7 | kvm_reset_sys_regs(vcpu); 8 | 9 | + /* Reset PMU */ 10 | + kvm_pmu_vcpu_reset(vcpu); 11 | + 12 | /* Reset timer */ 13 | return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq); 14 | } 15 | diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h 16 | index 9c184edb8e07e5..b4993eb76aa100 100644 17 | --- a/include/kvm/arm_pmu.h 18 | +++ b/include/kvm/arm_pmu.h 19 | @@ -42,6 +42,7 @@ struct kvm_pmu { 20 | u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx); 21 | void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val); 22 | u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu); 23 | +void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu); 24 | void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val); 25 | void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val); 26 | void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val); 27 | @@ -67,6 +68,7 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu) 28 | { 29 | return 0; 30 | 
} 31 | +static inline void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {} 32 | static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {} 33 | static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {} 34 | static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {} 35 | diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c 36 | index 74e858c42ae156..1dbbc2c5155916 100644 37 | --- a/virt/kvm/arm/pmu.c 38 | +++ b/virt/kvm/arm/pmu.c 39 | @@ -84,6 +84,23 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc) 40 | } 41 | } 42 | 43 | +/** 44 | + * kvm_pmu_vcpu_reset - reset pmu state for cpu 45 | + * @vcpu: The vcpu pointer 46 | + * 47 | + */ 48 | +void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) 49 | +{ 50 | + int i; 51 | + struct kvm_pmu *pmu = &vcpu->arch.pmu; 52 | + 53 | + for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) { 54 | + kvm_pmu_stop_counter(vcpu, &pmu->pmc[i]); 55 | + pmu->pmc[i].idx = i; 56 | + pmu->pmc[i].bitmask = 0xffffffffUL; 57 | + } 58 | +} 59 | + 60 | u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu) 61 | { 62 | u64 val = vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT; 63 | -------------------------------------------------------------------------------- /backports/pmu/patch11.txt: -------------------------------------------------------------------------------- 1 | From bb0c70bcca6ba3c84afc2da7426f3b923bbe6825 Mon Sep 17 00:00:00 2001 2 | From: Shannon Zhao 3 | Date: Mon, 11 Jan 2016 21:35:32 +0800 4 | Subject: [PATCH] arm64: KVM: Add a new vcpu device control group for PMUv3 5 | 6 | To configure the virtual PMUv3 overflow interrupt number, we use the 7 | vcpu kvm_device ioctl, encapsulating the KVM_ARM_VCPU_PMU_V3_IRQ 8 | attribute within the KVM_ARM_VCPU_PMU_V3_CTRL group. 9 | 10 | After configuring the PMUv3, call the vcpu ioctl with attribute 11 | KVM_ARM_VCPU_PMU_V3_INIT to initialize the PMUv3. 
12 | 13 | Signed-off-by: Shannon Zhao 14 | Acked-by: Peter Maydell 15 | Reviewed-by: Andrew Jones 16 | Reviewed-by: Christoffer Dall 17 | Signed-off-by: Marc Zyngier 18 | --- 19 | Documentation/virtual/kvm/devices/vcpu.txt | 25 +++++ 20 | arch/arm/include/asm/kvm_host.h | 15 +++ 21 | arch/arm/kvm/arm.c | 3 + 22 | arch/arm64/include/asm/kvm_host.h | 6 ++ 23 | arch/arm64/include/uapi/asm/kvm.h | 5 + 24 | arch/arm64/kvm/guest.c | 51 ++++++++++ 25 | include/kvm/arm_pmu.h | 23 +++++ 26 | virt/kvm/arm/pmu.c | 112 +++++++++++++++++++++ 27 | 8 files changed, 240 insertions(+) 28 | 29 | diff --git a/Documentation/virtual/kvm/devices/vcpu.txt b/Documentation/virtual/kvm/devices/vcpu.txt 30 | index 3cc59c5e44ce14..c04165868faff9 100644 31 | --- a/Documentation/virtual/kvm/devices/vcpu.txt 32 | +++ b/Documentation/virtual/kvm/devices/vcpu.txt 33 | @@ -6,3 +6,28 @@ KVM_GET_DEVICE_ATTR, and KVM_HAS_DEVICE_ATTR. The interface uses the same struct 34 | kvm_device_attr as other devices, but targets VCPU-wide settings and controls. 35 | 36 | The groups and attributes per virtual cpu, if any, are architecture specific. 37 | + 38 | +1. GROUP: KVM_ARM_VCPU_PMU_V3_CTRL 39 | +Architectures: ARM64 40 | + 41 | +1.1. ATTRIBUTE: KVM_ARM_VCPU_PMU_V3_IRQ 42 | +Parameters: in kvm_device_attr.addr the address for PMU overflow interrupt is a 43 | + pointer to an int 44 | +Returns: -EBUSY: The PMU overflow interrupt is already set 45 | + -ENXIO: The overflow interrupt not set when attempting to get it 46 | + -ENODEV: PMUv3 not supported 47 | + -EINVAL: Invalid PMU overflow interrupt number supplied 48 | + 49 | +A value describing the PMUv3 (Performance Monitor Unit v3) overflow interrupt 50 | +number for this vcpu. This interrupt could be a PPI or SPI, but the interrupt 51 | +type must be same for each vcpu. As a PPI, the interrupt number is the same for 52 | +all vcpus, while as an SPI it must be a separate number per vcpu. 
53 | + 54 | +1.2 ATTRIBUTE: KVM_ARM_VCPU_PMU_V3_INIT 55 | +Parameters: no additional parameter in kvm_device_attr.addr 56 | +Returns: -ENODEV: PMUv3 not supported 57 | + -ENXIO: PMUv3 not properly configured as required prior to calling this 58 | + attribute 59 | + -EBUSY: PMUv3 already initialized 60 | + 61 | +Request the initialization of the PMUv3. 62 | diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h 63 | index 19e9aba85463c2..385070180c2587 100644 64 | --- a/arch/arm/include/asm/kvm_host.h 65 | +++ b/arch/arm/include/asm/kvm_host.h 66 | @@ -238,5 +238,20 @@ 67 | static inline void kvm_arch_sync_events(struct kvm *kvm) {} 68 | static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {} 69 | static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {} 70 | +static inline int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu, 71 | + struct kvm_device_attr *attr) 72 | +{ 73 | + return -ENXIO; 74 | +} 75 | +static inline int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu, 76 | + struct kvm_device_attr *attr) 77 | +{ 78 | + return -ENXIO; 79 | +} 80 | +static inline int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu, 81 | + struct kvm_device_attr *attr) 82 | +{ 83 | + return -ENXIO; 84 | +} 85 | 86 | #endif /* __ARM_KVM_HOST_H__ */ 87 | diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c 88 | index 16623235629168..75c7fed5d14c98 100644 89 | --- a/arch/arm/kvm/arm.c 90 | +++ b/arch/arm/kvm/arm.c 91 | @@ -696,6 +696,7 @@ 92 | 93 | switch (attr->group) { 94 | default: 95 | + ret = kvm_arm_vcpu_arch_set_attr(vcpu, attr); 96 | break; 97 | } 98 | 99 | @@ -709,6 +710,7 @@ 100 | 101 | switch (attr->group) { 102 | default: 103 | + ret = kvm_arm_vcpu_arch_get_attr(vcpu, attr); 104 | break; 105 | } 106 | 107 | @@ -722,6 +724,7 @@ 108 | 109 | switch (attr->group) { 110 | default: 111 | + ret = kvm_arm_vcpu_arch_has_attr(vcpu, attr); 112 | break; 113 | } 114 | 115 | diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h 116 | index b02ef0828f220d..71fa6fe9d54add 100644 117 | --- a/arch/arm64/include/asm/kvm_host.h 118 | +++ b/arch/arm64/include/asm/kvm_host.h 119 | @@ -253,5 +253,11 @@ 120 | static inline void kvm_arch_sync_events(struct kvm *kvm) {} 121 | static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {} 122 | static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {} 123 | +int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu, 124 | + struct kvm_device_attr *attr); 125 | +int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu, 126 | + struct kvm_device_attr *attr); 127 | +int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu, 128 | + struct kvm_device_attr *attr); 129 | 130 | #endif /* __ARM64_KVM_HOST_H__ */ 131 | diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h 132 | index 6aedbe3144320c..f209ea151dca8a 100644 133 | --- a/arch/arm64/include/uapi/asm/kvm.h 134 | +++ b/arch/arm64/include/uapi/asm/kvm.h 135 | @@ -205,6 +205,11 @@ struct kvm_arch_memory_slot { 136 | #define KVM_DEV_ARM_VGIC_GRP_CTRL 4 137 | #define KVM_DEV_ARM_VGIC_CTRL_INIT 0 138 | 139 | +/* Device Control API on vcpu fd */ 140 | +#define KVM_ARM_VCPU_PMU_V3_CTRL 0 141 | +#define KVM_ARM_VCPU_PMU_V3_IRQ 0 142 | +#define KVM_ARM_VCPU_PMU_V3_INIT 1 143 | + 144 | /* KVM_IRQ_LINE irq field index values */ 145 | #define KVM_ARM_IRQ_TYPE_SHIFT 24 146 | #define KVM_ARM_IRQ_TYPE_MASK 0xff 147 | diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c 148 | index fcb778899a3804..dbe45c364bbb15 100644 149 | --- a/arch/arm64/kvm/guest.c 150 | +++ b/arch/arm64/kvm/guest.c 151 | @@ -380,3 +380,54 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu, 152 | } 153 | return 0; 154 | } 155 | + 156 | +int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu, 157 | + struct kvm_device_attr *attr) 158 | +{ 159 | + int ret; 160 | + 161 | + switch (attr->group) { 162 | + case KVM_ARM_VCPU_PMU_V3_CTRL: 163 | + ret = 
kvm_arm_pmu_v3_set_attr(vcpu, attr); 164 | + break; 165 | + default: 166 | + ret = -ENXIO; 167 | + break; 168 | + } 169 | + 170 | + return ret; 171 | +} 172 | + 173 | +int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu, 174 | + struct kvm_device_attr *attr) 175 | +{ 176 | + int ret; 177 | + 178 | + switch (attr->group) { 179 | + case KVM_ARM_VCPU_PMU_V3_CTRL: 180 | + ret = kvm_arm_pmu_v3_get_attr(vcpu, attr); 181 | + break; 182 | + default: 183 | + ret = -ENXIO; 184 | + break; 185 | + } 186 | + 187 | + return ret; 188 | +} 189 | + 190 | +int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu, 191 | + struct kvm_device_attr *attr) 192 | +{ 193 | + int ret; 194 | + 195 | + switch (attr->group) { 196 | + case KVM_ARM_VCPU_PMU_V3_CTRL: 197 | + ret = kvm_arm_pmu_v3_has_attr(vcpu, attr); 198 | + break; 199 | + default: 200 | + ret = -ENXIO; 201 | + break; 202 | + } 203 | + 204 | + return ret; 205 | +} 206 | diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h 207 | index ee62497d46f755..fe389ac3148915 100644 208 | --- a/include/kvm/arm_pmu.h 209 | +++ b/include/kvm/arm_pmu.h 210 | @@ -39,6 +39,7 @@ struct kvm_pmu { 211 | }; 212 | 213 | #define kvm_arm_pmu_v3_ready(v) ((v)->arch.pmu.ready) 214 | +#define kvm_arm_pmu_irq_initialized(v) ((v)->arch.pmu.irq_num >= VGIC_NR_SGIS) 215 | u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx); 216 | void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val); 217 | u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu); 218 | @@ -54,11 +55,18 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val); 219 | void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data, 220 | u64 select_idx); 221 | bool kvm_arm_support_pmu_v3(void); 222 | +int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, 223 | + struct kvm_device_attr *attr); 224 | +int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, 225 | + struct kvm_device_attr *attr); 226 | +int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, 227 | + 
struct kvm_device_attr *attr); 228 | #else 229 | struct kvm_pmu { 230 | }; 231 | 232 | #define kvm_arm_pmu_v3_ready(v) (false) 233 | +#define kvm_arm_pmu_irq_initialized(v) (false) 234 | static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, 235 | u64 select_idx) 236 | { 237 | @@ -82,6 +90,21 @@ static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {} 238 | static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, 239 | u64 data, u64 select_idx) {} 240 | static inline bool kvm_arm_support_pmu_v3(void) { return false; } 241 | +static inline int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, 242 | + struct kvm_device_attr *attr) 243 | +{ 244 | + return -ENXIO; 245 | +} 246 | +static inline int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, 247 | + struct kvm_device_attr *attr) 248 | +{ 249 | + return -ENXIO; 250 | +} 251 | +static inline int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, 252 | + struct kvm_device_attr *attr) 253 | +{ 254 | + return -ENXIO; 255 | +} 256 | #endif 257 | 258 | #endif 259 | diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c 260 | index 6e28f4f86cc677..b5754c6c5508f9 100644 261 | --- a/virt/kvm/arm/pmu.c 262 | +++ b/virt/kvm/arm/pmu.c 263 | @@ -19,6 +19,7 @@ 264 | #include 265 | #include 266 | #include 267 | +#include 268 | #include 269 | #include 270 | #include 271 | @@ -415,3 +416,114 @@ bool kvm_arm_support_pmu_v3(void) 272 | */ 273 | return (perf_num_counters() > 0); 274 | } 275 | + 276 | +static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu) 277 | +{ 278 | + if (!kvm_arm_support_pmu_v3()) 279 | + return -ENODEV; 280 | + 281 | + if (!test_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features) || 282 | + !kvm_arm_pmu_irq_initialized(vcpu)) 283 | + return -ENXIO; 284 | + 285 | + if (kvm_arm_pmu_v3_ready(vcpu)) 286 | + return -EBUSY; 287 | + 288 | + kvm_pmu_vcpu_reset(vcpu); 289 | + vcpu->arch.pmu.ready = true; 290 | + 291 | + return 0; 292 | +} 293 | + 294 | +static bool irq_is_valid(struct kvm *kvm, 
int irq, bool is_ppi) 295 | +{ 296 | + int i; 297 | + struct kvm_vcpu *vcpu; 298 | + 299 | + kvm_for_each_vcpu(i, vcpu, kvm) { 300 | + if (!kvm_arm_pmu_irq_initialized(vcpu)) 301 | + continue; 302 | + 303 | + if (is_ppi) { 304 | + if (vcpu->arch.pmu.irq_num != irq) 305 | + return false; 306 | + } else { 307 | + if (vcpu->arch.pmu.irq_num == irq) 308 | + return false; 309 | + } 310 | + } 311 | + 312 | + return true; 313 | +} 314 | + 315 | + 316 | +int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr) 317 | +{ 318 | + switch (attr->attr) { 319 | + case KVM_ARM_VCPU_PMU_V3_IRQ: { 320 | + int __user *uaddr = (int __user *)(long)attr->addr; 321 | + int irq; 322 | + 323 | + if (!test_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features)) 324 | + return -ENODEV; 325 | + 326 | + if (get_user(irq, uaddr)) 327 | + return -EFAULT; 328 | + 329 | + /* 330 | + * The PMU overflow interrupt could be a PPI or SPI, but for one 331 | + * VM the interrupt type must be same for each vcpu. As a PPI, 332 | + * the interrupt number is the same for all vcpus, while as an 333 | + * SPI it must be a separate number per vcpu. 
334 | + */ 335 | + if (irq < VGIC_NR_SGIS || irq >= vcpu->kvm->arch.vgic.nr_irqs || 336 | + !irq_is_valid(vcpu->kvm, irq, irq < VGIC_NR_PRIVATE_IRQS)) 337 | + return -EINVAL; 338 | + 339 | + if (kvm_arm_pmu_irq_initialized(vcpu)) 340 | + return -EBUSY; 341 | + 342 | + kvm_debug("Set kvm ARM PMU irq: %d\n", irq); 343 | + vcpu->arch.pmu.irq_num = irq; 344 | + return 0; 345 | + } 346 | + case KVM_ARM_VCPU_PMU_V3_INIT: 347 | + return kvm_arm_pmu_v3_init(vcpu); 348 | + } 349 | + 350 | + return -ENXIO; 351 | +} 352 | + 353 | +int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr) 354 | +{ 355 | + switch (attr->attr) { 356 | + case KVM_ARM_VCPU_PMU_V3_IRQ: { 357 | + int __user *uaddr = (int __user *)(long)attr->addr; 358 | + int irq; 359 | + 360 | + if (!test_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features)) 361 | + return -ENODEV; 362 | + 363 | + if (!kvm_arm_pmu_irq_initialized(vcpu)) 364 | + return -ENXIO; 365 | + 366 | + irq = vcpu->arch.pmu.irq_num; 367 | + return put_user(irq, uaddr); 368 | + } 369 | + } 370 | + 371 | + return -ENXIO; 372 | +} 373 | + 374 | +int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr) 375 | +{ 376 | + switch (attr->attr) { 377 | + case KVM_ARM_VCPU_PMU_V3_IRQ: 378 | + case KVM_ARM_VCPU_PMU_V3_INIT: 379 | + if (kvm_arm_support_pmu_v3() && 380 | + test_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features)) 381 | + return 0; 382 | + } 383 | + 384 | + return -ENXIO; 385 | +} 386 | -------------------------------------------------------------------------------- /backports/pmu/patch2.txt: -------------------------------------------------------------------------------- 1 | From ab9468340d2bcc2a837b8b536fa819a0fc05a32e Mon Sep 17 00:00:00 2001 2 | From: Shannon Zhao 3 | Date: Thu, 18 Jun 2015 16:01:53 +0800 4 | Subject: [PATCH] arm64: KVM: Add access handler for PMCR register 5 | 6 | Add reset handler which gets host value of PMCR_EL0 and make writable 7 | bits architecturally UNKNOWN except PMCR.E which 
is zero. Add an access 8 | handler for PMCR. 9 | 10 | Signed-off-by: Shannon Zhao 11 | Reviewed-by: Andrew Jones 12 | Signed-off-by: Marc Zyngier 13 | --- 14 | arch/arm64/include/asm/kvm_host.h | 3 +++ 15 | arch/arm64/kvm/sys_regs.c | 42 +++++++++++++++++++++++++++++-- 16 | include/kvm/arm_pmu.h | 4 +++ 17 | 3 files changed, 47 insertions(+), 2 deletions(-) 18 | 19 | diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h 20 | index fb57fdc6a433d3..5def605b452597 100644 21 | --- a/arch/arm64/include/asm/kvm_asm.h 22 | +++ b/arch/arm64/include/asm/kvm_asm.h 23 | @@ -56,14 +56,17 @@ 24 | #define DBGWVR15_EL1 86 25 | #define MDCCINT_EL1 87 /* Monitor Debug Comms Channel Interrupt Enable Reg */ 26 | 27 | +/* Performance Monitor Registers */ 28 | +#define PMCR_EL0 88 /* Control Register */ 29 | + 30 | /* 32bit specific registers. Keep them at the end of the range */ 31 | -#define DACR32_EL2 88 /* Domain Access Control Register */ 32 | -#define IFSR32_EL2 89 /* Instruction Fault Status Register */ 33 | -#define FPEXC32_EL2 90 /* Floating-Point Exception Control Register */ 34 | -#define DBGVCR32_EL2 91 /* Debug Vector Catch Register */ 35 | -#define TEECR32_EL1 92 /* ThumbEE Configuration Register */ 36 | -#define TEEHBR32_EL1 93 /* ThumbEE Handler Base Register */ 37 | -#define NR_SYS_REGS 94 38 | +#define DACR32_EL2 89 /* Domain Access Control Register */ 39 | +#define IFSR32_EL2 90 /* Instruction Fault Status Register */ 40 | +#define FPEXC32_EL2 91 /* Floating-Point Exception Control Register */ 41 | +#define DBGVCR32_EL2 92 /* Debug Vector Catch Register */ 42 | +#define TEECR32_EL1 93 /* ThumbEE Configuration Register */ 43 | +#define TEEHBR32_EL1 94 /* ThumbEE Handler Base Register */ 44 | +#define NR_SYS_REGS 95 45 | 46 | /* 32bit mapping */ 47 | #define c0_MPIDR (MPIDR_EL1 * 2) /* MultiProcessor ID Register */ 48 | diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c 49 | index 2e90371cfb378b..e88ae2d809a555 100644 50 | 
--- a/arch/arm64/kvm/sys_regs.c 51 | +++ b/arch/arm64/kvm/sys_regs.c 52 | @@ -28,6 +28,7 @@ 53 | #include 54 | #include 55 | #include 56 | +#include 57 | #include 58 | #include 59 | #include 60 | @@ -439,6 +440,43 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) 61 | vcpu_sys_reg(vcpu, MPIDR_EL1) = (1ULL << 31) | mpidr; 62 | } 63 | 64 | +static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) 65 | +{ 66 | + u64 pmcr, val; 67 | + 68 | + asm volatile("mrs %0, pmcr_el0\n" : "=r" (pmcr)); 69 | + /* Writable bits of PMCR_EL0 (ARMV8_PMU_PMCR_MASK) is reset to UNKNOWN 70 | + * except PMCR.E resetting to zero. 71 | + */ 72 | + val = ((pmcr & ~ARMV8_PMU_PMCR_MASK) 73 | + | (ARMV8_PMU_PMCR_MASK & 0xdecafbad)) & (~ARMV8_PMU_PMCR_E); 74 | + vcpu_sys_reg(vcpu, PMCR_EL0) = val; 75 | +} 76 | + 77 | +static bool access_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_params *p, 78 | + const struct sys_reg_desc *r) 79 | +{ 80 | + u64 val; 81 | + 82 | + if (!kvm_arm_pmu_v3_ready(vcpu)) 83 | + return trap_raz_wi(vcpu, p, r); 84 | + 85 | + if (p->is_write) { 86 | + /* Only update writeable bits of PMCR */ 87 | + val = vcpu_sys_reg(vcpu, PMCR_EL0); 88 | + val &= ~ARMV8_PMU_PMCR_MASK; 89 | + val |= *vcpu_reg(vcpu, p->Rt) & ARMV8_PMU_PMCR_MASK; 90 | + vcpu_sys_reg(vcpu, PMCR_EL0) = val; 91 | + } else { 92 | + /* PMCR.P & PMCR.C are RAZ */ 93 | + val = vcpu_sys_reg(vcpu, PMCR_EL0) 94 | + & ~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C); 95 | + *vcpu_reg(vcpu, p->Rt) = val; 96 | + } 97 | + 98 | + return true; 99 | +} 100 | + 101 | /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */ 102 | #define DBG_BCR_BVR_WCR_WVR_EL1(n) \ 103 | /* DBGBVRn_EL1 */ \ 104 | @@ -623,7 +661,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { 105 | 106 | /* PMCR_EL0 */ 107 | { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b000), 108 | - trap_raz_wi }, 109 | + access_pmcr, reset_pmcr, }, 110 | /* PMCNTENSET_EL0 */ 111 | { 
Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001), 112 | trap_raz_wi }, 113 | @@ -885,7 +923,7 @@ static const struct sys_reg_desc cp15_regs[] = { 114 | { Op1( 0), CRn( 7), CRm(14), Op2( 2), access_dcsw }, 115 | 116 | /* PMU */ 117 | - { Op1( 0), CRn( 9), CRm(12), Op2( 0), trap_raz_wi }, 118 | + { Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmcr }, 119 | { Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi }, 120 | { Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi }, 121 | { Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi }, 122 | diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h 123 | index 3c2fd568e0a811..8157fe5bcbb0ea 100644 124 | --- a/include/kvm/arm_pmu.h 125 | +++ b/include/kvm/arm_pmu.h 126 | @@ -34,9 +34,13 @@ struct kvm_pmu { 127 | struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS]; 128 | bool ready; 129 | }; 130 | + 131 | +#define kvm_arm_pmu_v3_ready(v) ((v)->arch.pmu.ready) 132 | #else 133 | struct kvm_pmu { 134 | }; 135 | + 136 | +#define kvm_arm_pmu_v3_ready(v) (false) 137 | #endif 138 | 139 | #endif 140 | -------------------------------------------------------------------------------- /backports/pmu/patch3.txt: -------------------------------------------------------------------------------- 1 | From 3965c3ce751ab5a97618a2818eec4497576f4654 Mon Sep 17 00:00:00 2001 2 | From: Shannon Zhao 3 | Date: Mon, 31 Aug 2015 17:20:22 +0800 4 | Subject: [PATCH] arm64: KVM: Add access handler for PMSELR register 5 | 6 | Since the reset value of PMSELR_EL0 is UNKNOWN, use reset_unknown for 7 | its reset handler. When reading PMSELR, return the PMSELR.SEL field to 8 | guest. 
9 | 10 | Signed-off-by: Shannon Zhao 11 | Reviewed-by: Andrew Jones 12 | Signed-off-by: Marc Zyngier 13 | --- 14 | arch/arm64/include/asm/kvm_asm.h | 1 + 15 | arch/arm64/kvm/sys_regs.c | 20 ++++++++++++++++++-- 16 | 2 files changed, 19 insertions(+), 2 deletions(-) 17 | 18 | diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h 19 | index 5def605b452597..57a2d8f76c2f62 100644 20 | --- a/arch/arm64/include/asm/kvm_asm.h 21 | +++ b/arch/arm64/include/asm/kvm_asm.h 22 | @@ -58,15 +58,16 @@ 23 | 24 | /* Performance Monitor Registers */ 25 | #define PMCR_EL0 88 /* Control Register */ 26 | +#define PMSELR_EL0 89 /* Event Counter Selection Register */ 27 | 28 | /* 32bit specific registers. Keep them at the end of the range */ 29 | -#define DACR32_EL2 89 /* Domain Access Control Register */ 30 | -#define IFSR32_EL2 90 /* Instruction Fault Status Register */ 31 | -#define FPEXC32_EL2 91 /* Floating-Point Exception Control Register */ 32 | -#define DBGVCR32_EL2 92 /* Debug Vector Catch Register */ 33 | -#define TEECR32_EL1 93 /* ThumbEE Configuration Register */ 34 | -#define TEEHBR32_EL1 94 /* ThumbEE Handler Base Register */ 35 | -#define NR_SYS_REGS 95 36 | +#define DACR32_EL2 90 /* Domain Access Control Register */ 37 | +#define IFSR32_EL2 91 /* Instruction Fault Status Register */ 38 | +#define FPEXC32_EL2 92 /* Floating-Point Exception Control Register */ 39 | +#define DBGVCR32_EL2 93 /* Debug Vector Catch Register */ 40 | +#define TEECR32_EL1 94 /* ThumbEE Configuration Register */ 41 | +#define TEEHBR32_EL1 95 /* ThumbEE Handler Base Register */ 42 | +#define NR_SYS_REGS 96 43 | 44 | /* 32bit mapping */ 45 | #define c0_MPIDR (MPIDR_EL1 * 2) /* MultiProcessor ID Register */ 46 | diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c 47 | index e88ae2d809a555..b05e20f8a3b963 100644 48 | --- a/arch/arm64/kvm/sys_regs.c 49 | +++ b/arch/arm64/kvm/sys_regs.c 50 | @@ -477,6 +477,22 @@ static bool access_pmcr(struct kvm_vcpu *vcpu,
const struct sys_reg_params *p, 51 | return true; 52 | } 53 | 54 | +static bool access_pmselr(struct kvm_vcpu *vcpu, const struct sys_reg_params *p, 55 | + const struct sys_reg_desc *r) 56 | +{ 57 | + if (!kvm_arm_pmu_v3_ready(vcpu)) 58 | + return trap_raz_wi(vcpu, p, r); 59 | + 60 | + if (p->is_write) 61 | + vcpu_sys_reg(vcpu, PMSELR_EL0) = *vcpu_reg(vcpu, p->Rt); 62 | + else 63 | + /* return PMSELR.SEL field */ 64 | + *vcpu_reg(vcpu, p->Rt) = vcpu_sys_reg(vcpu, PMSELR_EL0) 65 | + & ARMV8_PMU_COUNTER_MASK; 66 | + 67 | + return true; 68 | +} 69 | + 70 | /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */ 71 | #define DBG_BCR_BVR_WCR_WVR_EL1(n) \ 72 | /* DBGBVRn_EL1 */ \ 73 | @@ -676,7 +692,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { 74 | trap_raz_wi }, 75 | /* PMSELR_EL0 */ 76 | { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b101), 77 | - trap_raz_wi }, 78 | + access_pmselr, reset_unknown, PMSELR_EL0 }, 79 | /* PMCEID0_EL0 */ 80 | { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110), 81 | trap_raz_wi }, 82 | @@ -927,7 +943,7 @@ static const struct sys_reg_desc cp15_regs[] = { 83 | { Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi }, 84 | { Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi }, 85 | { Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi }, 86 | - { Op1( 0), CRn( 9), CRm(12), Op2( 5), trap_raz_wi }, 87 | + { Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr }, 88 | { Op1( 0), CRn( 9), CRm(12), Op2( 6), trap_raz_wi }, 89 | { Op1( 0), CRn( 9), CRm(12), Op2( 7), trap_raz_wi }, 90 | { Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi }, 91 | -------------------------------------------------------------------------------- /backports/pmu/patch4.txt: -------------------------------------------------------------------------------- 1 | From 051ff581ce70e822729e9474941f3c206cbf7436 Mon Sep 17 00:00:00 2001 2 | From: Shannon Zhao 3 | Date: Tue, 8 Dec 2015 15:29:06 +0800 4 | Subject: [PATCH] arm64: KVM: Add access 
handler for event counter register 5 | 6 | These kinds of registers include PMEVCNTRn, PMCCNTR and PMXEVCNTR, which 7 | is mapped to PMEVCNTRn. 8 | 9 | The access handler translates all aarch32 register offsets to aarch64 10 | ones and uses vcpu_sys_reg() to access their values to avoid taking care 11 | of big endian. 12 | 13 | When reading these registers, return the sum of register value and the 14 | value perf event counts. 15 | 16 | Signed-off-by: Shannon Zhao 17 | Reviewed-by: Andrew Jones 18 | Signed-off-by: Marc Zyngier 19 | --- 20 | arch/arm64/include/asm/kvm_asm.h | 3 + 21 | arch/arm64/kvm/Makefile | 1 + 22 | arch/arm64/kvm/sys_regs.c | 139 +++++++++++++++++++++++++++++- 23 | include/kvm/arm_pmu.h | 11 +++ 24 | virt/kvm/arm/pmu.c | 63 ++++++++++++++ 25 | 5 files changed, 213 insertions(+), 4 deletions(-) 26 | create mode 100644 virt/kvm/arm/pmu.c 27 | 28 | diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h 29 | index 57a2d8f76c2f62..4ae27fe3424082 100644 30 | --- a/arch/arm64/include/asm/kvm_asm.h 31 | +++ b/arch/arm64/include/asm/kvm_asm.h 32 | @@ -59,15 +59,18 @@ 33 | /* Performance Monitor Registers */ 34 | #define PMCR_EL0 88 /* Control Register */ 35 | #define PMSELR_EL0 89 /* Event Counter Selection Register */ 36 | +#define PMEVCNTR0_EL0 90 /* Event Counter Register */ 37 | +#define PMEVCNTR30_EL0 120 38 | +#define PMCCNTR_EL0 121 /* Cycle Counter Register */ 39 | 40 | /* 32bit specific registers.
Keep them at the end of the range */ 41 | -#define DACR32_EL2 90 /* Domain Access Control Register */ 42 | -#define IFSR32_EL2 91 /* Instruction Fault Status Register */ 43 | -#define FPEXC32_EL2 92 /* Floating-Point Exception Control Register */ 44 | -#define DBGVCR32_EL2 93 /* Debug Vector Catch Register */ 45 | -#define TEECR32_EL1 94 /* ThumbEE Configuration Register */ 46 | -#define TEEHBR32_EL1 95 /* ThumbEE Handler Base Register */ 47 | -#define NR_SYS_REGS 96 48 | +#define DACR32_EL2 122 /* Domain Access Control Register */ 49 | +#define IFSR32_EL2 123 /* Instruction Fault Status Register */ 50 | +#define FPEXC32_EL2 124 /* Floating-Point Exception Control Register */ 51 | +#define DBGVCR32_EL2 125 /* Debug Vector Catch Register */ 52 | +#define TEECR32_EL1 126 /* ThumbEE Configuration Register */ 53 | +#define TEEHBR32_EL1 127 /* ThumbEE Handler Base Register */ 54 | +#define NR_SYS_REGS 128 55 | 56 | /* 32bit mapping */ 57 | #define c0_MPIDR (MPIDR_EL1 * 2) /* MultiProcessor ID Register */ 58 | diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile 59 | index caee9ee8e12af1..122cff482ac459 100644 60 | --- a/arch/arm64/kvm/Makefile 61 | +++ b/arch/arm64/kvm/Makefile 62 | @@ -25,3 +25,5 @@ 63 | kvm-$(CONFIG_KVM_ARM_VGIC) += $(KVM)/arm/vgic-v3.o 64 | kvm-$(CONFIG_KVM_ARM_VGIC) += vgic-v3-switch.o 65 | kvm-$(CONFIG_KVM_ARM_TIMER) += $(KVM)/arm/arch_timer.o 66 | + 67 | +kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o 68 | diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c 69 | index ca8cdf6d83cf70..ff3214b6fbc87d 100644 70 | --- a/arch/arm64/kvm/sys_regs.c 71 | +++ b/arch/arm64/kvm/sys_regs.c 72 | @@ -513,6 +513,56 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, const struct sys_reg_params *p, 73 | return true; 74 | } 75 | 76 | +static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx) 77 | +{ 78 | + u64 pmcr, val; 79 | + 80 | + pmcr = vcpu_sys_reg(vcpu, PMCR_EL0); 81 | + val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & 
ARMV8_PMU_PMCR_N_MASK; 82 | + if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) 83 | + return false; 84 | + 85 | + return true; 86 | +} 87 | + 88 | +static bool access_pmu_evcntr(struct kvm_vcpu *vcpu, 89 | + const struct sys_reg_params *p, 90 | + const struct sys_reg_desc *r) 91 | +{ 92 | + u64 idx; 93 | + 94 | + if (!kvm_arm_pmu_v3_ready(vcpu)) 95 | + return trap_raz_wi(vcpu, p, r); 96 | + 97 | + if (r->CRn == 9 && r->CRm == 13) { 98 | + if (r->Op2 == 2) { 99 | + /* PMXEVCNTR_EL0 */ 100 | + idx = vcpu_sys_reg(vcpu, PMSELR_EL0) 101 | + & ARMV8_PMU_COUNTER_MASK; 102 | + } else if (r->Op2 == 0) { 103 | + /* PMCCNTR_EL0 */ 104 | + idx = ARMV8_PMU_CYCLE_IDX; 105 | + } else { 106 | + BUG(); 107 | + } 108 | + } else if (r->CRn == 14 && (r->CRm & 12) == 8) { 109 | + /* PMEVCNTRn_EL0 */ 110 | + idx = ((r->CRm & 3) << 3) | (r->Op2 & 7); 111 | + } else { 112 | + BUG(); 113 | + } 114 | + 115 | + if (!pmu_counter_idx_valid(vcpu, idx)) 116 | + return false; 117 | + 118 | + if (p->is_write) 119 | + kvm_pmu_set_counter_value(vcpu, idx, *vcpu_reg(vcpu, p->Rt)); 120 | + else 121 | + *vcpu_reg(vcpu, p->Rt) = kvm_pmu_get_counter_value(vcpu, idx); 122 | + 123 | + return true; 124 | +} 125 | + 126 | /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */ 127 | #define DBG_BCR_BVR_WCR_WVR_EL1(n) \ 128 | /* DBGBVRn_EL1 */ \ 129 | @@ -528,6 +578,13 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, const struct sys_reg_params *p, 130 | { Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b111), \ 131 | trap_wcr, reset_wcr, n, 0, get_wcr, set_wcr } 132 | 133 | +/* Macro to expand the PMEVCNTRn_EL0 register */ 134 | +#define PMU_PMEVCNTR_EL0(n) \ 135 | + /* PMEVCNTRn_EL0 */ \ 136 | + { Op0(0b11), Op1(0b011), CRn(0b1110), \ 137 | + CRm((0b1000 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \ 138 | + access_pmu_evcntr, reset_unknown, (PMEVCNTR0_EL0 + n), } 139 | + 140 | /* 141 | * Architected system registers. 
142 | * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2 143 | @@ -721,13 +778,13 @@ static const struct sys_reg_desc sys_reg_descs[] = { 144 | access_pmceid }, 145 | /* PMCCNTR_EL0 */ 146 | { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000), 147 | - trap_raz_wi }, 148 | + access_pmu_evcntr, reset_unknown, PMCCNTR_EL0 }, 149 | /* PMXEVTYPER_EL0 */ 150 | { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b001), 151 | trap_raz_wi }, 152 | /* PMXEVCNTR_EL0 */ 153 | { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010), 154 | - trap_raz_wi }, 155 | + access_pmu_evcntr }, 156 | /* PMUSERENR_EL0 */ 157 | { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000), 158 | trap_raz_wi }, 159 | @@ -742,6 +799,39 @@ static const struct sys_reg_desc sys_reg_descs[] = { 160 | { Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011), 161 | NULL, reset_unknown, TPIDRRO_EL0 }, 162 | 163 | + /* PMEVCNTRn_EL0 */ 164 | + PMU_PMEVCNTR_EL0(0), 165 | + PMU_PMEVCNTR_EL0(1), 166 | + PMU_PMEVCNTR_EL0(2), 167 | + PMU_PMEVCNTR_EL0(3), 168 | + PMU_PMEVCNTR_EL0(4), 169 | + PMU_PMEVCNTR_EL0(5), 170 | + PMU_PMEVCNTR_EL0(6), 171 | + PMU_PMEVCNTR_EL0(7), 172 | + PMU_PMEVCNTR_EL0(8), 173 | + PMU_PMEVCNTR_EL0(9), 174 | + PMU_PMEVCNTR_EL0(10), 175 | + PMU_PMEVCNTR_EL0(11), 176 | + PMU_PMEVCNTR_EL0(12), 177 | + PMU_PMEVCNTR_EL0(13), 178 | + PMU_PMEVCNTR_EL0(14), 179 | + PMU_PMEVCNTR_EL0(15), 180 | + PMU_PMEVCNTR_EL0(16), 181 | + PMU_PMEVCNTR_EL0(17), 182 | + PMU_PMEVCNTR_EL0(18), 183 | + PMU_PMEVCNTR_EL0(19), 184 | + PMU_PMEVCNTR_EL0(20), 185 | + PMU_PMEVCNTR_EL0(21), 186 | + PMU_PMEVCNTR_EL0(22), 187 | + PMU_PMEVCNTR_EL0(23), 188 | + PMU_PMEVCNTR_EL0(24), 189 | + PMU_PMEVCNTR_EL0(25), 190 | + PMU_PMEVCNTR_EL0(26), 191 | + PMU_PMEVCNTR_EL0(27), 192 | + PMU_PMEVCNTR_EL0(28), 193 | + PMU_PMEVCNTR_EL0(29), 194 | + PMU_PMEVCNTR_EL0(30), 195 | + 196 | /* DACR32_EL2 */ 197 | { Op0(0b11), Op1(0b100), CRn(0b0011), CRm(0b0000), Op2(0b000), 198 | NULL, reset_unknown, 
DACR32_EL2 }, 199 | @@ -931,6 +1021,13 @@ static const struct sys_reg_desc cp14_64_regs[] = { 200 | { Op1( 0), CRm( 2), .access = trap_raz_wi }, 201 | }; 202 | 203 | +/* Macro to expand the PMEVCNTRn register */ 204 | +#define PMU_PMEVCNTR(n) \ 205 | + /* PMEVCNTRn */ \ 206 | + { Op1(0), CRn(0b1110), \ 207 | + CRm((0b1000 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \ 208 | + access_pmu_evcntr } 209 | + 210 | /* 211 | * Trapped cp15 registers. TTBR0/TTBR1 get a double encoding, 212 | * depending on the way they are accessed (as a 32bit or a 64bit 213 | @@ -687,9 +784,9 @@ 214 | { Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr }, 215 | { Op1( 0), CRn( 9), CRm(12), Op2( 6), trap_raz_wi }, 216 | { Op1( 0), CRn( 9), CRm(12), Op2( 7), trap_raz_wi }, 217 | - { Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi }, 218 | + { Op1( 0), CRn( 9), CRm(13), Op2( 0), access_pmu_evcntr }, 219 | { Op1( 0), CRn( 9), CRm(13), Op2( 1), trap_raz_wi }, 220 | - { Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi }, 221 | + { Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_evcntr }, 222 | { Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi }, 223 | { Op1( 0), CRn( 9), CRm(14), Op2( 1), trap_raz_wi }, 224 | { Op1( 0), CRn( 9), CRm(14), Op2( 2), trap_raz_wi }, 225 | @@ -703,10 +800,44 @@ 226 | { Op1( 0), CRn(12), CRm(12), Op2( 5), trap_raz_wi }, 227 | 228 | { Op1( 0), CRn(13), CRm( 0), Op2( 1), access_vm_reg, NULL, c13_CID }, 229 | + 230 | + /* PMEVCNTRn */ 231 | + PMU_PMEVCNTR(0), 232 | + PMU_PMEVCNTR(1), 233 | + PMU_PMEVCNTR(2), 234 | + PMU_PMEVCNTR(3), 235 | + PMU_PMEVCNTR(4), 236 | + PMU_PMEVCNTR(5), 237 | + PMU_PMEVCNTR(6), 238 | + PMU_PMEVCNTR(7), 239 | + PMU_PMEVCNTR(8), 240 | + PMU_PMEVCNTR(9), 241 | + PMU_PMEVCNTR(10), 242 | + PMU_PMEVCNTR(11), 243 | + PMU_PMEVCNTR(12), 244 | + PMU_PMEVCNTR(13), 245 | + PMU_PMEVCNTR(14), 246 | + PMU_PMEVCNTR(15), 247 | + PMU_PMEVCNTR(16), 248 | + PMU_PMEVCNTR(17), 249 | + PMU_PMEVCNTR(18), 250 | + PMU_PMEVCNTR(19), 251 | + PMU_PMEVCNTR(20), 252 | + 
PMU_PMEVCNTR(21), 253 | + PMU_PMEVCNTR(22), 254 | + PMU_PMEVCNTR(23), 255 | + PMU_PMEVCNTR(24), 256 | + PMU_PMEVCNTR(25), 257 | + PMU_PMEVCNTR(26), 258 | + PMU_PMEVCNTR(27), 259 | + PMU_PMEVCNTR(28), 260 | + PMU_PMEVCNTR(29), 261 | + PMU_PMEVCNTR(30), 262 | }; 263 | 264 | static const struct sys_reg_desc cp15_64_regs[] = { 265 | { Op1( 0), CRn( 0), CRm( 2), Op2( 0), access_vm_reg, NULL, c2_TTBR0 }, 266 | + { Op1( 0), CRn( 0), CRm( 9), Op2( 0), access_pmu_evcntr }, 267 | { Op1( 1), CRn( 0), CRm( 2), Op2( 0), access_vm_reg, NULL, c2_TTBR1 }, 268 | }; 269 | 270 | diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h 271 | index 8157fe5bcbb0ea..bcb7698058399a 100644 272 | --- a/include/kvm/arm_pmu.h 273 | +++ b/include/kvm/arm_pmu.h 274 | @@ -23,6 +23,8 @@ 275 | #include 276 | #include 277 | 278 | +#define ARMV8_PMU_CYCLE_IDX (ARMV8_PMU_MAX_COUNTERS - 1) 279 | + 280 | struct kvm_pmc { 281 | u8 idx; /* index into the pmu->pmc array */ 282 | struct perf_event *perf_event; 283 | @@ -36,11 +38,20 @@ struct kvm_pmu { 284 | }; 285 | 286 | #define kvm_arm_pmu_v3_ready(v) ((v)->arch.pmu.ready) 287 | +u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx); 288 | +void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val); 289 | #else 290 | struct kvm_pmu { 291 | }; 292 | 293 | #define kvm_arm_pmu_v3_ready(v) (false) 294 | +static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, 295 | + u64 select_idx) 296 | +{ 297 | + return 0; 298 | +} 299 | +static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, 300 | + u64 select_idx, u64 val) {} 301 | #endif 302 | 303 | #endif 304 | diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c 305 | new file mode 100644 306 | index 00000000000000..cd74e6367cd61e 307 | --- /dev/null 308 | +++ b/virt/kvm/arm/pmu.c 309 | @@ -0,0 +1,63 @@ 310 | +/* 311 | + * Copyright (C) 2015 Linaro Ltd. 
312 | + * Author: Shannon Zhao 313 | + * 314 | + * This program is free software; you can redistribute it and/or modify 315 | + * it under the terms of the GNU General Public License version 2 as 316 | + * published by the Free Software Foundation. 317 | + * 318 | + * This program is distributed in the hope that it will be useful, 319 | + * but WITHOUT ANY WARRANTY; without even the implied warranty of 320 | + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 321 | + * GNU General Public License for more details. 322 | + * 323 | + * You should have received a copy of the GNU General Public License 324 | + * along with this program. If not, see <http://www.gnu.org/licenses/>. 325 | + */ 326 | + 327 | +#include 328 | +#include 329 | +#include 330 | +#include 331 | +#include 332 | +#include 333 | + 334 | +/** 335 | + * kvm_pmu_get_counter_value - get PMU counter value 336 | + * @vcpu: The vcpu pointer 337 | + * @select_idx: The counter index 338 | + */ 339 | +u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx) 340 | +{ 341 | + u64 counter, reg, enabled, running; 342 | + struct kvm_pmu *pmu = &vcpu->arch.pmu; 343 | + struct kvm_pmc *pmc = &pmu->pmc[select_idx]; 344 | + 345 | + reg = (select_idx == ARMV8_PMU_CYCLE_IDX) 346 | + ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx; 347 | + counter = vcpu_sys_reg(vcpu, reg); 348 | + 349 | + /* The real counter value is equal to the value of counter register plus 350 | +
351 | + */ 352 | + if (pmc->perf_event) 353 | + counter += perf_event_read_value(pmc->perf_event, &enabled, 354 | + &running); 355 | + 356 | + return counter & pmc->bitmask; 357 | +} 358 | + 359 | +/** 360 | + * kvm_pmu_set_counter_value - set PMU counter value 361 | + * @vcpu: The vcpu pointer 362 | + * @select_idx: The counter index 363 | + * @val: The counter value 364 | + */ 365 | +void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val) 366 | +{ 367 | + u64 reg; 368 | + 369 | + reg = (select_idx == ARMV8_PMU_CYCLE_IDX) 370 | + ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx; 371 | + vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx); 372 | +} 373 | -------------------------------------------------------------------------------- /backports/pmu/patch5.txt: -------------------------------------------------------------------------------- 1 | From 96b0eebcc6a14e3bdb9ff0e7176fbfc225bdde94 Mon Sep 17 00:00:00 2001 2 | From: Shannon Zhao 3 | Date: Tue, 8 Sep 2015 12:26:13 +0800 4 | Subject: [PATCH] arm64: KVM: Add access handler for PMCNTENSET and PMCNTENCLR 5 | register 6 | 7 | Since the reset value of PMCNTENSET and PMCNTENCLR is UNKNOWN, use 8 | reset_unknown for its reset handler. Add a handler to emulate writing 9 | PMCNTENSET or PMCNTENCLR register. 10 | 11 | When writing to PMCNTENSET, call perf_event_enable to enable the perf 12 | event. When writing to PMCNTENCLR, call perf_event_disable to disable 13 | the perf event. 
14 | 15 | Signed-off-by: Shannon Zhao 16 | Signed-off-by: Marc Zyngier 17 | --- 18 | arch/arm64/include/asm/kvm_asm.h | 1 + 19 | arch/arm64/kvm/sys_regs.c | 35 ++++++++++++++-- 20 | include/kvm/arm_pmu.h | 9 +++++ 21 | virt/kvm/arm/pmu.c | 66 +++++++++++++++++++++++++++++++ 22 | 4 files changed, 107 insertions(+), 4 deletions(-) 23 | 24 | diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h 25 | index 4ae27fe3424082..993793b422aa94 100644 26 | --- a/arch/arm64/include/asm/kvm_asm.h 27 | +++ b/arch/arm64/include/asm/kvm_asm.h 28 | @@ -62,15 +62,16 @@ 29 | #define PMEVCNTR0_EL0 90 /* Event Counter Register */ 30 | #define PMEVCNTR30_EL0 120 31 | #define PMCCNTR_EL0 121 /* Cycle Counter Register */ 32 | +#define PMCNTENSET_EL0 122 /* Count Enable Set Register */ 33 | 34 | /* 32bit specific registers. Keep them at the end of the range */ 35 | -#define DACR32_EL2 122 /* Domain Access Control Register */ 36 | -#define IFSR32_EL2 123 /* Instruction Fault Status Register */ 37 | -#define FPEXC32_EL2 124 /* Floating-Point Exception Control Register */ 38 | -#define DBGVCR32_EL2 125 /* Debug Vector Catch Register */ 39 | -#define TEECR32_EL1 126 /* ThumbEE Configuration Register */ 40 | -#define TEEHBR32_EL1 127 /* ThumbEE Handler Base Register */ 41 | -#define NR_SYS_REGS 128 42 | +#define DACR32_EL2 123 /* Domain Access Control Register */ 43 | +#define IFSR32_EL2 124 /* Instruction Fault Status Register */ 44 | +#define FPEXC32_EL2 125 /* Floating-Point Exception Control Register */ 45 | +#define DBGVCR32_EL2 126 /* Debug Vector Catch Register */ 46 | +#define TEECR32_EL1 127 /* ThumbEE Configuration Register */ 47 | +#define TEEHBR32_EL1 128 /* ThumbEE Handler Base Register */ 48 | +#define NR_SYS_REGS 129 49 | 50 | /* 32bit mapping */ 51 | #define c0_MPIDR (MPIDR_EL1 * 2) /* MultiProcessor ID Register */ 52 | diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c 53 | index ff3214b6fbc87d..d4b6ae3c09b560 100644 54 | ---
a/arch/arm64/kvm/sys_regs.c 55 | +++ b/arch/arm64/kvm/sys_regs.c 56 | @@ -563,6 +563,33 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu, 57 | return true; 58 | } 59 | 60 | +static bool access_pmcnten(struct kvm_vcpu *vcpu, const struct sys_reg_params *p, 61 | + const struct sys_reg_desc *r) 62 | +{ 63 | + u64 val, mask; 64 | + 65 | + if (!kvm_arm_pmu_v3_ready(vcpu)) 66 | + return trap_raz_wi(vcpu, p, r); 67 | + 68 | + mask = kvm_pmu_valid_counter_mask(vcpu); 69 | + if (p->is_write) { 70 | + val = *vcpu_reg(vcpu, p->Rt) & mask; 71 | + if (r->Op2 & 0x1) { 72 | + /* accessing PMCNTENSET_EL0 */ 73 | + vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val; 74 | + kvm_pmu_enable_counter(vcpu, val); 75 | + } else { 76 | + /* accessing PMCNTENCLR_EL0 */ 77 | + vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val; 78 | + kvm_pmu_disable_counter(vcpu, val); 79 | + } 80 | + } else { 81 | + *vcpu_reg(vcpu, p->Rt) = vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask; 82 | + } 83 | + 84 | + return true; 85 | +} 86 | + 87 | /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */ 88 | #define DBG_BCR_BVR_WCR_WVR_EL1(n) \ 89 | /* DBGBVRn_EL1 */ \ 90 | @@ -757,10 +784,10 @@ static const struct sys_reg_desc sys_reg_descs[] = { 91 | access_pmcr, reset_pmcr, }, 92 | /* PMCNTENSET_EL0 */ 93 | { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001), 94 | - trap_raz_wi }, 95 | + access_pmcnten, reset_unknown, PMCNTENSET_EL0 }, 96 | /* PMCNTENCLR_EL0 */ 97 | { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b010), 98 | - trap_raz_wi }, 99 | + access_pmcnten, NULL, PMCNTENSET_EL0 }, 100 | /* PMOVSCLR_EL0 */ 101 | { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011), 102 | trap_raz_wi }, 103 | @@ -1057,8 +1084,8 @@ static const struct sys_reg_desc cp15_regs[] = { 104 | 105 | /* PMU */ 106 | { Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmcr }, 107 | - { Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi }, 108 | - { Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi }, 109 | + { 
Op1( 0), CRn( 9), CRm(12), Op2( 1), access_pmcnten }, 110 | + { Op1( 0), CRn( 9), CRm(12), Op2( 2), access_pmcnten }, 111 | { Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi }, 112 | { Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr }, 113 | { Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid }, 114 | diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h 115 | index bcb7698058399a..b70058ef1dd636 100644 116 | --- a/include/kvm/arm_pmu.h 117 | +++ b/include/kvm/arm_pmu.h 118 | @@ -40,6 +40,9 @@ struct kvm_pmu { 119 | #define kvm_arm_pmu_v3_ready(v) ((v)->arch.pmu.ready) 120 | u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx); 121 | void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val); 122 | +u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu); 123 | +void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val); 124 | +void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val); 125 | #else 126 | struct kvm_pmu { 127 | }; 128 | @@ -52,6 +55,12 @@ static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, 129 | } 130 | static inline void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, 131 | u64 select_idx, u64 val) {} 132 | +static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu) 133 | +{ 134 | + return 0; 135 | +} 136 | +static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {} 137 | +static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {} 138 | #endif 139 | 140 | #endif 141 | diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c 142 | index cd74e6367cd61e..f8dc174308135a 100644 143 | --- a/virt/kvm/arm/pmu.c 144 | +++ b/virt/kvm/arm/pmu.c 145 | @@ -61,3 +61,69 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val) 146 | ? 
PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx; 147 | vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx); 148 | } 149 | + 150 | +u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu) 151 | +{ 152 | + u64 val = vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT; 153 | + 154 | + val &= ARMV8_PMU_PMCR_N_MASK; 155 | + if (val == 0) 156 | + return BIT(ARMV8_PMU_CYCLE_IDX); 157 | + else 158 | + return GENMASK(val - 1, 0) | BIT(ARMV8_PMU_CYCLE_IDX); 159 | +} 160 | + 161 | +/** 162 | + * kvm_pmu_enable_counter - enable selected PMU counter 163 | + * @vcpu: The vcpu pointer 164 | + * @val: the value guest writes to PMCNTENSET register 165 | + * 166 | + * Call perf_event_enable to start counting the perf event 167 | + */ 168 | +void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) 169 | +{ 170 | + int i; 171 | + struct kvm_pmu *pmu = &vcpu->arch.pmu; 172 | + struct kvm_pmc *pmc; 173 | + 174 | + if (!(vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val) 175 | + return; 176 | + 177 | + for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) { 178 | + if (!(val & BIT(i))) 179 | + continue; 180 | + 181 | + pmc = &pmu->pmc[i]; 182 | + if (pmc->perf_event) { 183 | + perf_event_enable(pmc->perf_event); 184 | + if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE) 185 | + kvm_debug("fail to enable perf event\n"); 186 | + } 187 | + } 188 | +} 189 | + 190 | +/** 191 | + * kvm_pmu_disable_counter - disable selected PMU counter 192 | + * @vcpu: The vcpu pointer 193 | + * @val: the value guest writes to PMCNTENCLR register 194 | + * 195 | + * Call perf_event_disable to stop counting the perf event 196 | + */ 197 | +void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) 198 | +{ 199 | + int i; 200 | + struct kvm_pmu *pmu = &vcpu->arch.pmu; 201 | + struct kvm_pmc *pmc; 202 | + 203 | + if (!val) 204 | + return; 205 | + 206 | + for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) { 207 | + if (!(val & BIT(i))) 208 | + continue; 209 | + 210 | + pmc = &pmu->pmc[i]; 
211 | + if (pmc->perf_event) 212 | + perf_event_disable(pmc->perf_event); 213 | + } 214 | +} 215 | -------------------------------------------------------------------------------- /backports/pmu/patch6.txt: -------------------------------------------------------------------------------- 1 | From 7f7663587165fe1a81c3390358cb70eb7234706f Mon Sep 17 00:00:00 2001 2 | From: Shannon Zhao 3 | Date: Fri, 3 Jul 2015 14:27:25 +0800 4 | Subject: [PATCH] arm64: KVM: PMU: Add perf event map and introduce perf event 5 | creating function 6 | 7 | When we use tools like perf on host, perf passes the event type and the 8 | id of this event type category to kernel, then kernel will map them to 9 | hardware event number and write this number to PMU PMEVTYPER_EL0 10 | register. When getting the event number in KVM, directly use raw event 11 | type to create a perf_event for it. 12 | 13 | Signed-off-by: Shannon Zhao 14 | Reviewed-by: Marc Zyngier 15 | Signed-off-by: Marc Zyngier 16 | --- 17 | include/kvm/arm_pmu.h | 4 +++ 18 | virt/kvm/arm/pmu.c | 74 +++++++++++++++++++++++++++++++++++++++++++ 19 | 2 files changed, 78 insertions(+) 20 | 21 | diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h 22 | index b70058ef1dd636..c57377970d4e3a 100644 23 | --- a/include/kvm/arm_pmu.h 24 | +++ b/include/kvm/arm_pmu.h 25 | @@ -43,6 +43,8 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val); 26 | u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu); 27 | void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val); 28 | void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val); 29 | +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data, 30 | + u64 select_idx); 31 | #else 32 | struct kvm_pmu { 33 | }; 34 | @@ -61,6 +63,8 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu) 35 | } 36 | static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {} 37 | static inline void kvm_pmu_enable_counter(struct kvm_vcpu 
*vcpu, u64 val) {} 38 | +static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, 39 | + u64 data, u64 select_idx) {} 40 | #endif 41 | 42 | #endif 43 | diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c 44 | index f8dc174308135a..591a11d1bd1344 100644 45 | --- a/virt/kvm/arm/pmu.c 46 | +++ b/virt/kvm/arm/pmu.c 47 | @@ -62,6 +62,27 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val) 48 | vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx); 49 | } 50 | 51 | +/** 52 | + * kvm_pmu_stop_counter - stop PMU counter 53 | + * @pmc: The PMU counter pointer 54 | + * 55 | + * If this counter has been configured to monitor some event, release it here. 56 | + */ 57 | +static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc) 58 | +{ 59 | + u64 counter, reg; 60 | + 61 | + if (pmc->perf_event) { 62 | + counter = kvm_pmu_get_counter_value(vcpu, pmc->idx); 63 | + reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX) 64 | + ? 
PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx; 65 | + vcpu_sys_reg(vcpu, reg) = counter; 66 | + perf_event_disable(pmc->perf_event); 67 | + perf_event_release_kernel(pmc->perf_event); 68 | + pmc->perf_event = NULL; 69 | + } 70 | +} 71 | + 72 | u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu) 73 | { 74 | u64 val = vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT; 75 | @@ -127,3 +148,56 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) 76 | perf_event_disable(pmc->perf_event); 77 | } 78 | } 79 | + 80 | +static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx) 81 | +{ 82 | + return (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) && 83 | + (vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(select_idx)); 84 | +} 85 | + 86 | +/** 87 | + * kvm_pmu_set_counter_event_type - set selected counter to monitor some event 88 | + * @vcpu: The vcpu pointer 89 | + * @data: The data guest writes to PMXEVTYPER_EL0 90 | + * @select_idx: The number of selected counter 91 | + * 92 | + * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an 93 | + * event with given hardware event number. Here we call perf_event API to 94 | + * emulate this action and create a kernel perf event for it. 95 | + */ 96 | +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data, 97 | + u64 select_idx) 98 | +{ 99 | + struct kvm_pmu *pmu = &vcpu->arch.pmu; 100 | + struct kvm_pmc *pmc = &pmu->pmc[select_idx]; 101 | + struct perf_event *event; 102 | + struct perf_event_attr attr; 103 | + u64 eventsel, counter; 104 | + 105 | + kvm_pmu_stop_counter(vcpu, pmc); 106 | + eventsel = data & ARMV8_PMU_EVTYPE_EVENT; 107 | + 108 | + memset(&attr, 0, sizeof(struct perf_event_attr)); 109 | + attr.type = PERF_TYPE_RAW; 110 | + attr.size = sizeof(attr); 111 | + attr.pinned = 1; 112 | + attr.disabled = !kvm_pmu_counter_is_enabled(vcpu, select_idx); 113 | + attr.exclude_user = data & ARMV8_PMU_EXCLUDE_EL0 ? 
1 : 0; 114 | + attr.exclude_kernel = data & ARMV8_PMU_EXCLUDE_EL1 ? 1 : 0; 115 | + attr.exclude_hv = 1; /* Don't count EL2 events */ 116 | + attr.exclude_host = 1; /* Don't count host events */ 117 | + attr.config = eventsel; 118 | + 119 | + counter = kvm_pmu_get_counter_value(vcpu, select_idx); 120 | + /* The initial sample period (overflow count) of an event. */ 121 | + attr.sample_period = (-counter) & pmc->bitmask; 122 | + 123 | + event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc); 124 | + if (IS_ERR(event)) { 125 | + pr_err_once("kvm: pmu event creation failed %ld\n", 126 | + PTR_ERR(event)); 127 | + return; 128 | + } 129 | + 130 | + pmc->perf_event = event; 131 | +} 132 | -------------------------------------------------------------------------------- /backports/pmu/patch7.txt: -------------------------------------------------------------------------------- 1 | From 9feb21ac57d53003557ddc01f9aee496269996c7 Mon Sep 17 00:00:00 2001 2 | From: Shannon Zhao 3 | Date: Tue, 23 Feb 2016 11:11:27 +0800 4 | Subject: [PATCH] arm64: KVM: Add access handler for event type register 5 | 6 | These kind of registers include PMEVTYPERn, PMCCFILTR and PMXEVTYPER 7 | which is mapped to PMEVTYPERn or PMCCFILTR. 8 | 9 | The access handler translates all aarch32 register offsets to aarch64 10 | ones and uses vcpu_sys_reg() to access their values to avoid taking care 11 | of big endian. 12 | 13 | When writing to these registers, create a perf_event for the selected 14 | event type. 
15 | 16 | Signed-off-by: Shannon Zhao 17 | Reviewed-by: Andrew Jones 18 | Signed-off-by: Marc Zyngier 19 | --- 20 | arch/arm64/include/asm/kvm_asm.h | 3 + 21 | arch/arm64/kvm/sys_regs.c | 126 +++++++++++++++++++++++++++++- 22 | 2 files changed, 127 insertions(+), 2 deletions(-) 23 | 24 | diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h 25 | index 993793b422aa94..121182dd0947bb 100644 26 | --- a/arch/arm64/include/asm/kvm_asm.h 27 | +++ b/arch/arm64/include/asm/kvm_asm.h 28 | @@ -62,16 +62,19 @@ 29 | #define PMEVCNTR0_EL0 90 /* Event Counter Register */ 30 | #define PMEVCNTR30_EL0 120 31 | #define PMCCNTR_EL0 121 /* Cycle Counter Register */ 32 | -#define PMCNTENSET_EL0 122 /* Count Enable Set Register */ 33 | +#define PMEVTYPER0_EL0 122 /* Event Type Register (0-30) */ 34 | +#define PMEVTYPER30_EL0 152 35 | +#define PMCCFILTR_EL0 153 /* Cycle Count Filter Register */ 36 | +#define PMCNTENSET_EL0 154 /* Count Enable Set Register */ 37 | 38 | /* 32bit specific registers.
Keep them at the end of the range */ 39 | -#define DACR32_EL2 123 /* Domain Access Control Register */ 40 | -#define IFSR32_EL2 124 /* Instruction Fault Status Register */ 41 | -#define FPEXC32_EL2 125 /* Floating-Point Exception Control Register */ 42 | -#define DBGVCR32_EL2 126 /* Debug Vector Catch Register */ 43 | -#define TEECR32_EL1 127 /* ThumbEE Configuration Register */ 44 | -#define TEEHBR32_EL1 128 /* ThumbEE Handler Base Register */ 45 | -#define NR_SYS_REGS 129 46 | +#define DACR32_EL2 155 /* Domain Access Control Register */ 47 | +#define IFSR32_EL2 156 /* Instruction Fault Status Register */ 48 | +#define FPEXC32_EL2 157 /* Floating-Point Exception Control Register */ 49 | +#define DBGVCR32_EL2 158 /* Debug Vector Catch Register */ 50 | +#define TEECR32_EL1 159 /* ThumbEE Configuration Register */ 51 | +#define TEEHBR32_EL1 160 /* ThumbEE Handler Base Register */ 52 | +#define NR_SYS_REGS 161 53 | 54 | /* 32bit mapping */ 55 | #define c0_MPIDR (MPIDR_EL1 * 2) /* MultiProcessor ID Register */ 56 | diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c 57 | index d4b6ae3c09b560..4faf324c9be945 100644 58 | --- a/arch/arm64/kvm/sys_regs.c 59 | +++ b/arch/arm64/kvm/sys_regs.c 60 | @@ -563,6 +563,42 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu, 61 | return true; 62 | } 63 | 64 | +static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, const struct sys_reg_params *p, 65 | + const struct sys_reg_desc *r) 66 | +{ 67 | + u64 idx, reg; 68 | + 69 | + if (!kvm_arm_pmu_v3_ready(vcpu)) 70 | + return trap_raz_wi(vcpu, p, r); 71 | + 72 | + if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) { 73 | + /* PMXEVTYPER_EL0 */ 74 | + idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_PMU_COUNTER_MASK; 75 | + reg = PMEVTYPER0_EL0 + idx; 76 | + } else if (r->CRn == 14 && (r->CRm & 12) == 12) { 77 | + idx = ((r->CRm & 3) << 3) | (r->Op2 & 7); 78 | + if (idx == ARMV8_PMU_CYCLE_IDX) 79 | + reg = PMCCFILTR_EL0; 80 | + else 81 | + /* PMEVTYPERn_EL0 */ 82 | + reg = 
PMEVTYPER0_EL0 + idx; 83 | + } else { 84 | + BUG(); 85 | + } 86 | + 87 | + if (!pmu_counter_idx_valid(vcpu, idx)) 88 | + return false; 89 | + 90 | + if (p->is_write) { 91 | + kvm_pmu_set_counter_event_type(vcpu, *vcpu_reg(vcpu, p->Rt), idx); 92 | + vcpu_sys_reg(vcpu, reg) = (*vcpu_reg(vcpu, p->Rt)) & ARMV8_PMU_EVTYPE_MASK; 93 | + } else { 94 | + *vcpu_reg(vcpu, p->Rt) = vcpu_sys_reg(vcpu, reg) & ARMV8_PMU_EVTYPE_MASK; 95 | + } 96 | + 97 | + return true; 98 | +} 99 | + 100 | static bool access_pmcnten(struct kvm_vcpu *vcpu, const struct sys_reg_params *p, 101 | const struct sys_reg_desc *r) 102 | { 103 | @@ -612,6 +648,13 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p, 104 | CRm((0b1000 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \ 105 | access_pmu_evcntr, reset_unknown, (PMEVCNTR0_EL0 + n), } 106 | 107 | +/* Macro to expand the PMEVTYPERn_EL0 register */ 108 | +#define PMU_PMEVTYPER_EL0(n) \ 109 | + /* PMEVTYPERn_EL0 */ \ 110 | + { Op0(0b11), Op1(0b011), CRn(0b1110), \ 111 | + CRm((0b1100 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \ 112 | + access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), } 113 | + 114 | /* 115 | * Architected system registers. 
116 | * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2 117 | @@ -808,7 +851,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { 118 | access_pmu_evcntr, reset_unknown, PMCCNTR_EL0 }, 119 | /* PMXEVTYPER_EL0 */ 120 | { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b001), 121 | - trap_raz_wi }, 122 | + access_pmu_evtyper }, 123 | /* PMXEVCNTR_EL0 */ 124 | { Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010), 125 | access_pmu_evcntr }, 126 | @@ -858,6 +901,44 @@ static const struct sys_reg_desc sys_reg_descs[] = { 127 | PMU_PMEVCNTR_EL0(28), 128 | PMU_PMEVCNTR_EL0(29), 129 | PMU_PMEVCNTR_EL0(30), 130 | + /* PMEVTYPERn_EL0 */ 131 | + PMU_PMEVTYPER_EL0(0), 132 | + PMU_PMEVTYPER_EL0(1), 133 | + PMU_PMEVTYPER_EL0(2), 134 | + PMU_PMEVTYPER_EL0(3), 135 | + PMU_PMEVTYPER_EL0(4), 136 | + PMU_PMEVTYPER_EL0(5), 137 | + PMU_PMEVTYPER_EL0(6), 138 | + PMU_PMEVTYPER_EL0(7), 139 | + PMU_PMEVTYPER_EL0(8), 140 | + PMU_PMEVTYPER_EL0(9), 141 | + PMU_PMEVTYPER_EL0(10), 142 | + PMU_PMEVTYPER_EL0(11), 143 | + PMU_PMEVTYPER_EL0(12), 144 | + PMU_PMEVTYPER_EL0(13), 145 | + PMU_PMEVTYPER_EL0(14), 146 | + PMU_PMEVTYPER_EL0(15), 147 | + PMU_PMEVTYPER_EL0(16), 148 | + PMU_PMEVTYPER_EL0(17), 149 | + PMU_PMEVTYPER_EL0(18), 150 | + PMU_PMEVTYPER_EL0(19), 151 | + PMU_PMEVTYPER_EL0(20), 152 | + PMU_PMEVTYPER_EL0(21), 153 | + PMU_PMEVTYPER_EL0(22), 154 | + PMU_PMEVTYPER_EL0(23), 155 | + PMU_PMEVTYPER_EL0(24), 156 | + PMU_PMEVTYPER_EL0(25), 157 | + PMU_PMEVTYPER_EL0(26), 158 | + PMU_PMEVTYPER_EL0(27), 159 | + PMU_PMEVTYPER_EL0(28), 160 | + PMU_PMEVTYPER_EL0(29), 161 | + PMU_PMEVTYPER_EL0(30), 162 | + /* PMCCFILTR_EL0 163 | + * This register resets as unknown in 64bit mode while it resets as zero 164 | + * in 32bit mode. Here we choose to reset it as zero for consistency. 
165 | + */ 166 | + { Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b1111), Op2(0b111), 167 | + access_pmu_evtyper, reset_val, PMCCFILTR_EL0, 0 }, 168 | 169 | /* DACR32_EL2 */ 170 | { Op0(0b11), Op1(0b100), CRn(0b0011), CRm(0b0000), Op2(0b000), 171 | @@ -1055,6 +1136,13 @@ static const struct sys_reg_desc cp14_64_regs[] = { 172 | CRm((0b1000 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \ 173 | access_pmu_evcntr } 174 | 175 | +/* Macro to expand the PMEVTYPERn register */ 176 | +#define PMU_PMEVTYPER(n) \ 177 | + /* PMEVTYPERn */ \ 178 | + { Op1(0), CRn(0b1110), \ 179 | + CRm((0b1100 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \ 180 | + access_pmu_evtyper } 181 | + 182 | /* 183 | * Trapped cp15 registers. TTBR0/TTBR1 get a double encoding, 184 | * depending on the way they are accessed (as a 32bit or a 64bit 185 | @@ -1091,7 +1179,7 @@ static const struct sys_reg_desc cp15_regs[] = { 186 | { Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid }, 187 | { Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmceid }, 188 | { Op1( 0), CRn( 9), CRm(13), Op2( 0), access_pmu_evcntr }, 189 | - { Op1( 0), CRn( 9), CRm(13), Op2( 1), trap_raz_wi }, 190 | + { Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_evtyper }, 191 | { Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_evcntr }, 192 | { Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi }, 193 | { Op1( 0), CRn( 9), CRm(14), Op2( 1), trap_raz_wi }, 194 | @@ -1139,6 +1227,40 @@ static const struct sys_reg_desc cp15_regs[] = { 195 | PMU_PMEVCNTR(28), 196 | PMU_PMEVCNTR(29), 197 | PMU_PMEVCNTR(30), 198 | + /* PMEVTYPERn */ 199 | + PMU_PMEVTYPER(0), 200 | + PMU_PMEVTYPER(1), 201 | + PMU_PMEVTYPER(2), 202 | + PMU_PMEVTYPER(3), 203 | + PMU_PMEVTYPER(4), 204 | + PMU_PMEVTYPER(5), 205 | + PMU_PMEVTYPER(6), 206 | + PMU_PMEVTYPER(7), 207 | + PMU_PMEVTYPER(8), 208 | + PMU_PMEVTYPER(9), 209 | + PMU_PMEVTYPER(10), 210 | + PMU_PMEVTYPER(11), 211 | + PMU_PMEVTYPER(12), 212 | + PMU_PMEVTYPER(13), 213 | + PMU_PMEVTYPER(14), 214 | + PMU_PMEVTYPER(15), 215 | + 
PMU_PMEVTYPER(16), 216 | + PMU_PMEVTYPER(17), 217 | + PMU_PMEVTYPER(18), 218 | + PMU_PMEVTYPER(19), 219 | + PMU_PMEVTYPER(20), 220 | + PMU_PMEVTYPER(21), 221 | + PMU_PMEVTYPER(22), 222 | + PMU_PMEVTYPER(23), 223 | + PMU_PMEVTYPER(24), 224 | + PMU_PMEVTYPER(25), 225 | + PMU_PMEVTYPER(26), 226 | + PMU_PMEVTYPER(27), 227 | + PMU_PMEVTYPER(28), 228 | + PMU_PMEVTYPER(29), 229 | + PMU_PMEVTYPER(30), 230 | + /* PMCCFILTR */ 231 | + { Op1(0), CRn(14), CRm(15), Op2(7), access_pmu_evtyper }, 232 | }; 233 | 234 | static const struct sys_reg_desc cp15_64_regs[] = { 235 | -------------------------------------------------------------------------------- /backports/pmu/patch8.txt: -------------------------------------------------------------------------------- 1 | From f577f6c2a6a5ccabe98061f256a1e2ff468d5e93 Mon Sep 17 00:00:00 2001 2 | From: Shannon Zhao 3 | Date: Mon, 11 Jan 2016 20:56:17 +0800 4 | Subject: [PATCH] arm64: KVM: Introduce per-vcpu kvm device controls 5 | 6 | In some cases it needs to get/set attributes specific to a vcpu and so 7 | needs something else than ONE_REG. 8 | 9 | Let's copy the KVM_DEVICE approach, and define the respective ioctls 10 | for the vcpu file descriptor. 
11 | 12 | Signed-off-by: Shannon Zhao 13 | Reviewed-by: Andrew Jones 14 | Acked-by: Peter Maydell 15 | Signed-off-by: Marc Zyngier 16 | --- 17 | Documentation/virtual/kvm/api.txt | 10 ++-- 18 | Documentation/virtual/kvm/devices/vcpu.txt | 8 ++++ 19 | arch/arm/kvm/arm.c | 55 ++++++++++++++++++++++ 20 | arch/arm64/kvm/reset.c | 1 + 21 | include/uapi/linux/kvm.h | 1 + 22 | 5 files changed, 71 insertions(+), 4 deletions(-) 23 | create mode 100644 Documentation/virtual/kvm/devices/vcpu.txt 24 | 25 | diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt 26 | index 9684f8dc6bb241..cb2ef0bcdcb52b 100644 27 | --- a/Documentation/virtual/kvm/api.txt 28 | +++ b/Documentation/virtual/kvm/api.txt 29 | @@ -2507,8 +2507,9 @@ struct kvm_create_device { 30 | 31 | 4.80 KVM_SET_DEVICE_ATTR/KVM_GET_DEVICE_ATTR 32 | 33 | -Capability: KVM_CAP_DEVICE_CTRL, KVM_CAP_VM_ATTRIBUTES for vm device 34 | -Type: device ioctl, vm ioctl 35 | +Capability: KVM_CAP_DEVICE_CTRL, KVM_CAP_VM_ATTRIBUTES for vm device, 36 | + KVM_CAP_VCPU_ATTRIBUTES for vcpu device 37 | +Type: device ioctl, vm ioctl, vcpu ioctl 38 | Parameters: struct kvm_device_attr 39 | Returns: 0 on success, -1 on error 40 | Errors: 41 | @@ -2533,8 +2534,9 @@ struct kvm_device_attr { 42 | 43 | 4.81 KVM_HAS_DEVICE_ATTR 44 | 45 | -Capability: KVM_CAP_DEVICE_CTRL, KVM_CAP_VM_ATTRIBUTES for vm device 46 | -Type: device ioctl, vm ioctl 47 | +Capability: KVM_CAP_DEVICE_CTRL, KVM_CAP_VM_ATTRIBUTES for vm device, 48 | + KVM_CAP_VCPU_ATTRIBUTES for vcpu device 49 | +Type: device ioctl, vm ioctl, vcpu ioctl 50 | Parameters: struct kvm_device_attr 51 | Returns: 0 on success, -1 on error 52 | Errors: 53 | diff --git a/Documentation/virtual/kvm/devices/vcpu.txt b/Documentation/virtual/kvm/devices/vcpu.txt 54 | new file mode 100644 55 | index 00000000000000..3cc59c5e44ce14 56 | --- /dev/null 57 | +++ b/Documentation/virtual/kvm/devices/vcpu.txt 58 | @@ -0,0 +1,8 @@ 59 | +Generic vcpu interface 60 | 
+==================================== 61 | + 62 | +The virtual cpu "device" also accepts the ioctls KVM_SET_DEVICE_ATTR, 63 | +KVM_GET_DEVICE_ATTR, and KVM_HAS_DEVICE_ATTR. The interface uses the same struct 64 | +kvm_device_attr as other devices, but targets VCPU-wide settings and controls. 65 | + 66 | +The groups and attributes per virtual cpu, if any, are architecture specific. 67 | diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c 68 | index 9d133df2da5316..16623235629168 100644 69 | --- a/arch/arm/kvm/arm.c 70 | +++ b/arch/arm/kvm/arm.c 71 | @@ -828,11 +828,51 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu, 72 | return 0; 73 | } 74 | 75 | +static int kvm_arm_vcpu_set_attr(struct kvm_vcpu *vcpu, 76 | + struct kvm_device_attr *attr) 77 | +{ 78 | + int ret = -ENXIO; 79 | + 80 | + switch (attr->group) { 81 | + default: 82 | + break; 83 | + } 84 | + 85 | + return ret; 86 | +} 87 | + 88 | +static int kvm_arm_vcpu_get_attr(struct kvm_vcpu *vcpu, 89 | + struct kvm_device_attr *attr) 90 | +{ 91 | + int ret = -ENXIO; 92 | + 93 | + switch (attr->group) { 94 | + default: 95 | + break; 96 | + } 97 | + 98 | + return ret; 99 | +} 100 | + 101 | +static int kvm_arm_vcpu_has_attr(struct kvm_vcpu *vcpu, 102 | + struct kvm_device_attr *attr) 103 | +{ 104 | + int ret = -ENXIO; 105 | + 106 | + switch (attr->group) { 107 | + default: 108 | + break; 109 | + } 110 | + 111 | + return ret; 112 | +} 113 | + 114 | long kvm_arch_vcpu_ioctl(struct file *filp, 115 | unsigned int ioctl, unsigned long arg) 116 | { 117 | struct kvm_vcpu *vcpu = filp->private_data; 118 | void __user *argp = (void __user *)arg; 119 | + struct kvm_device_attr attr; 120 | 121 | switch (ioctl) { 122 | case KVM_ARM_VCPU_INIT: { 123 | @@ -875,6 +915,21 @@ long kvm_arch_vcpu_ioctl(struct file *filp, 124 | return -E2BIG; 125 | return kvm_arm_copy_reg_indices(vcpu, user_list->reg); 126 | } 127 | + case KVM_SET_DEVICE_ATTR: { 128 | + if (copy_from_user(&attr, argp, sizeof(attr))) 129 | + return 
-EFAULT; 130 | + return kvm_arm_vcpu_set_attr(vcpu, &attr); 131 | + } 132 | + case KVM_GET_DEVICE_ATTR: { 133 | + if (copy_from_user(&attr, argp, sizeof(attr))) 134 | + return -EFAULT; 135 | + return kvm_arm_vcpu_get_attr(vcpu, &attr); 136 | + } 137 | + case KVM_HAS_DEVICE_ATTR: { 138 | + if (copy_from_user(&attr, argp, sizeof(attr))) 139 | + return -EFAULT; 140 | + return kvm_arm_vcpu_has_attr(vcpu, &attr); 141 | + } 142 | default: 143 | return -EINVAL; 144 | } 145 | diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c 146 | index cf4f28a7a5144c..9677bf069bcc49 100644 147 | --- a/arch/arm64/kvm/reset.c 148 | +++ b/arch/arm64/kvm/reset.c 149 | @@ -64,6 +64,9 @@ 150 | case KVM_CAP_ARM_EL1_32BIT: 151 | r = cpu_has_32bit_el1(); 152 | break; 153 | + case KVM_CAP_VCPU_ATTRIBUTES: 154 | + r = 1; 155 | + break; 156 | default: 157 | r = 0; 158 | } 159 | diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h 160 | index dc16d3084d4a4f..50f44a22921253 100644 161 | --- a/include/uapi/linux/kvm.h 162 | +++ b/include/uapi/linux/kvm.h 163 | @@ -761,6 +761,7 @@ 164 | #define KVM_CAP_PPC_FIXUP_HCALL 103 165 | #define KVM_CAP_PPC_ENABLE_HCALL 104 166 | #define KVM_CAP_CHECK_EXTENSION_VM 105 167 | +#define KVM_CAP_VCPU_ATTRIBUTES 127 168 | 169 | #ifdef KVM_CAP_IRQ_ROUTING 170 | 171 | -------------------------------------------------------------------------------- /backports/pmu/patch9.txt: -------------------------------------------------------------------------------- 1 | diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt 2 | index 07e4cdf0240730..9684f8dc6bb241 100644 3 | --- a/Documentation/virtual/kvm/api.txt 4 | +++ b/Documentation/virtual/kvm/api.txt 5 | @@ -2577,6 +2577,8 @@ Possible features: 6 | Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only). 7 | - KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 for the CPU. 8 | Depends on KVM_CAP_ARM_PSCI_0_2. 9 | + - KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU. 
10 | + Depends on KVM_CAP_ARM_PMU_V3. 11 | 12 | 13 | 4.83 KVM_ARM_PREFERRED_TARGET 14 | diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h 15 | index a819c6debce40f..b02ef0828f220d 100644 16 | --- a/arch/arm64/include/asm/kvm_host.h 17 | +++ b/arch/arm64/include/asm/kvm_host.h 18 | @@ -42,7 +42,7 @@ 19 | 20 | #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS 21 | 22 | -#define KVM_VCPU_MAX_FEATURES 3 23 | +#define KVM_VCPU_MAX_FEATURES 4 24 | 25 | int __attribute_const__ kvm_target_cpu(void); 26 | int kvm_reset_vcpu(struct kvm_vcpu *vcpu); 27 | diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h 28 | index 2d4ca4bb0dd34a..6aedbe3144320c 100644 29 | --- a/arch/arm64/include/uapi/asm/kvm.h 30 | +++ b/arch/arm64/include/uapi/asm/kvm.h 31 | @@ -94,6 +94,7 @@ struct kvm_regs { 32 | #define KVM_ARM_VCPU_POWER_OFF 0 /* CPU is started in OFF state */ 33 | #define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */ 34 | #define KVM_ARM_VCPU_PSCI_0_2 2 /* CPU uses PSCI v0.2 */ 35 | +#define KVM_ARM_VCPU_PMU_V3 3 /* Support guest PMUv3 */ 36 | 37 | struct kvm_vcpu_init { 38 | __u32 target; 39 | diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c 40 | index dfbce781d284d6..cf4f28a7a5144c 100644 41 | --- a/arch/arm64/kvm/reset.c 42 | +++ b/arch/arm64/kvm/reset.c 43 | @@ -64,6 +64,9 @@ 44 | case KVM_CAP_ARM_EL1_32BIT: 45 | r = cpu_has_32bit_el1(); 46 | break; 47 | + case KVM_CAP_ARM_PMU_V3: 48 | + r = kvm_arm_support_pmu_v3(); 49 | + break; 50 | case KVM_CAP_VCPU_ATTRIBUTES: 51 | r = 1; 52 | break; 53 | diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h 54 | index 9f87d717ef8423..ee62497d46f755 100644 55 | --- a/include/kvm/arm_pmu.h 56 | +++ b/include/kvm/arm_pmu.h 57 | @@ -53,6 +53,7 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val); 58 | void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val); 59 | void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data, 60 | u64 select_idx); 
61 | +bool kvm_arm_support_pmu_v3(void); 62 | #else 63 | struct kvm_pmu { 64 | }; 65 | @@ -80,6 +81,7 @@ static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val) {} 66 | static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {} 67 | static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, 68 | u64 data, u64 select_idx) {} 69 | +static inline bool kvm_arm_support_pmu_v3(void) { return false; } 70 | #endif 71 | 72 | #endif 73 | diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h 74 | index 9da905157ceeeb..dc16d3084d4a4f 100644 75 | --- a/include/uapi/linux/kvm.h 76 | +++ b/include/uapi/linux/kvm.h 77 | @@ -760,6 +760,7 @@ 78 | #define KVM_CAP_PPC_FIXUP_HCALL 103 79 | #define KVM_CAP_PPC_ENABLE_HCALL 104 80 | #define KVM_CAP_CHECK_EXTENSION_VM 105 81 | +#define KVM_CAP_ARM_PMU_V3 126 82 | #define KVM_CAP_VCPU_ATTRIBUTES 127 83 | 84 | #ifdef KVM_CAP_IRQ_ROUTING 85 | diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c 86 | index 9b83857da195e1..6e28f4f86cc677 100644 87 | --- a/virt/kvm/arm/pmu.c 88 | +++ b/virt/kvm/arm/pmu.c 89 | @@ -405,3 +405,13 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data, 90 | 91 | pmc->perf_event = event; 92 | } 93 | + 94 | +bool kvm_arm_support_pmu_v3(void) 95 | +{ 96 | + /* 97 | + * Check if HW_PERF_EVENTS are supported by checking the number of 98 | + * hardware performance counters. This could ensure the presence of 99 | + * a physical PMU and CONFIG_PERF_EVENT is selected. 
100 | + */ 101 | + return (perf_num_counters() > 0); 102 | +} 103 | -------------------------------------------------------------------------------- /edk2.patch: -------------------------------------------------------------------------------- 1 | diff --git a/ArmPkg/Library/ArmGenericTimerVirtCounterLib/ArmGenericTimerVirtCounterLib.c b/ArmPkg/Library/ArmGenericTimerVirtCounterLib/ArmGenericTimerVirtCounterLib.c 2 | index 74c85dd756..7bcd16d69c 100644 3 | --- a/ArmPkg/Library/ArmGenericTimerVirtCounterLib/ArmGenericTimerVirtCounterLib.c 4 | +++ b/ArmPkg/Library/ArmGenericTimerVirtCounterLib/ArmGenericTimerVirtCounterLib.c 5 | @@ -59,7 +59,10 @@ ArmGenericTimerGetTimerFreq ( 6 | VOID 7 | ) 8 | { 9 | - return ArmReadCntFrq (); 10 | + UINTN ans = ArmReadCntFrq (); 11 | + if(!ans) 12 | + ans = 26000000; 13 | + return ans; 14 | } 15 | 16 | UINTN 17 | -------------------------------------------------------------------------------- /kvm.patch: -------------------------------------------------------------------------------- 1 | diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c 2 | index 9929cdd8..935d0eca 100644 3 | --- a/arch/arm/kvm/arm.c 4 | +++ b/arch/arm/kvm/arm.c 5 | @@ -804,6 +804,8 @@ long kvm_arch_vm_ioctl(struct file *filp, 6 | } 7 | } 8 | 9 | +static unsigned long hyp_stack_base; 10 | +void vmm_init_kvm(phys_addr_t code, phys_addr_t boot_pgd_ptr, phys_addr_t pgd_ptr, unsigned long hyp_stack_ptr, unsigned long vector_ptr); 11 | static void cpu_init_hyp_mode(void *dummy) 12 | { 13 | phys_addr_t boot_pgd_ptr; 14 | @@ -813,15 +815,15 @@ static void cpu_init_hyp_mode(void *dummy) 15 | unsigned long vector_ptr; 16 | 17 | /* Switch from the HYP stub to our own HYP init vector */ 18 | - __hyp_set_vectors(kvm_get_idmap_vector()); 19 | + //__hyp_set_vectors(kvm_get_idmap_vector()); 20 | 21 | boot_pgd_ptr = kvm_mmu_get_boot_httbr(); 22 | pgd_ptr = kvm_mmu_get_httbr(); 23 | - stack_page = __this_cpu_read(kvm_arm_hyp_stack_page); 24 | + stack_page = hyp_stack_base; 
//__this_cpu_read(kvm_arm_hyp_stack_page); 25 | hyp_stack_ptr = stack_page + PAGE_SIZE; 26 | vector_ptr = (unsigned long)__kvm_hyp_vector; 27 | 28 | - __cpu_init_hyp_mode(boot_pgd_ptr, pgd_ptr, hyp_stack_ptr, vector_ptr); 29 | + vmm_init_kvm(kvm_get_idmap_vector(), boot_pgd_ptr, pgd_ptr, hyp_stack_ptr, vector_ptr); 30 | } 31 | 32 | static int hyp_init_cpu_notify(struct notifier_block *self, 33 | @@ -870,13 +872,10 @@ static inline void hyp_cpu_pm_init(void) 34 | } 35 | #endif 36 | 37 | -/** 38 | - * Inits Hyp-mode on all online CPUs 39 | - */ 40 | -static int init_hyp_mode(void) 41 | +static int preinit_status = -EINVAL; 42 | +void preinit_hyp_mode(void) 43 | { 44 | - int cpu; 45 | - int err = 0; 46 | + int err; 47 | 48 | /* 49 | * Allocate Hyp PGD and setup Hyp identity mapping 50 | @@ -886,14 +885,69 @@ static int init_hyp_mode(void) 51 | goto out_err; 52 | 53 | /* 54 | - * It is probably enough to obtain the default on one 55 | - * CPU. It's unlikely to be different on the others. 56 | + * Allocate stack pages for Hypervisor-mode 57 | */ 58 | - hyp_default_vectors = __hyp_get_vectors(); 59 | + hyp_stack_base = __get_free_pages(GFP_KERNEL, 3); 60 | + if (!hyp_stack_base) { 61 | + err = -ENOMEM; 62 | + goto out_err; 63 | + } 64 | 65 | /* 66 | - * Allocate stack pages for Hypervisor-mode 67 | + * Map the Hyp-code called directly from the host 68 | */ 69 | + err = create_hyp_mappings(__kvm_hyp_code_start, __kvm_hyp_code_end); 70 | + if (err) { 71 | + kvm_err("Cannot map world-switch code\n"); 72 | + goto out_free_mappings; 73 | + } 74 | + 75 | + /* 76 | + * Map the Hyp stack 77 | + */ 78 | + err = create_hyp_mappings((void*)hyp_stack_base, (void*)(hyp_stack_base+8*PAGE_SIZE)); 79 | + if (err) { 80 | + kvm_err("Cannot map Hyp stack\n"); 81 | + goto out_free_mappings; 82 | + } 83 | + 84 | + cpu_init_hyp_mode(NULL); 85 | + preinit_status = 0; 86 | + kvm_info("Hyp mode pre-initialized successfully\n"); 87 | + return; 88 | +out_free_mappings: 89 | + free_hyp_pgds(); 90 | 
+ //TODO: free stack 91 | +out_err: 92 | + preinit_status = err; 93 | + return; 94 | +} 95 | + 96 | +/** 97 | + * Inits Hyp-mode on all online CPUs 98 | + */ 99 | +static int init_hyp_mode(void) 100 | +{ 101 | + int cpu; 102 | + int err = 0; 103 | + 104 | + /* 105 | + * It is probably enough to obtain the default on one 106 | + * CPU. It's unlikely to be different on the others. 107 | + */ 108 | + hyp_default_vectors = 0xdeadbeefdeadbeef; //__hyp_get_vectors(); 109 | + 110 | + if (preinit_status != 0) { 111 | + kvm_err("Hyp mode preinit failed, see above"); 112 | + err = preinit_status; 113 | + goto out_err; 114 | + } 115 | + 116 | +#if 0 117 | + if (!stack_base) { 118 | + err = -ENOMEM; 119 | + goto out_free_stack_base; 120 | + } 121 | for_each_possible_cpu(cpu) { 122 | unsigned long stack_page; 123 | 124 | @@ -906,15 +960,6 @@ static int init_hyp_mode(void) 125 | per_cpu(kvm_arm_hyp_stack_page, cpu) = stack_page; 126 | } 127 | 128 | - /* 129 | - * Map the Hyp-code called directly from the host 130 | - */ 131 | - err = create_hyp_mappings(__kvm_hyp_code_start, __kvm_hyp_code_end); 132 | - if (err) { 133 | - kvm_err("Cannot map world-switch code\n"); 134 | - goto out_free_mappings; 135 | - } 136 | - 137 | /* 138 | * Map the Hyp stack pages 139 | */ 140 | @@ -927,6 +972,7 @@ static int init_hyp_mode(void) 141 | goto out_free_mappings; 142 | } 143 | } 144 | +#endif 145 | 146 | /* 147 | * Map the host CPU structures 148 | @@ -953,7 +999,7 @@ static int init_hyp_mode(void) 149 | /* 150 | * Execute the init code on each CPU. 
151 | */ 152 | - on_each_cpu(cpu_init_hyp_mode, NULL, 1); 153 | + //on_each_cpu(cpu_init_hyp_mode, NULL, 1); 154 | 155 | /* 156 | * Init HYP view of VGIC 157 | @@ -986,9 +1032,12 @@ out_free_context: 158 | free_percpu(kvm_host_cpu_state); 159 | out_free_mappings: 160 | free_hyp_pgds(); 161 | -out_free_stack_pages: 162 | +out_free_stack_base: 163 | +#if 0 164 | for_each_possible_cpu(cpu) 165 | free_page(per_cpu(kvm_arm_hyp_stack_page, cpu)); 166 | +#endif 167 | + //__free_pages(stack_base, 3); 168 | out_err: 169 | kvm_err("error initializing Hyp mode: %d\n", err); 170 | return err; 171 | diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h 172 | index 7a5df525..86f7d785 100644 173 | --- a/arch/arm64/include/asm/virt.h 174 | +++ b/arch/arm64/include/asm/virt.h 175 | @@ -40,8 +40,9 @@ phys_addr_t __hyp_get_vectors(void); 176 | /* Reports the availability of HYP mode */ 177 | static inline bool is_hyp_mode_available(void) 178 | { 179 | - return (__boot_cpu_mode[0] == BOOT_CPU_MODE_EL2 && 180 | - __boot_cpu_mode[1] == BOOT_CPU_MODE_EL2); 181 | + /*return (__boot_cpu_mode[0] == BOOT_CPU_MODE_EL2 && 182 | + __boot_cpu_mode[1] == BOOT_CPU_MODE_EL2);*/ 183 | + return 1; 184 | } 185 | 186 | /* Check if the bootloader has booted CPUs in different modes */ 187 | diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S 188 | index c3191168..b14649f4 100644 189 | --- a/arch/arm64/kvm/hyp-init.S 190 | +++ b/arch/arm64/kvm/hyp-init.S 191 | @@ -27,6 +27,7 @@ 192 | .align 11 193 | 194 | ENTRY(__kvm_hyp_init) 195 | +#if 0 196 | ventry __invalid // Synchronous EL2t 197 | ventry __invalid // IRQ EL2t 198 | ventry __invalid // FIQ EL2t 199 | @@ -49,6 +50,13 @@ ENTRY(__kvm_hyp_init) 200 | 201 | __invalid: 202 | b . 
203 | +#endif 204 | + 205 | +exynos_entry: 206 | +ldr x3, [x0, #24] 207 | +ldr x2, [x0, #16] 208 | +ldr x1, [x0, #8] 209 | +ldr x0, [x0] 210 | 211 | /* 212 | * x0: HYP boot pgd 213 | @@ -111,8 +119,14 @@ target: /* We're now in the trampoline code, switch page tables */ 214 | kern_hyp_va x3 215 | msr vbar_el2, x3 216 | 217 | + 218 | + mov x0, #0 219 | + msr vttbr_el2, x0 220 | + 221 | /* Hello, World! */ 222 | - eret 223 | + ldr x0, =0xc2000401 224 | + mov x1, 0 225 | + smc #0 226 | ENDPROC(__kvm_hyp_init) 227 | 228 | .ltorg 229 | diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c 230 | index f4001cb1..cb3056ec 100644 231 | --- a/arch/arm64/kvm/sys_regs.c 232 | +++ b/arch/arm64/kvm/sys_regs.c 233 | @@ -436,6 +436,10 @@ static const struct sys_reg_desc sys_reg_descs[] = { 234 | { Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011), 235 | NULL, reset_unknown, TPIDRRO_EL0 }, 236 | 237 | + /* PMCCFILTR_EL0 */ 238 | + { Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b1111), Op2(0b111), 239 | + trap_raz_wi }, 240 | + 241 | /* DACR32_EL2 */ 242 | { Op0(0b11), Op1(0b100), CRn(0b0011), CRm(0b0000), Op2(0b000), 243 | NULL, reset_unknown, DACR32_EL2 }, 244 | diff --git a/init/Makefile b/init/Makefile 245 | index a6d79459..bd442b10 100644 246 | --- a/init/Makefile 247 | +++ b/init/Makefile 248 | @@ -12,6 +12,10 @@ obj-y += _vmm.o vmm.o 249 | obj-y += ld.o 250 | endif 251 | 252 | +ifeq ($(CONFIG_KVM), y) 253 | +obj-y += _vmm.o vmm-kvm.o 254 | +endif 255 | + 256 | ifneq ($(CONFIG_ARCH_INIT_TASK),y) 257 | obj-y += init_task.o 258 | endif 259 | diff --git a/init/main.c b/init/main.c 260 | index 63b3eafd..e11c96b2 100644 261 | --- a/init/main.c 262 | +++ b/init/main.c 263 | @@ -631,6 +631,9 @@ asmlinkage __visible void __init start_kernel(void) 264 | set_init_arg); 265 | 266 | #ifdef CONFIG_TIMA_RKP 267 | +#ifdef CONFIG_KVM 268 | +#error "RKP and KVM cannot coexist!" 
269 | +#endif 270 | #ifdef CONFIG_KNOX_KAP 271 | if (boot_mode_security) 272 | vmm_init(); 273 | @@ -651,6 +654,10 @@ asmlinkage __visible void __init start_kernel(void) 274 | sort_main_extable(); 275 | trap_init(); 276 | mm_init(); 277 | +#ifdef CONFIG_KVM 278 | + void preinit_hyp_mode(void); 279 | + preinit_hyp_mode(); 280 | +#endif 281 | 282 | /* 283 | * Set up the scheduler prior starting any interrupts (such as the 284 | diff --git a/init/vmm-kvm.c b/init/vmm-kvm.c 285 | new file mode 100644 286 | index 00000000..c9cc66cd 287 | --- /dev/null 288 | +++ b/init/vmm-kvm.c 289 | @@ -0,0 +1,23 @@ 290 | +#include 291 | +#include 292 | + 293 | +#define VMM_32BIT_SMC_CALL_MAGIC 0x82000400 294 | +#define VMM_64BIT_SMC_CALL_MAGIC 0xC2000400 295 | + 296 | +#define VMM_STACK_OFFSET 4096 297 | + 298 | +#define VMM_MODE_AARCH32 0 299 | +#define VMM_MODE_AARCH64 1 300 | + 301 | +int _vmm_goto_EL2(int magic, void *label, int offset, int mode, void *base, int size); 302 | + 303 | +static unsigned long hyp_params[4]; 304 | +void vmm_init_kvm(phys_addr_t code, phys_addr_t boot_pgd_ptr, phys_addr_t pgd_ptr, unsigned long hyp_stack_ptr, unsigned long vector_ptr) 305 | +{ 306 | + hyp_params[0] = boot_pgd_ptr; 307 | + hyp_params[1] = pgd_ptr; 308 | + hyp_params[2] = hyp_stack_ptr; 309 | + hyp_params[3] = vector_ptr; 310 | + __flush_dcache_area(hyp_params, sizeof(hyp_params)); 311 | + _vmm_goto_EL2(VMM_64BIT_SMC_CALL_MAGIC, (void*)code, VMM_STACK_OFFSET, VMM_MODE_AARCH64, (void*)virt_to_phys(hyp_params), 0); 312 | +} 313 | diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c 314 | index 1c0772b3..6a30ff12 100644 315 | --- a/virt/kvm/arm/arch_timer.c 316 | +++ b/virt/kvm/arm/arch_timer.c 317 | @@ -64,7 +64,7 @@ static void kvm_timer_inject_irq(struct kvm_vcpu *vcpu) 318 | int ret; 319 | struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu; 320 | 321 | - timer->cntv_ctl |= ARCH_TIMER_CTRL_IT_MASK; 322 | + //timer->cntv_ctl |= ARCH_TIMER_CTRL_IT_MASK; 323 | ret = 
kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, 324 | timer->irq->irq, 325 | timer->irq->level); 326 | @@ -149,8 +149,11 @@ void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu) 327 | * looking. Inject the interrupt and carry on. 328 | */ 329 | kvm_timer_inject_irq(vcpu); 330 | + disable_percpu_irq(host_vtimer_irq); 331 | return; 332 | } 333 | + else 334 | + enable_percpu_irq(host_vtimer_irq, 0); 335 | 336 | ns = cyclecounter_cyc2ns(timecounter->cc, cval - now); 337 | timer_arm(timer, ns); 338 | --------------------------------------------------------------------------------