2016-12-26 17:48:35

by Jintack Lim

Subject: [RFC 0/8] Provide the EL1 physical timer to the VM

The ARM architecture defines the EL1 physical timer and the virtual
timer, and it is reasonable for an OS to expect to be able to access
both. However, the current KVM implementation does not provide the EL1
physical timer to VMs; instead, it terminates a VM that accesses the
timer.

On VHE systems, this could be as simple as giving VMs full access to
the EL1 physical timer, because the KVM host does not use it. On
non-VHE systems, however, the KVM host already uses the EL1 physical
timer, which prevents us from granting VMs full access to it.

This patchset enables VMs to use the EL1 physical timer through
trap-and-emulate. The KVM host emulates each EL1 physical timer
register access and sets up the background timer accordingly. When the
background timer expires, the KVM host injects an EL1 physical timer
interrupt into the VM. Alternatively, it is possible to let VMs access
the EL1 physical timer without trapping, but this would require the
Linux host to somehow use the EL2 physical timer instead of the EL1
physical timer while a VM is running. For now I have implemented
trap-and-emulate because it was straightforward to do, and I leave it
to future work to determine whether transferring the EL1 physical timer
state to the EL2 timer provides any performance benefit.
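
To make the flow concrete: guests trap in the first place because KVM
runs them with CNTHCTL_EL2.EL1PCEN clear (see the world-switch code in
virt/kvm/arm/hyp/timer-sr.c), while CNTHCTL_EL2.EL1PCTEN stays set so
the guest can still read the physical counter directly. Below is a
sketch of the CNTP_TVAL_EL0 handler, condensed from patches 5 and 8;
TVAL is a relative value, so the emulation stores it as an absolute
compare value:

    static bool access_cntp_tval(struct kvm_vcpu *vcpu,
                                 struct sys_reg_params *p,
                                 const struct sys_reg_desc *r)
    {
            struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
            cycle_t now = kvm_phys_timer_read();  /* no CNTVOFF for the phys timer */

            if (p->is_write) {
                    /* TVAL is relative; store an absolute CVAL */
                    ptimer->cnt_cval = p->regval + now;
                    /* (re)arm the background hrtimer for the new deadline */
                    kvm_timer_emulate(vcpu, ptimer);
            } else {
                    p->regval = ptimer->cnt_cval - now;
            }
            return true;
    }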

This feature will be useful for any OS that wishes to access the EL1
physical timer; nested virtualization is one such use case. A nested
hypervisor running inside a VM believes it has full access to the
hardware and will naturally try to use the EL1 physical timer, as Linux
does. Other nested hypervisors may try to use the EL2 physical timer
instead, as Xen does, but providing the EL2 physical timer to the VM is
out of the scope of this patchset. This patchset will, however, make it
easy to add EL2 timer support in the future.

Note that Linux VMs booting in EL1 are unaffected by this patch set:
they continue to use only the virtual timer, so this patch set
introduces no performance degradation as a result of trap-and-emulate.

Jintack Lim (8):
KVM: arm/arm64: Abstract virtual timer context into separate structure
KVM: arm/arm64: Decouple kvm timer functions from virtual timer
KVM: arm/arm64: Add the EL1 physical timer context
KVM: arm/arm64: Initialize the emulated EL1 physical timer
KVM: arm64: Add the EL1 physical timer access handler
KVM: arm/arm64: Update the physical timer interrupt level
KVM: arm/arm64: Set up a background timer for the physical timer
emulation
KVM: arm/arm64: Emulate the EL1 phys timer register access

arch/arm/kvm/arm.c | 3 +-
arch/arm/kvm/reset.c | 9 +-
arch/arm64/kvm/reset.c | 9 +-
arch/arm64/kvm/sys_regs.c | 63 +++++++++++++
include/kvm/arm_arch_timer.h | 43 +++++----
virt/kvm/arm/arch_timer.c | 214 ++++++++++++++++++++++++++++++-------------
virt/kvm/arm/hyp/timer-sr.c | 16 ++--
7 files changed, 264 insertions(+), 93 deletions(-)

--
1.9.1



2016-12-26 17:46:21

by Jintack Lim

Subject: [RFC 3/8] KVM: arm/arm64: Add the EL1 physical timer context

Add the EL1 physical timer context.

Signed-off-by: Jintack Lim <[email protected]>
---
include/kvm/arm_arch_timer.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index cf84145..d21652a 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -45,6 +45,7 @@ struct arch_timer_context {

struct arch_timer_cpu {
struct arch_timer_context vtimer;
+ struct arch_timer_context ptimer;

/* Background timer used when the guest is not running */
struct hrtimer timer;
@@ -77,4 +78,5 @@ bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu);

#define vcpu_vtimer(v) (&(v)->arch.timer_cpu.vtimer)
+#define vcpu_ptimer(v) (&(v)->arch.timer_cpu.ptimer)
#endif
--
1.9.1


2016-12-26 17:46:57

by Jintack Lim

Subject: [RFC 5/8] KVM: arm64: Add the EL1 physical timer access handler

KVM traps EL1 phys timer accesses from VMs, but it does not handle
those traps, which results in the VM being terminated. Instead, set
handlers for the EL1 phys timer accesses; as an intermediate step, have
them inject an undefined exception.

Signed-off-by: Jintack Lim <[email protected]>
---
arch/arm64/kvm/sys_regs.c | 34 ++++++++++++++++++++++++++++++++++
1 file changed, 34 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 87e7e66..fd9e747 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -820,6 +820,30 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
CRm((0b1100 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)), \
access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }

+static bool access_cntp_tval(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ kvm_inject_undefined(vcpu);
+ return true;
+}
+
+static bool access_cntp_ctl(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ kvm_inject_undefined(vcpu);
+ return true;
+}
+
+static bool access_cntp_cval(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ kvm_inject_undefined(vcpu);
+ return true;
+}
+
/*
* Architected system registers.
* Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
@@ -1029,6 +1053,16 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011),
NULL, reset_unknown, TPIDRRO_EL0 },

+ /* CNTP_TVAL_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b0010), Op2(0b000),
+ access_cntp_tval },
+ /* CNTP_CTL_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b0010), Op2(0b001),
+ access_cntp_ctl },
+ /* CNTP_CVAL_EL0 */
+ { Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b0010), Op2(0b010),
+ access_cntp_cval },
+
/* PMEVCNTRn_EL0 */
PMU_PMEVCNTR_EL0(0),
PMU_PMEVCNTR_EL0(1),
--
1.9.1


2016-12-26 17:47:03

by Jintack Lim

Subject: [RFC 6/8] KVM: arm/arm64: Update the physical timer interrupt level

Now that we maintain the EL1 physical timer register state of the VM,
update the physical timer interrupt level along with the virtual one.

Note that the emulated EL1 physical timer is not mapped to any hardware
timer, so we let the vgic know that.

With this commit, VMs are able to get physical timer interrupts while
they are runnable, but they won't get interrupts once their vcpus go to
sleep, since we don't yet have code to wake vcpus up when the emulated
physical timer expires.

Signed-off-by: Jintack Lim <[email protected]>
---
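Note: a worked example of kvm_timer_min_block() below: if the virtual
timer is due in 5 ms and the physical timer in 2 ms, and both are
enabled and unmasked, the background hrtimer is armed for 2 ms; a
disabled or masked timer contributes ULLONG_MAX and simply drops out of
the min().
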
arch/arm/kvm/arm.c | 3 +-
virt/kvm/arm/arch_timer.c | 76 ++++++++++++++++++++++++++++++++++++++---------
2 files changed, 64 insertions(+), 15 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 37d1623..d2dfa32 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -295,7 +295,8 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)

int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
{
- return kvm_timer_should_fire(vcpu, vcpu_vtimer(vcpu));
+ return kvm_timer_should_fire(vcpu, vcpu_vtimer(vcpu)) ||
+ kvm_timer_should_fire(vcpu, vcpu_ptimer(vcpu));
}

void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index ed80864..aa7e243 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -91,7 +91,8 @@ static void kvm_timer_inject_irq_work(struct work_struct *work)
vcpu = container_of(work, struct kvm_vcpu, arch.timer_cpu.expired);
vcpu->arch.timer_cpu.armed = false;

- WARN_ON(!kvm_timer_should_fire(vcpu, vcpu_vtimer(vcpu)));
+ WARN_ON(!kvm_timer_should_fire(vcpu, vcpu_vtimer(vcpu)) &&
+ !kvm_timer_should_fire(vcpu, vcpu_ptimer(vcpu)));

/*
* If the vcpu is blocked we want to wake it up so that it will see
@@ -130,6 +131,33 @@ static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu,
return 0;
}

+static bool kvm_timer_irq_can_fire(struct arch_timer_context *timer_ctx)
+{
+ return !(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
+ (timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
+}
+
+/*
+ * Returns the minimum timer expiration time in ns among the guest timers.
+ * Note that it returns ULLONG_MAX if none of the timers can fire.
+ */
+static u64 kvm_timer_min_block(struct kvm_vcpu *vcpu)
+{
+ u64 min_virt = ULLONG_MAX, min_phys = ULLONG_MAX;
+ struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+ struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+
+ if (kvm_timer_irq_can_fire(vtimer))
+ min_virt = kvm_timer_compute_delta(vcpu, vtimer);
+
+ if (kvm_timer_irq_can_fire(ptimer))
+ min_phys = kvm_timer_compute_delta(vcpu, ptimer);
+
+ WARN_ON((min_virt == ULLONG_MAX) && (min_phys == ULLONG_MAX));
+
+ return min(min_virt, min_phys);
+}
+
static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
{
struct arch_timer_cpu *timer;
@@ -144,7 +172,7 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
* PoV (NTP on the host may have forced it to expire
* early). If we should have slept longer, restart it.
*/
- ns = kvm_timer_compute_delta(vcpu, vcpu_vtimer(vcpu));
+ ns = kvm_timer_min_block(vcpu);
if (unlikely(ns)) {
hrtimer_forward_now(hrt, ns_to_ktime(ns));
return HRTIMER_RESTART;
@@ -154,12 +182,6 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
return HRTIMER_NORESTART;
}

-static bool kvm_timer_irq_can_fire(struct arch_timer_context *timer_ctx)
-{
- return !(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
- (timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
-}
-
bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
struct arch_timer_context *timer_ctx)
{
@@ -191,6 +213,21 @@ static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
WARN_ON(ret);
}

+static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
+ struct arch_timer_context *timer)
+{
+ int ret;
+
+ BUG_ON(!vgic_initialized(vcpu->kvm));
+
+ timer->irq.level = new_level;
+ trace_kvm_timer_update_irq(vcpu->vcpu_id, timer->irq.irq,
+ timer->irq.level);
+ ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, timer->irq.irq,
+ timer->irq.level);
+ WARN_ON(ret);
+}
+
/*
* Check if there was a change in the timer state (should we raise or lower
* the line level to the GIC).
@@ -198,6 +235,7 @@ static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
{
struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+ struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);

/*
* If userspace modified the timer registers via SET_ONE_REG before
@@ -211,6 +249,10 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
if (kvm_timer_should_fire(vcpu, vtimer) != vtimer->irq.level)
kvm_timer_update_mapped_irq(vcpu, !vtimer->irq.level, vtimer);

+ /* The emulated EL1 physical timer irq is not mapped to hardware */
+ if (kvm_timer_should_fire(vcpu, ptimer) != ptimer->irq.level)
+ kvm_timer_update_irq(vcpu, !ptimer->irq.level, ptimer);
+
return 0;
}

@@ -223,26 +265,32 @@ void kvm_timer_schedule(struct kvm_vcpu *vcpu)
{
struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+ struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);

BUG_ON(timer_is_armed(timer));

/*
- * No need to schedule a background timer if the guest timer has
+ * No need to schedule a background timer if any guest timer has
* already expired, because kvm_vcpu_block will return before putting
* the thread to sleep.
*/
- if (kvm_timer_should_fire(vcpu, vtimer))
+ if (kvm_timer_should_fire(vcpu, vtimer) ||
+ kvm_timer_should_fire(vcpu, ptimer))
return;

/*
- * If the timer is not capable of raising interrupts (disabled or
+ * If both timers are not capable of raising interrupts (disabled or
* masked), then there's no more work for us to do.
*/
- if (!kvm_timer_irq_can_fire(vtimer))
+ if (!kvm_timer_irq_can_fire(vtimer) &&
+ !kvm_timer_irq_can_fire(ptimer))
return;

- /* The timer has not yet expired, schedule a background timer */
- timer_arm(timer, kvm_timer_compute_delta(vcpu, vtimer));
+ /*
+ * The guest timers have not yet expired; schedule a background timer.
+ * Pick the smaller expiration time between the phys and virt timers.
+ */
+ timer_arm(timer, kvm_timer_min_block(vcpu));
}

void kvm_timer_unschedule(struct kvm_vcpu *vcpu)
--
1.9.1


2016-12-26 17:47:37

by Jintack Lim

Subject: [RFC 7/8] KVM: arm/arm64: Set up a background timer for the physical timer emulation

Set up a background timer for the EL1 physical timer emulation while
VMs are running, so that VMs get physical timer interrupts in a timely
manner.

We still use just one background timer. When a VM is runnable, we use
the background timer for the physical timer emulation. When the VM is
about to block, we use the background timer to wake up the vcpu at the
earliest expiration among the timers the VM is using.

As a result, the assumption that the background timer is not armed
while VMs are running no longer holds. Remove the BUG_ON()s and
WARN_ON()s that relied on it.

Signed-off-by: Jintack Lim <[email protected]>
---
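Note: the timer_disarm() calls added before timer_arm() below are what
make re-arming safe now that the single background timer can already be
running when we need to reprogram it. The lifecycle is roughly: vcpu
runnable -> background timer armed for the emulated phys timer (via
kvm_timer_emulate()); vcpu blocks -> kvm_timer_schedule() disarms and
re-arms for the nearest of both guest timers; vcpu unblocks ->
kvm_timer_unschedule() disarms and kvm_timer_emulate() re-arms for the
phys timer only.
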
virt/kvm/arm/arch_timer.c | 42 +++++++++++++++++++++++++++++++-----------
1 file changed, 31 insertions(+), 11 deletions(-)

diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index aa7e243..be8d953 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -91,9 +91,6 @@ static void kvm_timer_inject_irq_work(struct work_struct *work)
vcpu = container_of(work, struct kvm_vcpu, arch.timer_cpu.expired);
vcpu->arch.timer_cpu.armed = false;

- WARN_ON(!kvm_timer_should_fire(vcpu, vcpu_vtimer(vcpu)) &&
- !kvm_timer_should_fire(vcpu, vcpu_ptimer(vcpu)));
-
/*
* If the vcpu is blocked we want to wake it up so that it will see
* the timer has expired when entering the guest.
@@ -139,7 +136,6 @@ static bool kvm_timer_irq_can_fire(struct arch_timer_context *timer_ctx)

/*
* Returns the minimum timer expiration time in ns among the guest timers.
- * Note that it returns ULLONG_MAX if none of the timers can fire.
*/
static u64 kvm_timer_min_block(struct kvm_vcpu *vcpu)
{
@@ -153,7 +149,9 @@ static u64 kvm_timer_min_block(struct kvm_vcpu *vcpu)
if (kvm_timer_irq_can_fire(ptimer))
min_phys = kvm_timer_compute_delta(vcpu, ptimer);

- WARN_ON((min_virt == ULLONG_MAX) && (min_phys == ULLONG_MAX));
+ /* If neither timer can fire, return 0 */
+ if ((min_virt == ULLONG_MAX) && (min_phys == ULLONG_MAX))
+ return 0;

return min(min_virt, min_phys);
}
@@ -257,6 +255,26 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
}

/*
+ * Schedule the background timer for the emulated timer. The background timer
+ * runs whenever the vcpu is runnable and the timer has not expired.
+ */
+static void kvm_timer_emulate(struct kvm_vcpu *vcpu,
+ struct arch_timer_context *timer_ctx)
+{
+ struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+
+ if (kvm_timer_should_fire(vcpu, timer_ctx))
+ return;
+
+ if (!kvm_timer_irq_can_fire(timer_ctx))
+ return;
+
+ /* The timer has not yet expired, schedule a background timer */
+ timer_disarm(timer);
+ timer_arm(timer, kvm_timer_compute_delta(vcpu, timer_ctx));
+}
+
+/*
* Schedule the background timer before calling kvm_vcpu_block, so that this
* thread is removed from its waitqueue and made runnable when there's a timer
* interrupt to handle.
@@ -267,8 +285,6 @@ void kvm_timer_schedule(struct kvm_vcpu *vcpu)
struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);

- BUG_ON(timer_is_armed(timer));
-
/*
* No need to schedule a background timer if any guest timer has
* already expired, because kvm_vcpu_block will return before putting
@@ -290,13 +306,21 @@ void kvm_timer_schedule(struct kvm_vcpu *vcpu)
* The guest timers have not yet expired; schedule a background timer.
* Pick the smaller expiration time between the phys and virt timers.
*/
+ timer_disarm(timer);
timer_arm(timer, kvm_timer_min_block(vcpu));
}

void kvm_timer_unschedule(struct kvm_vcpu *vcpu)
{
struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+
timer_disarm(timer);
+
+ /*
+ * We have now returned from blocking. If we have a timer to emulate
+ * and it has not expired, set the background timer for it.
+ */
+ kvm_timer_emulate(vcpu, vcpu_ptimer(vcpu));
}

/**
@@ -375,10 +399,6 @@ void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
*/
void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu)
{
- struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
-
- BUG_ON(timer_is_armed(timer));
-
/*
* The guest could have modified the timer registers or the timer
* could have expired, update the timer state.
--
1.9.1


2016-12-26 17:48:31

by Jintack Lim

Subject: [RFC 8/8] KVM: arm/arm64: Emulate the EL1 phys timer register access

Emulate read and write operations to CNTP_TVAL, CNTP_CVAL and CNTP_CTL.
Now the VM is able to use the EL1 physical timer.

Signed-off-by: Jintack Lim <[email protected]>
---
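Note: as a sanity check of what is being emulated, here is a minimal,
hypothetical bare-metal guest sequence (not part of this series) that
should now work: read the physical counter (permitted, since
CNTHCTL_EL2.EL1PCTEN stays set), program an absolute deadline into
CNTP_CVAL_EL0, enable the timer unmasked via CNTP_CTL_EL0, and poll
ISTATUS:

    static inline u64 read_cntpct(void)
    {
            u64 cnt;

            asm volatile("mrs %0, cntpct_el0" : "=r" (cnt));
            return cnt;
    }

    static void arm_phys_timer(u64 ticks_from_now)
    {
            u64 cval = read_cntpct() + ticks_from_now;

            asm volatile("msr cntp_cval_el0, %0" : : "r" (cval)); /* traps, emulated */
            asm volatile("msr cntp_ctl_el0, %0" : : "r" (1UL));   /* ENABLE=1, IMASK=0 */
    }

    static bool phys_timer_fired(void)
    {
            u64 ctl;

            asm volatile("mrs %0, cntp_ctl_el0" : "=r" (ctl));    /* traps, emulated */
            return ctl & (1UL << 2);                              /* ISTATUS */
    }
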
arch/arm64/kvm/sys_regs.c | 35 ++++++++++++++++++++++++++++++++---
include/kvm/arm_arch_timer.h | 3 +++
virt/kvm/arm/arch_timer.c | 4 ++--
3 files changed, 37 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index fd9e747..7cef94f 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -824,7 +824,15 @@ static bool access_cntp_tval(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
- kvm_inject_undefined(vcpu);
+ struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+ cycle_t now = kvm_phys_timer_read();
+
+ if (p->is_write) {
+ ptimer->cnt_cval = p->regval + now;
+ kvm_timer_emulate(vcpu, ptimer);
+ } else
+ p->regval = ptimer->cnt_cval - now;
+
return true;
}

@@ -832,7 +840,21 @@ static bool access_cntp_ctl(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
- kvm_inject_undefined(vcpu);
+ struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+
+ if (p->is_write) {
+ /* ISTATUS bit is read-only */
+ ptimer->cnt_ctl = p->regval & ~ARCH_TIMER_CTRL_IT_STAT;
+ kvm_timer_emulate(vcpu, ptimer);
+ } else {
+ cycle_t now = kvm_phys_timer_read();
+
+ p->regval = ptimer->cnt_ctl;
+ /* Set the ISTATUS bit if the timer has expired */
+ if (ptimer->cnt_cval <= now)
+ p->regval |= ARCH_TIMER_CTRL_IT_STAT;
+ }
+
return true;
}

@@ -840,7 +862,14 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
- kvm_inject_undefined(vcpu);
+ struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+
+ if (p->is_write) {
+ ptimer->cnt_cval = p->regval;
+ kvm_timer_emulate(vcpu, ptimer);
+ } else
+ p->regval = ptimer->cnt_cval;
+
return true;
}

diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index 04ed9c1..776579b 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -75,6 +75,9 @@ bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
struct arch_timer_context *timer_ctx);
void kvm_timer_schedule(struct kvm_vcpu *vcpu);
void kvm_timer_unschedule(struct kvm_vcpu *vcpu);
+void kvm_timer_emulate(struct kvm_vcpu *vcpu, struct arch_timer_context *timer);
+
+cycle_t kvm_phys_timer_read(void);

void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu);

diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index be8d953..7a161f8 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -39,7 +39,7 @@ void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu)
vcpu_vtimer(vcpu)->active_cleared_last = false;
}

-static cycle_t kvm_phys_timer_read(void)
+cycle_t kvm_phys_timer_read(void)
{
return timecounter->cc->read(timecounter->cc);
}
@@ -258,7 +258,7 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
* Schedule the background timer for the emulated timer. The background timer
* runs whenever the vcpu is runnable and the timer has not expired.
*/
-static void kvm_timer_emulate(struct kvm_vcpu *vcpu,
+void kvm_timer_emulate(struct kvm_vcpu *vcpu,
struct arch_timer_context *timer_ctx)
{
struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
--
1.9.1


2016-12-26 17:48:38

by Jintack Lim

Subject: [RFC 4/8] KVM: arm/arm64: Initialize the emulated EL1 physical timer

Initialize the emulated EL1 physical timer with the default irq number
(PPI 30, conventionally used for the non-secure EL1 physical timer,
just as PPI 27 is used for the virtual timer).

Signed-off-by: Jintack Lim <[email protected]>
---
arch/arm/kvm/reset.c | 9 ++++++++-
arch/arm64/kvm/reset.c | 9 ++++++++-
include/kvm/arm_arch_timer.h | 3 ++-
virt/kvm/arm/arch_timer.c | 12 ++++++++++--
4 files changed, 28 insertions(+), 5 deletions(-)

diff --git a/arch/arm/kvm/reset.c b/arch/arm/kvm/reset.c
index 4b5e802..1da8b2d 100644
--- a/arch/arm/kvm/reset.c
+++ b/arch/arm/kvm/reset.c
@@ -37,6 +37,11 @@
.usr_regs.ARM_cpsr = SVC_MODE | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT,
};

+static const struct kvm_irq_level cortexa_ptimer_irq = {
+ { .irq = 30 },
+ .level = 1,
+};
+
static const struct kvm_irq_level cortexa_vtimer_irq = {
{ .irq = 27 },
.level = 1,
@@ -58,6 +63,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
{
struct kvm_regs *reset_regs;
const struct kvm_irq_level *cpu_vtimer_irq;
+ const struct kvm_irq_level *cpu_ptimer_irq;

switch (vcpu->arch.target) {
case KVM_ARM_TARGET_CORTEX_A7:
@@ -65,6 +71,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
reset_regs = &cortexa_regs_reset;
vcpu->arch.midr = read_cpuid_id();
cpu_vtimer_irq = &cortexa_vtimer_irq;
+ cpu_ptimer_irq = &cortexa_ptimer_irq;
break;
default:
return -ENODEV;
@@ -77,5 +84,5 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
kvm_reset_coprocs(vcpu);

/* Reset arch_timer context */
- return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
+ return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq, cpu_ptimer_irq);
}
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 5bc4608..74322c2 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -46,6 +46,11 @@
COMPAT_PSR_I_BIT | COMPAT_PSR_F_BIT),
};

+static const struct kvm_irq_level default_ptimer_irq = {
+ .irq = 30,
+ .level = 1,
+};
+
static const struct kvm_irq_level default_vtimer_irq = {
.irq = 27,
.level = 1,
@@ -110,6 +115,7 @@ int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext)
int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
{
const struct kvm_irq_level *cpu_vtimer_irq;
+ const struct kvm_irq_level *cpu_ptimer_irq;
const struct kvm_regs *cpu_reset;

switch (vcpu->arch.target) {
@@ -123,6 +129,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
}

cpu_vtimer_irq = &default_vtimer_irq;
+ cpu_ptimer_irq = &default_ptimer_irq;
break;
}

@@ -136,5 +143,5 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
kvm_pmu_vcpu_reset(vcpu);

/* Reset timer */
- return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
+ return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq, cpu_ptimer_irq);
}
diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index d21652a..04ed9c1 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -61,7 +61,8 @@ struct arch_timer_cpu {
int kvm_timer_enable(struct kvm_vcpu *vcpu);
void kvm_timer_init(struct kvm *kvm);
int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
- const struct kvm_irq_level *irq);
+ const struct kvm_irq_level *virt_irq,
+ const struct kvm_irq_level *phys_irq);
void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu);
void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 3bd6063..ed80864 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -339,9 +339,11 @@ void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu)
}

int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
- const struct kvm_irq_level *irq)
+ const struct kvm_irq_level *virt_irq,
+ const struct kvm_irq_level *phys_irq)
{
struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+ struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);

/*
* The vcpu timer irq number cannot be determined in
@@ -349,7 +351,8 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
* kvm_vcpu_set_target(). To handle this, we determine
* vcpu timer irq number when the vcpu is reset.
*/
- vtimer->irq.irq = irq->irq;
+ vtimer->irq.irq = virt_irq->irq;
+ ptimer->irq.irq = phys_irq->irq;

/*
* The bits in CNTV_CTL are architecturally reset to UNKNOWN for ARMv8
@@ -358,6 +361,7 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
* the ARMv7 architecture.
*/
vtimer->cnt_ctl = 0;
+ ptimer->cnt_ctl = 0;
kvm_timer_update_state(vcpu);

return 0;
@@ -477,11 +481,15 @@ void kvm_timer_vcpu_terminate(struct kvm_vcpu *vcpu)
int kvm_timer_enable(struct kvm_vcpu *vcpu)
{
struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+ struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
struct irq_desc *desc;
struct irq_data *data;
int phys_irq;
int ret;

+ /* Always enable the emulated EL1 physical timer */
+ ptimer->enabled = 1;
+
if (vtimer->enabled)
return 0;

--
1.9.1


2016-12-26 17:48:54

by Jintack Lim

Subject: [RFC 1/8] KVM: arm/arm64: Abstract virtual timer context into separate structure

Abstract the virtual timer context into a separate structure and update
all code that refers to the timer registers, irq state and so on. No
change in functionality.

This is about to become very handy when adding the EL1 physical timer.

Signed-off-by: Jintack Lim <[email protected]>
Acked-by: Christoffer Dall <[email protected]>
---
include/kvm/arm_arch_timer.h | 32 +++++++++---------
virt/kvm/arm/arch_timer.c | 77 ++++++++++++++++++++++----------------------
virt/kvm/arm/hyp/timer-sr.c | 16 ++++-----
3 files changed, 63 insertions(+), 62 deletions(-)

diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index dda39d8..7dabe56 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -28,15 +28,23 @@ struct arch_timer_kvm {
cycle_t cntvoff;
};

-struct arch_timer_cpu {
+struct arch_timer_context {
/* Registers: control register, timer value */
- u32 cntv_ctl; /* Saved/restored */
- cycle_t cntv_cval; /* Saved/restored */
+ u32 cnt_ctl;
+ cycle_t cnt_cval;
+
+ /* Timer IRQ */
+ struct kvm_irq_level irq;

- /*
- * Anything that is not used directly from assembly code goes
- * here.
- */
+ /* Active IRQ state caching */
+ bool active_cleared_last;
+
+ /* Is the timer enabled */
+ bool enabled;
+};
+
+struct arch_timer_cpu {
+ struct arch_timer_context vtimer;

/* Background timer used when the guest is not running */
struct hrtimer timer;
@@ -46,15 +54,6 @@ struct arch_timer_cpu {

/* Background timer active */
bool armed;
-
- /* Timer IRQ */
- struct kvm_irq_level irq;
-
- /* Active IRQ state caching */
- bool active_cleared_last;
-
- /* Is the timer enabled */
- bool enabled;
};

int kvm_timer_hyp_init(void);
@@ -76,4 +75,5 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,

void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu);

+#define vcpu_vtimer(v) (&(v)->arch.timer_cpu.vtimer)
#endif
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 27a1f63..30a64df 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -36,7 +36,7 @@

void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu)
{
- vcpu->arch.timer_cpu.active_cleared_last = false;
+ vcpu_vtimer(vcpu)->active_cleared_last = false;
}

static cycle_t kvm_phys_timer_read(void)
@@ -104,7 +104,7 @@ static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu)
{
cycle_t cval, now;

- cval = vcpu->arch.timer_cpu.cntv_cval;
+ cval = vcpu_vtimer(vcpu)->cnt_cval;
now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;

if (now < cval) {
@@ -146,21 +146,21 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)

static bool kvm_timer_irq_can_fire(struct kvm_vcpu *vcpu)
{
- struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+ struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);

- return !(timer->cntv_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
- (timer->cntv_ctl & ARCH_TIMER_CTRL_ENABLE);
+ return !(vtimer->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
+ (vtimer->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
}

bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
{
- struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+ struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
cycle_t cval, now;

if (!kvm_timer_irq_can_fire(vcpu))
return false;

- cval = timer->cntv_cval;
+ cval = vtimer->cnt_cval;
now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;

return cval <= now;
@@ -169,17 +169,17 @@ bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level)
{
int ret;
- struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+ struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);

BUG_ON(!vgic_initialized(vcpu->kvm));

- timer->active_cleared_last = false;
- timer->irq.level = new_level;
- trace_kvm_timer_update_irq(vcpu->vcpu_id, timer->irq.irq,
- timer->irq.level);
+ vtimer->active_cleared_last = false;
+ vtimer->irq.level = new_level;
+ trace_kvm_timer_update_irq(vcpu->vcpu_id, vtimer->irq.irq,
+ vtimer->irq.level);
ret = kvm_vgic_inject_mapped_irq(vcpu->kvm, vcpu->vcpu_id,
- timer->irq.irq,
- timer->irq.level);
+ vtimer->irq.irq,
+ vtimer->irq.level);
WARN_ON(ret);
}

@@ -189,19 +189,19 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level)
*/
static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
{
- struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+ struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);

/*
* If userspace modified the timer registers via SET_ONE_REG before
- * the vgic was initialized, we mustn't set the timer->irq.level value
+ * the vgic was initialized, we mustn't set the vtimer->irq.level value
* because the guest would never see the interrupt. Instead wait
* until we call this function from kvm_timer_flush_hwstate.
*/
- if (!vgic_initialized(vcpu->kvm) || !timer->enabled)
+ if (!vgic_initialized(vcpu->kvm) || !vtimer->enabled)
return -ENODEV;

- if (kvm_timer_should_fire(vcpu) != timer->irq.level)
- kvm_timer_update_irq(vcpu, !timer->irq.level);
+ if (kvm_timer_should_fire(vcpu) != vtimer->irq.level)
+ kvm_timer_update_irq(vcpu, !vtimer->irq.level);

return 0;
}
@@ -251,7 +251,7 @@ void kvm_timer_unschedule(struct kvm_vcpu *vcpu)
*/
void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
{
- struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+ struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
bool phys_active;
int ret;

@@ -275,8 +275,8 @@ void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
* to ensure that hardware interrupts from the timer triggers a guest
* exit.
*/
- phys_active = timer->irq.level ||
- kvm_vgic_map_is_active(vcpu, timer->irq.irq);
+ phys_active = vtimer->irq.level ||
+ kvm_vgic_map_is_active(vcpu, vtimer->irq.irq);

/*
* We want to avoid hitting the (re)distributor as much as
@@ -298,7 +298,7 @@ void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
* - cached value is "active clear"
* - value to be programmed is "active clear"
*/
- if (timer->active_cleared_last && !phys_active)
+ if (vtimer->active_cleared_last && !phys_active)
return;

ret = irq_set_irqchip_state(host_vtimer_irq,
@@ -306,7 +306,7 @@ void kvm_timer_flush_hwstate(struct kvm_vcpu *vcpu)
phys_active);
WARN_ON(ret);

- timer->active_cleared_last = !phys_active;
+ vtimer->active_cleared_last = !phys_active;
}

/**
@@ -332,7 +332,7 @@ void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu)
int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
const struct kvm_irq_level *irq)
{
- struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+ struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);

/*
* The vcpu timer irq number cannot be determined in
@@ -340,7 +340,7 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
* kvm_vcpu_set_target(). To handle this, we determine
* vcpu timer irq number when the vcpu is reset.
*/
- timer->irq.irq = irq->irq;
+ vtimer->irq.irq = irq->irq;

/*
* The bits in CNTV_CTL are architecturally reset to UNKNOWN for ARMv8
@@ -348,7 +348,7 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
* resets the timer to be disabled and unmasked and is compliant with
* the ARMv7 architecture.
*/
- timer->cntv_ctl = 0;
+ vtimer->cnt_ctl = 0;
kvm_timer_update_state(vcpu);

return 0;
@@ -370,17 +370,17 @@ static void kvm_timer_init_interrupt(void *info)

int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, u64 regid, u64 value)
{
- struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+ struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);

switch (regid) {
case KVM_REG_ARM_TIMER_CTL:
- timer->cntv_ctl = value;
+ vtimer->cnt_ctl = value;
break;
case KVM_REG_ARM_TIMER_CNT:
vcpu->kvm->arch.timer.cntvoff = kvm_phys_timer_read() - value;
break;
case KVM_REG_ARM_TIMER_CVAL:
- timer->cntv_cval = value;
+ vtimer->cnt_cval = value;
break;
default:
return -1;
@@ -392,15 +392,15 @@ int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, u64 regid, u64 value)

u64 kvm_arm_timer_get_reg(struct kvm_vcpu *vcpu, u64 regid)
{
- struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+ struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);

switch (regid) {
case KVM_REG_ARM_TIMER_CTL:
- return timer->cntv_ctl;
+ return vtimer->cnt_ctl;
case KVM_REG_ARM_TIMER_CNT:
return kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
case KVM_REG_ARM_TIMER_CVAL:
- return timer->cntv_cval;
+ return vtimer->cnt_cval;
}
return (u64)-1;
}
@@ -459,20 +459,21 @@ int kvm_timer_hyp_init(void)
void kvm_timer_vcpu_terminate(struct kvm_vcpu *vcpu)
{
struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+ struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);

timer_disarm(timer);
- kvm_vgic_unmap_phys_irq(vcpu, timer->irq.irq);
+ kvm_vgic_unmap_phys_irq(vcpu, vtimer->irq.irq);
}

int kvm_timer_enable(struct kvm_vcpu *vcpu)
{
- struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+ struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
struct irq_desc *desc;
struct irq_data *data;
int phys_irq;
int ret;

- if (timer->enabled)
+ if (vtimer->enabled)
return 0;

/*
@@ -494,7 +495,7 @@ int kvm_timer_enable(struct kvm_vcpu *vcpu)
* Tell the VGIC that the virtual interrupt is tied to a
* physical interrupt. We do that once per VCPU.
*/
- ret = kvm_vgic_map_phys_irq(vcpu, timer->irq.irq, phys_irq);
+ ret = kvm_vgic_map_phys_irq(vcpu, vtimer->irq.irq, phys_irq);
if (ret)
return ret;

@@ -508,7 +509,7 @@ int kvm_timer_enable(struct kvm_vcpu *vcpu)
* the arch timers are enabled.
*/
if (timecounter)
- timer->enabled = 1;
+ vtimer->enabled = 1;

return 0;
}
diff --git a/virt/kvm/arm/hyp/timer-sr.c b/virt/kvm/arm/hyp/timer-sr.c
index 798866a..4bbd36c 100644
--- a/virt/kvm/arm/hyp/timer-sr.c
+++ b/virt/kvm/arm/hyp/timer-sr.c
@@ -24,12 +24,12 @@
/* vcpu is already in the HYP VA space */
void __hyp_text __timer_save_state(struct kvm_vcpu *vcpu)
{
- struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+ struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
u64 val;

- if (timer->enabled) {
- timer->cntv_ctl = read_sysreg_el0(cntv_ctl);
- timer->cntv_cval = read_sysreg_el0(cntv_cval);
+ if (vtimer->enabled) {
+ vtimer->cnt_ctl = read_sysreg_el0(cntv_ctl);
+ vtimer->cnt_cval = read_sysreg_el0(cntv_cval);
}

/* Disable the virtual timer */
@@ -47,7 +47,7 @@ void __hyp_text __timer_save_state(struct kvm_vcpu *vcpu)
void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
{
struct kvm *kvm = kern_hyp_va(vcpu->kvm);
- struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+ struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
u64 val;

/*
@@ -59,10 +59,10 @@ void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
val |= CNTHCTL_EL1PCTEN;
write_sysreg(val, cnthctl_el2);

- if (timer->enabled) {
+ if (vtimer->enabled) {
write_sysreg(kvm->arch.timer.cntvoff, cntvoff_el2);
- write_sysreg_el0(timer->cntv_cval, cntv_cval);
+ write_sysreg_el0(vtimer->cnt_cval, cntv_cval);
isb();
- write_sysreg_el0(timer->cntv_ctl, cntv_ctl);
+ write_sysreg_el0(vtimer->cnt_ctl, cntv_ctl);
}
}
--
1.9.1


2016-12-26 17:49:20

by Jintack Lim

Subject: [RFC 2/8] KVM: arm/arm64: Decouple kvm timer functions from virtual timer

Now that we have a separate structure for the timer context, generalize
the timer functions so that they can work on any timer context, not
just the virtual timer's. This does not change the virtual timer
functionality.

Signed-off-by: Jintack Lim <[email protected]>
---
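Note: the kvm_timer_cntvoff() helper below captures the only per-timer
difference the generic code needs: architecturally, CNTVCT = CNTPCT -
CNTVOFF, so the virtual timer's compare value is measured against an
offset counter, while the physical timer compares against CNTPCT
directly and its offset is simply 0.
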
arch/arm/kvm/arm.c | 2 +-
include/kvm/arm_arch_timer.h | 3 +-
virt/kvm/arm/arch_timer.c | 65 +++++++++++++++++++++++++-------------------
3 files changed, 40 insertions(+), 30 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 19b5f5c..37d1623 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -295,7 +295,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)

int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
{
- return kvm_timer_should_fire(vcpu);
+ return kvm_timer_should_fire(vcpu, vcpu_vtimer(vcpu));
}

void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index 7dabe56..cf84145 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -69,7 +69,8 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
u64 kvm_arm_timer_get_reg(struct kvm_vcpu *, u64 regid);
int kvm_arm_timer_set_reg(struct kvm_vcpu *, u64 regid, u64 value);

-bool kvm_timer_should_fire(struct kvm_vcpu *vcpu);
+bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
+ struct arch_timer_context *timer_ctx);
void kvm_timer_schedule(struct kvm_vcpu *vcpu);
void kvm_timer_unschedule(struct kvm_vcpu *vcpu);

diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 30a64df..3bd6063 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -91,7 +91,7 @@ static void kvm_timer_inject_irq_work(struct work_struct *work)
vcpu = container_of(work, struct kvm_vcpu, arch.timer_cpu.expired);
vcpu->arch.timer_cpu.armed = false;

- WARN_ON(!kvm_timer_should_fire(vcpu));
+ WARN_ON(!kvm_timer_should_fire(vcpu, vcpu_vtimer(vcpu)));

/*
* If the vcpu is blocked we want to wake it up so that it will see
@@ -100,12 +100,22 @@ static void kvm_timer_inject_irq_work(struct work_struct *work)
kvm_vcpu_kick(vcpu);
}

-static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu)
+static u64 kvm_timer_cntvoff(struct kvm_vcpu *vcpu,
+ struct arch_timer_context *timer_ctx)
+{
+ if (timer_ctx == vcpu_vtimer(vcpu))
+ return vcpu->kvm->arch.timer.cntvoff;
+
+ return 0;
+}
+
+static u64 kvm_timer_compute_delta(struct kvm_vcpu *vcpu,
+ struct arch_timer_context *timer_ctx)
{
cycle_t cval, now;

- cval = vcpu_vtimer(vcpu)->cnt_cval;
- now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
+ cval = timer_ctx->cnt_cval;
+ now = kvm_phys_timer_read() - kvm_timer_cntvoff(vcpu, timer_ctx);

if (now < cval) {
u64 ns;
@@ -134,7 +144,7 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
* PoV (NTP on the host may have forced it to expire
* early). If we should have slept longer, restart it.
*/
- ns = kvm_timer_compute_delta(vcpu);
+ ns = kvm_timer_compute_delta(vcpu, vcpu_vtimer(vcpu));
if (unlikely(ns)) {
hrtimer_forward_now(hrt, ns_to_ktime(ns));
return HRTIMER_RESTART;
@@ -144,42 +154,40 @@ static enum hrtimer_restart kvm_timer_expire(struct hrtimer *hrt)
return HRTIMER_NORESTART;
}

-static bool kvm_timer_irq_can_fire(struct kvm_vcpu *vcpu)
+static bool kvm_timer_irq_can_fire(struct arch_timer_context *timer_ctx)
{
- struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
-
- return !(vtimer->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
- (vtimer->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
+ return !(timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_IT_MASK) &&
+ (timer_ctx->cnt_ctl & ARCH_TIMER_CTRL_ENABLE);
}

-bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
+bool kvm_timer_should_fire(struct kvm_vcpu *vcpu,
+ struct arch_timer_context *timer_ctx)
{
- struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
cycle_t cval, now;

- if (!kvm_timer_irq_can_fire(vcpu))
+ if (!kvm_timer_irq_can_fire(timer_ctx))
return false;

- cval = vtimer->cnt_cval;
- now = kvm_phys_timer_read() - vcpu->kvm->arch.timer.cntvoff;
+ cval = timer_ctx->cnt_cval;
+ now = kvm_phys_timer_read() - kvm_timer_cntvoff(vcpu, timer_ctx);

return cval <= now;
}

-static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level)
+static void kvm_timer_update_mapped_irq(struct kvm_vcpu *vcpu, bool new_level,
+ struct arch_timer_context *timer_ctx)
{
int ret;
- struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);

BUG_ON(!vgic_initialized(vcpu->kvm));

- vtimer->active_cleared_last = false;
- vtimer->irq.level = new_level;
- trace_kvm_timer_update_irq(vcpu->vcpu_id, vtimer->irq.irq,
- vtimer->irq.level);
+ timer_ctx->active_cleared_last = false;
+ timer_ctx->irq.level = new_level;
+ trace_kvm_timer_update_irq(vcpu->vcpu_id, timer_ctx->irq.irq,
+ timer_ctx->irq.level);
ret = kvm_vgic_inject_mapped_irq(vcpu->kvm, vcpu->vcpu_id,
- vtimer->irq.irq,
- vtimer->irq.level);
+ timer_ctx->irq.irq,
+ timer_ctx->irq.level);
WARN_ON(ret);
}

@@ -200,8 +208,8 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
if (!vgic_initialized(vcpu->kvm) || !vtimer->enabled)
return -ENODEV;

- if (kvm_timer_should_fire(vcpu) != vtimer->irq.level)
- kvm_timer_update_irq(vcpu, !vtimer->irq.level);
+ if (kvm_timer_should_fire(vcpu, vtimer) != vtimer->irq.level)
+ kvm_timer_update_mapped_irq(vcpu, !vtimer->irq.level, vtimer);

return 0;
}
@@ -214,6 +222,7 @@ static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
void kvm_timer_schedule(struct kvm_vcpu *vcpu)
{
struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+ struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);

BUG_ON(timer_is_armed(timer));

@@ -222,18 +231,18 @@ void kvm_timer_schedule(struct kvm_vcpu *vcpu)
* already expired, because kvm_vcpu_block will return before putting
* the thread to sleep.
*/
- if (kvm_timer_should_fire(vcpu))
+ if (kvm_timer_should_fire(vcpu, vtimer))
return;

/*
* If the timer is not capable of raising interrupts (disabled or
* masked), then there's no more work for us to do.
*/
- if (!kvm_timer_irq_can_fire(vcpu))
+ if (!kvm_timer_irq_can_fire(vtimer))
return;

/* The timer has not yet expired, schedule a background timer */
- timer_arm(timer, kvm_timer_compute_delta(vcpu));
+ timer_arm(timer, kvm_timer_compute_delta(vcpu, vtimer));
}

void kvm_timer_unschedule(struct kvm_vcpu *vcpu)
--
1.9.1