2019-03-19 08:32:01

by Amit Kachhap

Subject: [PATCH v7 0/10] Add ARMv8.3 pointer authentication for kvm guest

Hi,

This patch series adds pointer authentication support for KVM guests and
is based on top of Linux 5.1-rc1. The basic patches in this series were
originally posted by Mark Rutland earlier [1,2], and those postings
contain some history of this work.

Extension Overview:
=============================================

The ARMv8.3 pointer authentication extension adds functionality to detect
modification of pointer values, mitigating certain classes of attack such as
stack smashing, and making return-oriented programming attacks harder.

The extension introduces the concept of a pointer authentication code (PAC),
which is stored in some upper bits of pointers. Each PAC is derived from the
original pointer, another 64-bit value (e.g. the stack pointer), and a secret
128-bit key.

New instructions are added which can be used to:

* Insert a PAC into a pointer
* Strip a PAC from a pointer
* Authenticate and strip a PAC from a pointer
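
As a rough illustration of these operations, consider the following
conceptual C sketch (illustration only: pac_cipher() stands in for the
architected QARMA-based function, and the PAC field layout is simplified):

    #include <stdint.h>

    #define PAC_MASK 0x007f000000000000ULL    /* example PAC field */

    struct pac_key { uint64_t lo, hi; };      /* 128-bit key */

    /* placeholder mixing function, not the real cipher */
    static uint64_t pac_cipher(uint64_t ptr, uint64_t mod, struct pac_key k)
    {
        return (ptr ^ mod ^ k.lo ^ k.hi) * 0x9e3779b97f4a7c15ULL;
    }

    /* insert a PAC into a pointer (cf. PACIA and friends) */
    static uint64_t pac_insert(uint64_t ptr, uint64_t mod, struct pac_key k)
    {
        uint64_t plain = ptr & ~PAC_MASK;

        return plain | (pac_cipher(plain, mod, k) & PAC_MASK);
    }

    /* authenticate and strip (cf. AUTIA); corrupt the pointer on failure */
    static uint64_t pac_auth(uint64_t ptr, uint64_t mod, struct pac_key k)
    {
        uint64_t plain = ptr & ~PAC_MASK;

        if ((ptr & PAC_MASK) != (pac_cipher(plain, mod, k) & PAC_MASK))
            return plain | (1ULL << 63);      /* faults when used */
        return plain;
    }

Stripping (cf. XPACI) simply clears the PAC field without any check.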

A detailed description of ARMv8.3 pointer authentication support in the
userspace/kernel can be found in Kristina's generic pointer authentication
patch series [3].

KVM guest work:
==============================================

If pointer authentication is enabled for a KVM guest then the new PAC
instructions do not trap to EL2. If it is not enabled, these instructions
are either ignored (those in the HINT space) or trapped to EL2 as illegal
instructions. Since each KVM guest vcpu runs as a host thread, it has its
own set of keys initialized for use by PAC. These keys are exchanged
whenever a world switch happens between host and guest.
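
Conceptually, the world switch does something like the following
simplified C sketch (illustrative names only; the actual series performs
this in assembly, gated on the guest's HCR_EL2.API/APK bits, as patch 7
shows):

    #include <stdint.h>

    #define HCR_API (1ULL << 41)
    #define HCR_APK (1ULL << 40)

    /* five 128-bit keys: APIA, APIB, APDA, APDB, APGA */
    struct ptrauth_keys { uint64_t key[5][2]; };

    struct cpu_ctx {
        uint64_t hcr_el2;
        struct ptrauth_keys keys;
    };

    /* stand-ins for mrs_s/msr_s accesses to the AP*KEY*_EL1 registers */
    void read_ap_keys(struct ptrauth_keys *k);
    void write_ap_keys(const struct ptrauth_keys *k);

    static void switch_to_guest(struct cpu_ctx *guest)
    {
        /* keys are only switched when the guest may use ptrauth */
        if (guest->hcr_el2 & (HCR_API | HCR_APK))
            write_ap_keys(&guest->keys);
    }

    static void switch_to_host(struct cpu_ctx *guest, struct cpu_ctx *host)
    {
        if (guest->hcr_el2 & (HCR_API | HCR_APK)) {
            read_ap_keys(&guest->keys);   /* save guest keys */
            write_ap_keys(&host->keys);   /* restore host keys */
        }
    }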

The current v7 patch series addresses review comments and suggestions from
James Morse, Mark Rutland and Dave Martin.

The first two patches are cherry-picked from Dave Martin's v5 series [9],
which adds SVE support for KVM guests.

Changes since v6 [8]: Major changes are listed below.

* Pointer authentication key switch is now done entirely in assembly.
* An isb instruction is added after the keys are switched back to host.
* Use __hyp_this_cpu_ptr for both VHE and nVHE modes.
* Two separate flags for address and generic authentication.
* kvm_arm_vcpu_ptrauth_allowed renamed to has_vcpu_ptrauth.
* kvm_arm_vcpu_ptrauth_reset renamed to kvm_arm_vcpu_ptrauth_setup_lazy.
* Saving of the host key registers is now done in the ptrauth instruction trap.
* A fix to apply kern_hyp_va to get the correct host_ctxt pointer in nVHE mode.
* Patches re-structured to better reflect the ABI change.

Changes since v5 [7]: Major changes are listed below.

* Split the hcr_el2 and mdcr_el2 save/restore into two patches.
* Reverted back to the save/restore of sys-reg keys as done in v4 [5].
James Morse had suggested keeping the ptrauth utilities in a single place
in the arm core and using them from KVM. However, that change deviates
from the existing sys-reg implementations and is not scalable.
* Invoked the key switch C functions from the __guest_enter/__guest_exit
assembly.
* Host key save is now done inside vcpu_load.
* Reverted the masking of the cpufeature ID registers for ptrauth when it
is disabled from userspace.
* Reset of the ptrauth key registers is not done conditionally.
* Code and documentation cleanup.

Changes since v4 [6]: Several suggestions from James Morse:
* Moved host registers to be saved/restored inside struct kvm_cpu_context.
* Similar to hcr_el2, save/restore the mdcr_el2 register also.
* Added save routines for the ptrauth keys in the generic arm core and
used them during the KVM context switch.
* Defined a GCC attribute __no_ptrauth which suppresses the generation of
ptrauth instructions in a function. This is taken from Kristina's
earlier kernel pointer authentication support patches [4].
* Dropped the patch to mask the cpufeature when not enabled from userspace;
now only the key registers are masked from the register list.

Changes since v3 [5]:
* Use pointer authentication only when VHE is present, as ARMv8.3 implies
the ARMv8.1 features to be present.
* Added back the lazy context handling of ptrauth instructions from the v2
version.
* Added more details in the Documentation.

Changes since v2 [1,2]:
* Allow host and guest to have different HCR_EL2 settings, not just the
constant values HCR_HOST_VHE_FLAGS or HCR_HOST_NVHE_FLAGS.
* Optimised the reading of HCR_EL2 in the host/guest switch by fetching it
once during KVM initialisation and using it later.
* Context switch the pointer authentication keys when switching between
guest and host. Pointer authentication was enabled lazily earlier [2];
this is removed now to keep things simple, but it can be revisited later
if there is a significant performance issue.
* Added a userspace option to choose pointer authentication.
* Based on the userspace option, the ptrauth cpufeature will be visible.
* Based on the userspace option, the ptrauth key registers will be
accessible.
* A small document is added on how to enable pointer authentication from
the userspace KVM API.

Looking for feedback and comments.

Thanks,
Amit

[1]: https://lore.kernel.org/lkml/[email protected]/
[2]: https://lore.kernel.org/lkml/[email protected]/
[3]: https://lkml.org/lkml/2018/12/7/666
[4]: https://lore.kernel.org/lkml/[email protected]/
[5]: https://lkml.org/lkml/2018/10/17/594
[6]: https://lkml.org/lkml/2018/12/18/80
[7]: https://lkml.org/lkml/2019/1/28/49
[8]: https://lkml.org/lkml/2019/2/19/190
[9]: https://patchwork.kernel.org/cover/10818797/

Linux (5.1-rc1 based):

Amit Daniel Kachhap (5):
KVM: arm64: Move hyp_symbol_addr to fix dependency
KVM: arm/arm64: preserve host MDCR_EL2 value
KVM: arm64: Add vcpu feature flags to control ptrauth accessibility
KVM: arm64: Add capability to advertise pointer authentication for
guest
KVM: arm64: docs: document KVM support of pointer authentication

Dave Martin (2):
KVM: arm64: Propagate vcpu into read_id_reg()
KVM: arm64: Support runtime sysreg visibility filtering

Mark Rutland (2):
KVM: arm/arm64: preserve host HCR_EL2 value
KVM: arm/arm64: context-switch ptrauth registers

Documentation/arm64/pointer-authentication.txt | 15 +++-
Documentation/virtual/kvm/api.txt | 6 ++
arch/arm/include/asm/kvm_host.h | 3 +-
arch/arm64/include/asm/kvm_asm.h | 22 ++++++
arch/arm64/include/asm/kvm_emulate.h | 24 +++---
arch/arm64/include/asm/kvm_host.h | 46 ++++++++++--
arch/arm64/include/asm/kvm_hyp.h | 2 +-
arch/arm64/include/asm/kvm_mmu.h | 20 -----
arch/arm64/include/asm/kvm_ptrauth_asm.h | 100 +++++++++++++++++++++++++
arch/arm64/include/uapi/asm/kvm.h | 2 +
arch/arm64/kernel/asm-offsets.c | 6 ++
arch/arm64/kvm/debug.c | 28 ++-----
arch/arm64/kvm/guest.c | 16 +++-
arch/arm64/kvm/handle_exit.c | 24 ++++--
arch/arm64/kvm/hyp/entry.S | 7 ++
arch/arm64/kvm/hyp/switch.c | 40 +++++-----
arch/arm64/kvm/hyp/sysreg-sr.c | 22 +++++-
arch/arm64/kvm/hyp/tlb.c | 6 +-
arch/arm64/kvm/reset.c | 11 +++
arch/arm64/kvm/sys_regs.c | 93 +++++++++++++++++++----
arch/arm64/kvm/sys_regs.h | 13 ++++
include/uapi/linux/kvm.h | 1 +
virt/kvm/arm/arm.c | 4 +-
23 files changed, 395 insertions(+), 116 deletions(-)
create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h

kvmtool:

Repo: git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git

Amit Daniel Kachhap (1):
KVM: arm/arm64: Add a vcpu feature for pointer authentication

arm/aarch32/include/kvm/kvm-cpu-arch.h | 1 +
arm/aarch64/include/asm/kvm.h | 2 ++
arm/aarch64/include/kvm/kvm-config-arch.h | 4 +++-
arm/aarch64/include/kvm/kvm-cpu-arch.h | 2 ++
arm/include/arm-common/kvm-config-arch.h | 1 +
arm/kvm-cpu.c | 6 ++++++
include/linux/kvm.h | 1 +
7 files changed, 16 insertions(+), 1 deletion(-)

--
2.7.4



2019-03-19 08:32:10

by Amit Kachhap

Subject: [PATCH v7 1/10] KVM: arm64: Propagate vcpu into read_id_reg()

From: Dave Martin <[email protected]>

Architecture features that are conditionally visible to the guest
will require run-time checks in the ID register accessor functions.
In particular, read_id_reg() will need to perform checks in order
to generate the correct emulated value for certain ID register
fields such as ID_AA64PFR0_EL1.SVE for example.

This patch propagates vcpu into read_id_reg() so that future
patches can add run-time checks on the guest configuration here.

For now, there is no functional change.

Signed-off-by: Dave Martin <[email protected]>
Reviewed-by: Alex Bennee <[email protected]>
---
arch/arm64/kvm/sys_regs.c | 23 +++++++++++++----------
1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 539feec..a5d14b5 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1044,7 +1044,8 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
}

/* Read a sanitised cpufeature ID register by sys_reg_desc */
-static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
+static u64 read_id_reg(const struct kvm_vcpu *vcpu,
+ struct sys_reg_desc const *r, bool raz)
{
u32 id = sys_reg((u32)r->Op0, (u32)r->Op1,
(u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
@@ -1078,7 +1079,7 @@ static bool __access_id_reg(struct kvm_vcpu *vcpu,
if (p->is_write)
return write_to_read_only(vcpu, p, r);

- p->regval = read_id_reg(r, raz);
+ p->regval = read_id_reg(vcpu, r, raz);
return true;
}

@@ -1107,16 +1108,18 @@ static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
* are stored, and for set_id_reg() we don't allow the effective value
* to be changed.
*/
-static int __get_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
+static int __get_id_reg(const struct kvm_vcpu *vcpu,
+ const struct sys_reg_desc *rd, void __user *uaddr,
bool raz)
{
const u64 id = sys_reg_to_index(rd);
- const u64 val = read_id_reg(rd, raz);
+ const u64 val = read_id_reg(vcpu, rd, raz);

return reg_to_user(uaddr, &val, id);
}

-static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
+static int __set_id_reg(const struct kvm_vcpu *vcpu,
+ const struct sys_reg_desc *rd, void __user *uaddr,
bool raz)
{
const u64 id = sys_reg_to_index(rd);
@@ -1128,7 +1131,7 @@ static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
return err;

/* This is what we mean by invariant: you can't change it. */
- if (val != read_id_reg(rd, raz))
+ if (val != read_id_reg(vcpu, rd, raz))
return -EINVAL;

return 0;
@@ -1137,25 +1140,25 @@ static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
const struct kvm_one_reg *reg, void __user *uaddr)
{
- return __get_id_reg(rd, uaddr, false);
+ return __get_id_reg(vcpu, rd, uaddr, false);
}

static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
const struct kvm_one_reg *reg, void __user *uaddr)
{
- return __set_id_reg(rd, uaddr, false);
+ return __set_id_reg(vcpu, rd, uaddr, false);
}

static int get_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
const struct kvm_one_reg *reg, void __user *uaddr)
{
- return __get_id_reg(rd, uaddr, true);
+ return __get_id_reg(vcpu, rd, uaddr, true);
}

static int set_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
const struct kvm_one_reg *reg, void __user *uaddr)
{
- return __set_id_reg(rd, uaddr, true);
+ return __set_id_reg(vcpu, rd, uaddr, true);
}

static bool access_ctr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
--
2.7.4


2019-03-19 08:32:19

by Amit Kachhap

Subject: [PATCH v7 2/10] KVM: arm64: Support runtime sysreg visibility filtering

From: Dave Martin <[email protected]>

Some optional features of the Arm architecture add new system
registers that are not present in the base architecture.

Where these features are optional for the guest, the visibility of
these registers may need to depend on some runtime configuration,
such as a flag passed to KVM_ARM_VCPU_INIT.

For example, ZCR_EL1 and ID_AA64ZFR0_EL1 need to be hidden if SVE
is not enabled for the guest, even though these registers may be
present in the hardware and visible to the host at EL2.

Adding special-case checks all over the place for individual
registers is going to get messy as the number of conditionally-
visible registers grows.

In order to help solve this problem, this patch adds a new sysreg
method restrictions() that can be used to hook in any needed
runtime visibility checks. This method can currently return
REG_NO_USER to inhibit enumeration and ioctl access to the register
for userspace, and REG_NO_GUEST to inhibit runtime access by the
guest using MSR/MRS.

This allows a conditionally modified view of individual system
registers, such as the CPU ID registers, in addition to completely
hiding registers where appropriate.
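
For example, a register gated on a vcpu feature could wire up the hook
roughly as follows (a sketch with hypothetical names; a real user for
the ptrauth key registers is added later in this series):

    /* sketch: vcpu_has_feature_x() and SYS_FEATURE_X_EL1 are made up */
    static unsigned int feature_x_restrictions(const struct kvm_vcpu *vcpu,
                                               const struct sys_reg_desc *rd)
    {
        return vcpu_has_feature_x(vcpu) ? 0 : REG_NO_USER | REG_NO_GUEST;
    }

    /* ... and in the sys_reg_descs[] table: */
    { SYS_DESC(SYS_FEATURE_X_EL1), access_feature_x, reset_unknown,
      FEATURE_X_EL1, .restrictions = feature_x_restrictions },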

Signed-off-by: Dave Martin <[email protected]>
---
arch/arm64/kvm/sys_regs.c | 24 +++++++++++++++++++++---
arch/arm64/kvm/sys_regs.h | 13 +++++++++++++
2 files changed, 34 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index a5d14b5..75942f6 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1927,6 +1927,12 @@ static void perform_access(struct kvm_vcpu *vcpu,
{
trace_kvm_sys_access(*vcpu_pc(vcpu), params, r);

+ /* Check for regs disabled by runtime config */
+ if (restrictions(vcpu, r) & REG_NO_GUEST) {
+ kvm_inject_undefined(vcpu);
+ return;
+ }
+
/*
* Not having an accessor means that we have configured a trap
* that we don't know how to handle. This certainly qualifies
@@ -2438,6 +2444,10 @@ int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
if (!r)
return get_invariant_sys_reg(reg->id, uaddr);

+ /* Check for regs disabled by runtime config */
+ if (restrictions(vcpu, r) & REG_NO_USER)
+ return -ENOENT;
+
if (r->get_user)
return (r->get_user)(vcpu, r, reg, uaddr);

@@ -2459,6 +2469,10 @@ int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
if (!r)
return set_invariant_sys_reg(reg->id, uaddr);

+ /* Check for regs disabled by runtime config */
+ if (restrictions(vcpu, r) & REG_NO_USER)
+ return -ENOENT;
+
if (r->set_user)
return (r->set_user)(vcpu, r, reg, uaddr);

@@ -2515,7 +2529,8 @@ static bool copy_reg_to_user(const struct sys_reg_desc *reg, u64 __user **uind)
return true;
}

-static int walk_one_sys_reg(const struct sys_reg_desc *rd,
+static int walk_one_sys_reg(const struct kvm_vcpu *vcpu,
+ const struct sys_reg_desc *rd,
u64 __user **uind,
unsigned int *total)
{
@@ -2526,6 +2541,9 @@ static int walk_one_sys_reg(const struct sys_reg_desc *rd,
if (!(rd->reg || rd->get_user))
return 0;

+ if (restrictions(vcpu, rd) & REG_NO_USER)
+ return 0;
+
if (!copy_reg_to_user(rd, uind))
return -EFAULT;

@@ -2554,9 +2572,9 @@ static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
int cmp = cmp_sys_reg(i1, i2);
/* target-specific overrides generic entry. */
if (cmp <= 0)
- err = walk_one_sys_reg(i1, &uind, &total);
+ err = walk_one_sys_reg(vcpu, i1, &uind, &total);
else
- err = walk_one_sys_reg(i2, &uind, &total);
+ err = walk_one_sys_reg(vcpu, i2, &uind, &total);

if (err)
return err;
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 3b1bc7f..b9dd4a9 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -64,8 +64,15 @@ struct sys_reg_desc {
const struct kvm_one_reg *reg, void __user *uaddr);
int (*set_user)(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
const struct kvm_one_reg *reg, void __user *uaddr);
+
+ /* Return mask of REG_* runtime restriction flags */
+ unsigned int (*restrictions)(const struct kvm_vcpu *vcpu,
+ const struct sys_reg_desc *rd);
};

+#define REG_NO_USER (1 << 0) /* hidden from userspace ioctl interface */
+#define REG_NO_GUEST (1 << 1) /* hidden from guest */
+
static inline void print_sys_reg_instr(const struct sys_reg_params *p)
{
/* Look, we even formatted it for you to paste into the table! */
@@ -102,6 +109,12 @@ static inline void reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r
__vcpu_sys_reg(vcpu, r->reg) = r->val;
}

+static inline unsigned int restrictions(const struct kvm_vcpu *vcpu,
+ const struct sys_reg_desc *r)
+{
+ return unlikely(r->restrictions) ? r->restrictions(vcpu, r) : 0;
+}
+
static inline int cmp_sys_reg(const struct sys_reg_desc *i1,
const struct sys_reg_desc *i2)
{
--
2.7.4


2019-03-19 08:32:28

by Amit Kachhap

Subject: [PATCH v7 3/10] KVM: arm64: Move hyp_symbol_addr to fix dependency

Currently hyp_symbol_addr is placed in kvm_mmu.h. It is mostly used
by __hyp_this_cpu_ptr in kvm_asm.h, but kvm_asm.h cannot include
kvm_mmu.h directly because kvm_mmu.h uses kvm_ksym_ref, which is
defined inside kvm_asm.h. Hence, hyp_symbol_addr is moved into
kvm_asm.h to break this circular dependency.

Also, hyp_symbol_addr is the corresponding counterpart of kvm_ksym_ref,
so the two are ideally placed together inside kvm_asm.h.
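
For reference, a typical use takes the PC-relative address of a static
kernel symbol from hyp code, along the lines of this sketch (hyp_flags
and read_hyp_flags are made-up names):

    static unsigned long hyp_flags;

    static unsigned long __hyp_text read_hyp_flags(void)
    {
        /* valid at EL2, where an absolute kernel VA would be wrong */
        return *hyp_symbol_addr(hyp_flags);
    }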

Suggested-by: James Morse <[email protected]>
Signed-off-by: Amit Daniel Kachhap <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Christoffer Dall <[email protected]>
Cc: [email protected]
---
arch/arm64/include/asm/kvm_asm.h | 20 ++++++++++++++++++++
arch/arm64/include/asm/kvm_mmu.h | 20 --------------------
2 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index f5b79e9..57a07e8 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -80,6 +80,26 @@ extern void __vgic_v3_init_lrs(void);

extern u32 __kvm_get_mdcr_el2(void);

+/*
+ * Obtain the PC-relative address of a kernel symbol
+ * s: symbol
+ *
+ * The goal of this macro is to return a symbol's address based on a
+ * PC-relative computation, as opposed to a loading the VA from a
+ * constant pool or something similar. This works well for HYP, as an
+ * absolute VA is guaranteed to be wrong. Only use this if trying to
+ * obtain the address of a symbol (i.e. not something you obtained by
+ * following a pointer).
+ */
+#define hyp_symbol_addr(s) \
+ ({ \
+ typeof(s) *addr; \
+ asm("adrp %0, %1\n" \
+ "add %0, %0, :lo12:%1\n" \
+ : "=r" (addr) : "S" (&s)); \
+ addr; \
+ })
+
/* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
#define __hyp_this_cpu_ptr(sym) \
({ \
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index b0742a1..3dea6af 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -118,26 +118,6 @@ static inline unsigned long __kern_hyp_va(unsigned long v)
#define kern_hyp_va(v) ((typeof(v))(__kern_hyp_va((unsigned long)(v))))

/*
- * Obtain the PC-relative address of a kernel symbol
- * s: symbol
- *
- * The goal of this macro is to return a symbol's address based on a
- * PC-relative computation, as opposed to a loading the VA from a
- * constant pool or something similar. This works well for HYP, as an
- * absolute VA is guaranteed to be wrong. Only use this if trying to
- * obtain the address of a symbol (i.e. not something you obtained by
- * following a pointer).
- */
-#define hyp_symbol_addr(s) \
- ({ \
- typeof(s) *addr; \
- asm("adrp %0, %1\n" \
- "add %0, %0, :lo12:%1\n" \
- : "=r" (addr) : "S" (&s)); \
- addr; \
- })
-
-/*
* We currently support using a VM-specified IPA size. For backward
* compatibility, the default IPA size is fixed to 40bits.
*/
--
2.7.4


2019-03-19 08:32:44

by Amit Kachhap

Subject: [PATCH v7 5/10] KVM: arm/arm64: preserve host MDCR_EL2 value

Save the host MDCR_EL2 value during KVM HYP initialisation and restore
it on every switch back from guest to host. There should not be any
change in functionality due to this.

The value of mdcr_el2 is now stored in struct kvm_cpu_context, as both
host and guest can now use this field in a common way.

Signed-off-by: Amit Daniel Kachhap <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Christoffer Dall <[email protected]>
Cc: [email protected]
---
arch/arm/include/asm/kvm_host.h | 1 -
arch/arm64/include/asm/kvm_host.h | 6 ++----
arch/arm64/include/asm/kvm_hyp.h | 2 +-
arch/arm64/kvm/debug.c | 28 ++++++----------------------
arch/arm64/kvm/hyp/switch.c | 18 +++++-------------
arch/arm64/kvm/hyp/sysreg-sr.c | 8 +++++++-
virt/kvm/arm/arm.c | 1 -
7 files changed, 21 insertions(+), 43 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 6d0aac4..a928565 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -343,7 +343,6 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}

-static inline void kvm_arm_init_debug(void) {}
static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {}
static inline void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) {}
static inline void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 3b09fd0..e3ccd7b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -211,6 +211,8 @@ struct kvm_cpu_context {

/* HYP host/guest configuration */
u64 hcr_el2;
+ u32 mdcr_el2;
+
struct kvm_vcpu *__hyp_running_vcpu;
};

@@ -226,9 +228,6 @@ struct vcpu_reset_state {
struct kvm_vcpu_arch {
struct kvm_cpu_context ctxt;

- /* HYP configuration */
- u32 mdcr_el2;
-
/* Exception Information */
struct kvm_vcpu_fault_info fault;

@@ -498,7 +497,6 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}

-void kvm_arm_init_debug(void);
void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
void kvm_arm_clear_debug(struct kvm_vcpu *vcpu);
void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 4da765f..7fcde8a 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -152,7 +152,7 @@ void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
bool __fpsimd_enabled(void);

void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
-void deactivate_traps_vhe_put(void);
+void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);

u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
void __noreturn __hyp_do_panic(unsigned long, ...);
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index fd917d6..99dc0a4 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -32,8 +32,6 @@
DBG_MDSCR_KDE | \
DBG_MDSCR_MDE)

-static DEFINE_PER_CPU(u32, mdcr_el2);
-
/**
* save/restore_guest_debug_regs
*
@@ -65,21 +63,6 @@ static void restore_guest_debug_regs(struct kvm_vcpu *vcpu)
}

/**
- * kvm_arm_init_debug - grab what we need for debug
- *
- * Currently the sole task of this function is to retrieve the initial
- * value of mdcr_el2 so we can preserve MDCR_EL2.HPMN which has
- * presumably been set-up by some knowledgeable bootcode.
- *
- * It is called once per-cpu during CPU hyp initialisation.
- */
-
-void kvm_arm_init_debug(void)
-{
- __this_cpu_write(mdcr_el2, kvm_call_hyp_ret(__kvm_get_mdcr_el2));
-}
-
-/**
* kvm_arm_reset_debug_ptr - reset the debug ptr to point to the vcpu state
*/

@@ -111,6 +94,7 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)

void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
{
+ kvm_cpu_context_t *host_cxt = this_cpu_ptr(&kvm_host_cpu_state);
bool trap_debug = !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY);
unsigned long mdscr;

@@ -120,8 +104,8 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
* This also clears MDCR_EL2_E2PB_MASK to disable guest access
* to the profiling buffer.
*/
- vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
- vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
+ vcpu->arch.ctxt.mdcr_el2 = host_cxt->mdcr_el2 & MDCR_EL2_HPMN_MASK;
+ vcpu->arch.ctxt.mdcr_el2 |= (MDCR_EL2_TPM |
MDCR_EL2_TPMS |
MDCR_EL2_TPMCR |
MDCR_EL2_TDRA |
@@ -130,7 +114,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
/* Is Guest debugging in effect? */
if (vcpu->guest_debug) {
/* Route all software debug exceptions to EL2 */
- vcpu->arch.mdcr_el2 |= MDCR_EL2_TDE;
+ vcpu->arch.ctxt.mdcr_el2 |= MDCR_EL2_TDE;

/* Save guest debug state */
save_guest_debug_regs(vcpu);
@@ -202,13 +186,13 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)

/* Trap debug register access */
if (trap_debug)
- vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
+ vcpu->arch.ctxt.mdcr_el2 |= MDCR_EL2_TDA;

/* If KDE or MDE are set, perform a full save/restore cycle. */
if (vcpu_read_sys_reg(vcpu, MDSCR_EL1) & (DBG_MDSCR_KDE | DBG_MDSCR_MDE))
vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;

- trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.mdcr_el2);
+ trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.ctxt.mdcr_el2);
trace_kvm_arm_set_dreg32("MDSCR_EL1", vcpu_read_sys_reg(vcpu, MDSCR_EL1));
}

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index f5cefa1..fe76e24 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -84,7 +84,7 @@ static void __hyp_text __activate_traps_common(struct kvm_vcpu *vcpu)
*/
write_sysreg(0, pmselr_el0);
write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
- write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
+ write_sysreg(vcpu->arch.ctxt.mdcr_el2, mdcr_el2);
}

static void __hyp_text __deactivate_traps_common(void)
@@ -161,15 +161,11 @@ NOKPROBE_SYMBOL(deactivate_traps_vhe);

static void __hyp_text __deactivate_traps_nvhe(struct kvm_cpu_context *host_ctxt)
{
- u64 mdcr_el2 = read_sysreg(mdcr_el2);
struct kvm_cpu_context *hyp_host_ctxt = kern_hyp_va(host_ctxt);

__deactivate_traps_common();

- mdcr_el2 &= MDCR_EL2_HPMN_MASK;
- mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
-
- write_sysreg(mdcr_el2, mdcr_el2);
+ write_sysreg(hyp_host_ctxt->mdcr_el2, mdcr_el2);
write_sysreg(hyp_host_ctxt->hcr_el2, hcr_el2);
write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
}
@@ -199,15 +195,11 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
__activate_traps_common(vcpu);
}

-void deactivate_traps_vhe_put(void)
+void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
{
- u64 mdcr_el2 = read_sysreg(mdcr_el2);
-
- mdcr_el2 &= MDCR_EL2_HPMN_MASK |
- MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
- MDCR_EL2_TPMS;
+ struct kvm_cpu_context *host_ctxt = vcpu->arch.host_cpu_context;

- write_sysreg(mdcr_el2, mdcr_el2);
+ write_sysreg(host_ctxt->mdcr_el2, mdcr_el2);

__deactivate_traps_common();
}
diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index 277f82b..6afd309 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -298,7 +298,7 @@ void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu)
if (!has_vhe())
return;

- deactivate_traps_vhe_put();
+ deactivate_traps_vhe_put(vcpu);

__sysreg_save_el1_state(guest_ctxt);
__sysreg_save_user_state(guest_ctxt);
@@ -333,4 +333,10 @@ void __hyp_text __kvm_populate_host_regs(void)

host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
host_ctxt->hcr_el2 = read_sysreg(hcr_el2);
+ /*
+ * Retrieve the initial value of mdcr_el2 so we can preserve
+ * MDCR_EL2.HPMN which has presumably been set-up by some
+ * knowledgeable bootcode.
+ */
+ host_ctxt->mdcr_el2 = read_sysreg(mdcr_el2);
}
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index e8c2ee6..58de0ca 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1322,7 +1322,6 @@ static void cpu_hyp_reinit(void)
else
cpu_init_hyp_mode(NULL);

- kvm_arm_init_debug();
cpu_init_host_ctxt();

if (vgic_present)
--
2.7.4


2019-03-19 08:32:49

by Amit Kachhap

Subject: [PATCH v7 6/10] KVM: arm64: Add vcpu feature flags to control ptrauth accessibility

Since pointer authentication will be enabled or disabled on a
per-vcpu basis, vcpu feature flags are added in order to know which
vcpus have it enabled from userspace.

These feature flags allow the KVM guest to handle pointer
authentication instructions, which are otherwise treated as undefined
when the flags are not set.

The helper macro added here checks the feature flags along with
other conditions, such as VHE mode being present and the system
supporting both address and generic authentication.
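
A hypothetical userspace caller would request the feature at vcpu
initialisation time roughly as follows (a sketch, not part of this
series; error handling is trimmed and vm_fd/vcpu_fd are assumed to be
already-open KVM fds):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static int vcpu_init_with_ptrauth(int vm_fd, int vcpu_fd)
    {
        struct kvm_vcpu_init init;

        /* let KVM fill in the preferred target for this host */
        if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init) < 0)
            return -1;

        init.features[0] |= 1U << KVM_ARM_VCPU_PTRAUTH_ADDRESS;
        init.features[0] |= 1U << KVM_ARM_VCPU_PTRAUTH_GENERIC;

        return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
    }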

Signed-off-by: Amit Daniel Kachhap <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Christoffer Dall <[email protected]>
Cc: [email protected]
---
arch/arm64/include/asm/kvm_host.h | 8 +++++++-
arch/arm64/include/uapi/asm/kvm.h | 2 ++
2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e3ccd7b..9dd2918 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -45,7 +45,7 @@

#define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS

-#define KVM_VCPU_MAX_FEATURES 4
+#define KVM_VCPU_MAX_FEATURES 6

#define KVM_REQ_SLEEP \
KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
@@ -491,6 +491,12 @@ static inline bool kvm_arch_requires_vhe(void)
return false;
}

+#define vcpu_has_ptrauth(vcpu) (has_vhe() && \
+ system_supports_address_auth() && \
+ system_supports_generic_auth() && \
+ test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) && \
+ test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features))
+
static inline void kvm_arch_hardware_unsetup(void) {}
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 97c3478..8806f71 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -102,6 +102,8 @@ struct kvm_regs {
#define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */
#define KVM_ARM_VCPU_PSCI_0_2 2 /* CPU uses PSCI v0.2 */
#define KVM_ARM_VCPU_PMU_V3 3 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_PTRAUTH_ADDRESS 4 /* VCPU uses address authentication */
+#define KVM_ARM_VCPU_PTRAUTH_GENERIC 5 /* VCPU uses generic authentication */

struct kvm_vcpu_init {
__u32 target;
--
2.7.4


2019-03-19 08:33:22

by Amit Kachhap

Subject: [PATCH v7 9/10] KVM: arm64: docs: document KVM support of pointer authentication

This patch adds sections on the KVM API extension for pointer
authentication. A brief description of the usage of pointer
authentication in KVM guests is added to the arm64 documentation.

Signed-off-by: Amit Daniel Kachhap <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Christoffer Dall <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: [email protected]
---
Documentation/arm64/pointer-authentication.txt | 15 +++++++++++----
Documentation/virtual/kvm/api.txt | 6 ++++++
2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
index 5baca42..4b769e6 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -87,7 +87,14 @@ used to get and set the keys for a thread.
Virtualization
--------------

-Pointer authentication is not currently supported in KVM guests. KVM
-will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
-the feature will result in an UNDEFINED exception being injected into
-the guest.
+Pointer authentication is enabled in KVM guest when each virtual cpu is
+initialised by passing flags KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] and
+requesting this feature to be enabled. Without this flag, pointer
+authentication is not enabled in KVM guests and attempted use of the
+feature will result in an UNDEFINED exception being injected into the
+guest.
+
+Additionally, when these vcpu feature flags are not set then KVM will
+filter out the Pointer Authentication system key registers from
+KVM_GET/SET_REG_* ioctls and mask those features from cpufeature ID
+register.
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 7de9eee..b5c66bc 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2659,6 +2659,12 @@ Possible features:
Depends on KVM_CAP_ARM_PSCI_0_2.
- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
Depends on KVM_CAP_ARM_PMU_V3.
+ - KVM_ARM_VCPU_PTRAUTH_ADDRESS:
+ - KVM_ARM_VCPU_PTRAUTH_GENERIC:
+ Enables Pointer authentication for the CPU.
+ Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If
+ set, then the KVM guest allows the execution of pointer authentication
+ instructions. Otherwise, KVM treats these instructions as undefined.


4.83 KVM_ARM_PREFERRED_TARGET
--
2.7.4


2019-03-19 08:33:49

by Amit Kachhap

Subject: [PATCH v7 4/10] KVM: arm/arm64: preserve host HCR_EL2 value

From: Mark Rutland <[email protected]>

When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
is a constant value. This works today, as the host HCR_EL2 value is
always the same, but this will get in the way of supporting extensions
that require HCR_EL2 bits to be set conditionally for the host.

To allow such features to work without KVM having to explicitly handle
every possible host feature combination, this patch has KVM save/restore
the host HCR when switching to/from a guest HCR. The host value is saved
once per cpu during hypervisor initialisation and is simply restored
after each switch back from the guest.

To fetch HCR_EL2 during KVM initialisation, a hyp call is made using
kvm_call_hyp; this is helpful in the non-VHE case, where the register
has to be read at EL2.

For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
to toggle the TGE bit with a RMW sequence, as we already do in
__tlb_switch_to_guest_vhe().

The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
and guest can now use this field in a common way.

Signed-off-by: Mark Rutland <[email protected]>
[Added cpu_init_host_ctxt, hcr_el2 field in struct kvm_cpu_context,
save hcr_el2 in hyp init stage]
Signed-off-by: Amit Daniel Kachhap <[email protected]>
Reviewed-by: James Morse <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Christoffer Dall <[email protected]>
Cc: [email protected]
---
arch/arm/include/asm/kvm_host.h | 2 ++
arch/arm64/include/asm/kvm_asm.h | 2 ++
arch/arm64/include/asm/kvm_emulate.h | 24 ++++++++++++------------
arch/arm64/include/asm/kvm_host.h | 15 ++++++++++++++-
arch/arm64/kvm/guest.c | 2 +-
arch/arm64/kvm/hyp/switch.c | 22 +++++++++++++---------
arch/arm64/kvm/hyp/sysreg-sr.c | 14 ++++++++++++++
arch/arm64/kvm/hyp/tlb.c | 6 +++++-
virt/kvm/arm/arm.c | 1 +
9 files changed, 64 insertions(+), 24 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 770d732..6d0aac4 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -322,6 +322,8 @@ static inline void __cpu_init_stage2(void)
kvm_call_hyp(__init_stage2_translation);
}

+static inline void cpu_init_host_ctxt(void) {}
+
static inline int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
{
return 0;
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 57a07e8..a68205c 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -80,6 +80,8 @@ extern void __vgic_v3_init_lrs(void);

extern u32 __kvm_get_mdcr_el2(void);

+extern void __kvm_populate_host_regs(void);
+
/*
* Obtain the PC-relative address of a kernel symbol
* s: symbol
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index d384279..426815d 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -50,25 +50,25 @@ void kvm_inject_pabt32(struct kvm_vcpu *vcpu, unsigned long addr);

static inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
{
- return !(vcpu->arch.hcr_el2 & HCR_RW);
+ return !(vcpu->arch.ctxt.hcr_el2 & HCR_RW);
}

static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
{
- vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
+ vcpu->arch.ctxt.hcr_el2 = HCR_GUEST_FLAGS;
if (is_kernel_in_hyp_mode())
- vcpu->arch.hcr_el2 |= HCR_E2H;
+ vcpu->arch.ctxt.hcr_el2 |= HCR_E2H;
if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN)) {
/* route synchronous external abort exceptions to EL2 */
- vcpu->arch.hcr_el2 |= HCR_TEA;
+ vcpu->arch.ctxt.hcr_el2 |= HCR_TEA;
/* trap error record accesses */
- vcpu->arch.hcr_el2 |= HCR_TERR;
+ vcpu->arch.ctxt.hcr_el2 |= HCR_TERR;
}
if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
- vcpu->arch.hcr_el2 |= HCR_FWB;
+ vcpu->arch.ctxt.hcr_el2 |= HCR_FWB;

if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
- vcpu->arch.hcr_el2 &= ~HCR_RW;
+ vcpu->arch.ctxt.hcr_el2 &= ~HCR_RW;

/*
* TID3: trap feature register accesses that we virtualise.
@@ -76,26 +76,26 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
* are currently virtualised.
*/
if (!vcpu_el1_is_32bit(vcpu))
- vcpu->arch.hcr_el2 |= HCR_TID3;
+ vcpu->arch.ctxt.hcr_el2 |= HCR_TID3;

if (cpus_have_const_cap(ARM64_MISMATCHED_CACHE_TYPE) ||
vcpu_el1_is_32bit(vcpu))
- vcpu->arch.hcr_el2 |= HCR_TID2;
+ vcpu->arch.ctxt.hcr_el2 |= HCR_TID2;
}

static inline unsigned long *vcpu_hcr(struct kvm_vcpu *vcpu)
{
- return (unsigned long *)&vcpu->arch.hcr_el2;
+ return (unsigned long *)&vcpu->arch.ctxt.hcr_el2;
}

static inline void vcpu_clear_wfe_traps(struct kvm_vcpu *vcpu)
{
- vcpu->arch.hcr_el2 &= ~HCR_TWE;
+ vcpu->arch.ctxt.hcr_el2 &= ~HCR_TWE;
}

static inline void vcpu_set_wfe_traps(struct kvm_vcpu *vcpu)
{
- vcpu->arch.hcr_el2 |= HCR_TWE;
+ vcpu->arch.ctxt.hcr_el2 |= HCR_TWE;
}

static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a01fe087..3b09fd0 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -209,6 +209,8 @@ struct kvm_cpu_context {
u32 copro[NR_COPRO_REGS];
};

+ /* HYP host/guest configuration */
+ u64 hcr_el2;
struct kvm_vcpu *__hyp_running_vcpu;
};

@@ -225,7 +227,6 @@ struct kvm_vcpu_arch {
struct kvm_cpu_context ctxt;

/* HYP configuration */
- u64 hcr_el2;
u32 mdcr_el2;

/* Exception Information */
@@ -510,6 +511,18 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,

static inline void __cpu_init_stage2(void) {}

+/**
+ * cpu_init_host_ctxt - save the boot hyp configuration registers
+ *
+ * It is called per-cpu during CPU hyp initialisation and the
+ * registers context saved may be used during host/guest context
+ * switch.
+ */
+static inline void cpu_init_host_ctxt(void)
+{
+ kvm_call_hyp(__kvm_populate_host_regs);
+}
+
/* Guest/host FPSIMD coordination helpers */
int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index dd436a5..e2f0268 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -345,7 +345,7 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
struct kvm_vcpu_events *events)
{
- events->exception.serror_pending = !!(vcpu->arch.hcr_el2 & HCR_VSE);
+ events->exception.serror_pending = !!(vcpu->arch.ctxt.hcr_el2 & HCR_VSE);
events->exception.serror_has_esr = cpus_have_const_cap(ARM64_HAS_RAS_EXTN);

if (events->exception.serror_pending && events->exception.serror_has_esr)
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 3563fe6..f5cefa1 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -129,7 +129,7 @@ static void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu)

static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
{
- u64 hcr = vcpu->arch.hcr_el2;
+ u64 hcr = vcpu->arch.ctxt.hcr_el2;

write_sysreg(hcr, hcr_el2);

@@ -142,10 +142,10 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
__activate_traps_nvhe(vcpu);
}

-static void deactivate_traps_vhe(void)
+static void deactivate_traps_vhe(struct kvm_cpu_context *host_ctxt)
{
extern char vectors[]; /* kernel exception vectors */
- write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
+ write_sysreg(host_ctxt->hcr_el2, hcr_el2);

/*
* ARM erratum 1165522 requires the actual execution of the above
@@ -159,9 +159,10 @@ static void deactivate_traps_vhe(void)
}
NOKPROBE_SYMBOL(deactivate_traps_vhe);

-static void __hyp_text __deactivate_traps_nvhe(void)
+static void __hyp_text __deactivate_traps_nvhe(struct kvm_cpu_context *host_ctxt)
{
u64 mdcr_el2 = read_sysreg(mdcr_el2);
+ struct kvm_cpu_context *hyp_host_ctxt = kern_hyp_va(host_ctxt);

__deactivate_traps_common();

@@ -169,25 +170,28 @@ static void __hyp_text __deactivate_traps_nvhe(void)
mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;

write_sysreg(mdcr_el2, mdcr_el2);
- write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
+ write_sysreg(hyp_host_ctxt->hcr_el2, hcr_el2);
write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
}

static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
{
+ struct kvm_cpu_context *host_ctxt;
+
+ host_ctxt = vcpu->arch.host_cpu_context;
/*
* If we pended a virtual abort, preserve it until it gets
* cleared. See D1.14.3 (Virtual Interrupts) for details, but
* the crucial bit is "On taking a vSError interrupt,
* HCR_EL2.VSE is cleared to 0."
*/
- if (vcpu->arch.hcr_el2 & HCR_VSE)
- vcpu->arch.hcr_el2 = read_sysreg(hcr_el2);
+ if (vcpu->arch.ctxt.hcr_el2 & HCR_VSE)
+ vcpu->arch.ctxt.hcr_el2 = read_sysreg(hcr_el2);

if (has_vhe())
- deactivate_traps_vhe();
+ deactivate_traps_vhe(host_ctxt);
else
- __deactivate_traps_nvhe();
+ __deactivate_traps_nvhe(host_ctxt);
}

void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index c52a845..277f82b 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -320,3 +320,17 @@ void __hyp_text __kvm_enable_ssbs(void)
"msr sctlr_el2, %0"
: "=&r" (tmp) : "L" (SCTLR_ELx_DSSBS));
}
+
+/**
+ * __kvm_populate_host_regs - Fetches host register values
+ *
+ * This function acts as a function handler parameter for kvm_call_hyp and
+ * may be called from EL1 exception level to fetch the host register values.
+ */
+void __hyp_text __kvm_populate_host_regs(void)
+{
+ struct kvm_cpu_context *host_ctxt;
+
+ host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
+ host_ctxt->hcr_el2 = read_sysreg(hcr_el2);
+}
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
index 76c3086..c5e7144 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/tlb.c
@@ -86,12 +86,16 @@ static hyp_alternate_select(__tlb_switch_to_guest,
static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm,
struct tlb_inv_context *cxt)
{
+ u64 val;
+
/*
* We're done with the TLB operation, let's restore the host's
* view of HCR_EL2.
*/
write_sysreg(0, vttbr_el2);
- write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
+ val = read_sysreg(hcr_el2);
+ val |= HCR_TGE;
+ write_sysreg(val, hcr_el2);
isb();

if (cpus_have_const_cap(ARM64_WORKAROUND_1165522)) {
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 99c3738..e8c2ee6 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1323,6 +1323,7 @@ static void cpu_hyp_reinit(void)
cpu_init_hyp_mode(NULL);

kvm_arm_init_debug();
+ cpu_init_host_ctxt();

if (vgic_present)
kvm_vgic_init_cpu_hardware();
--
2.7.4


2019-03-19 08:34:14

by Amit Kachhap

Subject: [PATCH v7 7/10] KVM: arm/arm64: context-switch ptrauth registers

From: Mark Rutland <[email protected]>

When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

The pointer authentication feature is only enabled when VHE is built
into the kernel and present in the CPU implementation, so only VHE code
paths are modified.

When we schedule a vcpu, we disable guest usage of pointer
authentication instructions and accesses to the keys. While these are
disabled, we avoid context-switching the keys. When we trap the guest
trying to use pointer authentication functionality, we change to eagerly
context-switching the keys and enable the feature. The next time the
vcpu is scheduled out/in, we start again. As an optimization, the host
key save is done inside the ptrauth instruction/register access trap
rather than on every vcpu load.
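
In outline, the lazy flow uses this patch's entry points roughly as
follows (a sketch of the sequence of events, not literal call sites):

    /* vcpu_load: disable guest ptrauth so the first use traps */
    kvm_arm_vcpu_ptrauth_setup_lazy(vcpu);

    /* guest runs; its first ptrauth use traps to EL2 ... */

    /* handle_exit: enable ptrauth and save the host keys once */
    kvm_arm_vcpu_ptrauth_trap(vcpu);

    /*
     * Subsequent guest entries/exits context-switch the keys in
     * assembly until the next vcpu_load arms the trap again.
     */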

Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.

Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). Hence, this patch expects both types of
authentication to be present in a CPU.

This switch of keys is done from the guest enter/exit assembly as
preparation for the upcoming in-kernel pointer authentication support.
These key switching routines are not implemented in C code, as compiling
them could itself generate pointer authentication instructions and cause
key signing errors in some situations.

Signed-off-by: Mark Rutland <[email protected]>
[Only VHE, key switch in full assembly, vcpu_has_ptrauth checks,
save host key in ptrauth exception trap]
Signed-off-by: Amit Daniel Kachhap <[email protected]>
Reviewed-by: Julien Thierry <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Christoffer Dall <[email protected]>
Cc: [email protected]
---
arch/arm64/include/asm/kvm_host.h | 17 ++++++
arch/arm64/include/asm/kvm_ptrauth_asm.h | 100 +++++++++++++++++++++++++++++++
arch/arm64/kernel/asm-offsets.c | 6 ++
arch/arm64/kvm/guest.c | 14 +++++
arch/arm64/kvm/handle_exit.c | 24 +++++---
arch/arm64/kvm/hyp/entry.S | 7 +++
arch/arm64/kvm/reset.c | 7 +++
arch/arm64/kvm/sys_regs.c | 46 +++++++++++++-
virt/kvm/arm/arm.c | 2 +
9 files changed, 212 insertions(+), 11 deletions(-)
create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9dd2918..61239a6 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -152,6 +152,18 @@ enum vcpu_sysreg {
PMSWINC_EL0, /* Software Increment Register */
PMUSERENR_EL0, /* User Enable Register */

+ /* Pointer Authentication Registers */
+ APIAKEYLO_EL1,
+ APIAKEYHI_EL1,
+ APIBKEYLO_EL1,
+ APIBKEYHI_EL1,
+ APDAKEYLO_EL1,
+ APDAKEYHI_EL1,
+ APDBKEYLO_EL1,
+ APDBKEYHI_EL1,
+ APGAKEYLO_EL1,
+ APGAKEYHI_EL1,
+
/* 32bit specific registers. Keep them at the end of the range */
DACR32_EL2, /* Domain Access Control Register */
IFSR32_EL2, /* Instruction Fault Status Register */
@@ -497,6 +509,11 @@ static inline bool kvm_arch_requires_vhe(void)
test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) && \
test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features))

+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
+
static inline void kvm_arch_hardware_unsetup(void) {}
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
new file mode 100644
index 0000000..97bb040
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: GPL-2.0
+ * arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
+ * Copyright 2019 Arm Limited
+ * Author: Mark Rutland <[email protected]>
+ * Amit Daniel Kachhap <[email protected]>
+ */
+
+#ifndef __ASM_KVM_ASM_PTRAUTH_H
+#define __ASM_KVM_ASM_PTRAUTH_H
+
+#ifndef __ASSEMBLY__
+
+#define __ptrauth_save_key(regs, key) \
+({ \
+ regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1); \
+ regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1); \
+})
+
+#define __ptrauth_save_state(ctxt) \
+({ \
+ __ptrauth_save_key(ctxt->sys_regs, APIA); \
+ __ptrauth_save_key(ctxt->sys_regs, APIB); \
+ __ptrauth_save_key(ctxt->sys_regs, APDA); \
+ __ptrauth_save_key(ctxt->sys_regs, APDB); \
+ __ptrauth_save_key(ctxt->sys_regs, APGA); \
+})
+
+#else /* __ASSEMBLY__ */
+
+#include <asm/sysreg.h>
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+
+#define PTRAUTH_REG_OFFSET(x) (x - CPU_APIAKEYLO_EL1)
+
+.macro ptrauth_save_state base, reg1, reg2
+ mrs_s \reg1, SYS_APIAKEYLO_EL1
+ mrs_s \reg2, SYS_APIAKEYHI_EL1
+ stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
+ mrs_s \reg1, SYS_APIBKEYLO_EL1
+ mrs_s \reg2, SYS_APIBKEYHI_EL1
+ stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
+ mrs_s \reg1, SYS_APDAKEYLO_EL1
+ mrs_s \reg2, SYS_APDAKEYHI_EL1
+ stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
+ mrs_s \reg1, SYS_APDBKEYLO_EL1
+ mrs_s \reg2, SYS_APDBKEYHI_EL1
+ stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
+ mrs_s \reg1, SYS_APGAKEYLO_EL1
+ mrs_s \reg2, SYS_APGAKEYHI_EL1
+ stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
+.endm
+
+.macro ptrauth_restore_state base, reg1, reg2
+ ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
+ msr_s SYS_APIAKEYLO_EL1, \reg1
+ msr_s SYS_APIAKEYHI_EL1, \reg2
+ ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
+ msr_s SYS_APIBKEYLO_EL1, \reg1
+ msr_s SYS_APIBKEYHI_EL1, \reg2
+ ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
+ msr_s SYS_APDAKEYLO_EL1, \reg1
+ msr_s SYS_APDAKEYHI_EL1, \reg2
+ ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
+ msr_s SYS_APDBKEYLO_EL1, \reg1
+ msr_s SYS_APDBKEYHI_EL1, \reg2
+ ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
+ msr_s SYS_APGAKEYLO_EL1, \reg1
+ msr_s SYS_APGAKEYHI_EL1, \reg2
+ .endm
+
+.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
+ ldr \reg1, [\g_ctxt, #CPU_HCR_EL2]
+ and \reg1, \reg1, #(HCR_API | HCR_APK)
+ cbz \reg1, skip_switch_to_guest
+ add \reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
+ ptrauth_restore_state \reg1, \reg2, \reg3
+skip_switch_to_guest:
+.endm
+
+.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
+ ldr \reg1, [\g_ctxt, #CPU_HCR_EL2]
+ and \reg1, \reg1, #(HCR_API | HCR_APK)
+ cbz \reg1, skip_switch_to_host
+ add \reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
+ ptrauth_save_state \reg1, \reg2, \reg3
+ add \reg1, \h_ctxt, #CPU_APIAKEYLO_EL1
+ ptrauth_restore_state \reg1, \reg2, \reg3
+ isb
+skip_switch_to_host:
+.endm
+
+#else /* !CONFIG_ARM64_PTR_AUTH */
+.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
+.endm
+.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
+.endm
+#endif /* CONFIG_ARM64_PTR_AUTH */
+#endif /* __ASSEMBLY__ */
+#endif /* __ASM_KVM_ASM_PTRAUTH_H */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 7f40dcb..12ca916 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -126,6 +126,12 @@ int main(void)
DEFINE(VCPU_FAULT_DISR, offsetof(struct kvm_vcpu, arch.fault.disr_el1));
DEFINE(VCPU_WORKAROUND_FLAGS, offsetof(struct kvm_vcpu, arch.workaround_flags));
DEFINE(CPU_GP_REGS, offsetof(struct kvm_cpu_context, gp_regs));
+ DEFINE(CPU_APIAKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
+ DEFINE(CPU_APIBKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APIBKEYLO_EL1]));
+ DEFINE(CPU_APDAKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1]));
+ DEFINE(CPU_APDBKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
+ DEFINE(CPU_APGAKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
+ DEFINE(CPU_HCR_EL2, offsetof(struct kvm_cpu_context, hcr_el2));
DEFINE(CPU_USER_PT_REGS, offsetof(struct kvm_regs, regs));
DEFINE(HOST_CONTEXT_VCPU, offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
#endif
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index e2f0268..9f591ad 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -544,3 +544,17 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,

return ret;
}
+
+/**
+ * kvm_arm_vcpu_ptrauth_setup_lazy - setup lazy ptrauth for vcpu schedule
+ *
+ * @vcpu: The VCPU pointer
+ *
+ * This function may be used to disable ptrauth and use it in a lazy context
+ * via traps.
+ */
+void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
+{
+ if (vcpu_has_ptrauth(vcpu))
+ kvm_arm_vcpu_ptrauth_disable(vcpu);
+}
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 0b79834..5838ff9 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -30,6 +30,7 @@
#include <asm/kvm_coproc.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_mmu.h>
+#include <asm/kvm_ptrauth_asm.h>
#include <asm/debug-monitors.h>
#include <asm/traps.h>

@@ -174,19 +175,26 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
}

/*
+ * Handle the guest trying to use a ptrauth instruction, or trying to access a
+ * ptrauth register.
+ */
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
+{
+ if (vcpu_has_ptrauth(vcpu)) {
+ kvm_arm_vcpu_ptrauth_enable(vcpu);
+ __ptrauth_save_state(vcpu->arch.host_cpu_context);
+ } else {
+ kvm_inject_undefined(vcpu);
+ }
+}
+
+/*
* Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
* a NOP).
*/
static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
- /*
- * We don't currently support ptrauth in a guest, and we mask the ID
- * registers to prevent well-behaved guests from trying to make use of
- * it.
- *
- * Inject an UNDEF, as if the feature really isn't present.
- */
- kvm_inject_undefined(vcpu);
+ kvm_arm_vcpu_ptrauth_trap(vcpu);
return 1;
}

diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 675fdc1..3a70213 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -24,6 +24,7 @@
#include <asm/kvm_arm.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_mmu.h>
+#include <asm/kvm_ptrauth_asm.h>

#define CPU_GP_REG_OFFSET(x) (CPU_GP_REGS + x)
#define CPU_XREG_OFFSET(x) CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
@@ -64,6 +65,9 @@ ENTRY(__guest_enter)

add x18, x0, #VCPU_CONTEXT

+ // Macro ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3).
+ ptrauth_switch_to_guest x18, x0, x1, x2
+
// Restore guest regs x0-x17
ldp x0, x1, [x18, #CPU_XREG_OFFSET(0)]
ldp x2, x3, [x18, #CPU_XREG_OFFSET(2)]
@@ -118,6 +122,9 @@ ENTRY(__guest_exit)

get_host_ctxt x2, x3

+ // Macro ptrauth_switch_to_host(guest cxt, host cxt, tmp1, tmp2, tmp3).
+ ptrauth_switch_to_host x1, x2, x3, x4, x5
+
// Now restore the host regs
restore_callee_saved_regs x2

diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index f16a5f8..00f0639 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -128,6 +128,13 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
if (loaded)
kvm_arch_vcpu_put(vcpu);

+ if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
+ test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
+ /* Verify that KVM startup matches the conditions for ptrauth */
+ if (WARN_ON(!vcpu_has_ptrauth(vcpu)))
+ return -EINVAL;
+ }
+
switch (vcpu->arch.target) {
default:
if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 75942f6..ed6613e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1007,6 +1007,38 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)), \
access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }

+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
+{
+ vcpu->arch.ctxt.hcr_el2 |= (HCR_API | HCR_APK);
+}
+
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
+{
+ vcpu->arch.ctxt.hcr_el2 &= ~(HCR_API | HCR_APK);
+}
+
+static bool trap_ptrauth(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *rd)
+{
+ kvm_arm_vcpu_ptrauth_trap(vcpu);
+ return false;
+}
+
+static unsigned int ptrauth_restrictions(const struct kvm_vcpu *vcpu,
+ const struct sys_reg_desc *rd)
+{
+ return vcpu_has_ptrauth(vcpu) ? 0 : REG_NO_USER | REG_NO_GUEST;
+}
+
+#define __PTRAUTH_KEY(k) \
+ { SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k, \
+ .restrictions = ptrauth_restrictions}
+
+#define PTRAUTH_KEY(k) \
+ __PTRAUTH_KEY(k ## KEYLO_EL1), \
+ __PTRAUTH_KEY(k ## KEYHI_EL1)
+
static bool access_arch_timer(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
@@ -1061,9 +1093,11 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
(0xfUL << ID_AA64ISAR1_API_SHIFT) |
(0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
(0xfUL << ID_AA64ISAR1_GPI_SHIFT);
- if (val & ptrauth_mask)
- kvm_debug("ptrauth unsupported for guests, suppressing\n");
- val &= ~ptrauth_mask;
+ if (!vcpu_has_ptrauth(vcpu)) {
+ if (val & ptrauth_mask)
+ kvm_debug("ptrauth unsupported for guests, suppressing\n");
+ val &= ~ptrauth_mask;
+ }
}

return val;
@@ -1387,6 +1421,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },

+ PTRAUTH_KEY(APIA),
+ PTRAUTH_KEY(APIB),
+ PTRAUTH_KEY(APDA),
+ PTRAUTH_KEY(APDB),
+ PTRAUTH_KEY(APGA),
+
{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 58de0ca..50ae066 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -385,6 +385,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
vcpu_clear_wfe_traps(vcpu);
else
vcpu_set_wfe_traps(vcpu);
+
+ kvm_arm_vcpu_ptrauth_setup_lazy(vcpu);
}

void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
--
2.7.4
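
Taken together, the hunks above implement the following lazy cycle (a
condensed sketch for orientation; all names come from the patch itself):

	/*
	 * vcpu_load()
	 *   -> kvm_arm_vcpu_ptrauth_setup_lazy()
	 *        -> kvm_arm_vcpu_ptrauth_disable(): clear HCR_API/HCR_APK,
	 *           so any guest ptrauth use traps to EL2
	 *
	 * guest executes a ptrauth instruction or touches a key register
	 *   -> kvm_handle_ptrauth()/trap_ptrauth()
	 *        -> kvm_arm_vcpu_ptrauth_trap()
	 *             -> kvm_arm_vcpu_ptrauth_enable(): set HCR_API/HCR_APK
	 *             -> __ptrauth_save_state(): save host keys once
	 *
	 * __guest_enter/__guest_exit
	 *   -> ptrauth_switch_to_guest/host switch the keys only while the
	 *      HCR bits are set; otherwise they fall through
	 */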


2019-03-19 08:34:26

by Amit Kachhap

[permalink] [raw]
Subject: [PATCH v7 8/10] KVM: arm64: Add capability to advertise ptrauth for guest

This patch advertises the capability of pointer authentication
when the system supports pointer authentication and VHE mode is
present.

Signed-off-by: Amit Daniel Kachhap <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Christoffer Dall <[email protected]>
Cc: [email protected]
---
arch/arm64/kvm/reset.c | 4 ++++
include/uapi/linux/kvm.h | 1 +
2 files changed, 5 insertions(+)

diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 00f0639..a3b269e 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -92,6 +92,10 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_ARM_VM_IPA_SIZE:
r = kvm_ipa_limit;
break;
+ case KVM_CAP_ARM_PTRAUTH:
+ r = has_vhe() && system_supports_address_auth() &&
+ system_supports_generic_auth();
+ break;
default:
r = 0;
}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 6d4ea4b..a553477 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -988,6 +988,7 @@ struct kvm_ppc_resize_hpt {
#define KVM_CAP_ARM_VM_IPA_SIZE 165
#define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
#define KVM_CAP_HYPERV_CPUID 167
+#define KVM_CAP_ARM_PTRAUTH 168

#ifdef KVM_CAP_IRQ_ROUTING

--
2.7.4
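
For illustration, userspace can probe the new capability like this (a
minimal sketch; the fd handling is an assumption, only the ioctl and
capability number come from the patch):

	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* kvm_fd: an open fd on /dev/kvm (assumed to exist) */
	static int have_ptrauth(int kvm_fd)
	{
		/* non-zero only with VHE plus address and generic auth */
		return ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PTRAUTH) > 0;
	}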


2019-03-19 08:34:35

by Amit Kachhap

[permalink] [raw]
Subject: [kvmtool PATCH v7 10/10] KVM: arm/arm64: Add a vcpu feature for pointer authentication

This adds a runtime capability check for the KVM tool to enable ARMv8.3
Pointer Authentication in the guest kernel. The command line option
--ptrauth is required for this.

Signed-off-by: Amit Daniel Kachhap <[email protected]>
---
arm/aarch32/include/kvm/kvm-cpu-arch.h | 1 +
arm/aarch64/include/asm/kvm.h | 2 ++
arm/aarch64/include/kvm/kvm-config-arch.h | 4 +++-
arm/aarch64/include/kvm/kvm-cpu-arch.h | 2 ++
arm/include/arm-common/kvm-config-arch.h | 1 +
arm/kvm-cpu.c | 6 ++++++
include/linux/kvm.h | 1 +
7 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h b/arm/aarch32/include/kvm/kvm-cpu-arch.h
index d28ea67..520ea76 100644
--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
@@ -13,4 +13,5 @@
#define ARM_CPU_ID 0, 0, 0
#define ARM_CPU_ID_MPIDR 5

+#define ARM_VCPU_PTRAUTH_FEATURE 0
#endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index 97c3478..d4d0d8c 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -102,6 +102,8 @@ struct kvm_regs {
#define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */
#define KVM_ARM_VCPU_PSCI_0_2 2 /* CPU uses PSCI v0.2 */
#define KVM_ARM_VCPU_PMU_V3 3 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_PTRAUTH_ADDRESS 4 /* CPU uses address pointer authentication */
+#define KVM_ARM_VCPU_PTRAUTH_GENERIC 5 /* CPU uses generic pointer authentication */

struct kvm_vcpu_init {
__u32 target;
diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h b/arm/aarch64/include/kvm/kvm-config-arch.h
index 04be43d..2074684 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -8,7 +8,9 @@
"Create PMUv3 device"), \
OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed, \
"Specify random seed for Kernel Address Space " \
- "Layout Randomization (KASLR)"),
+ "Layout Randomization (KASLR)"), \
+ OPT_BOOLEAN('\0', "ptrauth", &(cfg)->has_ptrauth, \
+ "Enable address authentication"),

#include "arm-common/kvm-config-arch.h"

diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index a9d8563..fcc2107 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -17,4 +17,6 @@
#define ARM_CPU_CTRL 3, 0, 1, 0
#define ARM_CPU_CTRL_SCTLR_EL1 0

+#define ARM_VCPU_PTRAUTH_FEATURE ((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
+ | (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
#endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/include/arm-common/kvm-config-arch.h b/arm/include/arm-common/kvm-config-arch.h
index 5734c46..5badcbd 100644
--- a/arm/include/arm-common/kvm-config-arch.h
+++ b/arm/include/arm-common/kvm-config-arch.h
@@ -10,6 +10,7 @@ struct kvm_config_arch {
bool aarch32_guest;
bool has_pmuv3;
u64 kaslr_seed;
+ bool has_ptrauth;
enum irqchip_type irqchip;
u64 fw_addr;
};
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 7780251..4ac80f8 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -68,6 +68,12 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
}

+ /* Set KVM_ARM_VCPU_PTRAUTH if available */
+ if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH)) {
+ if (kvm->cfg.arch.has_ptrauth)
+ vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
+ }
+
/*
* If the preferred target ioctl is successful then
* use preferred target else try each and every target type
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index 6d4ea4b..a553477 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -988,6 +988,7 @@ struct kvm_ppc_resize_hpt {
#define KVM_CAP_ARM_VM_IPA_SIZE 165
#define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
#define KVM_CAP_HYPERV_CPUID 167
+#define KVM_CAP_ARM_PTRAUTH 168

#ifdef KVM_CAP_IRQ_ROUTING

--
2.7.4
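
With the capability and flag wired up, pointer authentication would be
requested at guest launch roughly as follows (invocation sketch; the
kernel image path is a placeholder):

	lkvm run -k ./Image --ptrauth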


2019-03-20 08:50:26

by Julien Thierry

[permalink] [raw]
Subject: Re: [PATCH v7 3/10] KVM: arm64: Move hyp_symbol_addr to fix dependency

Hi Amit,

On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
> Currently hyp_symbol_addr is placed in kvm_mmu.h which is mostly
> used by __hyp_this_cpu_ptr in kvm_asm.h but it cannot include
> kvm_mmu.h directly as kvm_mmu.h uses kvm_ksym_ref which is
> defined inside kvm_asm.h. Hence, hyp_symbol_addr is moved inside
> kvm_asm.h to fix this dependency on each other.
>
> Also kvm_ksym_ref is corresponding counterpart of hyp_symbol_addr
> so should be ideally placed inside kvm_asm.h.
>

This part is a bit confusing, it led me to think that kvm_ksym_ref was
in kvm_mmu.h and should be moved to kvm_asm.h as well. I'd suggest
rephrasing it with something along the lines of:

"Also, hyp_symbol_addr's corresponding counterpart, kvm_ksym_ref, is
already in kvm_asm.h, making it more sensible to move hyp_symbol_addr to
the same file."

Otherwise:

Reviewed-by: Julien Thierry <[email protected]>

Cheers,

Julien

> Suggested-by: James Morse <[email protected]>
> Signed-off-by: Amit Daniel Kachhap <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: Christoffer Dall <[email protected]>
> Cc: [email protected]
> ---
> arch/arm64/include/asm/kvm_asm.h | 20 ++++++++++++++++++++
> arch/arm64/include/asm/kvm_mmu.h | 20 --------------------
> 2 files changed, 20 insertions(+), 20 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index f5b79e9..57a07e8 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -80,6 +80,26 @@ extern void __vgic_v3_init_lrs(void);
>
> extern u32 __kvm_get_mdcr_el2(void);
>
> +/*
> + * Obtain the PC-relative address of a kernel symbol
> + * s: symbol
> + *
> + * The goal of this macro is to return a symbol's address based on a
> + * PC-relative computation, as opposed to a loading the VA from a
> + * constant pool or something similar. This works well for HYP, as an
> + * absolute VA is guaranteed to be wrong. Only use this if trying to
> + * obtain the address of a symbol (i.e. not something you obtained by
> + * following a pointer).
> + */
> +#define hyp_symbol_addr(s) \
> + ({ \
> + typeof(s) *addr; \
> + asm("adrp %0, %1\n" \
> + "add %0, %0, :lo12:%1\n" \
> + : "=r" (addr) : "S" (&s)); \
> + addr; \
> + })
> +
> /* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
> #define __hyp_this_cpu_ptr(sym) \
> ({ \
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index b0742a1..3dea6af 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -118,26 +118,6 @@ static inline unsigned long __kern_hyp_va(unsigned long v)
> #define kern_hyp_va(v) ((typeof(v))(__kern_hyp_va((unsigned long)(v))))
>
> /*
> - * Obtain the PC-relative address of a kernel symbol
> - * s: symbol
> - *
> - * The goal of this macro is to return a symbol's address based on a
> - * PC-relative computation, as opposed to a loading the VA from a
> - * constant pool or something similar. This works well for HYP, as an
> - * absolute VA is guaranteed to be wrong. Only use this if trying to
> - * obtain the address of a symbol (i.e. not something you obtained by
> - * following a pointer).
> - */
> -#define hyp_symbol_addr(s) \
> - ({ \
> - typeof(s) *addr; \
> - asm("adrp %0, %1\n" \
> - "add %0, %0, :lo12:%1\n" \
> - : "=r" (addr) : "S" (&s)); \
> - addr; \
> - })
> -
> -/*
> * We currently support using a VM-specified IPA size. For backward
> * compatibility, the default IPA size is fixed to 40bits.
> */
>

--
Julien Thierry
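
For context, a typical use of the macro reads like this (illustrative
sketch; some_hyp_flag is a hypothetical hyp-mapped variable, not a real
symbol):

	extern bool some_hyp_flag;	/* hypothetical symbol */

	static bool check_flag_at_hyp(void)
	{
		/* PC-relative load of the address: correct at EL2, where
		 * an absolute kernel VA would be wrong */
		return *hyp_symbol_addr(some_hyp_flag);
	}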

2019-03-20 13:39:53

by Julien Thierry

[permalink] [raw]
Subject: Re: [PATCH v7 9/10] KVM: arm64: docs: document KVM support of pointer authentication

Hi Amit,

On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
> This adds sections for KVM API extension for pointer authentication.
> A brief description about usage of pointer authentication for KVM guests
> is added in the arm64 documentations.
>
> Signed-off-by: Amit Daniel Kachhap <[email protected]>
> Cc: Mark Rutland <[email protected]>
> Cc: Christoffer Dall <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: [email protected]
> ---
> Documentation/arm64/pointer-authentication.txt | 15 +++++++++++----
> Documentation/virtual/kvm/api.txt | 6 ++++++
> 2 files changed, 17 insertions(+), 4 deletions(-)
>
> diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
> index 5baca42..4b769e6 100644
> --- a/Documentation/arm64/pointer-authentication.txt
> +++ b/Documentation/arm64/pointer-authentication.txt
> @@ -87,7 +87,14 @@ used to get and set the keys for a thread.
> Virtualization
> --------------
>
> -Pointer authentication is not currently supported in KVM guests. KVM
> -will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
> -the feature will result in an UNDEFINED exception being injected into
> -the guest.
> +Pointer authentication is enabled in KVM guest when each virtual cpu is
> +initialised by passing flags KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] and
> +requesting this feature to be enabled. Without this flag, pointer

"Without these flags"*

> +authentication is not enabled in KVM guests and attempted use of the
> +feature will result in an UNDEFINED exception being injected into the
> +guest.
> +
> +Additionally, when these vcpu feature flags are not set then KVM will
> +filter out the Pointer Authentication system key registers from
> +KVM_GET/SET_REG_* ioctls and mask those features from cpufeature ID
> +register.
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index 7de9eee..b5c66bc 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -2659,6 +2659,12 @@ Possible features:
> Depends on KVM_CAP_ARM_PSCI_0_2.
> - KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
> Depends on KVM_CAP_ARM_PMU_V3.
> + - KVM_ARM_VCPU_PTRAUTH_ADDRESS:
> + - KVM_ARM_VCPU_PTRAUTH_GENERIC:
> + Enables Pointer authentication for the CPU.
> + Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If
> + set, then the KVM guest allows the execution of pointer authentication
> + instructions. Otherwise, KVM treats these instructions as undefined.
>

Overall I feel one could easily get confused as to whether
PTRAUTH_ADDRESS/GENERIC are two individual features, whether one is a
superset of the other, whether the names are just aliases of one
another, etc...

I think the doc should at least stress that *both* flags are
required to enable ptrauth in a guest. However it raises the question:
if we don't plan to support the features individually (because we
can't), should we really expose two feature flags? It seems odd to
introduce two flags that only do something if used together...
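
To make the "both flags" requirement concrete, the init sequence would
be something like (sketch; fds and error handling omitted, the feature
bits are the ones from this series):

	struct kvm_vcpu_init init;

	ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init);
	init.features[0] |= (1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) |
			    (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC);
	ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);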

Cheers,

--
Julien Thierry

2019-03-20 14:10:23

by Julien Thierry

[permalink] [raw]
Subject: Re: [PATCH v7 7/10] KVM: arm/arm64: context-switch ptrauth registers

Hi Amit,

On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
> From: Mark Rutland <[email protected]>
>
> When pointer authentication is supported, a guest may wish to use it.
> This patch adds the necessary KVM infrastructure for this to work, with
> a semi-lazy context switch of the pointer auth state.
>
> Pointer authentication feature is only enabled when VHE is built
> in the kernel and present in the CPU implementation so only VHE code
> paths are modified.
>
> When we schedule a vcpu, we disable guest usage of pointer
> authentication instructions and accesses to the keys. While these are
> disabled, we avoid context-switching the keys. When we trap the guest
> trying to use pointer authentication functionality, we change to eagerly
> context-switching the keys, and enable the feature. The next time the
> vcpu is scheduled out/in, we start again. However the host key save is
> optimized and implemented inside ptrauth instruction/register access
> trap.
>
> Pointer authentication consists of address authentication and generic
> authentication, and CPUs in a system might have varied support for
> either. Where support for either feature is not uniform, it is hidden
> from guests via ID register emulation, as a result of the cpufeature
> framework in the host.
>
> Unfortunately, address authentication and generic authentication cannot
> be trapped separately, as the architecture provides a single EL2 trap
> covering both. If we wish to expose one without the other, we cannot
> prevent a (badly-written) guest from intermittently using a feature
> which is not uniformly supported (when scheduled on a physical CPU which
> supports the relevant feature). Hence, this patch expects both type of
> authentication to be present in a cpu.
>
> This switch of keys is done from guest enter/exit assembly as preparation
> for the upcoming in-kernel pointer authentication support. Hence, these
> key switching routines are not implemented in C code as they may cause
> pointer authentication key signing error in some situations.
>
> Signed-off-by: Mark Rutland <[email protected]>
> [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks
> , save host key in ptrauth exception trap]
> Signed-off-by: Amit Daniel Kachhap <[email protected]>
> Reviewed-by: Julien Thierry <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: Christoffer Dall <[email protected]>
> Cc: [email protected]
> ---
> arch/arm64/include/asm/kvm_host.h | 17 ++++++
> arch/arm64/include/asm/kvm_ptrauth_asm.h | 100 +++++++++++++++++++++++++++++++
> arch/arm64/kernel/asm-offsets.c | 6 ++
> arch/arm64/kvm/guest.c | 14 +++++
> arch/arm64/kvm/handle_exit.c | 24 +++++---
> arch/arm64/kvm/hyp/entry.S | 7 +++
> arch/arm64/kvm/reset.c | 7 +++
> arch/arm64/kvm/sys_regs.c | 46 +++++++++++++-
> virt/kvm/arm/arm.c | 2 +
> 9 files changed, 212 insertions(+), 11 deletions(-)
> create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 9dd2918..61239a6 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -152,6 +152,18 @@ enum vcpu_sysreg {
> PMSWINC_EL0, /* Software Increment Register */
> PMUSERENR_EL0, /* User Enable Register */
>
> + /* Pointer Authentication Registers */
> + APIAKEYLO_EL1,
> + APIAKEYHI_EL1,
> + APIBKEYLO_EL1,
> + APIBKEYHI_EL1,
> + APDAKEYLO_EL1,
> + APDAKEYHI_EL1,
> + APDBKEYLO_EL1,
> + APDBKEYHI_EL1,
> + APGAKEYLO_EL1,
> + APGAKEYHI_EL1,
> +
> /* 32bit specific registers. Keep them at the end of the range */
> DACR32_EL2, /* Domain Access Control Register */
> IFSR32_EL2, /* Instruction Fault Status Register */
> @@ -497,6 +509,11 @@ static inline bool kvm_arch_requires_vhe(void)
> test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) && \
> test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features))
>
> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu);
> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
> +
> static inline void kvm_arch_hardware_unsetup(void) {}
> static inline void kvm_arch_sync_events(struct kvm *kvm) {}
> static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
> diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
> new file mode 100644
> index 0000000..97bb040
> --- /dev/null
> +++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h
> @@ -0,0 +1,100 @@
> +/* SPDX-License-Identifier: GPL-2.0
> + * arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
> + * Copyright 2019 Arm Limited
> + * Author: Mark Rutland <[email protected]>
> + * Amit Daniel Kachhap <[email protected]>
> + */
> +
> +#ifndef __ASM_KVM_ASM_PTRAUTH_H
> +#define __ASM_KVM_ASM_PTRAUTH_H
> +
> +#ifndef __ASSEMBLY__
> +
> +#define __ptrauth_save_key(regs, key) \
> +({ \
> + regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1); \
> + regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1); \
> +})
> +
> +#define __ptrauth_save_state(ctxt) \
> +({ \
> + __ptrauth_save_key(ctxt->sys_regs, APIA); \
> + __ptrauth_save_key(ctxt->sys_regs, APIB); \
> + __ptrauth_save_key(ctxt->sys_regs, APDA); \
> + __ptrauth_save_key(ctxt->sys_regs, APDB); \
> + __ptrauth_save_key(ctxt->sys_regs, APGA); \
> +})
> +
> +#else /* __ASSEMBLY__ */
> +
> +#include <asm/sysreg.h>
> +
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +
> +#define PTRAUTH_REG_OFFSET(x) (x - CPU_APIAKEYLO_EL1)

I don't really see the point of this macro. You move the pointers of
kvm_cpu_contexts to point to where the ptr auth registers are (which is
in the middle of an array) by adding the offset of APIAKEYLO and then we
have to recompute all offsets with this macro.

Why not just pass the kvm_cpu_context pointers to
ptrauth_save/restore_state and use the already defined offsets
(CPU_AP*_EL1) directly?

I think this would also allow using one less register for the
ptrauth_switch_to_* macros.

> +
> +.macro ptrauth_save_state base, reg1, reg2
> + mrs_s \reg1, SYS_APIAKEYLO_EL1
> + mrs_s \reg2, SYS_APIAKEYHI_EL1
> + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
> + mrs_s \reg1, SYS_APIBKEYLO_EL1
> + mrs_s \reg2, SYS_APIBKEYHI_EL1
> + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
> + mrs_s \reg1, SYS_APDAKEYLO_EL1
> + mrs_s \reg2, SYS_APDAKEYHI_EL1
> + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
> + mrs_s \reg1, SYS_APDBKEYLO_EL1
> + mrs_s \reg2, SYS_APDBKEYHI_EL1
> + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
> + mrs_s \reg1, SYS_APGAKEYLO_EL1
> + mrs_s \reg2, SYS_APGAKEYHI_EL1
> + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
> +.endm
> +
> +.macro ptrauth_restore_state base, reg1, reg2
> + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
> + msr_s SYS_APIAKEYLO_EL1, \reg1
> + msr_s SYS_APIAKEYHI_EL1, \reg2
> + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
> + msr_s SYS_APIBKEYLO_EL1, \reg1
> + msr_s SYS_APIBKEYHI_EL1, \reg2
> + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
> + msr_s SYS_APDAKEYLO_EL1, \reg1
> + msr_s SYS_APDAKEYHI_EL1, \reg2
> + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
> + msr_s SYS_APDBKEYLO_EL1, \reg1
> + msr_s SYS_APDBKEYHI_EL1, \reg2
> + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
> + msr_s SYS_APGAKEYLO_EL1, \reg1
> + msr_s SYS_APGAKEYHI_EL1, \reg2
> + .endm

Nit: Bad indentation.

> +
> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
> + ldr \reg1, [\g_ctxt, #CPU_HCR_EL2]
> + and \reg1, \reg1, #(HCR_API | HCR_APK)
> + cbz \reg1, skip_switch_to_guest
> + add \reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
> + ptrauth_restore_state \reg1, \reg2, \reg3

As mentioned we probably can get rid of one temporary register and
directly pass g_ctxt here.

> +skip_switch_to_guest:

I believe you should use numbered labels for assembly macros, otherwise
multiple expansions of the macro will lead to symbol redefinitions.
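
For instance, sketching it on the macro from this patch:

	.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
		ldr	\reg1, [\g_ctxt, #CPU_HCR_EL2]
		and	\reg1, \reg1, #(HCR_API | HCR_APK)
		cbz	\reg1, 1f	// branch to the nearest following "1:"
		add	\reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
		ptrauth_restore_state \reg1, \reg2, \reg3
	1:	// local numbered label: safe across repeated expansions
	.endm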

> +.endm
> +
> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
> + ldr \reg1, [\g_ctxt, #CPU_HCR_EL2]
> + and \reg1, \reg1, #(HCR_API | HCR_APK)
> + cbz \reg1, skip_switch_to_host
> + add \reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
> + ptrauth_save_state \reg1, \reg2, \reg3
> + add \reg1, \h_ctxt, #CPU_APIAKEYLO_EL1
> + ptrauth_restore_state \reg1, \reg2, \reg3
> + isb
> +skip_switch_to_host:

Same.

Cheers,

Julien

> +.endm
> +
> +#else /* !CONFIG_ARM64_PTR_AUTH */
> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
> +.endm
> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
> +.endm
> +#endif /* CONFIG_ARM64_PTR_AUTH */
> +#endif /* __ASSEMBLY__ */
> +#endif /* __ASM_KVM_ASM_PTRAUTH_H */
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index 7f40dcb..12ca916 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -126,6 +126,12 @@ int main(void)
> DEFINE(VCPU_FAULT_DISR, offsetof(struct kvm_vcpu, arch.fault.disr_el1));
> DEFINE(VCPU_WORKAROUND_FLAGS, offsetof(struct kvm_vcpu, arch.workaround_flags));
> DEFINE(CPU_GP_REGS, offsetof(struct kvm_cpu_context, gp_regs));
> + DEFINE(CPU_APIAKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
> + DEFINE(CPU_APIBKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APIBKEYLO_EL1]));
> + DEFINE(CPU_APDAKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APDAKEYLO_EL1]));
> + DEFINE(CPU_APDBKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APDBKEYLO_EL1]));
> + DEFINE(CPU_APGAKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APGAKEYLO_EL1]));
> + DEFINE(CPU_HCR_EL2, offsetof(struct kvm_cpu_context, hcr_el2));
> DEFINE(CPU_USER_PT_REGS, offsetof(struct kvm_regs, regs));
> DEFINE(HOST_CONTEXT_VCPU, offsetof(struct kvm_cpu_context, __hyp_running_vcpu));
> #endif
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index e2f0268..9f591ad 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -544,3 +544,17 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>
> return ret;
> }
> +
> +/**
> + * kvm_arm_vcpu_ptrauth_setup_lazy - setup lazy ptrauth for vcpu schedule
> + *
> + * @vcpu: The VCPU pointer
> + *
> + * This function may be used to disable ptrauth and use it in a lazy context
> + * via traps.
> + */
> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
> +{
> + if (vcpu_has_ptrauth(vcpu))
> + kvm_arm_vcpu_ptrauth_disable(vcpu);
> +}
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 0b79834..5838ff9 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -30,6 +30,7 @@
> #include <asm/kvm_coproc.h>
> #include <asm/kvm_emulate.h>
> #include <asm/kvm_mmu.h>
> +#include <asm/kvm_ptrauth_asm.h>
> #include <asm/debug-monitors.h>
> #include <asm/traps.h>
>
> @@ -174,19 +175,26 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
> }
>
> /*
> + * Handle the guest trying to use a ptrauth instruction, or trying to access a
> + * ptrauth register.
> + */
> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
> +{
> + if (vcpu_has_ptrauth(vcpu)) {
> + kvm_arm_vcpu_ptrauth_enable(vcpu);
> + __ptrauth_save_state(vcpu->arch.host_cpu_context);
> + } else {
> + kvm_inject_undefined(vcpu);
> + }
> +}
> +
> +/*
> * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
> * a NOP).
> */
> static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
> {
> - /*
> - * We don't currently support ptrauth in a guest, and we mask the ID
> - * registers to prevent well-behaved guests from trying to make use of
> - * it.
> - *
> - * Inject an UNDEF, as if the feature really isn't present.
> - */
> - kvm_inject_undefined(vcpu);
> + kvm_arm_vcpu_ptrauth_trap(vcpu);
> return 1;
> }
>
> diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
> index 675fdc1..3a70213 100644
> --- a/arch/arm64/kvm/hyp/entry.S
> +++ b/arch/arm64/kvm/hyp/entry.S
> @@ -24,6 +24,7 @@
> #include <asm/kvm_arm.h>
> #include <asm/kvm_asm.h>
> #include <asm/kvm_mmu.h>
> +#include <asm/kvm_ptrauth_asm.h>
>
> #define CPU_GP_REG_OFFSET(x) (CPU_GP_REGS + x)
> #define CPU_XREG_OFFSET(x) CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
> @@ -64,6 +65,9 @@ ENTRY(__guest_enter)
>
> add x18, x0, #VCPU_CONTEXT
>
> + // Macro ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3).
> + ptrauth_switch_to_guest x18, x0, x1, x2
> +
> // Restore guest regs x0-x17
> ldp x0, x1, [x18, #CPU_XREG_OFFSET(0)]
> ldp x2, x3, [x18, #CPU_XREG_OFFSET(2)]
> @@ -118,6 +122,9 @@ ENTRY(__guest_exit)
>
> get_host_ctxt x2, x3
>
> + // Macro ptrauth_switch_to_host(guest cxt, host cxt, tmp1, tmp2, tmp3).
> + ptrauth_switch_to_host x1, x2, x3, x4, x5
> +
> // Now restore the host regs
> restore_callee_saved_regs x2
>
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index f16a5f8..00f0639 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -128,6 +128,13 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> if (loaded)
> kvm_arch_vcpu_put(vcpu);
>
> + if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
> + test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
> + /* Verify that KVM startup matches the conditions for ptrauth */
> + if (WARN_ON(!vcpu_has_ptrauth(vcpu)))
> + return -EINVAL;
> + }
> +
> switch (vcpu->arch.target) {
> default:
> if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 75942f6..ed6613e 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1007,6 +1007,38 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> { SYS_DESC(SYS_PMEVTYPERn_EL0(n)), \
> access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
>
> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
> +{
> + vcpu->arch.ctxt.hcr_el2 |= (HCR_API | HCR_APK);
> +}
> +
> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
> +{
> + vcpu->arch.ctxt.hcr_el2 &= ~(HCR_API | HCR_APK);
> +}
> +
> +static bool trap_ptrauth(struct kvm_vcpu *vcpu,
> + struct sys_reg_params *p,
> + const struct sys_reg_desc *rd)
> +{
> + kvm_arm_vcpu_ptrauth_trap(vcpu);
> + return false;
> +}
> +
> +static unsigned int ptrauth_restrictions(const struct kvm_vcpu *vcpu,
> + const struct sys_reg_desc *rd)
> +{
> + return vcpu_has_ptrauth(vcpu) ? 0 : REG_NO_USER | REG_NO_GUEST;
> +}
> +
> +#define __PTRAUTH_KEY(k) \
> + { SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k, \
> + .restrictions = ptrauth_restrictions}
> +
> +#define PTRAUTH_KEY(k) \
> + __PTRAUTH_KEY(k ## KEYLO_EL1), \
> + __PTRAUTH_KEY(k ## KEYHI_EL1)
> +
> static bool access_arch_timer(struct kvm_vcpu *vcpu,
> struct sys_reg_params *p,
> const struct sys_reg_desc *r)
> @@ -1061,9 +1093,11 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
> (0xfUL << ID_AA64ISAR1_API_SHIFT) |
> (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
> (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
> - if (val & ptrauth_mask)
> - kvm_debug("ptrauth unsupported for guests, suppressing\n");
> - val &= ~ptrauth_mask;
> + if (!vcpu_has_ptrauth(vcpu)) {
> + if (val & ptrauth_mask)
> + kvm_debug("ptrauth unsupported for guests, suppressing\n");
> + val &= ~ptrauth_mask;
> + }
> }
>
> return val;
> @@ -1387,6 +1421,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> { SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
> { SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
>
> + PTRAUTH_KEY(APIA),
> + PTRAUTH_KEY(APIB),
> + PTRAUTH_KEY(APDA),
> + PTRAUTH_KEY(APDB),
> + PTRAUTH_KEY(APGA),
> +
> { SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
> { SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
> { SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 58de0ca..50ae066 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -385,6 +385,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> vcpu_clear_wfe_traps(vcpu);
> else
> vcpu_set_wfe_traps(vcpu);
> +
> + kvm_arm_vcpu_ptrauth_setup_lazy(vcpu);
> }
>
> void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>

--
Julien Thierry

2019-03-20 15:07:09

by Kristina Martsenko

[permalink] [raw]
Subject: Re: [PATCH v7 9/10] KVM: arm64: docs: document KVM support of pointer authentication

On 20/03/2019 13:37, Julien Thierry wrote:
> Hi Amit,
>
> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>> This adds sections for KVM API extension for pointer authentication.
>> A brief description about usage of pointer authentication for KVM guests
>> is added in the arm64 documentations.

[...]

>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>> index 7de9eee..b5c66bc 100644
>> --- a/Documentation/virtual/kvm/api.txt
>> +++ b/Documentation/virtual/kvm/api.txt
>> @@ -2659,6 +2659,12 @@ Possible features:
>> Depends on KVM_CAP_ARM_PSCI_0_2.
>> - KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
>> Depends on KVM_CAP_ARM_PMU_V3.
>> + - KVM_ARM_VCPU_PTRAUTH_ADDRESS:
>> + - KVM_ARM_VCPU_PTRAUTH_GENERIC:
>> + Enables Pointer authentication for the CPU.
>> + Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If
>> + set, then the KVM guest allows the execution of pointer authentication
>> + instructions. Otherwise, KVM treats these instructions as undefined.
>>
>
> Overall I feel one could easily get confused as to whether
> PTRAUTH_ADDRESS/GENERIC are two individual features, whether one is a
> superset of the other, whether the names are just aliases of one
> another, etc...
>
> I think the doc should at least stress that *both* flags are
> required to enable ptrauth in a guest. However it raises the question:
> if we don't plan to support the features individually (because we
> can't), should we really expose two feature flags? It seems odd to
> introduce two flags that only do something if used together...

Why can't we support the features individually? For example, if we ever
get a system where all CPUs support address authentication and none of
them support generic authentication, then we could still support address
authentication in the guest.

Thanks,
Kristina

2019-03-20 18:09:34

by Julien Thierry

[permalink] [raw]
Subject: Re: [PATCH v7 9/10] KVM: arm64: docs: document KVM support of pointer authentication



On 20/03/2019 15:04, Kristina Martsenko wrote:
> On 20/03/2019 13:37, Julien Thierry wrote:
>> Hi Amit,
>>
>> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>>> This adds sections for KVM API extension for pointer authentication.
>>> A brief description about usage of pointer authentication for KVM guests
>>> is added in the arm64 documentations.
>
> [...]
>
>>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>>> index 7de9eee..b5c66bc 100644
>>> --- a/Documentation/virtual/kvm/api.txt
>>> +++ b/Documentation/virtual/kvm/api.txt
>>> @@ -2659,6 +2659,12 @@ Possible features:
>>> Depends on KVM_CAP_ARM_PSCI_0_2.
>>> - KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
>>> Depends on KVM_CAP_ARM_PMU_V3.
>>> + - KVM_ARM_VCPU_PTRAUTH_ADDRESS:
>>> + - KVM_ARM_VCPU_PTRAUTH_GENERIC:
>>> + Enables Pointer authentication for the CPU.
>>> + Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If
>>> + set, then the KVM guest allows the execution of pointer authentication
>>> + instructions. Otherwise, KVM treats these instructions as undefined.
>>>
>>
>> Overall I feel one could easily get confused as to whether
>> PTRAUTH_ADDRESS/GENERIC are two individual features, whether one is a
>> superset of the other, whether the names are just aliases of one
>> another, etc...
>>
>> I think the doc should at least stress that *both* flags are
>> required to enable ptrauth in a guest. However it raises the question:
>> if we don't plan to support the features individually (because we
>> can't), should we really expose two feature flags? It seems odd to
>> introduce two flags that only do something if used together...
>
> Why can't we support the features individually? For example, if we ever
> get a system where all CPUs support address authentication and none of
> them support generic authentication, then we could still support address
> authentication in the guest.
>
>

That's a good point, I didn't think of that.

Although, currently we don't have a way to detect that we are in such a
configuration. So as is, both flags are required to enable either
feature, and I feel the documentation should be clear on that aspect.

Another option would be to introduce a flag that enables both for now,
and if one day we decide to support the configuration you mentioned we
could add "more modular" flags that allow you to control those features
individually. While a bit cumbersome, I would find that less awkward
than having two flags that only do something if both are present.

Thanks,

--
Julien Thierry

2019-03-20 20:57:43

by Kristina Martsenko

[permalink] [raw]
Subject: Re: [PATCH v7 9/10] KVM: arm64: docs: document KVM support of pointer authentication

On 20/03/2019 18:06, Julien Thierry wrote:
>
>
> On 20/03/2019 15:04, Kristina Martsenko wrote:
>> On 20/03/2019 13:37, Julien Thierry wrote:
>>> Hi Amit,
>>>
>>> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>>>> This adds sections for KVM API extension for pointer authentication.
>>>> A brief description about usage of pointer authentication for KVM guests
>>>> is added in the arm64 documentations.
>>
>> [...]
>>
>>>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>>>> index 7de9eee..b5c66bc 100644
>>>> --- a/Documentation/virtual/kvm/api.txt
>>>> +++ b/Documentation/virtual/kvm/api.txt
>>>> @@ -2659,6 +2659,12 @@ Possible features:
>>>> Depends on KVM_CAP_ARM_PSCI_0_2.
>>>> - KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
>>>> Depends on KVM_CAP_ARM_PMU_V3.
>>>> + - KVM_ARM_VCPU_PTRAUTH_ADDRESS:
>>>> + - KVM_ARM_VCPU_PTRAUTH_GENERIC:
>>>> + Enables Pointer authentication for the CPU.
>>>> + Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If
>>>> + set, then the KVM guest allows the execution of pointer authentication
>>>> + instructions. Otherwise, KVM treats these instructions as undefined.
>>>>
>>>
>>> Overall I feel one could easily get confused as to whether
>>> PTRAUTH_ADDRESS/GENERIC are two individual features, whether one is a
>>> superset of the other, whether the names are just aliases of one
>>> another, etc...
>>>
>>> I think the doc should at least stress that *both* flags are
>>> required to enable ptrauth in a guest. However it raises the question:
>>> if we don't plan to support the features individually (because we
>>> can't), should we really expose two feature flags? It seems odd to
>>> introduce two flags that only do something if used together...
>>
>> Why can't we support the features individually? For example, if we ever
>> get a system where all CPUs support address authentication and none of
>> them support generic authentication, then we could still support address
>> authentication in the guest.
>>
>>
>
> That's a good point, I didn't think of that.
>
> Although, currently we don't have a way to detect that we are in such a
> configuration. So as is, both flags are required to enable either
> feature, and I feel the documentation should be clear on that aspect.

For now we only support enabling both features together, so both flags
need to be set. I agree that the documentation should be made clear on this.

In the future, if we need to, we can add "negative" cpucaps to detect
that a feature is absent on all CPUs.

>
> Another option would be to introduce a flag that enables both for now,
> and if one day we decide to support the configuration you mentioned we
> could add "more modular" flags that allow you to control those features
> individually. While a bit cumbersome, I would find that less awkward
> than having two flags that only do something if both are present.

That would work too.

I find it more logical to have two flags since there are two features
(two ID register fields), and KVM enables two features for the guest.
The fact that KVM does not currently support enabling them separately is
a KVM implementation choice, and could change in the future.

Thanks,
Kristina

2019-03-21 05:30:07

by Amit Kachhap

[permalink] [raw]
Subject: Re: [PATCH v7 3/10] KVM: arm64: Move hyp_symbol_addr to fix dependency

Hi Julien,

On 3/20/19 2:19 PM, Julien Thierry wrote:
> Hi Amit,
>
> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>> Currently hyp_symbol_addr is placed in kvm_mmu.h which is mostly
>> used by __hyp_this_cpu_ptr in kvm_asm.h but it cannot include
>> kvm_mmu.h directly as kvm_mmu.h uses kvm_ksym_ref which is
>> defined inside kvm_asm.h. Hence, hyp_symbol_addr is moved inside
>> kvm_asm.h to fix this dependency on each other.
>>
>> Also kvm_ksym_ref is corresponding counterpart of hyp_symbol_addr
>> so should be ideally placed inside kvm_asm.h.
>>
>
> This part is a bit confusing, it led me to think that kvm_ksym_ref was
> in kvm_mmu.h and should be moved to kvm_asm.h as well. I'd suggest
> rephrasing it with something along the lines of:
>
> "Also, hyp_symbol_addr's corresponding counterpart, kvm_ksym_ref, is
> already in kvm_asm.h, making it more sensible to move hyp_symbol_addr to
> the same file."
ok, will rephrase.
>
> Otherwise:
>
> Reviewed-by: Julien Thierry <[email protected]>

Thanks,
Amit
>
> Cheers,
>
> Julien
>
>> Suggested-by: James Morse <[email protected]>
>> Signed-off-by: Amit Daniel Kachhap <[email protected]>
>> Cc: Marc Zyngier <[email protected]>
>> Cc: Christoffer Dall <[email protected]>
>> Cc: [email protected]
>> ---
>> arch/arm64/include/asm/kvm_asm.h | 20 ++++++++++++++++++++
>> arch/arm64/include/asm/kvm_mmu.h | 20 --------------------
>> 2 files changed, 20 insertions(+), 20 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
>> index f5b79e9..57a07e8 100644
>> --- a/arch/arm64/include/asm/kvm_asm.h
>> +++ b/arch/arm64/include/asm/kvm_asm.h
>> @@ -80,6 +80,26 @@ extern void __vgic_v3_init_lrs(void);
>>
>> extern u32 __kvm_get_mdcr_el2(void);
>>
>> +/*
>> + * Obtain the PC-relative address of a kernel symbol
>> + * s: symbol
>> + *
>> + * The goal of this macro is to return a symbol's address based on a
>> + * PC-relative computation, as opposed to a loading the VA from a
>> + * constant pool or something similar. This works well for HYP, as an
>> + * absolute VA is guaranteed to be wrong. Only use this if trying to
>> + * obtain the address of a symbol (i.e. not something you obtained by
>> + * following a pointer).
>> + */
>> +#define hyp_symbol_addr(s) \
>> + ({ \
>> + typeof(s) *addr; \
>> + asm("adrp %0, %1\n" \
>> + "add %0, %0, :lo12:%1\n" \
>> + : "=r" (addr) : "S" (&s)); \
>> + addr; \
>> + })
>> +
>> /* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
>> #define __hyp_this_cpu_ptr(sym) \
>> ({ \
>> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
>> index b0742a1..3dea6af 100644
>> --- a/arch/arm64/include/asm/kvm_mmu.h
>> +++ b/arch/arm64/include/asm/kvm_mmu.h
>> @@ -118,26 +118,6 @@ static inline unsigned long __kern_hyp_va(unsigned long v)
>> #define kern_hyp_va(v) ((typeof(v))(__kern_hyp_va((unsigned long)(v))))
>>
>> /*
>> - * Obtain the PC-relative address of a kernel symbol
>> - * s: symbol
>> - *
>> - * The goal of this macro is to return a symbol's address based on a
>> - * PC-relative computation, as opposed to a loading the VA from a
>> - * constant pool or something similar. This works well for HYP, as an
>> - * absolute VA is guaranteed to be wrong. Only use this if trying to
>> - * obtain the address of a symbol (i.e. not something you obtained by
>> - * following a pointer).
>> - */
>> -#define hyp_symbol_addr(s) \
>> - ({ \
>> - typeof(s) *addr; \
>> - asm("adrp %0, %1\n" \
>> - "add %0, %0, :lo12:%1\n" \
>> - : "=r" (addr) : "S" (&s)); \
>> - addr; \
>> - })
>> -
>> -/*
>> * We currently support using a VM-specified IPA size. For backward
>> * compatibility, the default IPA size is fixed to 40bits.
>> */
>>
>

2019-03-21 06:11:07

by Amit Kachhap

[permalink] [raw]
Subject: Re: [PATCH v7 7/10] KVM: arm/arm64: context-switch ptrauth registers

Hi Julien,

On 3/20/19 5:43 PM, Julien Thierry wrote:
> Hi Amit,
>
> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>> From: Mark Rutland <[email protected]>
>>
>> When pointer authentication is supported, a guest may wish to use it.
>> This patch adds the necessary KVM infrastructure for this to work, with
>> a semi-lazy context switch of the pointer auth state.
>>
>> Pointer authentication feature is only enabled when VHE is built
>> in the kernel and present in the CPU implementation so only VHE code
>> paths are modified.
>>
>> When we schedule a vcpu, we disable guest usage of pointer
>> authentication instructions and accesses to the keys. While these are
>> disabled, we avoid context-switching the keys. When we trap the guest
>> trying to use pointer authentication functionality, we change to eagerly
>> context-switching the keys, and enable the feature. The next time the
>> vcpu is scheduled out/in, we start again. However the host key save is
>> optimized and implemented inside ptrauth instruction/register access
>> trap.
>>
>> Pointer authentication consists of address authentication and generic
>> authentication, and CPUs in a system might have varied support for
>> either. Where support for either feature is not uniform, it is hidden
>> from guests via ID register emulation, as a result of the cpufeature
>> framework in the host.
>>
>> Unfortunately, address authentication and generic authentication cannot
>> be trapped separately, as the architecture provides a single EL2 trap
>> covering both. If we wish to expose one without the other, we cannot
>> prevent a (badly-written) guest from intermittently using a feature
>> which is not uniformly supported (when scheduled on a physical CPU which
>> supports the relevant feature). Hence, this patch expects both types of
>> authentication to be present in a cpu.
>>
>> This switch of keys is done from guest enter/exit assembly as preparation
>> for the upcoming in-kernel pointer authentication support. Hence, these
>> key switching routines are not implemented in C code as they may cause
>> pointer authentication key signing error in some situations.
>>
>> Signed-off-by: Mark Rutland <[email protected]>
>> [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks
>> , save host key in ptrauth exception trap]
>> Signed-off-by: Amit Daniel Kachhap <[email protected]>
>> Reviewed-by: Julien Thierry <[email protected]>
>> Cc: Marc Zyngier <[email protected]>
>> Cc: Christoffer Dall <[email protected]>
>> Cc: [email protected]
>> ---
>> arch/arm64/include/asm/kvm_host.h | 17 ++++++
>> arch/arm64/include/asm/kvm_ptrauth_asm.h | 100 +++++++++++++++++++++++++++++++
>> arch/arm64/kernel/asm-offsets.c | 6 ++
>> arch/arm64/kvm/guest.c | 14 +++++
>> arch/arm64/kvm/handle_exit.c | 24 +++++---
>> arch/arm64/kvm/hyp/entry.S | 7 +++
>> arch/arm64/kvm/reset.c | 7 +++
>> arch/arm64/kvm/sys_regs.c | 46 +++++++++++++-
>> virt/kvm/arm/arm.c | 2 +
>> 9 files changed, 212 insertions(+), 11 deletions(-)
>> create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h
>>
[...]
>> +#ifdef CONFIG_ARM64_PTR_AUTH
>> +
>> +#define PTRAUTH_REG_OFFSET(x) (x - CPU_APIAKEYLO_EL1)
>
> I don't really see the point of this macro. You move the pointers of
> kvm_cpu_contexts to point to where the ptr auth registers are (which is
> in the middle of an array) by adding the offset of APIAKEYLO and then we
> have to recompute all offsets with this macro.
>
> Why not just pass the kvm_cpu_context pointers to
> ptrauth_save/restore_state and use the already defined offsets
> (CPU_AP*_EL1) directly?
>
>> I think this would also allow using one less register for the
> ptrauth_switch_to_* macros.
Actually the values of CPU_AP*_EL1 exceed the ldp/stp immediate range
(512), so this was done to keep the immediate offset within range.
The other way would have been to compute the destination address into a
register first, but that would add one more add instruction everywhere.
I should have mentioned this in a comment somewhere.
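
For reference, the resulting pattern is (sketch; ldp/stp with 64-bit
registers encode a 7-bit signed immediate scaled by 8, so the reachable
offsets are only -512..+504 bytes; x18 holds the context pointer as in
__guest_enter):

	add	x1, x18, #CPU_APIAKEYLO_EL1	// rebase to the first key
	// every key is now within imm7 reach of x1:
	ldp	x2, x3, [x1, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]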
>
>> +
>> +.macro ptrauth_save_state base, reg1, reg2
>> + mrs_s \reg1, SYS_APIAKEYLO_EL1
>> + mrs_s \reg2, SYS_APIAKEYHI_EL1
>> + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
>> + mrs_s \reg1, SYS_APIBKEYLO_EL1
>> + mrs_s \reg2, SYS_APIBKEYHI_EL1
>> + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
>> + mrs_s \reg1, SYS_APDAKEYLO_EL1
>> + mrs_s \reg2, SYS_APDAKEYHI_EL1
>> + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
>> + mrs_s \reg1, SYS_APDBKEYLO_EL1
>> + mrs_s \reg2, SYS_APDBKEYHI_EL1
>> + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
>> + mrs_s \reg1, SYS_APGAKEYLO_EL1
>> + mrs_s \reg2, SYS_APGAKEYHI_EL1
>> + stp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
>> +.endm
>> +
>> +.macro ptrauth_restore_state base, reg1, reg2
>> + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
>> + msr_s SYS_APIAKEYLO_EL1, \reg1
>> + msr_s SYS_APIAKEYHI_EL1, \reg2
>> + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1)]
>> + msr_s SYS_APIBKEYLO_EL1, \reg1
>> + msr_s SYS_APIBKEYHI_EL1, \reg2
>> + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDAKEYLO_EL1)]
>> + msr_s SYS_APDAKEYLO_EL1, \reg1
>> + msr_s SYS_APDAKEYHI_EL1, \reg2
>> + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APDBKEYLO_EL1)]
>> + msr_s SYS_APDBKEYLO_EL1, \reg1
>> + msr_s SYS_APDBKEYHI_EL1, \reg2
>> + ldp \reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APGAKEYLO_EL1)]
>> + msr_s SYS_APGAKEYLO_EL1, \reg1
>> + msr_s SYS_APGAKEYHI_EL1, \reg2
>> + .endm
>
> Nit: Bad indentation.
oops.
>
>> +
>> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
>> + ldr \reg1, [\g_ctxt, #CPU_HCR_EL2]
>> + and \reg1, \reg1, #(HCR_API | HCR_APK)
>> + cbz \reg1, skip_switch_to_guest
>> + add \reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
>> + ptrauth_restore_state \reg1, \reg2, \reg3
>
> As mentioned we probably can get rid of one temporary register and
> directly pass g_ctxt here.
>
>> +skip_switch_to_guest:
>
> I believe you should use numbered labels for assembly macros, otherwise
> multiple expansions of the macro will lead to symbol redefinitions.
Yes agreed.

Thanks,
Amit
>
>> +.endm
>> +
>> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
>> + ldr \reg1, [\g_ctxt, #CPU_HCR_EL2]
>> + and \reg1, \reg1, #(HCR_API | HCR_APK)
>> + cbz \reg1, skip_switch_to_host
>> + add \reg1, \g_ctxt, #CPU_APIAKEYLO_EL1
>> + ptrauth_save_state \reg1, \reg2, \reg3
>> + add \reg1, \h_ctxt, #CPU_APIAKEYLO_EL1
>> + ptrauth_restore_state \reg1, \reg2, \reg3
>> + isb
>> +skip_switch_to_host:
>
> Same.
>
> Cheers,
>
> Julien
>
>> +.endm
>> +
>> +#else /* !CONFIG_ARM64_PTR_AUTH */
>> +.macro ptrauth_switch_to_guest g_ctxt, reg1, reg2, reg3
>> +.endm
>> +.macro ptrauth_switch_to_host g_ctxt, h_ctxt, reg1, reg2, reg3
[...]
>>
>

2019-03-21 06:42:17

by Amit Kachhap

[permalink] [raw]
Subject: Re: [PATCH v7 9/10] KVM: arm64: docs: document KVM support of pointer authentication

Hi Julien/Kristina,

On 3/21/19 2:26 AM, Kristina Martsenko wrote:
> On 20/03/2019 18:06, Julien Thierry wrote:
>>
>>
>> On 20/03/2019 15:04, Kristina Martsenko wrote:
>>> On 20/03/2019 13:37, Julien Thierry wrote:
>>>> Hi Amit,
>>>>
>>>> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>>>>> This adds sections for KVM API extension for pointer authentication.
>>>>> A brief description about usage of pointer authentication for KVM guests
>>>>> is added in the arm64 documentations.
>>>
>>> [...]
>>>
>>>>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>>>>> index 7de9eee..b5c66bc 100644
>>>>> --- a/Documentation/virtual/kvm/api.txt
>>>>> +++ b/Documentation/virtual/kvm/api.txt
>>>>> @@ -2659,6 +2659,12 @@ Possible features:
>>>>> Depends on KVM_CAP_ARM_PSCI_0_2.
>>>>> - KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
>>>>> Depends on KVM_CAP_ARM_PMU_V3.
>>>>> + - KVM_ARM_VCPU_PTRAUTH_ADDRESS:
>>>>> + - KVM_ARM_VCPU_PTRAUTH_GENERIC:
>>>>> + Enables Pointer authentication for the CPU.
>>>>> + Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If
>>>>> + set, then the KVM guest allows the execution of pointer authentication
>>>>> + instructions. Otherwise, KVM treats these instructions as undefined.
>>>>>
>>>>
>>>> Overall I feel one could easily get confused as to whether
>>>> PTRAUTH_ADDRESS/GENERIC are two individual features, whether one is a
>>>> superset of the other, whether the names are just aliases of one
>>>> another, etc...
>>>>
>>>> I think the doc should at least stress that *both* flags are
>>>> required to enable ptrauth in a guest. However it raises the question:
>>>> if we don't plan to support the features individually (because we
>>>> can't), should we really expose two feature flags? It seems odd to
>>>> introduce two flags that only do something if used together...
>>>
>>> Why can't we support the features individually? For example, if we ever
>>> get a system where all CPUs support address authentication and none of
>>> them support generic authentication, then we could still support address
>>> authentication in the guest.
>>>
>>>
>>
>> That's a good point, I didn't think of that.
>>
>> Although, currently we don't have a way to detect that we are in such a
>> configuration. So as is, both flags are required to enable either
>> feature, and I feel the documentation should be clear on that aspect.
>
> For now we only support enabling both features together, so both flags
> need to be set. I agree that the documentation should be made clear on this.
>
> In the future, if we need to, we can add "negative" cpucaps to detect
> that a feature is absent on all CPUs.
>
>>
>> Another option would be to introduce a flag that enables both for now,
>> and if one day we decide to support the configuration you mentioned we
>> could add "more modular" flags that allow you to control those features
>> individually. While a bit cumbersome, I would find that less awkward
>> than having two flags that only do something if both are present.
>
> That would work too.
>
> I find it more logical to have two flags since there are two features
> (two ID register fields), and KVM enables two features for the guest.
> The fact that KVM does not currently support enabling them separately is
> a KVM implementation choice, and could change in the future.
Kristina, this comment of yours is actually what this patch series is
trying to do. I should have added more details about the necessity of
keeping two flags; enhancing them individually is future work.

Thanks,
Amit Daniel
>
> Thanks,
> Kristina
>

2019-03-21 08:31:01

by Julien Thierry

[permalink] [raw]
Subject: Re: [PATCH v7 7/10] KVM: arm/arm64: context-switch ptrauth registers



On 21/03/2019 06:08, Amit Daniel Kachhap wrote:
> Hi Julien,
>
> On 3/20/19 5:43 PM, Julien Thierry wrote:
>> Hi Amit,
>>
>> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>>> From: Mark Rutland <[email protected]>
>>>
>>> When pointer authentication is supported, a guest may wish to use it.
>>> This patch adds the necessary KVM infrastructure for this to work, with
>>> a semi-lazy context switch of the pointer auth state.
>>>
>>> Pointer authentication feature is only enabled when VHE is built
>>> in the kernel and present in the CPU implementation so only VHE code
>>> paths are modified.
>>>
>>> When we schedule a vcpu, we disable guest usage of pointer
>>> authentication instructions and accesses to the keys. While these are
>>> disabled, we avoid context-switching the keys. When we trap the guest
>>> trying to use pointer authentication functionality, we change to eagerly
>>> context-switching the keys, and enable the feature. The next time the
>>> vcpu is scheduled out/in, we start again. However the host key save is
>>> optimized and implemented inside ptrauth instruction/register access
>>> trap.
>>>
>>> Pointer authentication consists of address authentication and generic
>>> authentication, and CPUs in a system might have varied support for
>>> either. Where support for either feature is not uniform, it is hidden
>>> from guests via ID register emulation, as a result of the cpufeature
>>> framework in the host.
>>>
>>> Unfortunately, address authentication and generic authentication cannot
>>> be trapped separately, as the architecture provides a single EL2 trap
>>> covering both. If we wish to expose one without the other, we cannot
>>> prevent a (badly-written) guest from intermittently using a feature
>>> which is not uniformly supported (when scheduled on a physical CPU which
>>> supports the relevant feature). Hence, this patch expects both types of
>>> authentication to be present in a cpu.
>>>
>>> This switch of keys is done from guest enter/exit assembly as preparation
>>> for the upcoming in-kernel pointer authentication support. Hence, these
>>> key switching routines are not implemented in C code as they may cause
>>> pointer authentication key signing error in some situations.
>>>
>>> Signed-off-by: Mark Rutland <[email protected]>
>>> [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks
>>> , save host key in ptrauth exception trap]
>>> Signed-off-by: Amit Daniel Kachhap <[email protected]>
>>> Reviewed-by: Julien Thierry <[email protected]>
>>> Cc: Marc Zyngier <[email protected]>
>>> Cc: Christoffer Dall <[email protected]>
>>> Cc: [email protected]
>>> ---
>>>   arch/arm64/include/asm/kvm_host.h        |  17 ++++++
>>>   arch/arm64/include/asm/kvm_ptrauth_asm.h | 100
>>> +++++++++++++++++++++++++++++++
>>>   arch/arm64/kernel/asm-offsets.c          |   6 ++
>>>   arch/arm64/kvm/guest.c                   |  14 +++++
>>>   arch/arm64/kvm/handle_exit.c             |  24 +++++---
>>>   arch/arm64/kvm/hyp/entry.S               |   7 +++
>>>   arch/arm64/kvm/reset.c                   |   7 +++
>>>   arch/arm64/kvm/sys_regs.c                |  46 +++++++++++++-
>>>   virt/kvm/arm/arm.c                       |   2 +
>>>   9 files changed, 212 insertions(+), 11 deletions(-)
>>>   create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h
>>>
> [...]
>>> +#ifdef    CONFIG_ARM64_PTR_AUTH
>>> +
>>> +#define PTRAUTH_REG_OFFSET(x)    (x - CPU_APIAKEYLO_EL1)
>>
>> I don't really see the point of this macro. You move the pointers of
>> kvm_cpu_contexts to point to where the ptr auth registers are (which is
>> in the middle of an array) by adding the offset of APIAKEYLO and then we
>> have to recompute all offsets with this macro.
>>
>> Why not just pass the kvm_cpu_context pointers to
>> ptrauth_save/restore_state and use the already defined offsets
>> (CPU_AP*_EL1) directly?
>>
>> I think this would also allow using one less register for the
>> ptrauth_switch_to_* macros.
> Actually, the values of CPU_AP*_EL1 exceed the immediate offset range
> of ldp/stp (i.e. 512), so this was done to keep the immediate offset
> within range. The other way would have been to compute the destination
> address in a register first, but that would add one more add
> instruction everywhere. I should have mentioned this in a comment
> somewhere.

Oh, I see. Yes, it would definitely be worth a comment.
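
To spell the trick out (a sketch; the offset values below are invented,
only the relationship between them matters):

	#define PTRAUTH_REG_OFFSET(x)	((x) - CPU_APIAKEYLO_EL1)

	/*
	 * Suppose asm-offsets hypothetically generated:
	 *   CPU_APIAKEYLO_EL1 = 1264	(beyond the ldp/stp immediate)
	 *   CPU_APIBKEYLO_EL1 = 1280
	 *
	 * The base register is advanced once by CPU_APIAKEYLO_EL1, after
	 * which PTRAUTH_REG_OFFSET(CPU_APIBKEYLO_EL1) == 16 fits the
	 * signed, 8-scaled 7-bit ldp/stp immediate range of [-512, 504].
	 */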

Thanks,

--
Julien Thierry

2019-03-25 20:05:27

by Kristina Martsenko

[permalink] [raw]
Subject: Re: [PATCH v7 5/10] KVM: arm/arm64: preserve host MDCR_EL2 value

On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
> Save the host MDCR_EL2 value during KVM HYP initialisation and restore
> it after every switch from host to guest. There should not be any
> change in functionality due to this.
>
> The value of mdcr_el2 is now stored in struct kvm_cpu_context as
> both host and guest can now use this field in a common way.
>
> Signed-off-by: Amit Daniel Kachhap <[email protected]>
> Acked-by: Mark Rutland <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: Mark Rutland <[email protected]>
> Cc: Christoffer Dall <[email protected]>
> Cc: [email protected]

[...]

> /**
> - * kvm_arm_init_debug - grab what we need for debug
> - *
> - * Currently the sole task of this function is to retrieve the initial
> - * value of mdcr_el2 so we can preserve MDCR_EL2.HPMN which has
> - * presumably been set-up by some knowledgeable bootcode.
> - *
> - * It is called once per-cpu during CPU hyp initialisation.
> - */
> -
> -void kvm_arm_init_debug(void)
> -{
> - __this_cpu_write(mdcr_el2, kvm_call_hyp_ret(__kvm_get_mdcr_el2));
> -}

The __kvm_get_mdcr_el2 function is no longer used anywhere, so can also
be removed.
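
For what it's worth, the replacement shape then becomes something like
the sketch below (the call site and the field name are assumptions
based on the commit message, not the verbatim patch):

	/* on the host->guest switch path; no hyp call needed at init */
	host_ctxt->mdcr_el2 = read_sysreg(mdcr_el2);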

Thanks,
Kristina

2019-03-25 20:06:11

by Kristina Martsenko

[permalink] [raw]
Subject: Re: [PATCH v7 7/10] KVM: arm/arm64: context-switch ptrauth registers

On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
> From: Mark Rutland <[email protected]>
>
> When pointer authentication is supported, a guest may wish to use it.
> This patch adds the necessary KVM infrastructure for this to work, with
> a semi-lazy context switch of the pointer auth state.
>
> The pointer authentication feature is only enabled when VHE is built
> into the kernel and present in the CPU implementation, so only VHE
> code paths are modified.
>
> When we schedule a vcpu, we disable guest usage of pointer
> authentication instructions and accesses to the keys. While these are
> disabled, we avoid context-switching the keys. When we trap the guest
> trying to use pointer authentication functionality, we change to eagerly
> context-switching the keys, and enable the feature. The next time the
> vcpu is scheduled out/in, we start again. However, the host key save
> is optimized and implemented inside the ptrauth instruction/register
> access trap.
>
> Pointer authentication consists of address authentication and generic
> authentication, and CPUs in a system might have varied support for
> either. Where support for either feature is not uniform, it is hidden
> from guests via ID register emulation, as a result of the cpufeature
> framework in the host.
>
> Unfortunately, address authentication and generic authentication cannot
> be trapped separately, as the architecture provides a single EL2 trap
> covering both. If we wish to expose one without the other, we cannot
> prevent a (badly-written) guest from intermittently using a feature
> which is not uniformly supported (when scheduled on a physical CPU which
> supports the relevant feature). Hence, this patch expects both types
> of authentication to be present in a CPU.
>
> This key switch is done from the guest enter/exit assembly as
> preparation for the upcoming in-kernel pointer authentication support.
> Hence, these key switching routines are not implemented in C code, as
> they may cause pointer authentication key signing errors in some
> situations.
>
> Signed-off-by: Mark Rutland <[email protected]>
> [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks,
> save host key in ptrauth exception trap]
> Signed-off-by: Amit Daniel Kachhap <[email protected]>
> Reviewed-by: Julien Thierry <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: Christoffer Dall <[email protected]>
> Cc: [email protected]

[...]

> +/* SPDX-License-Identifier: GPL-2.0
> + * arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
> + * Copyright 2019 Arm Limited
> + * Author: Mark Rutland <[email protected]>
> + * Amit Daniel Kachhap <[email protected]>
> + */

I think the license needs to be in its own comment, like

/* SPDX-License-Identifier: GPL-2.0 */
/* arch/arm64/include/asm/kvm_ptrauth_asm.h: ...
* ...
*/

> +
> +#ifndef __ASM_KVM_ASM_PTRAUTH_H
> +#define __ASM_KVM_ASM_PTRAUTH_H

__ASM_KVM_PTRAUTH_ASM_H ? (to match the file name)

> +	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
> +	    test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
> +		/* Verify that KVM startup matches the conditions for ptrauth */
> +		if (WARN_ON(!vcpu_has_ptrauth(vcpu)))
> +			return -EINVAL;
> +	}

I think this now needs to have "goto out;" instead of "return -EINVAL;",
since 5.1-rcX contains commit e761a927bc9a ("KVM: arm/arm64: Reset the
VCPU without preemption and vcpu state loaded") which changed some of
this code.
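
That is, roughly (a sketch against 5.1-rc; the ret/out plumbing comes
from commit e761a927bc9a, so the exact variable and label names may
differ):

	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
	    test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
		/* Verify that KVM startup matches the conditions for ptrauth */
		if (WARN_ON(!vcpu_has_ptrauth(vcpu))) {
			ret = -EINVAL;
			goto out;
		}
	}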

> @@ -385,6 +385,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> 		vcpu_clear_wfe_traps(vcpu);
> 	else
> 		vcpu_set_wfe_traps(vcpu);
> +
> +	kvm_arm_vcpu_ptrauth_setup_lazy(vcpu);

This version of the series seems to have lost the arch/arm/ definition
of kvm_arm_vcpu_ptrauth_setup_lazy (previously
kvm_arm_vcpu_ptrauth_reset), so KVM no longer compiles for arch/arm/ :(

Thanks,
Kristina

2019-03-25 20:06:56

by Kristina Martsenko

[permalink] [raw]
Subject: Re: [PATCH v7 9/10] KVM: arm64: docs: document KVM support of pointer authentication

On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
> This adds sections for the KVM API extension for pointer
> authentication. A brief description of the usage of pointer
> authentication for KVM guests is added to the arm64 documentation.
>
> Signed-off-by: Amit Daniel Kachhap <[email protected]>
> Cc: Mark Rutland <[email protected]>
> Cc: Christoffer Dall <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: [email protected]

I think it makes sense to also update the Kconfig symbol description for
CONFIG_ARM64_PTR_AUTH, since it currently only mentions userspace
support, but now the option also enables KVM guest support.

It's also worth mentioning that CONFIG_ARM64_VHE=y is required for guest
support.

Thanks,
Kristina

2019-03-25 20:07:54

by Kristina Martsenko

[permalink] [raw]
Subject: Re: [PATCH v7 8/10] KVM: arm64: Add capability to advertise ptrauth for guest

On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
> This patch advertises the capability of pointer authentication
> when the system supports pointer authentication and VHE mode is
> present.
>
> Signed-off-by: Amit Daniel Kachhap <[email protected]>
> Cc: Mark Rutland <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: Christoffer Dall <[email protected]>
> Cc: [email protected]
> ---
> arch/arm64/kvm/reset.c | 4 ++++
> include/uapi/linux/kvm.h | 1 +
> 2 files changed, 5 insertions(+)
>
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index 00f0639..a3b269e 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -92,6 +92,10 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> 	case KVM_CAP_ARM_VM_IPA_SIZE:
> 		r = kvm_ipa_limit;
> 		break;
> +	case KVM_CAP_ARM_PTRAUTH:
> +		r = has_vhe() && system_supports_address_auth() &&
> +			system_supports_generic_auth();
> +		break;
> 	default:
> 		r = 0;
> 	}
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 6d4ea4b..a553477 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -988,6 +988,7 @@ struct kvm_ppc_resize_hpt {
> #define KVM_CAP_ARM_VM_IPA_SIZE 165
> #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
> #define KVM_CAP_HYPERV_CPUID 167
> +#define KVM_CAP_ARM_PTRAUTH 168

Since we now have two separate vcpu flags, then I think we also need two
capabilities here (one for address auth and one for generic auth). This
will allow us to support the features separately in the future if we
need to.
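
For instance, a sketch of the split (the capability names and numbers
here are illustrative, not final):

	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
		r = has_vhe() && system_supports_address_auth();
		break;
	case KVM_CAP_ARM_PTRAUTH_GENERIC:
		r = has_vhe() && system_supports_generic_auth();
		break;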

Thanks,
Kristina

2019-03-26 03:56:38

by Amit Kachhap

[permalink] [raw]
Subject: Re: [PATCH v7 5/10] KVM: arm/arm64: preserve host MDCR_EL2 value

Hi,

On 3/26/19 1:34 AM, Kristina Martsenko wrote:
> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>> Save the host MDCR_EL2 value during KVM HYP initialisation and restore
>> it after every switch from host to guest. There should not be any
>> change in functionality due to this.
>>
>> The value of mdcr_el2 is now stored in struct kvm_cpu_context as
>> both host and guest can now use this field in a common way.
>>
>> Signed-off-by: Amit Daniel Kachhap <[email protected]>
>> Acked-by: Mark Rutland <[email protected]>
>> Cc: Marc Zyngier <[email protected]>
>> Cc: Mark Rutland <[email protected]>
>> Cc: Christoffer Dall <[email protected]>
>> Cc: [email protected]
>
> [...]
>
>> /**
>> - * kvm_arm_init_debug - grab what we need for debug
>> - *
>> - * Currently the sole task of this function is to retrieve the initial
>> - * value of mdcr_el2 so we can preserve MDCR_EL2.HPMN which has
>> - * presumably been set-up by some knowledgeable bootcode.
>> - *
>> - * It is called once per-cpu during CPU hyp initialisation.
>> - */
>> -
>> -void kvm_arm_init_debug(void)
>> -{
>> - __this_cpu_write(mdcr_el2, kvm_call_hyp_ret(__kvm_get_mdcr_el2));
>> -}
>
> The __kvm_get_mdcr_el2 function is no longer used anywhere, so can also
> be removed.
Nice catch. Will do.

Thanks,
Amit D
>
> Thanks,
> Kristina
>

2019-03-26 04:06:31

by Amit Kachhap

[permalink] [raw]
Subject: Re: [PATCH v7 7/10] KVM: arm/arm64: context-switch ptrauth registers

Hi,

On 3/26/19 1:34 AM, Kristina Martsenko wrote:
> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>> From: Mark Rutland <[email protected]>
>>
>> When pointer authentication is supported, a guest may wish to use it.
>> This patch adds the necessary KVM infrastructure for this to work, with
>> a semi-lazy context switch of the pointer auth state.
>>
>> The pointer authentication feature is only enabled when VHE is built
>> into the kernel and present in the CPU implementation, so only VHE
>> code paths are modified.
>>
>> When we schedule a vcpu, we disable guest usage of pointer
>> authentication instructions and accesses to the keys. While these are
>> disabled, we avoid context-switching the keys. When we trap the guest
>> trying to use pointer authentication functionality, we change to eagerly
>> context-switching the keys, and enable the feature. The next time the
>> vcpu is scheduled out/in, we start again. However, the host key save
>> is optimized and implemented inside the ptrauth instruction/register
>> access trap.
>>
>> Pointer authentication consists of address authentication and generic
>> authentication, and CPUs in a system might have varied support for
>> either. Where support for either feature is not uniform, it is hidden
>> from guests via ID register emulation, as a result of the cpufeature
>> framework in the host.
>>
>> Unfortunately, address authentication and generic authentication cannot
>> be trapped separately, as the architecture provides a single EL2 trap
>> covering both. If we wish to expose one without the other, we cannot
>> prevent a (badly-written) guest from intermittently using a feature
>> which is not uniformly supported (when scheduled on a physical CPU which
>> supports the relevant feature). Hence, this patch expects both types
>> of authentication to be present in a CPU.
>>
>> This key switch is done from the guest enter/exit assembly as
>> preparation for the upcoming in-kernel pointer authentication support.
>> Hence, these key switching routines are not implemented in C code, as
>> they may cause pointer authentication key signing errors in some
>> situations.
>>
>> Signed-off-by: Mark Rutland <[email protected]>
>> [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks,
>> save host key in ptrauth exception trap]
>> Signed-off-by: Amit Daniel Kachhap <[email protected]>
>> Reviewed-by: Julien Thierry <[email protected]>
>> Cc: Marc Zyngier <[email protected]>
>> Cc: Christoffer Dall <[email protected]>
>> Cc: [email protected]
>
> [...]
>
>> +/* SPDX-License-Identifier: GPL-2.0
>> + * arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
>> + * Copyright 2019 Arm Limited
>> + * Author: Mark Rutland <[email protected]>
>> + * Amit Daniel Kachhap <[email protected]>
>> + */
>
> I think the license needs to be in its own comment, like
>
> /* SPDX-License-Identifier: GPL-2.0 */
Yes, this is indeed the format followed.
> /* arch/arm64/include/asm/kvm_ptrauth_asm.h: ...
> * ...
> */
>
>> +
>> +#ifndef __ASM_KVM_ASM_PTRAUTH_H
>> +#define __ASM_KVM_ASM_PTRAUTH_H
>
> __ASM_KVM_PTRAUTH_ASM_H ? (to match the file name)
>
>> +	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
>> +	    test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
>> +		/* Verify that KVM startup matches the conditions for ptrauth */
>> +		if (WARN_ON(!vcpu_has_ptrauth(vcpu)))
>> +			return -EINVAL;
>> +	}
>
> I think this now needs to have "goto out;" instead of "return -EINVAL;",
> since 5.1-rcX contains commit e761a927bc9a ("KVM: arm/arm64: Reset the
> VCPU without preemption and vcpu state loaded") which changed some of
> this code.
OK, I missed the changes from this commit.
>
>> @@ -385,6 +385,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>> 		vcpu_clear_wfe_traps(vcpu);
>> 	else
>> 		vcpu_set_wfe_traps(vcpu);
>> +
>> +	kvm_arm_vcpu_ptrauth_setup_lazy(vcpu);
>
> This version of the series seems to have lost the arch/arm/ definition
> of kvm_arm_vcpu_ptrauth_setup_lazy (previously
> kvm_arm_vcpu_ptrauth_reset), so KVM no longer compiles for arch/arm/ :(

OK, my bad.

Thanks,
Amit D
>
> Thanks,
> Kristina
>

2019-03-26 04:13:04

by Amit Kachhap

[permalink] [raw]
Subject: Re: [PATCH v7 8/10] KVM: arm64: Add capability to advertise ptrauth for guest

Hi,

On 3/26/19 1:35 AM, Kristina Martsenko wrote:
> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>> This patch advertises the capability of pointer authentication
>> when the system supports pointer authentication and VHE mode is
>> present.
>>
>> Signed-off-by: Amit Daniel Kachhap <[email protected]>
>> Cc: Mark Rutland <[email protected]>
>> Cc: Marc Zyngier <[email protected]>
>> Cc: Christoffer Dall <[email protected]>
>> Cc: [email protected]
>> ---
>> arch/arm64/kvm/reset.c | 4 ++++
>> include/uapi/linux/kvm.h | 1 +
>> 2 files changed, 5 insertions(+)
>>
>> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
>> index 00f0639..a3b269e 100644
>> --- a/arch/arm64/kvm/reset.c
>> +++ b/arch/arm64/kvm/reset.c
>> @@ -92,6 +92,10 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>> 	case KVM_CAP_ARM_VM_IPA_SIZE:
>> 		r = kvm_ipa_limit;
>> 		break;
>> +	case KVM_CAP_ARM_PTRAUTH:
>> +		r = has_vhe() && system_supports_address_auth() &&
>> +			system_supports_generic_auth();
>> +		break;
>> 	default:
>> 		r = 0;
>> 	}
>> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
>> index 6d4ea4b..a553477 100644
>> --- a/include/uapi/linux/kvm.h
>> +++ b/include/uapi/linux/kvm.h
>> @@ -988,6 +988,7 @@ struct kvm_ppc_resize_hpt {
>> #define KVM_CAP_ARM_VM_IPA_SIZE 165
>> #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
>> #define KVM_CAP_HYPERV_CPUID 167
>> +#define KVM_CAP_ARM_PTRAUTH 168
>
> Since we now have two separate vcpu flags, then I think we also need two
> capabilities here (one for address auth and one for generic auth). This
> will allow us to support the features separately in the future if we
> need to.
I have no objection to your suggestion. In fact, all the other
KVM_ARM_VCPU_* features have their own capability defined. I will check
whether other architectures define separate capabilities for major/minor
features.

Thanks,
Amit D
>
> Thanks,
> Kristina
>

2019-03-26 18:03:52

by Kristina Martsenko

[permalink] [raw]
Subject: Re: [PATCH v7 7/10] KVM: arm/arm64: context-switch ptrauth registers

On 26/03/2019 04:03, Amit Daniel Kachhap wrote:
> Hi,
>
> On 3/26/19 1:34 AM, Kristina Martsenko wrote:
>> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>>> From: Mark Rutland <[email protected]>
>>>
>>> When pointer authentication is supported, a guest may wish to use it.
>>> This patch adds the necessary KVM infrastructure for this to work, with
>>> a semi-lazy context switch of the pointer auth state.
>>
>> [...]
>>
>>> +    if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
>>> +        test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
>>> +        /* Verify that KVM startup matches the conditions for ptrauth */
>>> +        if (WARN_ON(!vcpu_has_ptrauth(vcpu)))
>>> +            return -EINVAL;
>>> +    }
>>
>> I think this now needs to have "goto out;" instead of "return -EINVAL;",
>> since 5.1-rcX contains commit e761a927bc9a ("KVM: arm/arm64: Reset the
>> VCPU without preemption and vcpu state loaded") which changed some of
>> this code.
> OK, I missed the changes from this commit.

One more thing - I think the WARN_ON() here should be removed. Otherwise
if panic_on_warn is set then userspace can panic the kernel. I think
WARN_ON is only for internal kernel errors (see comment in
include/asm-generic/bug.h).
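
In other words, the check would reduce to something like (sketch):

	if (!vcpu_has_ptrauth(vcpu)) {
		/* no WARN: this is a userspace error, not a kernel bug */
		ret = -EINVAL;
		goto out;
	}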

Thanks,
Kristina

2019-03-27 03:22:09

by Amit Kachhap

[permalink] [raw]
Subject: Re: [PATCH v7 7/10] KVM: arm/arm64: context-switch ptrauth registers



On 3/26/19 11:31 PM, Kristina Martsenko wrote:
> On 26/03/2019 04:03, Amit Daniel Kachhap wrote:
>> Hi,
>>
>> On 3/26/19 1:34 AM, Kristina Martsenko wrote:
>>> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>>>> From: Mark Rutland <[email protected]>
>>>>
>>>> When pointer authentication is supported, a guest may wish to use it.
>>>> This patch adds the necessary KVM infrastructure for this to work, with
>>>> a semi-lazy context switch of the pointer auth state.
>>>
>>> [...]
>>>
>>>> +    if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
>>>> +        test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
>>>> +        /* Verify that KVM startup matches the conditions for ptrauth */
>>>> +        if (WARN_ON(!vcpu_has_ptrauth(vcpu)))
>>>> +            return -EINVAL;
>>>> +    }
>>>
>>> I think this now needs to have "goto out;" instead of "return -EINVAL;",
>>> since 5.1-rcX contains commit e761a927bc9a ("KVM: arm/arm64: Reset the
>>> VCPU without preemption and vcpu state loaded") which changed some of
>>> this code.
>> OK, I missed the changes from this commit.
>
> One more thing - I think the WARN_ON() here should be removed. Otherwise
> if panic_on_warn is set then userspace can panic the kernel. I think
> WARN_ON is only for internal kernel errors (see comment in
> include/asm-generic/bug.h).

The documentation makes sense, so in this case a pr_err-like message
will suffice. BTW, there is one WARN in the function kvm_set_ipa_limit
in the same file.

Thanks,
Amit D
>
> Thanks,
> Kristina
>

2019-03-27 10:45:34

by Dave Martin

[permalink] [raw]
Subject: Re: [PATCH v7 9/10] KVM: arm64: docs: document KVM support of pointer authentication

On Mon, Mar 25, 2019 at 08:05:49PM +0000, Kristina Martsenko wrote:
> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
> > This adds sections for the KVM API extension for pointer
> > authentication. A brief description of the usage of pointer
> > authentication for KVM guests is added to the arm64 documentation.
> >
> > Signed-off-by: Amit Daniel Kachhap <[email protected]>
> > Cc: Mark Rutland <[email protected]>
> > Cc: Christoffer Dall <[email protected]>
> > Cc: Marc Zyngier <[email protected]>
> > Cc: [email protected]
>
> I think it makes sense to also update the Kconfig symbol description for
> CONFIG_ARM64_PTR_AUTH, since it currently only mentions userspace
> support, but now the option also enables KVM guest support.
>
> It's also worth mentioning that CONFIG_ARM64_VHE=y is required for guest
> support.

Is it worth making this dependency explicit in Kconfig?

For SVE, I have

config ARM64_SVE
	depends on !KVM || ARM64_VHE

to implement this.

Cheers
---Dave

2019-03-27 11:52:07

by Amit Kachhap

[permalink] [raw]
Subject: Re: [PATCH v7 9/10] KVM: arm64: docs: document KVM support of pointer authentication

Hi,

On 3/27/19 4:14 PM, Dave Martin wrote:
> On Mon, Mar 25, 2019 at 08:05:49PM +0000, Kristina Martsenko wrote:
>> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>>> This adds sections for the KVM API extension for pointer
>>> authentication. A brief description of the usage of pointer
>>> authentication for KVM guests is added to the arm64 documentation.
>>>
>>> Signed-off-by: Amit Daniel Kachhap <[email protected]>
>>> Cc: Mark Rutland <[email protected]>
>>> Cc: Christoffer Dall <[email protected]>
>>> Cc: Marc Zyngier <[email protected]>
>>> Cc: [email protected]
>>
>> I think it makes sense to also update the Kconfig symbol description for
>> CONFIG_ARM64_PTR_AUTH, since it currently only mentions userspace
>> support, but now the option also enables KVM guest support.
>>
>> It's also worth mentioning that CONFIG_ARM64_VHE=y is required for guest
>> support.
>
> Is it worth making this dependency explicit in Kconfig?
Currently there is a discrepancy: userspace supports ptrauth in both
nVHE and VHE modes, while KVM guest support is VHE-only. I suppose
adding an explicit dependency here would make the two consistent.

Thanks,
Amit D
>
> For SVE, I have
>
> config ARM64_SVE
> 	depends on !KVM || ARM64_VHE
>
> to implement this.
>
> Cheers
> ---Dave
>

2019-03-27 13:52:05

by Dave Martin

[permalink] [raw]
Subject: Re: [PATCH v7 9/10] KVM: arm64: docs: document KVM support of pointer authentication

On Wed, Mar 27, 2019 at 05:19:28PM +0530, Amit Daniel Kachhap wrote:
> Hi,
>
> On 3/27/19 4:14 PM, Dave Martin wrote:
> >On Mon, Mar 25, 2019 at 08:05:49PM +0000, Kristina Martsenko wrote:
> >>On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
> >>>This adds sections for the KVM API extension for pointer
> >>>authentication. A brief description of the usage of pointer
> >>>authentication for KVM guests is added to the arm64 documentation.
> >>>
> >>>Signed-off-by: Amit Daniel Kachhap <[email protected]>
> >>>Cc: Mark Rutland <[email protected]>
> >>>Cc: Christoffer Dall <[email protected]>
> >>>Cc: Marc Zyngier <[email protected]>
> >>>Cc: [email protected]
> >>
> >>I think it makes sense to also update the Kconfig symbol description for
> >>CONFIG_ARM64_PTR_AUTH, since it currently only mentions userspace
> >>support, but now the option also enables KVM guest support.
> >>
> >>It's also worth mentioning that CONFIG_ARM64_VHE=y is required for guest
> >>support.
> >
> >Is it worth making this dependency explicit in Kconfig?
> Currently there is a discrepancy: userspace supports ptrauth in both
> nVHE and VHE modes, while KVM guest support is VHE-only. I suppose
> adding an explicit dependency here would make the two consistent.

Looking at the history, for SVE this Kconfig restriction has always been
present. Since ptrauth was initially upstreamed without an equivalent
restriction in Kconfig, adding one now could be seen as a regression.

So, maybe it's not worth it here.

You could add a separate option, say

config ARM64_PTR_AUTH_KVM
	bool "Pointer authentication support for KVM guests"
	default y
	depends on ARM64_PTR_AUTH && ARM64_VHE && KVM

...but that may be overkill.

Cheers,
---Dave

2019-03-27 18:58:37

by James Morse

[permalink] [raw]
Subject: Re: [PATCH v7 7/10] KVM: arm/arm64: context-switch ptrauth registers

Hi Amit, Kristina,

On 27/03/2019 03:21, Amit Daniel Kachhap wrote:
> On 3/26/19 11:31 PM, Kristina Martsenko wrote:
>> On 26/03/2019 04:03, Amit Daniel Kachhap wrote:
>>> On 3/26/19 1:34 AM, Kristina Martsenko wrote:
>>>> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>>>>> From: Mark Rutland <[email protected]>
>>>>>
>>>>> When pointer authentication is supported, a guest may wish to use it.
>>>>> This patch adds the necessary KVM infrastructure for this to work, with
>>>>> a semi-lazy context switch of the pointer auth state.

>>>>> +    if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
>>>>> +        test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
>>>>> +        /* Verify that KVM startup matches the conditions for ptrauth */
>>>>> +        if (WARN_ON(!vcpu_has_ptrauth(vcpu)))
>>>>> +            return -EINVAL;
>>>>> +    }

>> One more thing - I think the WARN_ON() here should be removed. Otherwise
>> if panic_on_warn is set then userspace can panic the kernel. I think
>> WARN_ON is only for internal kernel errors (see comment in
>> include/asm-generic/bug.h).
>
> The documentation makes sense, so in this case a pr_err-like message will suffice.

(could it be a kvm_debug() at most?)

Do we need to print anything at all? User-space asked us for something we can't do.
Filling up the kernel log with user-space's mistakes makes it harder to debug the kernel
when something goes wrong.

kvm_arm_pmu_v3_init() returns -ENODEV if you ask it for the PMU and the platform can't
support it. Isn't the returned error enough?


> BTW, there is one WARN in the function kvm_set_ipa_limit in the same file.

That is called once via kvm_arch_init(), so it can't be triggered repeatedly from user-space.


Thanks,

James

2019-03-28 10:14:13

by Amit Kachhap

[permalink] [raw]
Subject: Re: [PATCH v7 9/10] KVM: arm64: docs: document KVM support of pointer authentication

Hi Dave,

On 3/27/19 7:20 PM, Dave Martin wrote:
> On Wed, Mar 27, 2019 at 05:19:28PM +0530, Amit Daniel Kachhap wrote:
>> Hi,
>>
>> On 3/27/19 4:14 PM, Dave Martin wrote:
>>> On Mon, Mar 25, 2019 at 08:05:49PM +0000, Kristina Martsenko wrote:
>>>> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>>>>> This adds sections for the KVM API extension for pointer
>>>>> authentication. A brief description of the usage of pointer
>>>>> authentication for KVM guests is added to the arm64 documentation.
>>>>>
>>>>> Signed-off-by: Amit Daniel Kachhap <[email protected]>
>>>>> Cc: Mark Rutland <[email protected]>
>>>>> Cc: Christoffer Dall <[email protected]>
>>>>> Cc: Marc Zyngier <[email protected]>
>>>>> Cc: [email protected]
>>>>
>>>> I think it makes sense to also update the Kconfig symbol description for
>>>> CONFIG_ARM64_PTR_AUTH, since it currently only mentions userspace
>>>> support, but now the option also enables KVM guest support.
>>>>
>>>> It's also worth mentioning that CONFIG_ARM64_VHE=y is required for guest
>>>> support.
>>>
>>> Is it worth making this dependency explicit in Kconfig?
>> Currently there is a discrepancy: userspace supports ptrauth in both
>> nVHE and VHE modes, while KVM guest support is VHE-only. I suppose
>> adding an explicit dependency here would make the two consistent.
>
> Looking at the history, for SVE this Kconfig restriction has always been
> present. Since ptrauth was initially upstreamed without an equivalent
> restriction in Kconfig, adding one now could be seen as a regression.
OK, it makes sense to keep it this way.
>
> So, maybe it's not worth it here.
>
> You could add a separate option, say
>
> config ARM64_PTR_AUTH_KVM
> 	bool "Pointer authentication support for KVM guests"
> 	default y
> 	depends on ARM64_PTR_AUTH && ARM64_VHE && KVM
>
> ...but that may be overkill.
I was thinking of adding some details about this in the virtualization
section of Documentation/arm64/pointer-authentication.txt, along with a
brief comment in arch/arm64/Kconfig.

Thanks,
Amit D
>
> Cheers,
> ---Dave
>

2019-03-28 11:30:14

by Amit Kachhap

[permalink] [raw]
Subject: Re: [PATCH v7 7/10] KVM: arm/arm64: context-switch ptrauth registers


Hi James,
On 3/27/19 11:46 PM, James Morse wrote:
> Hi Amit, Kristina,
>
> On 27/03/2019 03:21, Amit Daniel Kachhap wrote:
>> On 3/26/19 11:31 PM, Kristina Martsenko wrote:
>>> On 26/03/2019 04:03, Amit Daniel Kachhap wrote:
>>>> On 3/26/19 1:34 AM, Kristina Martsenko wrote:
>>>>> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>>>>>> From: Mark Rutland <[email protected]>
>>>>>>
>>>>>> When pointer authentication is supported, a guest may wish to use it.
>>>>>> This patch adds the necessary KVM infrastructure for this to work, with
>>>>>> a semi-lazy context switch of the pointer auth state.
>
>>>>>> +    if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
>>>>>> +        test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
>>>>>> +        /* Verify that KVM startup matches the conditions for ptrauth */
>>>>>> +        if (WARN_ON(!vcpu_has_ptrauth(vcpu)))
>>>>>> +            return -EINVAL;
>>>>>> +    }
>
>>> One more thing - I think the WARN_ON() here should be removed. Otherwise
>>> if panic_on_warn is set then userspace can panic the kernel. I think
>>> WARN_ON is only for internal kernel errors (see comment in
>>> include/asm-generic/bug.h).
>>
>> The documentation makes sense, so in this case a pr_err-like message will suffice.
>
> (could it be a kvm_debug() at most?)
>
> Do we need to print anything at all? User-space asked us for something we can't do.
> Filling up the kernel log with user-space's mistakes makes it harder to debug the kernel
> when something goes wrong.
>
> kvm_arm_pmu_v3_init() returns -ENODEV if you ask it for the PMU and the platform can't
> support it. Isn't the returned error enough?
Yes, agreed that most of the time user-invoked functions just return an
error value on failure. Thanks for the details.

Regards,
Amit D
>
>
>> BTW, there is one WARN in the function kvm_set_ipa_limit in the same file.
>
> That is called once via kvm_arch_init(), so it can't be triggered repeatedly from user-space.
>
>
> Thanks,
>
> James
>

2019-03-28 18:52:59

by James Morse

[permalink] [raw]
Subject: Re: [PATCH v7 7/10] KVM: arm/arm64: context-switch ptrauth registers

Hi Amit,

On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
> From: Mark Rutland <[email protected]>
>
> When pointer authentication is supported, a guest may wish to use it.
> This patch adds the necessary KVM infrastructure for this to work, with
> a semi-lazy context switch of the pointer auth state.
>
> The pointer authentication feature is only enabled when VHE is built
> into the kernel and present in the CPU implementation, so only VHE
> code paths are modified.
>
> When we schedule a vcpu, we disable guest usage of pointer
> authentication instructions and accesses to the keys. While these are
> disabled, we avoid context-switching the keys. When we trap the guest
> trying to use pointer authentication functionality, we change to eagerly
> context-switching the keys, and enable the feature. The next time the
> vcpu is scheduled out/in, we start again. However, the host key save
> is optimized and implemented inside the ptrauth instruction/register
> access trap.
>
> Pointer authentication consists of address authentication and generic
> authentication, and CPUs in a system might have varied support for
> either. Where support for either feature is not uniform, it is hidden
> from guests via ID register emulation, as a result of the cpufeature
> framework in the host.
>
> Unfortunately, address authentication and generic authentication cannot
> be trapped separately, as the architecture provides a single EL2 trap
> covering both. If we wish to expose one without the other, we cannot
> prevent a (badly-written) guest from intermittently using a feature
> which is not uniformly supported (when scheduled on a physical CPU which
> supports the relevant feature). Hence, this patch expects both types
> of authentication to be present in a CPU.
>
> This key switch is done from the guest enter/exit assembly as
> preparation for the upcoming in-kernel pointer authentication support.
> Hence, these key switching routines are not implemented in C code, as
> they may cause pointer authentication key signing errors in some
> situations.


> diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
> new file mode 100644
> index 0000000..97bb040
> --- /dev/null
> +++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h

> +.macro ptrauth_save_state base, reg1, reg2
> +	mrs_s	\reg1, SYS_APIAKEYLO_EL1
> +	mrs_s	\reg2, SYS_APIAKEYHI_EL1
> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]

A nice-to-have:
Because you only use the 'LO' asm-offset definitions here, we have to just assume that the
'HI' ones are adjacent. Is it possible to add a BUILD_BUG() somewhere (probably in
asm-offsets.c) to check this? As it's just an enum we'd expect the order to be arbitrary,
and asm-offsets to take care of any re-ordering... (struct randomisation may one day come
to enums!)


> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index e2f0268..9f591ad 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -544,3 +544,17 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>
> return ret;
> }
> +
> +/**
> + * kvm_arm_vcpu_ptrauth_setup_lazy - setup lazy ptrauth for vcpu schedule
> + *
> + * @vcpu: The VCPU pointer
> + *
> + * This function may be used to disable ptrauth and use it in a lazy context
> + * via traps.
> + */
> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
> +{
> +	if (vcpu_has_ptrauth(vcpu))
> +		kvm_arm_vcpu_ptrauth_disable(vcpu);
> +}

Can you check my reasoning here, to make sure I've understood this properly?!:

This clears the API/APK bits to enable the traps, if the system supports ptrauth, and if
Qemu requested it for this vcpu. If any of those things aren't true, the guest gets
HCR_GUEST_FLAGS, which also has the API/APK bits clear...

What this is doing is clearing the API/APK bits that may have been left set by a previous
run of a ptrauth-enabled vcpu.
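
For reference, the enable/disable helpers amount to toggling those HCR
bits, something like the sketch below (consistent with the commit
message rather than quoted from the patch; where hcr_el2 lives in the
vcpu structures varies across versions of this series):

	static inline void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
	{
		/* stop trapping ptrauth instructions and key accesses */
		vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
	}

	static inline void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
	{
		/* re-arm the traps for the next lazy enable */
		vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
	}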



> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index f16a5f8..00f0639 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -128,6 +128,13 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> 	if (loaded)
> 		kvm_arch_vcpu_put(vcpu);
>
> +	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
> +	    test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
> +		/* Verify that KVM startup matches the conditions for ptrauth */
> +		if (WARN_ON(!vcpu_has_ptrauth(vcpu)))
> +			return -EINVAL;
> +	}

Could this hunk go in the previous patch that added the vcpu definitions?
It would be good if the uapi defines, the documentation and this
validation logic came as one patch; it makes future archaeology easier!


Looks good!

Thanks,

James

2019-03-29 05:55:54

by Amit Kachhap

[permalink] [raw]
Subject: Re: [PATCH v7 7/10] KVM: arm/arm64: context-switch ptrauth registers

Hi James,

On 3/29/19 12:21 AM, James Morse wrote:
> Hi Amit,
>
> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>> From: Mark Rutland <[email protected]>
>>
>> When pointer authentication is supported, a guest may wish to use it.
>> This patch adds the necessary KVM infrastructure for this to work, with
>> a semi-lazy context switch of the pointer auth state.
>>
>> The pointer authentication feature is only enabled when VHE is built
>> into the kernel and present in the CPU implementation, so only VHE
>> code paths are modified.
>>
>> When we schedule a vcpu, we disable guest usage of pointer
>> authentication instructions and accesses to the keys. While these are
>> disabled, we avoid context-switching the keys. When we trap the guest
>> trying to use pointer authentication functionality, we change to eagerly
>> context-switching the keys, and enable the feature. The next time the
>> vcpu is scheduled out/in, we start again. However, the host key save
>> is optimized and implemented inside the ptrauth instruction/register
>> access trap.
>>
>> Pointer authentication consists of address authentication and generic
>> authentication, and CPUs in a system might have varied support for
>> either. Where support for either feature is not uniform, it is hidden
>> from guests via ID register emulation, as a result of the cpufeature
>> framework in the host.
>>
>> Unfortunately, address authentication and generic authentication cannot
>> be trapped separately, as the architecture provides a single EL2 trap
>> covering both. If we wish to expose one without the other, we cannot
>> prevent a (badly-written) guest from intermittently using a feature
>> which is not uniformly supported (when scheduled on a physical CPU which
>> supports the relevant feature). Hence, this patch expects both types
>> of authentication to be present in a CPU.
>>
>> This key switch is done from the guest enter/exit assembly as
>> preparation for the upcoming in-kernel pointer authentication support.
>> Hence, these key switching routines are not implemented in C code, as
>> they may cause pointer authentication key signing errors in some
>> situations.
>
>
>> diff --git a/arch/arm64/include/asm/kvm_ptrauth_asm.h b/arch/arm64/include/asm/kvm_ptrauth_asm.h
>> new file mode 100644
>> index 0000000..97bb040
>> --- /dev/null
>> +++ b/arch/arm64/include/asm/kvm_ptrauth_asm.h
>
>> +.macro ptrauth_save_state base, reg1, reg2
>> +	mrs_s	\reg1, SYS_APIAKEYLO_EL1
>> +	mrs_s	\reg2, SYS_APIAKEYHI_EL1
>> +	stp	\reg1, \reg2, [\base, #PTRAUTH_REG_OFFSET(CPU_APIAKEYLO_EL1)]
>
> A nice-to-have:
> Because you only use the 'LO' asm-offset definitions here, we have to just assume that the
> 'HI' ones are adjacent. Is it possible to add a BUILD_BUG() somewhere (probably in
> asm-offsets.c) to check this? As it's just an enum we'd expect the order to be arbitrary,
> and asm-offsets to take care of any re-ordering... (struct randomisation may one day come
> to enums!)
Yes, it seems this logic will fail with enum randomization. Adding any
BUG in asm-offsets.c is not picked up by the build system. I guess
another approach would be to fix the ordering of each key pair
explicitly and define the enums as below:

enum vcpu_sysreg {
	...
	APIAKEYLO_EL1,
	APIAKEYHI_EL1 = APIAKEYLO_EL1 + 1,
	APIBKEYLO_EL1,
	APIBKEYHI_EL1 = APIBKEYLO_EL1 + 1,
	...
};

This should be fine, as each key is 128 bits, so the LO/HI halves of a
key are placed together and accessed as a unit.
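
If a compile-time check is still wanted despite the asm-offsets.c
limitation above, a sketch like the following could sit in a C file
that sees the enum, invoked from some existing init path so the check
actually gets compiled (placement and function name are assumptions):

	#include <linux/build_bug.h>

	/* the asm macros assume each HI half directly follows its LO half */
	static int kvm_ptrauth_layout_check(void)
	{
		BUILD_BUG_ON(APIAKEYHI_EL1 != APIAKEYLO_EL1 + 1);
		BUILD_BUG_ON(APIBKEYHI_EL1 != APIBKEYLO_EL1 + 1);
		return 0;
	}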
>
>
>> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
>> index e2f0268..9f591ad 100644
>> --- a/arch/arm64/kvm/guest.c
>> +++ b/arch/arm64/kvm/guest.c
>> @@ -544,3 +544,17 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>>
>> return ret;
>> }
>> +
>> +/**
>> + * kvm_arm_vcpu_ptrauth_setup_lazy - setup lazy ptrauth for vcpu schedule
>> + *
>> + * @vcpu: The VCPU pointer
>> + *
>> + * This function may be used to disable ptrauth and use it in a lazy context
>> + * via traps.
>> + */
>> +void kvm_arm_vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
>> +{
>> + if (vcpu_has_ptrauth(vcpu))
>> + kvm_arm_vcpu_ptrauth_disable(vcpu);
>> +}
>
> Can you check my reasoning here, to make sure I've understood this properly?!:
>
> This clears the API/APK bits to enable the traps, if the system supports ptrauth, and if
> Qemu requested it for this vcpu. If any of those things aren't true, the guest gets
> HCR_GUEST_FLAGS, which also has the API/APK bits clear...
>
> What this is doing is clearing the API/APK bits that may have been left set by a previous
> run of a ptrauth-enabled vcpu.
Yes, your description looks fine. Mark mentioned the benefit of this
approach in an earlier thread [1].

[1]:
https://lore.kernel.org/lkml/[email protected]/
>
>
>
>> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
>> index f16a5f8..00f0639 100644
>> --- a/arch/arm64/kvm/reset.c
>> +++ b/arch/arm64/kvm/reset.c
>> @@ -128,6 +128,13 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>> 	if (loaded)
>> 		kvm_arch_vcpu_put(vcpu);
>>
>> +	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
>> +	    test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
>> +		/* Verify that KVM startup matches the conditions for ptrauth */
>> +		if (WARN_ON(!vcpu_has_ptrauth(vcpu)))
>> +			return -EINVAL;
>> +	}
>
> Could this hunk go in the previous patch that added the vcpu definitions?
> It would be good if the uapi defines, the documentation and this
> validation logic came as one patch; it makes future archaeology easier!
OK, it makes sense. I will update it in the next patch series.

Thanks,
Amit Daniel
>
>
> Looks good!
>
> Thanks,
>
> James
>