This series adds support for the ARMv8.3 pointer authentication extension,
enabling userspace return address protection with recent versions of GCC.
Since RFC [1]:
* Make the KVM context switch (semi-lazy)
* Rebase to v4.13-rc1
* Improve pointer authentication documentation
* Add hwcap documentation
* Various minor cleanups
Since v1 [2]:
* Rebase to v4.15-rc1
* Settle on per-process keys
* Strip PACs when unwinding userspace
* Don't expose an XPAC hwcap (this is implied by ID registers)
* Leave APIB, APDA, APDB, and APGA keys unsupported for now
* Support IMP DEF algorithms
* Rely on KVM ID register emulation
* Various cleanups
Since v2 [3]:
* Unify HCR_EL2 initialization
* s/POINTER_AUTHENTICATION/PTR_AUTH/
* Drop KVM support (for now)
* Drop detection of generic authentication
While there are use-cases for keys other than APIAKey, the only software that
I'm aware of with pointer authentication support is GCC, which only makes use
of APIAKey. I'm happy to add support for other keys as users appear.
I've pushed the series to the arm64/pointer-auth branch [4] of my linux tree.
I've also pushed out a necessary bootwrapper patch to the pointer-auth branch
[5] of my bootwrapper repo.
Extension Overview
==================
The ARMv8.3 pointer authentication extension adds functionality to detect
modification of pointer values, mitigating certain classes of attack such as
stack smashing, and making return-oriented programming attacks harder.
The extension introduces the concept of a pointer authentication code (PAC),
which is stored in some upper bits of pointers. Each PAC is derived from the
original pointer, another 64-bit value (e.g. the stack pointer), and a secret
128-bit key.
New instructions are added which can be used to:
* Insert a PAC into a pointer
* Strip a PAC from a pointer
* Authenticate and strip a PAC from a pointer
If authentication succeeds, the code is removed, yielding the original pointer.
If authentication fails, bits are set in the pointer such that it is guaranteed
to cause a fault if used.
These instructions can make use of four keys:
* APIAKey (A.K.A. Instruction A key)
* APIBKey (A.K.A. Instruction B key)
* APDAKey (A.K.A. Data A key)
* APDBKey (A.K.A. Data B key)
A subset of these instruction encodings has been allocated from the HINT
space, and will operate as NOPs on any ARMv8-A parts which do not feature the
extension (or if purposefully disabled by the kernel). Software using only this
subset of the instructions should function correctly on all ARMv8-A parts.
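As an illustration, a function compiled with return address protection looks
roughly like the following (a hand-written sketch, not the exact output of any
particular compiler version):

  foo:
          paciasp                         // sign LR (x30) with APIAKey, using
                                          // SP as the modifier
          stp     x29, x30, [sp, #-16]!
          // ... function body ...
          ldp     x29, x30, [sp], #16
          autiasp                         // authenticate LR; corrupt it on
                                          // failure so the ret below faults
          ret

PACIASP and AUTIASP are among the HINT-space encodings, so this sequence is
safe to execute on pre-ARMv8.3 parts.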
Additionally, instructions are added to authenticate small blocks of memory in
similar fashion, using APGAKey (A.K.A. Generic key).
This Series
===========
This series enables the use of instructions using APIAKey, which is initialised
and maintained per-process (shared by all threads). This series does not add
support for APIBKey, APDAKey, APDBKey, nor APGAKey.
I've given this some basic testing with a homebrew test suite. Ideally, we'd
add some tests to the kernel source tree.
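For reference, a minimal userspace smoke test along those lines might look like
the sketch below (HWCAP_APIA takes the value defined later in this series, and
the inline asm requires assembling with -march=armv8.3-a):

  #include <stdio.h>
  #include <sys/auxv.h>

  #ifndef HWCAP_APIA
  #define HWCAP_APIA      (1 << 28)       /* from this series' hwcap.h */
  #endif

  int main(void)
  {
          unsigned long ptr = 0x1234, mod = 0, p;

          if (!(getauxval(AT_HWCAP) & HWCAP_APIA)) {
                  printf("APIA not supported\n");
                  return 0;
          }

          /* Sign ptr with APIAKey, using mod as the modifier. */
          p = ptr;
          asm volatile("pacia %0, %1" : "+r" (p) : "r" (mod));
          printf("signed:   %016lx -> %016lx\n", ptr, p);

          /* Authenticating with the right modifier restores the pointer. */
          asm volatile("autia %0, %1" : "+r" (p) : "r" (mod));
          printf("restored: %016lx (%s)\n", p, p == ptr ? "ok" : "bad");

          /* Authenticating with the wrong modifier yields a pointer that
           * is guaranteed to fault if dereferenced. */
          p = ptr;
          asm volatile("pacia %0, %1" : "+r" (p) : "r" (mod));
          asm volatile("autia %0, %1" : "+r" (p) : "r" (mod + 1));
          printf("forged:   %016lx\n", p);

          return 0;
  }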
For the time being, pointer authentication functionality is hidden from
guests via ID register trapping.
Thanks,
Mark.
[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2017-April/498941.html
[2] https://lkml.kernel.org/r/[email protected]
[3] https://lkml.kernel.org/r/[email protected]
[4] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git arm64/pointer-auth
[5] git://git.kernel.org/pub/scm/linux/kernel/git/mark/boot-wrapper-aarch64.git pointer-auth
Mark Rutland (11):
arm64: add pointer authentication register bits
arm64/kvm: consistently handle host HCR_EL2 flags
arm64/kvm: hide ptrauth from guests
arm64: Don't trap host pointer auth use to EL2
arm64/cpufeature: detect pointer authentication
asm-generic: mm_hooks: allow hooks to be overridden individually
arm64: add basic pointer authentication support
arm64: expose user PAC bit positions via ptrace
arm64: perf: strip PAC when unwinding userspace
arm64: enable pointer authentication
arm64: docs: document pointer authentication
Documentation/arm64/booting.txt | 8 ++
Documentation/arm64/elf_hwcaps.txt | 6 ++
Documentation/arm64/pointer-authentication.txt | 84 ++++++++++++++++++++
arch/arm64/Kconfig | 23 ++++++
arch/arm64/include/asm/cpucaps.h | 5 +-
arch/arm64/include/asm/esr.h | 3 +-
arch/arm64/include/asm/kvm_arm.h | 3 +
arch/arm64/include/asm/mmu.h | 5 ++
arch/arm64/include/asm/mmu_context.h | 25 +++++-
arch/arm64/include/asm/pointer_auth.h | 104 +++++++++++++++++++++++++
arch/arm64/include/asm/sysreg.h | 30 +++++++
arch/arm64/include/uapi/asm/hwcap.h | 1 +
arch/arm64/include/uapi/asm/ptrace.h | 7 ++
arch/arm64/kernel/cpufeature.c | 56 +++++++++++++
arch/arm64/kernel/cpuinfo.c | 1 +
arch/arm64/kernel/head.S | 5 +-
arch/arm64/kernel/perf_callchain.c | 5 +-
arch/arm64/kernel/ptrace.c | 38 +++++++++
arch/arm64/kvm/handle_exit.c | 18 +++++
arch/arm64/kvm/hyp/switch.c | 2 +-
arch/arm64/kvm/sys_regs.c | 9 +++
include/asm-generic/mm_hooks.h | 11 +++
include/uapi/linux/elf.h | 1 +
23 files changed, 441 insertions(+), 9 deletions(-)
create mode 100644 Documentation/arm64/pointer-authentication.txt
create mode 100644 arch/arm64/include/asm/pointer_auth.h
--
2.11.0
To allow EL0 (and/or EL1) to use pointer authentication functionality,
we must ensure that pointer authentication instructions and accesses to
pointer authentication keys are not trapped to EL2.
This patch ensures that HCR_EL2 is configured appropriately when the
kernel is booted at EL2. For non-VHE kernels we set HCR_EL2.{API,APK},
ensuring that EL1 can access the keys and permitting EL0 use of the
instructions.
For VHE kernels host EL0 (TGE && E2H) is unaffected by these settings,
and it doesn't matter how we configure HCR_EL2.{API,APK}, so we don't
bother setting them.
This does not enable support for KVM guests, since KVM manages HCR_EL2
itself when running VMs.
Signed-off-by: Mark Rutland <[email protected]>
Cc: Christoffer Dall <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
---
arch/arm64/include/asm/kvm_arm.h | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 89b3dda7e3cb..4d57e1e58323 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -23,6 +23,8 @@
#include <asm/types.h>
/* Hyp Configuration Register (HCR) bits */
+#define HCR_API (UL(1) << 41)
+#define HCR_APK (UL(1) << 40)
#define HCR_TEA (UL(1) << 37)
#define HCR_TERR (UL(1) << 36)
#define HCR_TLOR (UL(1) << 35)
@@ -86,7 +88,7 @@
HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
HCR_FMO | HCR_IMO)
#define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
-#define HCR_HOST_NVHE_FLAGS (HCR_RW)
+#define HCR_HOST_NVHE_FLAGS (HCR_RW | HCR_API | HCR_APK)
#define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
/* TCR_EL2 Registers bits */
--
2.11.0
So that we can dynamically handle the presence of pointer authentication
functionality, wire up probing code in cpufeature.c.
From ARMv8.3 onwards, ID_AA64ISAR1 is no longer entirely RES0, and now
has four fields describing the presence of pointer authentication
functionality:
* APA - address authentication present, using an architected algorithm
* API - address authentication present, using an IMP DEF algorithm
* GPA - generic authentication present, using an architected algorithm
* GPI - generic authentication present, using an IMP DEF algorithm
For the moment we only care about address authentication, so we only
need to check APA and API. It is assumed that if all CPUs support an IMP
DEF algorithm, the same algorithm is used across all CPUs.
Note that when we implement KVM support, we will also need to ensure
that CPUs have uniform support for GPA and GPI.
Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm64/include/asm/cpucaps.h | 5 ++++-
arch/arm64/kernel/cpufeature.c | 47 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 51 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index bc51b72fafd4..9dcb4d1b14f5 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -48,7 +48,10 @@
#define ARM64_HAS_CACHE_IDC 27
#define ARM64_HAS_CACHE_DIC 28
#define ARM64_HW_DBM 29
+#define ARM64_HAS_ADDRESS_AUTH_ARCH 30
+#define ARM64_HAS_ADDRESS_AUTH_IMP_DEF 31
+#define ARM64_HAS_ADDRESS_AUTH 32
-#define ARM64_NCAPS 30
+#define ARM64_NCAPS 33
#endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 536d572e5596..01b1a7e7d70f 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -142,6 +142,10 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
ARM64_FTR_END,
};
@@ -1025,6 +1029,22 @@ static void cpu_copy_el2regs(const struct arm64_cpu_capabilities *__unused)
}
#endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
+ int __unused)
+{
+ u64 isar1 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR1_EL1);
+ bool api, apa;
+
+ apa = cpuid_feature_extract_unsigned_field(isar1,
+ ID_AA64ISAR1_APA_SHIFT) > 0;
+ api = cpuid_feature_extract_unsigned_field(isar1,
+ ID_AA64ISAR1_API_SHIFT) > 0;
+
+ return apa || api;
+}
+#endif /* CONFIG_ARM64_PTR_AUTH */
+
static const struct arm64_cpu_capabilities arm64_features[] = {
{
.desc = "GIC system register CPU interface",
@@ -1201,6 +1221,33 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.cpu_enable = cpu_enable_hw_dbm,
},
#endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+ {
+ .desc = "Address authentication (architected algorithm)",
+ .capability = ARM64_HAS_ADDRESS_AUTH_ARCH,
+ .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+ .sys_reg = SYS_ID_AA64ISAR1_EL1,
+ .sign = FTR_UNSIGNED,
+ .field_pos = ID_AA64ISAR1_APA_SHIFT,
+ .min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
+ .matches = has_cpuid_feature,
+ },
+ {
+ .desc = "Address authentication (IMP DEF algorithm)",
+ .capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF,
+ .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+ .sys_reg = SYS_ID_AA64ISAR1_EL1,
+ .sign = FTR_UNSIGNED,
+ .field_pos = ID_AA64ISAR1_API_SHIFT,
+ .min_field_value = ID_AA64ISAR1_API_IMP_DEF,
+ .matches = has_cpuid_feature,
+ },
+ {
+ .capability = ARM64_HAS_ADDRESS_AUTH,
+ .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+ .matches = has_address_auth,
+ },
+#endif /* CONFIG_ARM64_PTR_AUTH */
{},
};
--
2.11.0
Now that we've added code to support pointer authentication, add some
documentation so that people can figure out if/how to use it.
Signed-off-by: Mark Rutland <[email protected]>
Cc: Andrew Jones <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Ramana Radhakrishnan <[email protected]>
Cc: Will Deacon <[email protected]>
---
Documentation/arm64/booting.txt | 8 +++
Documentation/arm64/elf_hwcaps.txt | 6 ++
Documentation/arm64/pointer-authentication.txt | 84 ++++++++++++++++++++++++++
3 files changed, 98 insertions(+)
create mode 100644 Documentation/arm64/pointer-authentication.txt
diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
index 8d0df62c3fe0..8df9f4658d6f 100644
--- a/Documentation/arm64/booting.txt
+++ b/Documentation/arm64/booting.txt
@@ -205,6 +205,14 @@ Before jumping into the kernel, the following conditions must be met:
ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b0.
- The DT or ACPI tables must describe a GICv2 interrupt controller.
+ For CPUs with pointer authentication functionality:
+ - If EL3 is present:
+ SCR_EL3.APK (bit 16) must be initialised to 0b1
+ SCR_EL3.API (bit 17) must be initialised to 0b1
+ - If the kernel is entered at EL1:
+ HCR_EL2.APK (bit 40) must be initialised to 0b1
+ HCR_EL2.API (bit 41) must be initialised to 0b1
+
The requirements described above for CPU mode, caches, MMUs, architected
timers, coherency and system registers apply to all CPUs. All CPUs must
enter the kernel in the same exception level.
diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
index d6aff2c5e9e2..ebc8b15b45fc 100644
--- a/Documentation/arm64/elf_hwcaps.txt
+++ b/Documentation/arm64/elf_hwcaps.txt
@@ -178,3 +178,9 @@ HWCAP_ILRCPC
HWCAP_FLAGM
Functionality implied by ID_AA64ISAR0_EL1.TS == 0b0001.
+
+HWCAP_APIA
+
+ EL0 AddPac and Auth functionality using APIAKey_EL1 is enabled, as
+ described by Documentation/arm64/pointer-authentication.txt.
+
diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
new file mode 100644
index 000000000000..8a9cb5713770
--- /dev/null
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -0,0 +1,84 @@
+Pointer authentication in AArch64 Linux
+=======================================
+
+Author: Mark Rutland <[email protected]>
+Date: 2017-07-19
+
+This document briefly describes the provision of pointer authentication
+functionality in AArch64 Linux.
+
+
+Architecture overview
+---------------------
+
+The ARMv8.3 Pointer Authentication extension adds primitives that can be
+used to mitigate certain classes of attack where an attacker can corrupt
+the contents of some memory (e.g. the stack).
+
+The extension uses a Pointer Authentication Code (PAC) to determine
+whether pointers have been modified unexpectedly. A PAC is derived from
+a pointer, another value (such as the stack pointer), and a secret key
+held in system registers.
+
+The extension adds instructions to insert a valid PAC into a pointer,
+and to verify/remove the PAC from a pointer. The PAC occupies a number
+of high-order bits of the pointer, which varies dependent on the
+configured virtual address size and whether pointer tagging is in use.
+
+A subset of these instructions have been allocated from the HINT
+encoding space. In the absence of the extension (or when disabled),
+these instructions behave as NOPs. Applications and libraries using
+these instructions operate correctly regardless of the presence of the
+extension.
+
+
+Basic support
+-------------
+
+When CONFIG_ARM64_PTR_AUTH is selected, and relevant HW support is
+present, the kernel will assign a random APIAKey value to each process
+at exec*() time. This key is shared by all threads within the process,
+and the key is preserved across fork(). Presence of functionality using
+APIAKey is advertised via HWCAP_APIA.
+
+Recent versions of GCC can compile code with APIAKey-based return
+address protection when passed the -msign-return-address option. This
+uses instructions in the HINT space, and such code can run on systems
+without the pointer authentication extension.
+
+The remaining instruction and data keys (APIBKey, APDAKey, APDBKey) are
+reserved for future use, and instructions using these keys must not be
+used by software until a purpose and scope for their use has been
+decided. To enable future software using these keys to function on
+contemporary kernels, where possible, instructions using these keys are
+made to behave as NOPs.
+
+The generic key (APGAKey) is currently unsupported. Instructions using
+the generic key must not be used by software.
+
+
+Debugging
+---------
+
+When CONFIG_ARM64_PTR_AUTH is selected, and relevant HW support is
+present, the kernel will expose the position of TTBR0 PAC bits in the
+NT_ARM_PAC_MASK regset (struct user_pac_mask), which userspace can
+acquire via PTRACE_GETREGSET.
+
+Separate masks are exposed for data pointers and instruction pointers,
+as the set of PAC bits can vary between the two. Debuggers should not
+expect that HWCAP_APIA implies the presence (or non-presence) of this
+regset -- in future the kernel may support the use of APIBKey, APDAKey,
+and/or APDBKey, even in the absence of APIAKey.
+
+Note that the masks apply to TTBR0 addresses, and are not valid to apply
+to TTBR1 addresses (e.g. kernel pointers).
+
+
+Virtualization
+--------------
+
+Pointer authentication is not currently supported in KVM guests. KVM
+will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
+the feature will result in an UNDEFINED exception being injected into
+the guest.
--
2.11.0
When pointer authentication is in use, data/instruction pointers have a
number of PAC bits inserted into them. The number and position of these
bits depends on the configured TCR_ELx.TxSZ and whether tagging is
enabled. ARMv8.3 allows tagging to differ for instruction and data
pointers.
For userspace debuggers to unwind the stack and/or to follow pointer
chains, they need to be able to remove the PAC bits before attempting to
use a pointer.
This patch adds a new structure with masks describing the location of
the PAC bits in userspace instruction and data pointers (i.e. those
addressable via TTBR0), which userspace can query via PTRACE_GETREGSET.
By clearing these bits from pointers, userspace can acquire the PAC-less
versions.
This new regset is exposed when the kernel is built with (user) pointer
authentication support, and the feature is enabled. Otherwise, it is
hidden.
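For illustration, a debugger might use the regset to strip the PAC from a
tracee's PC along the following lines (a sketch against this patch's uapi;
strip_pac() is a name invented for the example):

  #include <stdint.h>
  #include <sys/types.h>
  #include <sys/ptrace.h>
  #include <sys/uio.h>
  #include <linux/elf.h>          /* NT_ARM_PAC_MASK */
  #include <asm/ptrace.h>         /* struct user_pac_mask */

  static uint64_t strip_pac(pid_t pid, uint64_t pc)
  {
          struct user_pac_mask masks;
          struct iovec iov = {
                  .iov_base = &masks,
                  .iov_len  = sizeof(masks),
          };

          /* If the regset is hidden, there are no PAC bits to strip. */
          if (ptrace(PTRACE_GETREGSET, pid, (void *)NT_ARM_PAC_MASK, &iov) < 0)
                  return pc;

          return pc & ~masks.insn_mask;
  }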
Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Ramana Radhakrishnan <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm64/include/asm/pointer_auth.h | 8 ++++++++
arch/arm64/include/uapi/asm/ptrace.h | 7 +++++++
arch/arm64/kernel/ptrace.c | 38 +++++++++++++++++++++++++++++++++++
include/uapi/linux/elf.h | 1 +
4 files changed, 54 insertions(+)
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index a2e8fb91fdee..5ff141245633 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -16,9 +16,11 @@
#ifndef __ASM_POINTER_AUTH_H
#define __ASM_POINTER_AUTH_H
+#include <linux/bitops.h>
#include <linux/random.h>
#include <asm/cpufeature.h>
+#include <asm/memory.h>
#include <asm/sysreg.h>
#ifdef CONFIG_ARM64_PTR_AUTH
@@ -71,6 +73,12 @@ static inline void ptrauth_keys_dup(struct ptrauth_keys *old,
*new = *old;
}
+/*
+ * The EL0 pointer bits used by a pointer authentication code.
+ * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
+ */
+#define ptrauth_pac_mask() GENMASK(54, VA_BITS)
+
#define mm_ctx_ptrauth_init(ctx) \
ptrauth_keys_init(&(ctx)->ptrauth_keys)
diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index 98c4ce55d9c3..4994d718771a 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -228,6 +228,13 @@ struct user_sve_header {
SVE_PT_SVE_OFFSET + SVE_PT_SVE_SIZE(vq, flags) \
: SVE_PT_FPSIMD_OFFSET + SVE_PT_FPSIMD_SIZE(vq, flags))
+/* pointer authentication masks (NT_ARM_PAC_MASK) */
+
+struct user_pac_mask {
+ __u64 data_mask;
+ __u64 insn_mask;
+};
+
#endif /* __ASSEMBLY__ */
#endif /* _UAPI__ASM_PTRACE_H */
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 71d99af24ef2..f395649f755e 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -44,6 +44,7 @@
#include <asm/cpufeature.h>
#include <asm/debug-monitors.h>
#include <asm/pgtable.h>
+#include <asm/pointer_auth.h>
#include <asm/stacktrace.h>
#include <asm/syscall.h>
#include <asm/traps.h>
@@ -951,6 +952,30 @@ static int sve_set(struct task_struct *target,
#endif /* CONFIG_ARM64_SVE */
+#ifdef CONFIG_ARM64_PTR_AUTH
+static int pac_mask_get(struct task_struct *target,
+ const struct user_regset *regset,
+ unsigned int pos, unsigned int count,
+ void *kbuf, void __user *ubuf)
+{
+ /*
+ * The PAC bits can differ across data and instruction pointers
+ * depending on TCR_EL1.TBID*, which we may make use of in future, so
+ * we expose separate masks.
+ */
+ unsigned long mask = ptrauth_pac_mask();
+ struct user_pac_mask uregs = {
+ .data_mask = mask,
+ .insn_mask = mask,
+ };
+
+ if (!cpus_have_cap(ARM64_HAS_ADDRESS_AUTH))
+ return -EINVAL;
+
+ return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &uregs, 0, -1);
+}
+#endif /* CONFIG_ARM64_PTR_AUTH */
+
enum aarch64_regset {
REGSET_GPR,
REGSET_FPR,
@@ -963,6 +988,9 @@ enum aarch64_regset {
#ifdef CONFIG_ARM64_SVE
REGSET_SVE,
#endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+ REGSET_PAC_MASK,
+#endif
};
static const struct user_regset aarch64_regsets[] = {
@@ -1032,6 +1060,16 @@ static const struct user_regset aarch64_regsets[] = {
.get_size = sve_get_size,
},
#endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+ [REGSET_PAC_MASK] = {
+ .core_note_type = NT_ARM_PAC_MASK,
+ .n = sizeof(struct user_pac_mask) / sizeof(u64),
+ .size = sizeof(u64),
+ .align = sizeof(u64),
+ .get = pac_mask_get,
+ /* this cannot be set dynamically */
+ },
+#endif
};
static const struct user_regset_view user_aarch64_view = {
diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
index e2535d6dcec7..070c28121979 100644
--- a/include/uapi/linux/elf.h
+++ b/include/uapi/linux/elf.h
@@ -420,6 +420,7 @@ typedef struct elf64_shdr {
#define NT_ARM_HW_WATCH 0x403 /* ARM hardware watchpoint registers */
#define NT_ARM_SYSTEM_CALL 0x404 /* ARM system call number */
#define NT_ARM_SVE 0x405 /* ARM Scalable Vector Extension registers */
+#define NT_ARM_PAC_MASK 0x406 /* ARM pointer authentication code masks */
#define NT_ARC_V2 0x600 /* ARCv2 accumulator/extra registers */
/* Note header in a PT_NOTE section */
--
2.11.0
Now that all the necessary bits are in place for userspace, add the
necessary Kconfig logic to allow this to be enabled.
Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm64/Kconfig | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index eb2cf4938f6d..d6ce16b1ee47 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1127,6 +1127,29 @@ config ARM64_RAS_EXTN
endmenu
+menu "ARMv8.3 architectural features"
+
+config ARM64_PTR_AUTH
+ bool "Enable support for pointer authentication"
+ default y
+ help
+ Pointer authentication (part of the ARMv8.3 Extensions) provides
+ instructions for signing and authenticating pointers against secret
+ keys, which can be used to mitigate Return Oriented Programming (ROP)
+ and other attacks.
+
+ This option enables these instructions at EL0 (i.e. for userspace).
+
+ Choosing this option will cause the kernel to initialise secret keys
+ for each process at exec() time, with these keys being
+ context-switched along with the process.
+
+ The feature is detected at runtime. If the feature is not present in
+ hardware it will not be advertised to userspace nor will it be
+ enabled.
+
+endmenu
+
config ARM64_SVE
bool "ARM Scalable Vector Extension support"
default y
--
2.11.0
When the kernel is unwinding userspace callchains, we can't expect that
the userspace consumer of these callchains has the data necessary to
strip the PAC from the stored LR.
This patch has the kernel strip the PAC from user stackframes when the
in-kernel unwinder is used. This only affects the LR value, and not the
FP.
This only affects the in-kernel unwinder. When userspace performs
unwinding, it is up to userspace to strip PACs as necessary (which can
be determined from DWARF information).
Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Ramana Radhakrishnan <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm64/include/asm/pointer_auth.h | 7 +++++++
arch/arm64/kernel/perf_callchain.c | 5 ++++-
2 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 5ff141245633..a9ad81791c7f 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -79,6 +79,12 @@ static inline void ptrauth_keys_dup(struct ptrauth_keys *old,
*/
#define ptrauth_pac_mask() GENMASK(54, VA_BITS)
+/* Only valid for EL0 TTBR0 instruction pointers */
+static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
+{
+ return ptr & ~ptrauth_pac_mask();
+}
+
#define mm_ctx_ptrauth_init(ctx) \
ptrauth_keys_init(&(ctx)->ptrauth_keys)
@@ -89,6 +95,7 @@ static inline void ptrauth_keys_dup(struct ptrauth_keys *old,
ptrauth_keys_dup(&(oldctx)->ptrauth_keys, &(newctx)->ptrauth_keys)
#else
+#define ptrauth_strip_insn_pac(lr) (lr)
#define mm_ctx_ptrauth_init(ctx)
#define mm_ctx_ptrauth_switch(ctx)
#define mm_ctx_ptrauth_dup(oldctx, newctx)
diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
index bcafd7dcfe8b..928204f6ab08 100644
--- a/arch/arm64/kernel/perf_callchain.c
+++ b/arch/arm64/kernel/perf_callchain.c
@@ -35,6 +35,7 @@ user_backtrace(struct frame_tail __user *tail,
{
struct frame_tail buftail;
unsigned long err;
+ unsigned long lr;
/* Also check accessibility of one struct frame_tail beyond */
if (!access_ok(VERIFY_READ, tail, sizeof(buftail)))
@@ -47,7 +48,9 @@ user_backtrace(struct frame_tail __user *tail,
if (err)
return NULL;
- perf_callchain_store(entry, buftail.lr);
+ lr = ptrauth_strip_insn_pac(buftail.lr);
+
+ perf_callchain_store(entry, lr);
/*
* Frame pointers should strictly progress back up the stack
--
2.11.0
This patch adds basic support for pointer authentication, allowing
userspace to make use of APIAKey. The kernel maintains an APIAKey value
for each process (shared by all threads within), which is initialised to
a random value at exec() time.
To describe that address authentication instructions are available, the
ID_AA64ISAR1.{APA,API} fields are exposed to userspace. A new hwcap,
APIA, is added to describe that the kernel manages APIAKey.
Instructions using other keys (APIBKey, APDAKey, APDBKey) are disabled,
and will behave as NOPs. These may be made use of in future patches.
No support is added for the generic key (APGAKey), though this cannot be
trapped or made to behave as a NOP. Its presence is not advertised with
a hwcap.
Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Ramana Radhakrishnan <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm64/include/asm/mmu.h | 5 ++
arch/arm64/include/asm/mmu_context.h | 25 +++++++++-
arch/arm64/include/asm/pointer_auth.h | 89 +++++++++++++++++++++++++++++++++++
arch/arm64/include/uapi/asm/hwcap.h | 1 +
arch/arm64/kernel/cpufeature.c | 9 ++++
arch/arm64/kernel/cpuinfo.c | 1 +
6 files changed, 128 insertions(+), 2 deletions(-)
create mode 100644 arch/arm64/include/asm/pointer_auth.h
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index dd320df0d026..f6480ea7b0d5 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -25,10 +25,15 @@
#ifndef __ASSEMBLY__
+#include <asm/pointer_auth.h>
+
typedef struct {
atomic64_t id;
void *vdso;
unsigned long flags;
+#ifdef CONFIG_ARM64_PTR_AUTH
+ struct ptrauth_keys ptrauth_keys;
+#endif
} mm_context_t;
/*
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 39ec0b8a689e..caf0d3010112 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -29,7 +29,6 @@
#include <asm/cacheflush.h>
#include <asm/cpufeature.h>
#include <asm/proc-fns.h>
-#include <asm-generic/mm_hooks.h>
#include <asm/cputype.h>
#include <asm/pgtable.h>
#include <asm/sysreg.h>
@@ -168,7 +167,14 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
#define destroy_context(mm) do { } while(0)
void check_and_switch_context(struct mm_struct *mm, unsigned int cpu);
-#define init_new_context(tsk,mm) ({ atomic64_set(&(mm)->context.id, 0); 0; })
+static inline int init_new_context(struct task_struct *tsk,
+ struct mm_struct *mm)
+{
+ atomic64_set(&mm->context.id, 0);
+ mm_ctx_ptrauth_init(&mm->context);
+
+ return 0;
+}
#ifdef CONFIG_ARM64_SW_TTBR0_PAN
static inline void update_saved_ttbr0(struct task_struct *tsk,
@@ -216,6 +222,8 @@ static inline void __switch_mm(struct mm_struct *next)
return;
}
+ mm_ctx_ptrauth_switch(&next->context);
+
check_and_switch_context(next, cpu);
}
@@ -241,6 +249,19 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
void verify_cpu_asid_bits(void);
void post_ttbr_update_workaround(void);
+static inline void arch_dup_mmap(struct mm_struct *oldmm,
+ struct mm_struct *mm)
+{
+ mm_ctx_ptrauth_dup(&oldmm->context, &mm->context);
+}
+#define arch_dup_mmap arch_dup_mmap
+
+/*
+ * We need to override arch_dup_mmap before including the generic hooks, which
+ * are otherwise sufficient for us.
+ */
+#include <asm-generic/mm_hooks.h>
+
#endif /* !__ASSEMBLY__ */
#endif /* !__ASM_MMU_CONTEXT_H */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
new file mode 100644
index 000000000000..a2e8fb91fdee
--- /dev/null
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -0,0 +1,89 @@
+/*
+ * Copyright (C) 2016 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_POINTER_AUTH_H
+#define __ASM_POINTER_AUTH_H
+
+#include <linux/random.h>
+
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+/*
+ * Each key is a 128-bit quantity which is split across a pair of 64-bit
+ * registers (Lo and Hi).
+ */
+struct ptrauth_key {
+ unsigned long lo, hi;
+};
+
+/*
+ * We give each process its own instruction A key (APIAKey), which is shared by
+ * all threads. This is inherited upon fork(), and reinitialised upon exec*().
+ * All other keys are currently unused, with APIBKey, APDAKey, and APDBKey
+ * instructions behaving as NOPs.
+ */
+struct ptrauth_keys {
+ struct ptrauth_key apia;
+};
+
+static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
+{
+ if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
+ return;
+
+ get_random_bytes(keys, sizeof(*keys));
+}
+
+#define __ptrauth_key_install(k, v) \
+do { \
+ write_sysreg_s(v.lo, SYS_ ## k ## KEYLO_EL1); \
+ write_sysreg_s(v.hi, SYS_ ## k ## KEYHI_EL1); \
+} while (0)
+
+static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
+{
+ if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
+ return;
+
+ __ptrauth_key_install(APIA, keys->apia);
+}
+
+static inline void ptrauth_keys_dup(struct ptrauth_keys *old,
+ struct ptrauth_keys *new)
+{
+ if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
+ return;
+
+ *new = *old;
+}
+
+#define mm_ctx_ptrauth_init(ctx) \
+ ptrauth_keys_init(&(ctx)->ptrauth_keys)
+
+#define mm_ctx_ptrauth_switch(ctx) \
+ ptrauth_keys_switch(&(ctx)->ptrauth_keys)
+
+#define mm_ctx_ptrauth_dup(oldctx, newctx) \
+ ptrauth_keys_dup(&(oldctx)->ptrauth_keys, &(newctx)->ptrauth_keys)
+
+#else
+#define mm_ctx_ptrauth_init(ctx)
+#define mm_ctx_ptrauth_switch(ctx)
+#define mm_ctx_ptrauth_dup(oldctx, newctx)
+#endif
+
+#endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
index 17c65c8f33cb..01f02ac500ae 100644
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -48,5 +48,6 @@
#define HWCAP_USCAT (1 << 25)
#define HWCAP_ILRCPC (1 << 26)
#define HWCAP_FLAGM (1 << 27)
+#define HWCAP_APIA (1 << 28)
#endif /* _UAPI__ASM_HWCAP_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 01b1a7e7d70f..f418d4cb6691 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1030,6 +1030,11 @@ static void cpu_copy_el2regs(const struct arm64_cpu_capabilities *__unused)
#endif
#ifdef CONFIG_ARM64_PTR_AUTH
+static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
+{
+ config_sctlr_el1(0, SCTLR_ELx_ENIA);
+}
+
static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
int __unused)
{
@@ -1246,6 +1251,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.capability = ARM64_HAS_ADDRESS_AUTH,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_address_auth,
+ .cpu_enable = cpu_enable_address_auth,
},
#endif /* CONFIG_ARM64_PTR_AUTH */
{},
@@ -1293,6 +1299,9 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
#ifdef CONFIG_ARM64_SVE
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, HWCAP_SVE),
#endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+ HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_APA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_APIA),
+#endif
{},
};
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index e9ab7b3ed317..608411e3aaff 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -81,6 +81,7 @@ static const char *const hwcap_str[] = {
"uscat",
"ilrcpc",
"flagm",
+ "apia",
NULL
};
--
2.11.0
Currently, an architecture must either implement all of the mm hooks
itself, or use all of those provided by the asm-generic implementation.
When an architecture only needs to override a single hook, it must copy
the stub implementations from the asm-generic version.
To avoid this repetition, allow each hook to be overridden individually,
by placing each under an #ifndef block. As architectures providing their
own hooks can't include this file today, this shouldn't adversely affect
any existing hooks.
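For example, an architecture which only needs its own arch_exit_mmap() can
then do something like the following (sketch):

  /* in the architecture's <asm/mmu_context.h> */
  static inline void arch_exit_mmap(struct mm_struct *mm)
  {
          /* arch-specific address space teardown goes here */
  }
  #define arch_exit_mmap arch_exit_mmap

  /* the generic header provides stubs for everything not overridden */
  #include <asm-generic/mm_hooks.h>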
Signed-off-by: Mark Rutland <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: [email protected]
---
include/asm-generic/mm_hooks.h | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/include/asm-generic/mm_hooks.h b/include/asm-generic/mm_hooks.h
index 8ac4e68a12f0..2b3ee15d3702 100644
--- a/include/asm-generic/mm_hooks.h
+++ b/include/asm-generic/mm_hooks.h
@@ -7,31 +7,42 @@
#ifndef _ASM_GENERIC_MM_HOOKS_H
#define _ASM_GENERIC_MM_HOOKS_H
+#ifndef arch_dup_mmap
static inline int arch_dup_mmap(struct mm_struct *oldmm,
struct mm_struct *mm)
{
return 0;
}
+#endif
+#ifndef arch_exit_mmap
static inline void arch_exit_mmap(struct mm_struct *mm)
{
}
+#endif
+#ifndef arch_unmap
static inline void arch_unmap(struct mm_struct *mm,
struct vm_area_struct *vma,
unsigned long start, unsigned long end)
{
}
+#endif
+#ifndef arch_bprm_mm_init
static inline void arch_bprm_mm_init(struct mm_struct *mm,
struct vm_area_struct *vma)
{
}
+#endif
+#ifndef arch_vma_access_permitted
static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
bool write, bool execute, bool foreign)
{
/* by default, allow everything */
return true;
}
+#endif
+
#endif /* _ASM_GENERIC_MM_HOOKS_H */
--
2.11.0
In KVM we define the configuration of HCR_EL2 for a VHE host in
HCR_HOST_VHE_FLAGS, but we don't have a similar definition for the
non-VHE host flags, and open-code HCR_RW. Further, in head.S we
open-code the flags for VHE and non-VHE configurations.
In future, we're going to want to configure more flags for the host, so
let's add a HCR_HOST_NVHE_FLAGS definition, and consistently use both
HCR_HOST_VHE_FLAGS and HCR_HOST_NVHE_FLAGS in the KVM code and head.S.
We now use mov_q to generate the HCR_EL2 value, as we do when
configuring other registers in head.S.
Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Christoffer Dall <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
---
arch/arm64/include/asm/kvm_arm.h | 1 +
arch/arm64/kernel/head.S | 5 ++---
arch/arm64/kvm/hyp/switch.c | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 6dd285e979c9..89b3dda7e3cb 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -86,6 +86,7 @@
HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
HCR_FMO | HCR_IMO)
#define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
+#define HCR_HOST_NVHE_FLAGS (HCR_RW)
#define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
/* TCR_EL2 Registers bits */
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index b0853069702f..651a06b1980f 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -494,10 +494,9 @@ ENTRY(el2_setup)
#endif
/* Hyp configuration. */
- mov x0, #HCR_RW // 64-bit EL1
+ mov_q x0, HCR_HOST_NVHE_FLAGS
cbz x2, set_hcr
- orr x0, x0, #HCR_TGE // Enable Host Extensions
- orr x0, x0, #HCR_E2H
+ mov_q x0, HCR_HOST_VHE_FLAGS
set_hcr:
msr hcr_el2, x0
isb
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index d9645236e474..cdae330e15e9 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -143,7 +143,7 @@ static void __hyp_text __deactivate_traps_nvhe(void)
mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
write_sysreg(mdcr_el2, mdcr_el2);
- write_sysreg(HCR_RW, hcr_el2);
+ write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
}
--
2.11.0
The ARMv8.3 pointer authentication extension adds:
* New fields in ID_AA64ISAR1 to report the presence of pointer
authentication functionality.
* New control bits in SCTLR_ELx to enable this functionality.
* New system registers to hold the keys necessary for this
functionality.
* A new ESR_ELx.EC code used when the new instructions are affected by
configurable traps.
This patch adds the relevant definitions to <asm/sysreg.h> and
<asm/esr.h> for these, to be used by subsequent patches.
Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm64/include/asm/esr.h | 3 ++-
arch/arm64/include/asm/sysreg.h | 30 ++++++++++++++++++++++++++++++
2 files changed, 32 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index ce70c3ffb993..022785162281 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -30,7 +30,8 @@
#define ESR_ELx_EC_CP14_LS (0x06)
#define ESR_ELx_EC_FP_ASIMD (0x07)
#define ESR_ELx_EC_CP10_ID (0x08)
-/* Unallocated EC: 0x09 - 0x0B */
+#define ESR_ELx_EC_PAC (0x09)
+/* Unallocated EC: 0x0A - 0x0B */
#define ESR_ELx_EC_CP14_64 (0x0C)
/* Unallocated EC: 0x0d */
#define ESR_ELx_EC_ILL (0x0E)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 6171178075dc..426f0eb90101 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -171,6 +171,19 @@
#define SYS_TTBR1_EL1 sys_reg(3, 0, 2, 0, 1)
#define SYS_TCR_EL1 sys_reg(3, 0, 2, 0, 2)
+#define SYS_APIAKEYLO_EL1 sys_reg(3, 0, 2, 1, 0)
+#define SYS_APIAKEYHI_EL1 sys_reg(3, 0, 2, 1, 1)
+#define SYS_APIBKEYLO_EL1 sys_reg(3, 0, 2, 1, 2)
+#define SYS_APIBKEYHI_EL1 sys_reg(3, 0, 2, 1, 3)
+
+#define SYS_APDAKEYLO_EL1 sys_reg(3, 0, 2, 2, 0)
+#define SYS_APDAKEYHI_EL1 sys_reg(3, 0, 2, 2, 1)
+#define SYS_APDBKEYLO_EL1 sys_reg(3, 0, 2, 2, 2)
+#define SYS_APDBKEYHI_EL1 sys_reg(3, 0, 2, 2, 3)
+
+#define SYS_APGAKEYLO_EL1 sys_reg(3, 0, 2, 3, 0)
+#define SYS_APGAKEYHI_EL1 sys_reg(3, 0, 2, 3, 1)
+
#define SYS_ICC_PMR_EL1 sys_reg(3, 0, 4, 6, 0)
#define SYS_AFSR0_EL1 sys_reg(3, 0, 5, 1, 0)
@@ -417,9 +430,13 @@
#define SYS_ICH_LR15_EL2 __SYS__LR8_EL2(7)
/* Common SCTLR_ELx flags. */
+#define SCTLR_ELx_ENIA (1 << 31)
+#define SCTLR_ELx_ENIB (1 << 30)
+#define SCTLR_ELx_ENDA (1 << 27)
#define SCTLR_ELx_EE (1 << 25)
#define SCTLR_ELx_IESB (1 << 21)
#define SCTLR_ELx_WXN (1 << 19)
+#define SCTLR_ELx_ENDB (1 << 13)
#define SCTLR_ELx_I (1 << 12)
#define SCTLR_ELx_SA (1 << 3)
#define SCTLR_ELx_C (1 << 2)
@@ -510,11 +527,24 @@
#define ID_AA64ISAR0_AES_SHIFT 4
/* id_aa64isar1 */
+#define ID_AA64ISAR1_GPI_SHIFT 28
+#define ID_AA64ISAR1_GPA_SHIFT 24
#define ID_AA64ISAR1_LRCPC_SHIFT 20
#define ID_AA64ISAR1_FCMA_SHIFT 16
#define ID_AA64ISAR1_JSCVT_SHIFT 12
+#define ID_AA64ISAR1_API_SHIFT 8
+#define ID_AA64ISAR1_APA_SHIFT 4
#define ID_AA64ISAR1_DPB_SHIFT 0
+#define ID_AA64ISAR1_APA_NI 0x0
+#define ID_AA64ISAR1_APA_ARCHITECTED 0x1
+#define ID_AA64ISAR1_API_NI 0x0
+#define ID_AA64ISAR1_API_IMP_DEF 0x1
+#define ID_AA64ISAR1_GPA_NI 0x0
+#define ID_AA64ISAR1_GPA_ARCHITECTED 0x1
+#define ID_AA64ISAR1_GPI_NI 0x0
+#define ID_AA64ISAR1_GPI_IMP_DEF 0x1
+
/* id_aa64pfr0 */
#define ID_AA64PFR0_CSV3_SHIFT 60
#define ID_AA64PFR0_CSV2_SHIFT 56
--
2.11.0
In subsequent patches we're going to expose ptrauth to the host kernel
and userspace, but things are a bit trickier for guest kernels. For the
time being, let's hide ptrauth from KVM guests.
Regardless of how well-behaved the guest kernel is, guest userspace
could attempt to use ptrauth instructions, triggering a trap to EL2,
resulting in noise from kvm_handle_unknown_ec(). So let's write up a
handler for the PAC trap, which silently injects an UNDEF into the
guest, as if the feature were really missing.
Signed-off-by: Mark Rutland <[email protected]>
Cc: Christoffer Dall <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: [email protected]
---
arch/arm64/kvm/handle_exit.c | 18 ++++++++++++++++++
arch/arm64/kvm/sys_regs.c | 9 +++++++++
2 files changed, 27 insertions(+)
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index e5e741bfffe1..5114ad691eae 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -173,6 +173,23 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
return 1;
}
+/*
+ * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
+ * a NOP), or guest EL1 access to a ptrauth register.
+ */
+static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+ /*
+ * We don't currently support ptrauth in a guest, and we mask the ID
+ * registers to prevent well-behaved guests from trying to make use of
+ * it.
+ *
+ * Inject an UNDEF, as if the feature really isn't present.
+ */
+ kvm_inject_undefined(vcpu);
+ return 1;
+}
+
static exit_handle_fn arm_exit_handlers[] = {
[0 ... ESR_ELx_EC_MAX] = kvm_handle_unknown_ec,
[ESR_ELx_EC_WFx] = kvm_handle_wfx,
@@ -195,6 +212,7 @@ static exit_handle_fn arm_exit_handlers[] = {
[ESR_ELx_EC_BKPT32] = kvm_handle_guest_debug,
[ESR_ELx_EC_BRK64] = kvm_handle_guest_debug,
[ESR_ELx_EC_FP_ASIMD] = handle_no_fpsimd,
+ [ESR_ELx_EC_PAC] = kvm_handle_ptrauth,
};
static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 806b0b126a64..eee399c35e84 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1000,6 +1000,15 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
task_pid_nr(current));
val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
+ } else if (id == SYS_ID_AA64ISAR1_EL1) {
+ const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
+ (0xfUL << ID_AA64ISAR1_API_SHIFT) |
+ (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
+ (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
+ if (val & ptrauth_mask)
+ pr_err_once("kvm [%i]: ptrauth unsupported for guests, suppressing\n",
+ task_pid_nr(current));
+ val &= ~ptrauth_mask;
} else if (id == SYS_ID_AA64MMFR1_EL1) {
if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
pr_err_once("kvm [%i]: LORegions unsupported for guests, suppressing\n",
--
2.11.0
On Tue, Apr 17, 2018 at 8:37 PM, Mark Rutland <[email protected]> wrote:
> Currently, an architecture must either implement all of the mm hooks
> itself, or use all of those provided by the asm-generic implementation.
> When an architecture only needs to override a single hook, it must copy
> the stub implementations from the asm-generic version.
>
> To avoid this repetition, allow each hook to be overridden individually,
> by placing each under an #ifndef block. As architectures providing their
> own hooks can't include this file today, this shouldn't adversely affect
> any existing hooks.
>
> Signed-off-by: Mark Rutland <[email protected]>
> Cc: Arnd Bergmann <[email protected]>
> Cc: [email protected]
Acked-by: Arnd Bergmann <[email protected]>
On Tue, Apr 17, 2018 at 09:56:02PM +0200, Arnd Bergmann wrote:
> On Tue, Apr 17, 2018 at 8:37 PM, Mark Rutland <[email protected]> wrote:
> > Currently, an architecture must either implement all of the mm hooks
> > itself, or use all of those provided by the asm-generic implementation.
> > When an architecture only needs to override a single hook, it must copy
> > the stub implementations from the asm-generic version.
> >
> > To avoid this repetition, allow each hook to be overridden individually,
> > by placing each under an #ifndef block. As architectures providing their
> > own hooks can't include this file today, this shouldn't adversely affect
> > any existing hooks.
> >
> > Signed-off-by: Mark Rutland <[email protected]>
> > Cc: Arnd Bergmann <[email protected]>
> > Cc: [email protected]
>
> Acked-by: Arnd Bergmann <[email protected]>
Cheers!
Mark.
On Tue, Apr 17, 2018 at 07:37:27PM +0100, Mark Rutland wrote:
> In subsequent patches we're going to expose ptrauth to the host kernel
> and userspace, but things are a bit trickier for guest kernels. For the
> time being, let's hide ptrauth from KVM guests.
>
> Regardless of how well-behaved the guest kernel is, guest userspace
> could attempt to use ptrauth instructions, triggering a trap to EL2,
> resulting in noise from kvm_handle_unknown_ec(). So let's write up a
> handler for the PAC trap, which silently injects an UNDEF into the
> guest, as if the feature were really missing.
>
> Signed-off-by: Mark Rutland <[email protected]>
> Cc: Christoffer Dall <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: [email protected]
> ---
> arch/arm64/kvm/handle_exit.c | 18 ++++++++++++++++++
> arch/arm64/kvm/sys_regs.c | 9 +++++++++
> 2 files changed, 27 insertions(+)
>
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index e5e741bfffe1..5114ad691eae 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -173,6 +173,23 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
> return 1;
> }
>
> +/*
> + * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
> + * a NOP), or guest EL1 access to a ptrauth register.
> + */
> +static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +{
> + /*
> + * We don't currently support ptrauth in a guest, and we mask the ID
> + * registers to prevent well-behaved guests from trying to make use of
> + * it.
> + *
> + * Inject an UNDEF, as if the feature really isn't present.
> + */
> + kvm_inject_undefined(vcpu);
> + return 1;
> +}
> +
> static exit_handle_fn arm_exit_handlers[] = {
> [0 ... ESR_ELx_EC_MAX] = kvm_handle_unknown_ec,
> [ESR_ELx_EC_WFx] = kvm_handle_wfx,
> @@ -195,6 +212,7 @@ static exit_handle_fn arm_exit_handlers[] = {
> [ESR_ELx_EC_BKPT32] = kvm_handle_guest_debug,
> [ESR_ELx_EC_BRK64] = kvm_handle_guest_debug,
> [ESR_ELx_EC_FP_ASIMD] = handle_no_fpsimd,
> + [ESR_ELx_EC_PAC] = kvm_handle_ptrauth,
> };
>
> static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 806b0b126a64..eee399c35e84 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1000,6 +1000,15 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
> task_pid_nr(current));
>
> val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
> + } else if (id == SYS_ID_AA64ISAR1_EL1) {
> + const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
> + (0xfUL << ID_AA64ISAR1_API_SHIFT) |
> + (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
> + (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
> + if (val & ptrauth_mask)
> + pr_err_once("kvm [%i]: ptrauth unsupported for guests, suppressing\n",
> + task_pid_nr(current));
Marc just changed the equivalent SVE pr_err_once() to kvm_debug().
So we probably want to do the same here.
> + val &= ~ptrauth_mask;
> } else if (id == SYS_ID_AA64MMFR1_EL1) {
> if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
> pr_err_once("kvm [%i]: LORegions unsupported for guests, suppressing\n",
> --
> 2.11.0
>
Otherwise
Reviewed-by: Andrew Jones <[email protected]>
On Wed, Apr 18, 2018 at 03:19:26PM +0200, Andrew Jones wrote:
> On Tue, Apr 17, 2018 at 07:37:27PM +0100, Mark Rutland wrote:
> > @@ -1000,6 +1000,15 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
> > task_pid_nr(current));
> >
> > val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
> > + } else if (id == SYS_ID_AA64ISAR1_EL1) {
> > + const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
> > + (0xfUL << ID_AA64ISAR1_API_SHIFT) |
> > + (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
> > + (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
> > + if (val & ptrauth_mask)
> > + pr_err_once("kvm [%i]: ptrauth unsupported for guests, suppressing\n",
> > + task_pid_nr(current));
>
> Marc just changed the equivalent SVE pr_err_once() to kvm_debug().
> So we probably want to do the same here.
Good point. Done.
> > + val &= ~ptrauth_mask;
> > } else if (id == SYS_ID_AA64MMFR1_EL1) {
> > if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
> > pr_err_once("kvm [%i]: LORegions unsupported for guests, suppressing\n",
> > --
> > 2.11.0
> >
>
> Otherwise
>
> Reviewed-by: Andrew Jones <[email protected]>
Cheers!
Mark.
Hi!
> @@ -205,6 +205,14 @@ Before jumping into the kernel, the following conditions must be met:
> ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b0.
> - The DT or ACPI tables must describe a GICv2 interrupt controller.
>
> + For CPUs with pointer authentication functionality:
> + - If EL3 is present:
> + SCR_EL3.APK (bit 16) must be initialised to 0b1
> + SCR_EL3.API (bit 17) must be initialised to 0b1
> + - If the kernel is entered at EL1:
> + HCR_EL2.APK (bit 40) must be initialised to 0b1
> + HCR_EL2.API (bit 41) must be initialised to 0b1
> +
0b1 is quite a confusing way to write 1.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Sun, 22 Apr 2018 09:05:21 +0100,
Pavel Machek wrote:
>
> Hi!
>
> > @@ -205,6 +205,14 @@ Before jumping into the kernel, the following conditions must be met:
> > ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b0.
> > - The DT or ACPI tables must describe a GICv2 interrupt controller.
> >
> > + For CPUs with pointer authentication functionality:
> > + - If EL3 is present:
> > + SCR_EL3.APK (bit 16) must be initialised to 0b1
> > + SCR_EL3.API (bit 17) must be initialised to 0b1
> > + - If the kernel is entered at EL1:
> > + HCR_EL2.APK (bit 40) must be initialised to 0b1
> > + HCR_EL2.API (bit 41) must be initialised to 0b1
> > +
>
> > 0b1 is quite a confusing way to write 1.
Do you find 0x1 equally confusing?
0bx is a pretty common way of describing bit-fields of a hardware
register. It is also consistent with the rest of this document.
M.
--
Jazz is not dead, it just smells funny.
On Sun 2018-04-22 09:47:29, Marc Zyngier wrote:
> On Sun, 22 Apr 2018 09:05:21 +0100,
> Pavel Machek wrote:
> >
> > Hi!
> >
> > > @@ -205,6 +205,14 @@ Before jumping into the kernel, the following conditions must be met:
> > > ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b0.
> > > - The DT or ACPI tables must describe a GICv2 interrupt controller.
> > >
> > > + For CPUs with pointer authentication functionality:
> > > + - If EL3 is present:
> > > + SCR_EL3.APK (bit 16) must be initialised to 0b1
> > > + SCR_EL3.API (bit 17) must be initialised to 0b1
> > > + - If the kernel is entered at EL1:
> > > + HCR_EL2.APK (bit 40) must be initialised to 0b1
> > > + HCR_EL2.API (bit 41) must be initialised to 0b1
> > > +
> >
> > 0b1 is quite a confusing way to write 1.
>
> Do you find 0x1 equally confusing?
It is slightly better. Still, I'd not use it for describing a single bit.
> 0bx is a pretty common way of describing bit-fields of a hardware
> register. It is also consistent with the rest of this document.
...and inconsistent with the rest of the world :-).
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
Hi Mark,
On Tue, Apr 17, 2018 at 07:37:31PM +0100, Mark Rutland wrote:
> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> index 39ec0b8a689e..caf0d3010112 100644
> --- a/arch/arm64/include/asm/mmu_context.h
> +++ b/arch/arm64/include/asm/mmu_context.h
> @@ -29,7 +29,6 @@
> #include <asm/cacheflush.h>
> #include <asm/cpufeature.h>
> #include <asm/proc-fns.h>
> -#include <asm-generic/mm_hooks.h>
> #include <asm/cputype.h>
> #include <asm/pgtable.h>
> #include <asm/sysreg.h>
> @@ -168,7 +167,14 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
> #define destroy_context(mm) do { } while(0)
> void check_and_switch_context(struct mm_struct *mm, unsigned int cpu);
>
> -#define init_new_context(tsk,mm) ({ atomic64_set(&(mm)->context.id, 0); 0; })
> +static inline int init_new_context(struct task_struct *tsk,
> + struct mm_struct *mm)
> +{
> + atomic64_set(&mm->context.id, 0);
> + mm_ctx_ptrauth_init(&mm->context);
> +
> + return 0;
> +}
>
> #ifdef CONFIG_ARM64_SW_TTBR0_PAN
> static inline void update_saved_ttbr0(struct task_struct *tsk,
> @@ -216,6 +222,8 @@ static inline void __switch_mm(struct mm_struct *next)
> return;
> }
>
> + mm_ctx_ptrauth_switch(&next->context);
> +
> check_and_switch_context(next, cpu);
> }
>
> @@ -241,6 +249,19 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
> void verify_cpu_asid_bits(void);
> void post_ttbr_update_workaround(void);
>
> +static inline void arch_dup_mmap(struct mm_struct *oldmm,
> + struct mm_struct *mm)
> +{
> + mm_ctx_ptrauth_dup(&oldmm->context, &mm->context);
> +}
> +#define arch_dup_mmap arch_dup_mmap
IIUC, we could skip the arch_dup_mmap() and init_new_context() here for
the fork() case since the ptrauth_keys would be copied as part of the
dup_mm(). There is another situation where init_new_context() is called:
bprm_mm_init() -> mm_alloc() -> mm_init() -> init_new_context().
However, in this case the core code also calls arch_bprm_mm_init(). So I
think we only need to update the latter to get a new random key.
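i.e. something along these lines (a rough sketch using this series' helpers):

  static inline void arch_bprm_mm_init(struct mm_struct *mm,
                                       struct vm_area_struct *vma)
  {
          mm_ctx_ptrauth_init(&mm->context);
  }
  #define arch_bprm_mm_init arch_bprm_mm_init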
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> new file mode 100644
> index 000000000000..a2e8fb91fdee
> --- /dev/null
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -0,0 +1,89 @@
> +/*
> + * Copyright (C) 2016 ARM Ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program. If not, see <http://www.gnu.org/licenses/>.
> + */
Nitpick: 2018. You could also use the SPDX header, save some lines.
--
Catalin
On Tue, Apr 17, 2018 at 07:37:35PM +0100, Mark Rutland wrote:
> +Basic support
> +-------------
> +
> +When CONFIG_ARM64_PTR_AUTH is selected, and relevant HW support is
> +present, the kernel will assign a random APIAKey value to each process
> +at exec*() time. This key is shared by all threads within the process,
> +and the key is preserved across fork(). Presence of functionality using
> +APIAKey is advertised via HWCAP_APIA.
> +
> +Recent versions of GCC can compile code with APIAKey-based return
> +address protection when passed the -msign-return-address option. This
> +uses instructions in the HINT space, and such code can run on systems
> +without the pointer authentication extension.
> +
> +The remaining instruction and data keys (APIBKey, APDAKey, APDBKey) are
> +reserved for future use, and instructions using these keys must not be
> +used by software until a purpose and scope for their use has been
> +decided. To enable future software using these keys to function on
> +contemporary kernels, where possible, instructions using these keys are
> +made to behave as NOPs.
> +
> +The generic key (APGAKey) is currently unsupported. Instructions using
> +the generic key must not be used by software.
> +
> +
> +Debugging
> +---------
> +
> +When CONFIG_ARM64_PTR_AUTH is selected, and relevant HW support is
> +present, the kernel will expose the position of TTBR0 PAC bits in the
> +NT_ARM_PAC_MASK regset (struct user_pac_mask), which userspace can
> +acquire via PTRACE_GETREGSET.
> +
> +Separate masks are exposed for data pointers and instruction pointers,
> +as the set of PAC bits can vary between the two. Debuggers should not
> +expect that HWCAP_APIA implies the presence (or non-presence) of this
> +regset -- in future the kernel may support the use of APIBKey, APDAKey,
> +and/or APDBKey, even in the absence of APIAKey.
> +
> +Note that the masks apply to TTBR0 addresses, and are not valid to apply
> +to TTBR1 addresses (e.g. kernel pointers).
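
For illustration, a debugger might fetch the masks roughly as below; this
is a sketch assuming the NT_ARM_PAC_MASK and struct user_pac_mask
definitions from this series' uapi headers:

#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <linux/elf.h>		/* NT_ARM_PAC_MASK (this series) */
#include <asm/ptrace.h>		/* struct user_pac_mask (this series) */

static int read_pac_masks(pid_t pid)
{
	struct user_pac_mask masks;
	struct iovec iov = {
		.iov_base = &masks,
		.iov_len  = sizeof(masks),
	};

	if (ptrace(PTRACE_GETREGSET, pid, NT_ARM_PAC_MASK, &iov) != 0)
		return -1;

	/* A PAC is cleared from a TTBR0 code pointer with
	 * ptr & ~masks.insn_mask; data pointers use data_mask. */
	printf("data mask: %016llx\n", (unsigned long long)masks.data_mask);
	printf("insn mask: %016llx\n", (unsigned long long)masks.insn_mask);
	return 0;
}

(The tracee must already be ptrace-stopped for the call to succeed.)
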
I'm fine with the rest of the series but I'd like the toolchain guys to
ack the ABI we are exposing (just this document is fine).
> +Virtualization
> +--------------
> +
> +Pointer authentication is not currently supported in KVM guests. KVM
> +will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
> +the feature will result in an UNDEFINED exception being injected into
> +the guest.
I suspect at some point we'll see patches for KVM?
--
Catalin
On Tue, Apr 17, 2018 at 07:37:26PM +0100, Mark Rutland wrote:
> In KVM we define the configuration of HCR_EL2 for a VHE HOST in
> HCR_HOST_VHE_FLAGS, but we don't ahve a similar definition for the
nit: have
> non-VHE host flags, and open-code HCR_RW. Further, in head.S we
> open-code the flags for VHE and non-VHE configurations.
>
> In future, we're going to want to configure more flags for the host, so
> lets add a HCR_HOST_NVHE_FLAGS defintion, adn consistently use both
nit: and
> HCR_HOST_VHE_FLAGS and HCR_HOST_NVHE_FLAGS in the kvm code and head.S.
>
> We now use mov_q to generate the HCR_EL2 value, as we do when
> configuring other registers in head.S.
>
> Signed-off-by: Mark Rutland <[email protected]>
> Cc: Catalin Marinas <[email protected]>
> Cc: Christoffer Dall <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: [email protected]
> ---
> arch/arm64/include/asm/kvm_arm.h | 1 +
> arch/arm64/kernel/head.S | 5 ++---
> arch/arm64/kvm/hyp/switch.c | 2 +-
> 3 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index 6dd285e979c9..89b3dda7e3cb 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -86,6 +86,7 @@
> HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
> HCR_FMO | HCR_IMO)
> #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
> +#define HCR_HOST_NVHE_FLAGS (HCR_RW)
> #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
>
> /* TCR_EL2 Registers bits */
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index b0853069702f..651a06b1980f 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -494,10 +494,9 @@ ENTRY(el2_setup)
> #endif
>
> /* Hyp configuration. */
> - mov x0, #HCR_RW // 64-bit EL1
> + mov_q x0, HCR_HOST_NVHE_FLAGS
> cbz x2, set_hcr
> - orr x0, x0, #HCR_TGE // Enable Host Extensions
> - orr x0, x0, #HCR_E2H
> + mov_q x0, HCR_HOST_VHE_FLAGS
> set_hcr:
> msr hcr_el2, x0
> isb
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index d9645236e474..cdae330e15e9 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -143,7 +143,7 @@ static void __hyp_text __deactivate_traps_nvhe(void)
> mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
>
> write_sysreg(mdcr_el2, mdcr_el2);
> - write_sysreg(HCR_RW, hcr_el2);
> + write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
> write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
> }
>
> --
> 2.11.0
>
>
Reviewed-by: Christoffer Dall <[email protected]>
On Tue, Apr 17, 2018 at 07:37:27PM +0100, Mark Rutland wrote:
> In subsequent patches we're going to expose ptrauth to the host kernel
> and userspace, but things are a bit trickier for guest kernels. For the
> time being, let's hide ptrauth from KVM guests.
>
> Regardless of how well-behaved the guest kernel is, guest userspace
> could attempt to use ptrauth instructions, triggering a trap to EL2,
> resulting in noise from kvm_handle_unknown_ec(). So let's write up a
> handler for the PAC trap, which silently injects an UNDEF into the
> guest, as if the feature were really missing.
>
> Signed-off-by: Mark Rutland <[email protected]>
> Cc: Christoffer Dall <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: [email protected]
> ---
> arch/arm64/kvm/handle_exit.c | 18 ++++++++++++++++++
> arch/arm64/kvm/sys_regs.c | 9 +++++++++
> 2 files changed, 27 insertions(+)
>
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index e5e741bfffe1..5114ad691eae 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -173,6 +173,23 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
> return 1;
> }
>
> +/*
> + * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
> + * a NOP), or guest EL1 access to a ptrauth register.
> + */
> +static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +{
> + /*
> + * We don't currently support ptrauth in a guest, and we mask the ID
> + * registers to prevent well-behaved guests from trying to make use of
> + * it.
> + *
> + * Inject an UNDEF, as if the feature really isn't present.
> + */
> + kvm_inject_undefined(vcpu);
> + return 1;
> +}
> +
> static exit_handle_fn arm_exit_handlers[] = {
> [0 ... ESR_ELx_EC_MAX] = kvm_handle_unknown_ec,
> [ESR_ELx_EC_WFx] = kvm_handle_wfx,
> @@ -195,6 +212,7 @@ static exit_handle_fn arm_exit_handlers[] = {
> [ESR_ELx_EC_BKPT32] = kvm_handle_guest_debug,
> [ESR_ELx_EC_BRK64] = kvm_handle_guest_debug,
> [ESR_ELx_EC_FP_ASIMD] = handle_no_fpsimd,
> + [ESR_ELx_EC_PAC] = kvm_handle_ptrauth,
> };
>
> static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 806b0b126a64..eee399c35e84 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1000,6 +1000,15 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
> task_pid_nr(current));
>
> val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
> + } else if (id == SYS_ID_AA64ISAR1_EL1) {
> + const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
> + (0xfUL << ID_AA64ISAR1_API_SHIFT) |
> + (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
> + (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
> + if (val & ptrauth_mask)
> + pr_err_once("kvm [%i]: ptrauth unsupported for guests, suppressing\n",
> + task_pid_nr(current));
> + val &= ~ptrauth_mask;
> } else if (id == SYS_ID_AA64MMFR1_EL1) {
> if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
> pr_err_once("kvm [%i]: LORegions unsupported for guests, suppressing\n",
> --
> 2.11.0
>
With the change to the debugging print:
Reviewed-by: Christoffer Dall <[email protected]>
On Tue, Apr 17, 2018 at 07:37:28PM +0100, Mark Rutland wrote:
> To allow EL0 (and/or EL1) to use pointer authentication functionality,
> we must ensure that pointer authentication instructions and accesses to
> pointer authentication keys are not trapped to EL2.
>
> This patch ensures that HCR_EL2 is configured appropriately when the
> kernel is booted at EL2. For non-VHE kernels we set HCR_EL2.{API,APK},
> ensuring that EL1 can access keys and permit EL0 use of instructions.
> For VHE kernels host EL0 (TGE && E2H) is unaffected by these settings,
> and it doesn't matter how we configure HCR_EL2.{API,APK}, so we don't
> bother setting them.
>
> This does not enable support for KVM guests, since KVM manages HCR_EL2
> itself when running VMs.
>
> Signed-off-by: Mark Rutland <[email protected]>
> Cc: Christoffer Dall <[email protected]>
> Cc: Catalin Marinas <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: [email protected]
FWIW: Acked-by: Christoffer Dall <[email protected]>
> ---
> arch/arm64/include/asm/kvm_arm.h | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index 89b3dda7e3cb..4d57e1e58323 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -23,6 +23,8 @@
> #include <asm/types.h>
>
> /* Hyp Configuration Register (HCR) bits */
> +#define HCR_API (UL(1) << 41)
> +#define HCR_APK (UL(1) << 40)
> #define HCR_TEA (UL(1) << 37)
> #define HCR_TERR (UL(1) << 36)
> #define HCR_TLOR (UL(1) << 35)
> @@ -86,7 +88,7 @@
> HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
> HCR_FMO | HCR_IMO)
> #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
> -#define HCR_HOST_NVHE_FLAGS (HCR_RW)
> +#define HCR_HOST_NVHE_FLAGS (HCR_RW | HCR_API | HCR_APK)
> #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
>
> /* TCR_EL2 Registers bits */
> --
> 2.11.0
>
On Fri, Apr 27, 2018 at 11:51:39AM +0200, Christoffer Dall wrote:
> On Tue, Apr 17, 2018 at 07:37:26PM +0100, Mark Rutland wrote:
> > In KVM we define the configuration of HCR_EL2 for a VHE HOST in
> > HCR_HOST_VHE_FLAGS, but we don't ahve a similar definition for the
>
> nit: have
>
> > non-VHE host flags, and open-code HCR_RW. Further, in head.S we
> > open-code the flags for VHE and non-VHE configurations.
> >
> > In future, we're going to want to configure more flags for the host, so
> > lets add a HCR_HOST_NVHE_FLAGS defintion, adn consistently use both
>
> nit: and
>
Sorry about those; fixed up now.
[...]
> Reviewed-by: Christoffer Dall <[email protected]>
Cheers!
Mark.
On Wed, Apr 25, 2018 at 12:23:32PM +0100, Catalin Marinas wrote:
> Hi Mark,
>
> On Tue, Apr 17, 2018 at 07:37:31PM +0100, Mark Rutland wrote:
> > diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> > index 39ec0b8a689e..caf0d3010112 100644
> > --- a/arch/arm64/include/asm/mmu_context.h
> > +++ b/arch/arm64/include/asm/mmu_context.h
> > @@ -29,7 +29,6 @@
> > #include <asm/cacheflush.h>
> > #include <asm/cpufeature.h>
> > #include <asm/proc-fns.h>
> > -#include <asm-generic/mm_hooks.h>
> > #include <asm/cputype.h>
> > #include <asm/pgtable.h>
> > #include <asm/sysreg.h>
> > @@ -168,7 +167,14 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
> > #define destroy_context(mm) do { } while(0)
> > void check_and_switch_context(struct mm_struct *mm, unsigned int cpu);
> >
> > -#define init_new_context(tsk,mm) ({ atomic64_set(&(mm)->context.id, 0); 0; })
> > +static inline int init_new_context(struct task_struct *tsk,
> > + struct mm_struct *mm)
> > +{
> > + atomic64_set(&mm->context.id, 0);
> > + mm_ctx_ptrauth_init(&mm->context);
> > +
> > + return 0;
> > +}
> >
> > #ifdef CONFIG_ARM64_SW_TTBR0_PAN
> > static inline void update_saved_ttbr0(struct task_struct *tsk,
> > @@ -216,6 +222,8 @@ static inline void __switch_mm(struct mm_struct *next)
> > return;
> > }
> >
> > + mm_ctx_ptrauth_switch(&next->context);
> > +
> > check_and_switch_context(next, cpu);
> > }
> >
> > @@ -241,6 +249,19 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
> > void verify_cpu_asid_bits(void);
> > void post_ttbr_update_workaround(void);
> >
> > +static inline void arch_dup_mmap(struct mm_struct *oldmm,
> > + struct mm_struct *mm)
> > +{
> > + mm_ctx_ptrauth_dup(&oldmm->context, &mm->context);
> > +}
> > +#define arch_dup_mmap arch_dup_mmap
>
> IIUC, we could skip the arch_dup_mmap() and init_new_context() here for
> the fork() case since the ptrauth_keys would be copied as part of the
> dup_mm().
If we can hook into the exec*() path to init the keys, then I agree we
can do this...
> There is another situation where init_new_context() is called:
> bprm_mm_init() -> mm_alloc() -> mm_init() -> init_new_context().
> However, in this case the core code also calls arch_bprm_mm_init(). So I
> think we only need to update the latter to get a new random key.
... and this seems to be the right place to do it, so I'll have a go.
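
Roughly, I'd expect that to look something like the below (a sketch, not
the final patch), with the hook signature following
include/asm-generic/mm_hooks.h:

static inline void arch_bprm_mm_init(struct mm_struct *mm,
				     struct vm_area_struct *vma)
{
	/* exec*() path: give the new mm a fresh random APIAKey, using
	 * the helper this series adds. */
	mm_ctx_ptrauth_init(&mm->context);
}
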
> > diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> > new file mode 100644
> > index 000000000000..a2e8fb91fdee
> > --- /dev/null
> > +++ b/arch/arm64/include/asm/pointer_auth.h
> > @@ -0,0 +1,89 @@
> > +/*
> > + * Copyright (C) 2016 ARM Ltd.
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License version 2 as
> > + * published by the Free Software Foundation.
> > + *
> > + * This program is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> > + * GNU General Public License for more details.
> > + *
> > + * You should have received a copy of the GNU General Public License
> > + * along with this program. If not, see <http://www.gnu.org/licenses/>.
> > + */
>
> Nitpick: 2018. You could also use the SPDX header, save some lines.
Sure, I'll replace this with a SPDX GPL-2.0 line.
Thanks,
Mark.