This series adds support for the ARMv8.3 pointer authentication extension.
I've included a quick intro to the extension below, with the usual series
description below that. The final patch of the series adds additional
documentation regarding the extension.
I've based the series on the arm64 for-next/core branch [1]. I'm aware that
this series may conflict with other patches currently in flight (e.g.
allocation of ELF notes), and I intend to rebase this series as things settle.
I've pushed the series to the arm64/pointer-auth branch [2] of my linux tree.
I've also pushed out a necessary bootwrapper patch to the pointer-auth branch
[3] of my bootwrapper repo.
Extension Overview
==================
The ARMv8.3 pointer authentication extension adds functionality to detect
modification of pointer values, mitigating certain classes of attack such as
stack smashing, and making return-oriented programming attacks harder.
The extension introduces the concept of a pointer authentication code (PAC),
which is stored in some upper bits of pointers. Each PAC is derived from the
original pointer, another 64-bit value (e.g. the stack pointer), and a secret
128-bit key.
New instructions are added which can be used to:
* Insert a PAC into a pointer
* Strip a PAC from a pointer
* Authenticate and strip a PAC from a pointer
If authentication succeeds, the code is removed, yielding the original pointer.
If authentication fails, bits are set in the pointer such that it is guaranteed
to cause a fault if used.
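For illustration, the sign/authenticate flow can be modelled in plain C. Note this is purely a toy model: the real PAC is computed by an architected algorithm (e.g. QARMA) from a 128-bit key, and the PAC field position depends on the VA configuration; the hash and constants below are made up.

```c
#include <stdint.h>

/*
 * Toy model of PACIA/AUTIA: assumes a 48-bit VA space with the PAC held
 * in bits 63:48. The hash below is illustrative only; it is NOT the
 * architected algorithm.
 */
#define VA_BITS		48
#define PAC_MASK	(~((1UL << VA_BITS) - 1))

static uint64_t toy_pac(uint64_t ptr, uint64_t modifier, uint64_t key)
{
	uint64_t h = (ptr & ~PAC_MASK) ^ modifier ^ key;

	h *= 0x9e3779b97f4a7c15UL;	/* arbitrary odd mixing constant */
	return (h << VA_BITS) & PAC_MASK;
}

/* Insert a PAC into a pointer (c.f. PACIA). */
static uint64_t toy_pacia(uint64_t ptr, uint64_t modifier, uint64_t key)
{
	return (ptr & ~PAC_MASK) | toy_pac(ptr, modifier, key);
}

/*
 * Authenticate and strip (c.f. AUTIA): on success the original pointer
 * is returned; on failure a bit is set such that the result is
 * guaranteed to fault if used.
 */
static uint64_t toy_autia(uint64_t ptr, uint64_t modifier, uint64_t key)
{
	uint64_t stripped = ptr & ~PAC_MASK;

	if ((ptr & PAC_MASK) == toy_pac(stripped, modifier, key))
		return stripped;
	return stripped | (1UL << 62);	/* non-canonical address */
}
```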
These instructions can make use of four keys:
* APIAKey (A.K.A. Instruction A key)
* APIBKey (A.K.A. Instruction B key)
* APDAKey (A.K.A. Data A key)
* APDBKey (A.K.A. Data B Key)
A subset of these instruction encodings has been allocated from the HINT
space; these encodings operate as NOPs on any ARMv8 parts which do not
feature the extension (or when purposefully disabled by the kernel).
Software using only this
subset of the instructions should function correctly on all ARMv8-A parts.
Additionally, instructions are added to authenticate small blocks of memory in
similar fashion, using APGAKey (A.K.A. Generic key).
This Series
===========
This series enables the use of instructions using APIAKey, which is initialised
and maintained per-process (shared by all threads). This series does not add
support for APIBKey, APDAKey, APDBKey, nor APGAKey. The series only supports
the use of an architected algorithm.
I've given this some basic testing with a homebrew test suite. Ideally,
we'd add some tests to the kernel source tree.
I've added some basic KVM support, but this doesn't cater for systems with
mismatched support. Looking forward, we'll need ID register emulation in KVM so
that we can hide features from guests to cater for cases like this.
There are also a few questions to consider, e.g.:
* Should we expose a per-process data key now, to go with the insn key?
* Should keys be per-thread rather than per-process?
* Should we expose generic authentication (i.e. APGAKey)?
* Should the kernel remove PACs when unwinding user stacks?
Thanks,
Mark.
[1] git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
[2] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git arm64/pointer-auth
[3] git://git.kernel.org/pub/scm/linux/kernel/git/mark/boot-wrapper-aarch64.git pointer-auth
Mark Rutland (9):
asm-generic: mm_hooks: allow hooks to be overridden individually
arm64: add pointer authentication register bits
arm64/cpufeature: add ARMv8.3 id_aa64isar1 bits
arm64/cpufeature: detect pointer authentication
arm64: Don't trap host pointer auth use to EL2
arm64: add basic pointer authentication support
arm64: expose PAC bit positions via ptrace
arm64/kvm: context-switch PAC registers
arm64: docs: document pointer authentication
Documentation/arm64/booting.txt | 8 +++
Documentation/arm64/pointer-authentication.txt | 78 +++++++++++++++++++++
arch/arm64/Kconfig | 23 ++++++
arch/arm64/include/asm/cpucaps.h | 4 +-
arch/arm64/include/asm/esr.h | 3 +-
arch/arm64/include/asm/kvm_arm.h | 2 +
arch/arm64/include/asm/kvm_emulate.h | 15 ++++
arch/arm64/include/asm/kvm_host.h | 12 ++++
arch/arm64/include/asm/mmu.h | 5 ++
arch/arm64/include/asm/mmu_context.h | 25 ++++++-
arch/arm64/include/asm/pointer_auth.h | 96 ++++++++++++++++++++++++++
arch/arm64/include/asm/sysreg.h | 30 ++++++++
arch/arm64/include/uapi/asm/hwcap.h | 1 +
arch/arm64/include/uapi/asm/ptrace.h | 5 ++
arch/arm64/kernel/cpufeature.c | 39 ++++++++++-
arch/arm64/kernel/cpuinfo.c | 1 +
arch/arm64/kernel/head.S | 19 ++++-
arch/arm64/kernel/ptrace.c | 39 +++++++++++
arch/arm64/kvm/hyp/sysreg-sr.c | 43 ++++++++++++
include/asm-generic/mm_hooks.h | 12 ++++
include/uapi/linux/elf.h | 1 +
21 files changed, 454 insertions(+), 7 deletions(-)
create mode 100644 Documentation/arm64/pointer-authentication.txt
create mode 100644 arch/arm64/include/asm/pointer_auth.h
--
1.9.1
Currently, an architecture must either implement all of the mm hooks
itself, or use all of those provided by the asm-generic implementation.
When an architecture only needs to override a single hook, it must copy
the stub implementations from the asm-generic version.
To avoid this repetition, allow each hook to be overridden individually,
by placing each under an #ifndef block. As architectures providing their
own hooks can't include this file today, this shouldn't adversely affect
any existing hooks.
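The resulting pattern looks like the sketch below (standalone for illustration, with simplified hook signatures; the real hooks take struct mm_struct etc.): an architecture defines the hook it cares about and defines the macro to itself, so the generic header only supplies the remaining stubs.

```c
/*
 * Sketch of the per-hook override pattern, outside of kernel context.
 */
static int arch_hook_ran;

/*
 * "arch" header: provide one hook, and define the macro so the
 * asm-generic stub below is skipped.
 */
static inline void arch_dup_mmap(void *oldmm, void *mm)
{
	arch_hook_ran = 1;
}
#define arch_dup_mmap arch_dup_mmap

/* asm-generic/mm_hooks.h equivalent: each stub is now conditional. */
#ifndef arch_dup_mmap
static inline void arch_dup_mmap(void *oldmm, void *mm) { }
#endif

#ifndef arch_exit_mmap
static inline void arch_exit_mmap(void *mm) { }	/* generic stub used */
#endif
```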
Signed-off-by: Mark Rutland <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: [email protected]
---
include/asm-generic/mm_hooks.h | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/include/asm-generic/mm_hooks.h b/include/asm-generic/mm_hooks.h
index cc5d9a1..c5a4328 100644
--- a/include/asm-generic/mm_hooks.h
+++ b/include/asm-generic/mm_hooks.h
@@ -6,36 +6,48 @@
#ifndef _ASM_GENERIC_MM_HOOKS_H
#define _ASM_GENERIC_MM_HOOKS_H
+#ifndef arch_dup_mmap
static inline void arch_dup_mmap(struct mm_struct *oldmm,
struct mm_struct *mm)
{
}
+#endif
+#ifndef arch_exit_mmap
static inline void arch_exit_mmap(struct mm_struct *mm)
{
}
+#endif
+#ifndef arch_unmap
static inline void arch_unmap(struct mm_struct *mm,
struct vm_area_struct *vma,
unsigned long start, unsigned long end)
{
}
+#endif
+#ifndef arch_bprm_mm_init
static inline void arch_bprm_mm_init(struct mm_struct *mm,
struct vm_area_struct *vma)
{
}
+#endif
+#ifndef arch_vma_access_permitted
static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
bool write, bool execute, bool foreign)
{
/* by default, allow everything */
return true;
}
+#endif
+#ifndef arch_pte_access_permitted
static inline bool arch_pte_access_permitted(pte_t pte, bool write)
{
/* by default, allow everything */
return true;
}
+#endif
#endif /* _ASM_GENERIC_MM_HOOKS_H */
--
1.9.1
From ARMv8.3 onwards, ID_AA64ISAR1 is no longer entirely RES0, and now
has four fields describing the presence of pointer authentication
functionality:
* APA - address authentication present, using an architected algorithm
* API - address authentication present, using an IMP DEF algorithm
* GPA - generic authentication present, using an architected algorithm
* GPI - generic authentication present, using an IMP DEF algorithm
This patch adds the requisite definitions so that we can identify the
presence of this functionality. For the time being, the features are
hidden from userspace.
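Each of these is a standard 4-bit ID register field, so identifying the functionality is plain shift-and-mask. A sketch (field positions per the ARMv8.3 layout, with APA at bits 7:4, API at 11:8, GPA at 27:24, GPI at 31:28):

```c
#include <stdint.h>

/* ID_AA64ISAR1_EL1 field positions, per the ARMv8.3 layout. */
#define ID_AA64ISAR1_APA_SHIFT	4
#define ID_AA64ISAR1_API_SHIFT	8
#define ID_AA64ISAR1_GPA_SHIFT	24
#define ID_AA64ISAR1_GPI_SHIFT	28

/* Each ID register field is 4 bits wide; non-zero => present. */
static unsigned int isar1_field(uint64_t isar1, unsigned int shift)
{
	return (isar1 >> shift) & 0xf;
}
```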
Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm64/kernel/cpufeature.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 81a78d9..30255b2 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -101,7 +101,11 @@
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
- ARM64_FTR_END,
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_GPI_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_GPA_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
+ ARM64_FTR_END,
};
static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
--
1.9.1
So that we can dynamically handle the presence of pointer authentication
functionality, wire up probing code in cpufeature.c.
Currently, this only detects the presence of an architected algorithm.
Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm64/include/asm/cpucaps.h | 4 +++-
arch/arm64/kernel/cpufeature.c | 22 ++++++++++++++++++++++
2 files changed, 25 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index fb78a5d..15dd6d6 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -37,7 +37,9 @@
#define ARM64_HAS_NO_FPSIMD 16
#define ARM64_WORKAROUND_REPEAT_TLBI 17
#define ARM64_WORKAROUND_QCOM_FALKOR_E1003 18
+#define ARM64_HAS_ADDRESS_AUTH 19
+#define ARM64_HAS_GENERIC_AUTH 20
-#define ARM64_NCAPS 19
+#define ARM64_NCAPS 21
#endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 30255b2..172c80e 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -871,6 +871,28 @@ static bool has_no_fpsimd(const struct arm64_cpu_capabilities *entry, int __unus
.min_field_value = 0,
.matches = has_no_fpsimd,
},
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+ {
+ .desc = "Address authentication (architected algorithm)",
+ .capability = ARM64_HAS_ADDRESS_AUTH,
+ .def_scope = SCOPE_SYSTEM,
+ .sys_reg = SYS_ID_AA64ISAR1_EL1,
+ .sign = FTR_UNSIGNED,
+ .field_pos = ID_AA64ISAR1_APA_SHIFT,
+ .min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
+ .matches = has_cpuid_feature,
+ },
+ {
+ .desc = "Generic authentication (architected algorithm)",
+ .capability = ARM64_HAS_GENERIC_AUTH,
+ .def_scope = SCOPE_SYSTEM,
+ .sys_reg = SYS_ID_AA64ISAR1_EL1,
+ .sign = FTR_UNSIGNED,
+ .field_pos = ID_AA64ISAR1_GPA_SHIFT,
+ .min_field_value = ID_AA64ISAR1_GPA_ARCHITECTED,
+ .matches = has_cpuid_feature,
+ },
+#endif /* CONFIG_ARM64_POINTER_AUTHENTICATION */
{},
};
--
1.9.1
To allow EL0 (and/or EL1) to use pointer authentication functionality,
we must ensure that pointer authentication instructions and accesses to
pointer authentication keys are not trapped to EL2 (where we will not be
able to handle them).
This patch ensures that HCR_EL2 is configured appropriately when the
kernel is booted at EL2. For non-VHE kernels we set HCR_EL2.{API,APK},
ensuring that EL1 can access the keys and that EL0 use of the
instructions is permitted.
For VHE kernels, EL2 access is controlled by EL3, and we need not set
anything.
This does not enable support for KVM guests, since KVM manages HCR_EL2
itself.
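The check added to head.S boils down to "set HCR_EL2.{APK,API} iff any of the four ID fields is non-zero". In C form (a sketch mirroring the assembly, not kernel code):

```c
#include <stdint.h>

#define HCR_APK	(1UL << 40)
#define HCR_API	(1UL << 41)

/*
 * Mirror of the head.S logic: the APK/API bits exist iff at least one
 * authentication mechanism is implemented, i.e. iff any of the
 * APA/API/GPA/GPI fields of ID_AA64ISAR1_EL1 is non-zero.
 */
static uint64_t hcr_auth_bits(uint64_t isar1)
{
	uint64_t mask = (0xfUL << 4) |		/* APA */
			(0xfUL << 8) |		/* API */
			(0xfUL << 24) |		/* GPA */
			(0xfUL << 28);		/* GPI */

	return (isar1 & mask) ? (HCR_APK | HCR_API) : 0;
}
```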
Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Christoffer Dall <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
---
arch/arm64/include/asm/kvm_arm.h | 2 ++
arch/arm64/kernel/head.S | 19 +++++++++++++++++--
2 files changed, 19 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 6e99978..d8a1271 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -23,6 +23,8 @@
#include <asm/types.h>
/* Hyp Configuration Register (HCR) bits */
+#define HCR_API (UL(1) << 41)
+#define HCR_APK (UL(1) << 40)
#define HCR_E2H (UL(1) << 34)
#define HCR_ID (UL(1) << 33)
#define HCR_CD (UL(1) << 32)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 0b13748..9f3f49f 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -562,10 +562,25 @@ CPU_LE( bic x0, x0, #(1 << 25) ) // Clear the EE bit for EL2
/* Hyp configuration. */
mov x0, #HCR_RW // 64-bit EL1
- cbz x2, set_hcr
+ cbz x2, 1f
orr x0, x0, #HCR_TGE // Enable Host Extensions
orr x0, x0, #HCR_E2H
-set_hcr:
+1:
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+ /*
+ * Disable pointer authentication traps to EL2. The HCR_EL2.{APK,API}
+ * bits exist iff at least one authentication mechanism is implemented.
+ */
+ mrs x1, id_aa64isar1_el1
+ mov_q x3, ((0xf << ID_AA64ISAR1_GPI_SHIFT) | \
+ (0xf << ID_AA64ISAR1_GPA_SHIFT) | \
+ (0xf << ID_AA64ISAR1_API_SHIFT) | \
+ (0xf << ID_AA64ISAR1_APA_SHIFT))
+ and x1, x1, x3
+ cbz x1, 1f
+ orr x0, x0, #(HCR_APK | HCR_API)
+1:
+#endif
msr hcr_el2, x0
isb
--
1.9.1
Now that we've added code to support pointer authentication, add some
documentation so that people can figure out if/how to use it.
Since there are new enable bits in SCR_EL3 (and HCR_EL2), I think we
should document something in booting.txt w.r.t. functionality advertised
via ID registers being available (e.g. as we expect for FP and other
things today). I'm not sure quite what to say, and as it stands this
isn't quite correct.
Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
---
Documentation/arm64/booting.txt | 8 +++
Documentation/arm64/pointer-authentication.txt | 78 ++++++++++++++++++++++++++
2 files changed, 86 insertions(+)
create mode 100644 Documentation/arm64/pointer-authentication.txt
diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
index 8d0df62..8df9f46 100644
--- a/Documentation/arm64/booting.txt
+++ b/Documentation/arm64/booting.txt
@@ -205,6 +205,14 @@ Before jumping into the kernel, the following conditions must be met:
ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b0.
- The DT or ACPI tables must describe a GICv2 interrupt controller.
+ For CPUs with pointer authentication functionality:
+ - If EL3 is present:
+ SCR_EL3.APK (bit 16) must be initialised to 0b1
+ SCR_EL3.API (bit 17) must be initialised to 0b1
+ - If the kernel is entered at EL1:
+ HCR_EL2.APK (bit 40) must be initialised to 0b1
+ HCR_EL2.API (bit 41) must be initialised to 0b1
+
The requirements described above for CPU mode, caches, MMUs, architected
timers, coherency and system registers apply to all CPUs. All CPUs must
enter the kernel in the same exception level.
diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
new file mode 100644
index 0000000..fb07783
--- /dev/null
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -0,0 +1,78 @@
+Pointer authentication in AArch64 Linux
+=======================================
+
+Author: Mark Rutland <[email protected]>
+Date: 2017-02-21
+
+This document briefly describes the provision of pointer authentication
+functionality in AArch64 Linux.
+
+
+Architecture overview
+---------------------
+
+The ARMv8.3 Pointer Authentication extension adds primitives that can be
+used to mitigate certain classes of attack where an attacker can corrupt
+the contents of some memory (e.g. the stack).
+
+The extension uses a Pointer Authentication Code (PAC) to determine
+whether pointers have been modified unexpectedly. A PAC is derived from
+a pointer, another value (such as the stack pointer), and a secret key
+held in system registers.
+
+The extension adds instructions to insert a valid PAC into a pointer,
+and to verify/remove the PAC from a pointer. The PAC occupies a number
+of high-order bits of the pointer, which varies depending on the
+configured virtual address size and whether pointer tagging is in use.
+
+A subset of these instructions has been allocated from the HINT
+encoding space. In the absence of the extension (or when disabled),
+these instructions behave as NOPs. Applications and libraries using
+these instructions operate correctly regardless of the presence of the
+extension.
+
+
+Basic support
+-------------
+
+When CONFIG_ARM64_POINTER_AUTHENTICATION is selected, and relevant HW
+support is present, the kernel will assign a random APIAKey value to
+each process at exec*() time. This key is shared by all threads within
+the process, and the key is preserved across fork(). Presence of
+functionality using APIAKey is advertised via HWCAP_APIA.
+
+Recent versions of GCC can compile code with APIAKey-based return
+address protection when passed the -msign-return-address option. This
+uses instructions in the HINT space, and such code can run on systems
+without the pointer authentication extension.
+
+The remaining instruction and data keys (APIBKey, APDAKey, APDBKey) are
+reserved for future use, and instructions using these keys must not be
+used by software until a purpose and scope for their use has been
+decided. To enable future software using these keys to function on
+contemporary kernels, where possible, instructions using these keys are
+made to behave as NOPs.
+
+The generic key (APGAKey) is currently unsupported. Instructions using
+the generic key must not be used by software. If/when supported in
+future, its presence will be advertised via a new hwcap.
+
+
+Virtualization
+--------------
+
+When CONFIG_ARM64_POINTER_AUTHENTICATION is selected, and uniform HW
+support is present, KVM will context switch all keys used by vCPUs.
+Otherwise, the feature is disabled. When disabled, accesses to the keys,
+or use of the pointer authentication instructions, within a guest will
+trap to EL2, and an UNDEFINED exception will be injected into the guest.
+
+
+Debugging
+---------
+
+When CONFIG_ARM64_POINTER_AUTHENTICATION is selected, the kernel exposes
+the position of PAC bits in the form of masks that can be queried via
+PTRACE_GETREGSET. Separate masks are exposed for instruction and data
+pointers, as the number of tag bits can vary between the two, affecting
+the number and position of PAC bits.
--
1.9.1
This patch adds basic support for pointer authentication, allowing
userspace to make use of APIAKey. The kernel maintains an APIAKey value
for each process (shared by all threads within), which is initialised to
a random value at exec() time.
Instructions using other keys (APIBKey, APDAKey, APDBKey) are disabled,
and will behave as NOPs. These may be made use of in future patches.
No support is added for the generic key (APGAKey), though this cannot be
trapped or made to behave as a NOP. Its presence is not advertised with
a hwcap.
Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm64/Kconfig | 23 +++++++++
arch/arm64/include/asm/mmu.h | 5 ++
arch/arm64/include/asm/mmu_context.h | 25 +++++++++-
arch/arm64/include/asm/pointer_auth.h | 88 +++++++++++++++++++++++++++++++++++
arch/arm64/include/uapi/asm/hwcap.h | 1 +
arch/arm64/kernel/cpufeature.c | 11 +++++
arch/arm64/kernel/cpuinfo.c | 1 +
7 files changed, 152 insertions(+), 2 deletions(-)
create mode 100644 arch/arm64/include/asm/pointer_auth.h
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 3741859..0923f70 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -931,6 +931,29 @@ config ARM64_UAO
endmenu
+menu "ARMv8.3 architectural features"
+
+config ARM64_POINTER_AUTHENTICATION
+ bool "Enable support for pointer authentication"
+ default y
+ help
+ Pointer authentication (part of the ARMv8.3 Extensions) provides
+ instructions for signing and authenticating pointers against secret
+ keys, which can be used to mitigate Return Oriented Programming (ROP)
+ and other attacks.
+
+ This option enables these instructions at EL0 (i.e. for userspace).
+
+ Choosing this option will cause the kernel to initialise secret keys
+ for each process at exec() time, with these keys being
+ context-switched along with the process.
+
+ The feature is detected at runtime. If the feature is not present in
+ hardware it will not be advertised to userspace nor will it be
+ enabled.
+
+endmenu
+
config ARM64_MODULE_CMODEL_LARGE
bool
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 5468c83..6a848f3 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -16,10 +16,15 @@
#ifndef __ASM_MMU_H
#define __ASM_MMU_H
+#include <asm/pointer_auth.h>
+
typedef struct {
atomic64_t id;
void *vdso;
unsigned long flags;
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+ struct ptrauth_keys ptrauth_keys;
+#endif
} mm_context_t;
/*
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 3257895a..06757a5 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -31,7 +31,6 @@
#include <asm/cacheflush.h>
#include <asm/cpufeature.h>
#include <asm/proc-fns.h>
-#include <asm-generic/mm_hooks.h>
#include <asm/cputype.h>
#include <asm/pgtable.h>
#include <asm/sysreg.h>
@@ -154,7 +153,14 @@ static inline void cpu_replace_ttbr1(pgd_t *pgd)
#define destroy_context(mm) do { } while(0)
void check_and_switch_context(struct mm_struct *mm, unsigned int cpu);
-#define init_new_context(tsk,mm) ({ atomic64_set(&(mm)->context.id, 0); 0; })
+static inline int init_new_context(struct task_struct *tsk,
+ struct mm_struct *mm)
+{
+ atomic64_set(&mm->context.id, 0);
+ mm_ctx_ptrauth_init(&mm->context);
+
+ return 0;
+}
/*
* This is called when "tsk" is about to enter lazy TLB mode.
@@ -200,6 +206,8 @@ static inline void __switch_mm(struct mm_struct *next)
return;
}
+ mm_ctx_ptrauth_switch(&next->context);
+
check_and_switch_context(next, cpu);
}
@@ -226,6 +234,19 @@ static inline void __switch_mm(struct mm_struct *next)
void verify_cpu_asid_bits(void);
+static inline void arch_dup_mmap(struct mm_struct *oldmm,
+ struct mm_struct *mm)
+{
+ mm_ctx_ptrauth_dup(&oldmm->context, &mm->context);
+}
+#define arch_dup_mmap arch_dup_mmap
+
+/*
+ * We need to override arch_dup_mmap before including the generic hooks, which
+ * are otherwise sufficient for us.
+ */
+#include <asm-generic/mm_hooks.h>
+
#endif /* !__ASSEMBLY__ */
#endif /* !__ASM_MMU_CONTEXT_H */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
new file mode 100644
index 0000000..345df24
--- /dev/null
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -0,0 +1,88 @@
+/*
+ * Copyright (C) 2016 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_POINTER_AUTH_H
+#define __ASM_POINTER_AUTH_H
+
+#include <linux/random.h>
+
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
+
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+/*
+ * Each key is a 128-bit quantity which is split across a pair of 64-bit
+ * registers (Lo and Hi).
+ */
+struct ptrauth_key {
+ unsigned long lo, hi;
+};
+
+/*
+ * We give each process its own instruction A key (APIAKey), which is shared by
+ * all threads. This is inherited upon fork(), and reinitialised upon exec*().
+ * All other keys are currently unused, with APIBKey, APDAKey, and APDBKey
+ * instructions behaving as NOPs.
+ */
+struct ptrauth_keys {
+ struct ptrauth_key apia;
+};
+
+static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
+{
+ if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
+ return;
+
+ get_random_bytes(keys, sizeof(*keys));
+}
+
+#define __ptrauth_key_install(k, v) ({ \
+ write_sysreg_s(v.lo, SYS_ ## k ## KEYLO_EL1); \
+ write_sysreg_s(v.hi, SYS_ ## k ## KEYHI_EL1); \
+})
+
+static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
+{
+ if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
+ return;
+
+ __ptrauth_key_install(APIA, keys->apia);
+}
+
+static inline void ptrauth_keys_dup(struct ptrauth_keys *old,
+ struct ptrauth_keys *new)
+{
+ if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
+ return;
+
+ *new = *old;
+}
+
+#define mm_ctx_ptrauth_init(ctx) \
+ ptrauth_keys_init(&(ctx)->ptrauth_keys)
+
+#define mm_ctx_ptrauth_switch(ctx) \
+ ptrauth_keys_switch(&(ctx)->ptrauth_keys)
+
+#define mm_ctx_ptrauth_dup(oldctx, newctx) \
+ ptrauth_keys_dup(&(oldctx)->ptrauth_keys, &(newctx)->ptrauth_keys)
+
+#else
+#define mm_ctx_ptrauth_init(ctx)
+#define mm_ctx_ptrauth_switch(ctx)
+#define mm_ctx_ptrauth_dup(oldctx, newctx)
+#endif
+
+#endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
index 4e187ce..0481c73 100644
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -35,5 +35,6 @@
#define HWCAP_JSCVT (1 << 13)
#define HWCAP_FCMA (1 << 14)
#define HWCAP_LRCPC (1 << 15)
+#define HWCAP_APIA (1 << 16)
#endif /* _UAPI__ASM_HWCAP_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 172c80e..6bb00d3 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -757,6 +757,15 @@ static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused
return is_kernel_in_hyp_mode();
}
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+static int cpu_enable_address_auth(void *__unused)
+{
+ config_sctlr_el1(0, SCTLR_ELx_ENIA);
+
+ return 0;
+}
+#endif /* CONFIG_ARM64_POINTER_AUTHENTICATION */
+
static bool hyp_offset_low(const struct arm64_cpu_capabilities *entry,
int __unused)
{
@@ -881,6 +890,7 @@ static bool has_no_fpsimd(const struct arm64_cpu_capabilities *entry, int __unus
.field_pos = ID_AA64ISAR1_APA_SHIFT,
.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
.matches = has_cpuid_feature,
+ .enable = cpu_enable_address_auth,
},
{
.desc = "Generic authentication (architected algorithm)",
@@ -924,6 +934,7 @@ static bool has_no_fpsimd(const struct arm64_cpu_capabilities *entry, int __unus
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_JSCVT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_JSCVT),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FCMA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_FCMA),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_LRCPC),
+ HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_APA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_APIA),
{},
};
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 68b1f36..e3845b2 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -68,6 +68,7 @@
"jscvt",
"fcma",
"lrcpc",
+ "apia",
NULL
};
--
1.9.1
If we have pointer authentication support, a guest may wish to use it.
This patch adds the infrastructure to allow it to do so.
This is sufficient for basic testing, but not for real-world usage. A
guest will still see pointer authentication support advertised in the ID
registers, and we will need to trap accesses to these to provide
sanitized values.
Signed-off-by: Mark Rutland <[email protected]>
Cc: Christoffer Dall <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: [email protected]
---
arch/arm64/include/asm/kvm_emulate.h | 15 +++++++++++++
arch/arm64/include/asm/kvm_host.h | 12 ++++++++++
arch/arm64/kvm/hyp/sysreg-sr.c | 43 ++++++++++++++++++++++++++++++++++++
3 files changed, 70 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index f5ea0ba..0c3cb43 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -28,6 +28,8 @@
#include <asm/kvm_arm.h>
#include <asm/kvm_mmio.h>
#include <asm/ptrace.h>
+#include <asm/cpucaps.h>
+#include <asm/cpufeature.h>
#include <asm/cputype.h>
#include <asm/virt.h>
@@ -49,6 +51,19 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
vcpu->arch.hcr_el2 |= HCR_E2H;
if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
vcpu->arch.hcr_el2 &= ~HCR_RW;
+
+ /*
+ * Address auth and generic auth share the same enable bits, so we have
+ * to ensure both are uniform before we can enable support in a guest.
+ * Until we have the infrastructure to detect uniform absence of a
+ * feature, only permit the case when both are supported.
+ *
+ * Note that a guest will still see the feature in ID_AA64ISAR1 until
+ * we introduce code to emulate the ID registers.
+ */
+ if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH) &&
+ cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH))
+ vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
}
static inline unsigned long vcpu_get_hcr(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e7705e7..b25f710 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -133,6 +133,18 @@ enum vcpu_sysreg {
PMSWINC_EL0, /* Software Increment Register */
PMUSERENR_EL0, /* User Enable Register */
+ /* Pointer Authentication Registers */
+ APIAKEYLO_EL1,
+ APIAKEYHI_EL1,
+ APIBKEYLO_EL1,
+ APIBKEYHI_EL1,
+ APDAKEYLO_EL1,
+ APDAKEYHI_EL1,
+ APDBKEYLO_EL1,
+ APDBKEYHI_EL1,
+ APGAKEYLO_EL1,
+ APGAKEYHI_EL1,
+
/* 32bit specific registers. Keep them at the end of the range */
DACR32_EL2, /* Domain Access Control Register */
IFSR32_EL2, /* Instruction Fault Status Register */
diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index 9341376..3440b42 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -18,6 +18,8 @@
#include <linux/compiler.h>
#include <linux/kvm_host.h>
+#include <asm/cpucaps.h>
+#include <asm/cpufeature.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_hyp.h>
@@ -31,6 +33,24 @@ static void __hyp_text __sysreg_do_nothing(struct kvm_cpu_context *ctxt) { }
* pstate, and guest must save everything.
*/
+#define __save_ap_key(regs, key) \
+ regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1); \
+ regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1)
+
+static void __hyp_text __sysreg_save_ap_keys(struct kvm_cpu_context *ctxt)
+{
+ if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH)) {
+ __save_ap_key(ctxt->sys_regs, APIA);
+ __save_ap_key(ctxt->sys_regs, APIB);
+ __save_ap_key(ctxt->sys_regs, APDA);
+ __save_ap_key(ctxt->sys_regs, APDB);
+ }
+
+ if (cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH)) {
+ __save_ap_key(ctxt->sys_regs, APGA);
+ }
+}
+
static void __hyp_text __sysreg_save_common_state(struct kvm_cpu_context *ctxt)
{
ctxt->sys_regs[ACTLR_EL1] = read_sysreg(actlr_el1);
@@ -41,6 +61,8 @@ static void __hyp_text __sysreg_save_common_state(struct kvm_cpu_context *ctxt)
ctxt->gp_regs.regs.sp = read_sysreg(sp_el0);
ctxt->gp_regs.regs.pc = read_sysreg_el2(elr);
ctxt->gp_regs.regs.pstate = read_sysreg_el2(spsr);
+
+ __sysreg_save_ap_keys(ctxt);
}
static void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
@@ -84,6 +106,25 @@ void __hyp_text __sysreg_save_guest_state(struct kvm_cpu_context *ctxt)
__sysreg_save_common_state(ctxt);
}
+#define __restore_ap_key(regs, key) \
+ write_sysreg_s(regs[key ## KEYLO_EL1], SYS_ ## key ## KEYLO_EL1); \
+ write_sysreg_s(regs[key ## KEYHI_EL1], SYS_ ## key ## KEYHI_EL1)
+
+static void __hyp_text __sysreg_restore_ap_keys(struct kvm_cpu_context *ctxt)
+{
+ if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH)) {
+ __restore_ap_key(ctxt->sys_regs, APIA);
+ __restore_ap_key(ctxt->sys_regs, APIB);
+ __restore_ap_key(ctxt->sys_regs, APDA);
+ __restore_ap_key(ctxt->sys_regs, APDB);
+ }
+
+ if (cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH)) {
+ __restore_ap_key(ctxt->sys_regs, APGA);
+ }
+}
+
+
static void __hyp_text __sysreg_restore_common_state(struct kvm_cpu_context *ctxt)
{
write_sysreg(ctxt->sys_regs[ACTLR_EL1], actlr_el1);
@@ -94,6 +135,8 @@ static void __hyp_text __sysreg_restore_common_state(struct kvm_cpu_context *ctx
write_sysreg(ctxt->gp_regs.regs.sp, sp_el0);
write_sysreg_el2(ctxt->gp_regs.regs.pc, elr);
write_sysreg_el2(ctxt->gp_regs.regs.pstate, spsr);
+
+ __sysreg_restore_ap_keys(ctxt);
}
static void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
--
1.9.1
When pointer authentication is in use, data/instruction pointers have a
number of PAC bits inserted into them. The number and position of these
bits depend on the configured TCR_ELx.TxSZ and whether tagging is
enabled. ARMv8.3 allows tagging to differ for instruction and data
pointers.
For userspace debuggers to unwind the stack and/or to follow pointer
chains, they need to be able to remove the PAC bits before attempting to
use a pointer.
This patch adds a new structure with masks describing the location of
PAC bits in instruction and data pointers, which userspace can query via
PTRACE_GETREGSET. By clearing these bits from pointers, userspace can
acquire the PAC-less versions.
This new regset is exposed when the kernel is built with (user) pointer
authentication support, and the feature is enabled. Otherwise, it is
hidden.
Note that even if the feature is available and enabled, we cannot
determine whether userspace is making use of the feature, so debuggers
need to cope with this case regardless.
Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Jiong Wang <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm64/include/asm/pointer_auth.h | 8 +++++++
arch/arm64/include/uapi/asm/ptrace.h | 5 +++++
arch/arm64/kernel/ptrace.c | 39 +++++++++++++++++++++++++++++++++++
include/uapi/linux/elf.h | 1 +
4 files changed, 53 insertions(+)
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 345df24..ed505fe 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -16,9 +16,11 @@
#ifndef __ASM_POINTER_AUTH_H
#define __ASM_POINTER_AUTH_H
+#include <linux/bitops.h>
#include <linux/random.h>
#include <asm/cpufeature.h>
+#include <asm/memory.h>
#include <asm/sysreg.h>
#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
@@ -70,6 +72,12 @@ static inline void ptrauth_keys_dup(struct ptrauth_keys *old,
*new = *old;
}
+/*
+ * The pointer bits used by a pointer authentication code.
+ * If we were to use tagged pointers, bits 63:56 would also apply.
+ */
+#define ptrauth_pac_mask() GENMASK(54, VA_BITS)
+
#define mm_ctx_ptrauth_init(ctx) \
ptrauth_keys_init(&(ctx)->ptrauth_keys)
diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index d1ff83d..5092fbf 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -90,6 +90,11 @@ struct user_hwdebug_state {
} dbg_regs[16];
};
+struct user_pac_mask {
+ __u64 data_mask;
+ __u64 insn_mask;
+};
+
#endif /* __ASSEMBLY__ */
#endif /* _UAPI__ASM_PTRACE_H */
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index c142459..b0bcdfb 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -40,8 +40,10 @@
#include <linux/elf.h>
#include <asm/compat.h>
+#include <asm/cpufeature.h>
#include <asm/debug-monitors.h>
#include <asm/pgtable.h>
+#include <asm/pointer_auth.h>
#include <asm/syscall.h>
#include <asm/traps.h>
#include <asm/system_misc.h>
@@ -693,6 +695,30 @@ static int system_call_set(struct task_struct *target,
return ret;
}
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+static int pac_mask_get(struct task_struct *target,
+ const struct user_regset *regset,
+ unsigned int pos, unsigned int count,
+ void *kbuf, void __user *ubuf)
+{
+ /*
+ * While the PAC bits are currently the same for data and instruction
+ * pointers, this could change if we use TCR_ELx.TBID*. So we expose
+ * them separately from the outset.
+ */
+ unsigned long mask = ptrauth_pac_mask();
+ struct user_pac_mask uregs = {
+ .data_mask = mask,
+ .insn_mask = mask,
+ };
+
+ if (!cpus_have_cap(ARM64_HAS_ADDRESS_AUTH))
+ return -EINVAL;
+
+ return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &uregs, 0, -1);
+}
+#endif
+
enum aarch64_regset {
REGSET_GPR,
REGSET_FPR,
@@ -702,6 +728,9 @@ enum aarch64_regset {
REGSET_HW_WATCH,
#endif
REGSET_SYSTEM_CALL,
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+ REGSET_PAC_MASK,
+#endif
};
static const struct user_regset aarch64_regsets[] = {
@@ -759,6 +788,16 @@ enum aarch64_regset {
.get = system_call_get,
.set = system_call_set,
},
+#ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
+ [REGSET_PAC_MASK] = {
+ .core_note_type = NT_ARM_PAC_MASK,
+ .n = sizeof(struct user_pac_mask) / sizeof(u64),
+ .size = sizeof(u64),
+ .align = sizeof(u64),
+ .get = pac_mask_get,
+ /* this cannot be set dynamically */
+ },
+#endif
};
static const struct user_regset_view user_aarch64_view = {
diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
index b59ee07..cae3d1e 100644
--- a/include/uapi/linux/elf.h
+++ b/include/uapi/linux/elf.h
@@ -414,6 +414,7 @@
#define NT_ARM_HW_BREAK 0x402 /* ARM hardware breakpoint registers */
#define NT_ARM_HW_WATCH 0x403 /* ARM hardware watchpoint registers */
#define NT_ARM_SYSTEM_CALL 0x404 /* ARM system call number */
+#define NT_ARM_PAC_MASK 0x405 /* ARM pointer authentication code masks */
#define NT_METAG_CBUF 0x500 /* Metag catch buffer registers */
#define NT_METAG_RPIPE 0x501 /* Metag read pipeline state */
#define NT_METAG_TLS 0x502 /* Metag TLS pointer */
--
1.9.1
The ARMv8.3 pointer authentication extension adds:
* New fields in ID_AA64ISAR1 to report the presence of pointer
authentication functionality.
* New control bits in SCTLR_ELx to enable this functionality.
* New system registers to hold the keys necessary for this
functionality.
* A new ESR_ELx.EC code used when the new instructions are affected by
  configurable traps.
This patch adds the relevant definitions to <asm/sysreg.h> and
<asm/esr.h> for these, to be used by subsequent patches.
Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm64/include/asm/esr.h | 3 ++-
arch/arm64/include/asm/sysreg.h | 30 ++++++++++++++++++++++++++++++
2 files changed, 32 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index d14c478..dd61adc 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -29,7 +29,8 @@
#define ESR_ELx_EC_CP14_LS (0x06)
#define ESR_ELx_EC_FP_ASIMD (0x07)
#define ESR_ELx_EC_CP10_ID (0x08)
-/* Unallocated EC: 0x09 - 0x0B */
+#define ESR_ELx_EC_PAC (0x09)
+/* Unallocated EC: 0x0A - 0x0B */
#define ESR_ELx_EC_CP14_64 (0x0C)
/* Unallocated EC: 0x0d */
#define ESR_ELx_EC_ILL (0x0E)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index c776bde..2ed69aa 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -122,6 +122,19 @@
#define SYS_CTR_EL0 sys_reg(3, 3, 0, 0, 1)
#define SYS_DCZID_EL0 sys_reg(3, 3, 0, 0, 7)
+#define SYS_APIAKEYLO_EL1 sys_reg(3, 0, 2, 1, 0)
+#define SYS_APIAKEYHI_EL1 sys_reg(3, 0, 2, 1, 1)
+#define SYS_APIBKEYLO_EL1 sys_reg(3, 0, 2, 1, 2)
+#define SYS_APIBKEYHI_EL1 sys_reg(3, 0, 2, 1, 3)
+
+#define SYS_APDAKEYLO_EL1 sys_reg(3, 0, 2, 2, 0)
+#define SYS_APDAKEYHI_EL1 sys_reg(3, 0, 2, 2, 1)
+#define SYS_APDBKEYLO_EL1 sys_reg(3, 0, 2, 2, 2)
+#define SYS_APDBKEYHI_EL1 sys_reg(3, 0, 2, 2, 3)
+
+#define SYS_APGAKEYLO_EL1 sys_reg(3, 0, 2, 3, 0)
+#define SYS_APGAKEYHI_EL1 sys_reg(3, 0, 2, 3, 1)
+
#define REG_PSTATE_PAN_IMM sys_reg(0, 0, 4, 0, 4)
#define REG_PSTATE_UAO_IMM sys_reg(0, 0, 4, 0, 3)
@@ -131,7 +144,11 @@
(!!x)<<8 | 0x1f)
/* Common SCTLR_ELx flags. */
+#define SCTLR_ELx_ENIA (1 << 31)
+#define SCTLR_ELx_ENIB (1 << 30)
+#define SCTLR_ELx_ENDA (1 << 27)
#define SCTLR_ELx_EE (1 << 25)
+#define SCTLR_ELx_ENDB (1 << 13)
#define SCTLR_ELx_I (1 << 12)
#define SCTLR_ELx_SA (1 << 3)
#define SCTLR_ELx_C (1 << 2)
@@ -157,9 +174,22 @@
#define ID_AA64ISAR0_AES_SHIFT 4
/* id_aa64isar1 */
+#define ID_AA64ISAR1_GPI_SHIFT 28
+#define ID_AA64ISAR1_GPA_SHIFT 24
#define ID_AA64ISAR1_LRCPC_SHIFT 20
#define ID_AA64ISAR1_FCMA_SHIFT 16
#define ID_AA64ISAR1_JSCVT_SHIFT 12
+#define ID_AA64ISAR1_API_SHIFT 8
+#define ID_AA64ISAR1_APA_SHIFT 4
+
+#define ID_AA64ISAR1_APA_NI 0x0
+#define ID_AA64ISAR1_APA_ARCHITECTED 0x1
+#define ID_AA64ISAR1_API_NI 0x0
+#define ID_AA64ISAR1_API_IMP_DEF 0x1
+#define ID_AA64ISAR1_GPA_NI 0x0
+#define ID_AA64ISAR1_GPA_ARCHITECTED 0x1
+#define ID_AA64ISAR1_GPI_NI 0x0
+#define ID_AA64ISAR1_GPI_IMP_DEF 0x1
/* id_aa64pfr0 */
#define ID_AA64PFR0_GIC_SHIFT 24
--
1.9.1
On 4/3/2017 11:19 AM, Mark Rutland wrote:
> This series adds support for the ARMv8.3 pointer authentication extension.
>
> I've included a quick intro to the extension below, with the usual series
> description below that. The final patch of the series adds additional
> documentation regarding the extension.
>
> I've based the series on the arm64 for-next/core branch [1]. I'm aware that
> this series may conflict with other patches currently in flight (e.g.
> allocation of ELF notes), and I intend to rebase this series as things settle.
>
> I've pushed the series to the arm64/pointer-auth branch [2] of my linux tree.
> I've also pushed out a necessary bootwrapper patch to the pointer-auth branch
> [3] of my bootwrapper repo.
>
>
> Extension Overview
> ==================
>
> The ARMv8.3 pointer authentication extension adds functionality to detect
> modification of pointer values, mitigating certain classes of attack such as
> stack smashing, and making return oriented programming attacks harder.
>
> The extension introduces the concept of a pointer authentication code (PAC),
> which is stored in some upper bits of pointers. Each PAC is derived from the
> original pointer, another 64-bit value (e.g. the stack pointer), and a secret
> 128-bit key.
>
> New instructions are added which can be used to:
>
> * Insert a PAC into a pointer
> * Strip a PAC from a pointer
> * Authenticate and strip a PAC from a pointer
>
> If authentication succeeds, the code is removed, yielding the original pointer.
> If authentication fails, bits are set in the pointer such that it is guaranteed
> to cause a fault if used.
>
> These instructions can make use of four keys:
>
> * APIAKey (A.K.A. Instruction A key)
> * APIBKey (A.K.A. Instruction B key)
> * APDAKey (A.K.A. Data A key)
> * APDBKey (A.K.A. Data B Key)
>
> A subset of these instruction encodings have been allocated from the HINT
> space, and will operate as NOPs on any ARMv8 parts which do not feature the
> extension (or if purposefully disabled by the kernel). Software using only this
> subset of the instructions should function correctly on all ARMv8-A parts.
>
> Additionally, instructions are added to authenticate small blocks of memory in
> similar fashion, using APGAKey (A.K.A. Generic key).
>
>
> This Series
> ===========
>
> This series enables the use of instructions using APIAKey, which is initialised
> and maintained per-process (shared by all threads). This series does not add
> support for APIBKey, APDAKey, APDBKey, nor APGAKey. The series only supports
> the use of an architected algorithm.
>
> I've given this some basic testing with a homebrew test suite. More ideally,
> we'd add some tests to the kernel source tree.
>
> I've added some basic KVM support, but this doesn't cater for systems with
> mismatched support. Looking forward, we'll need ID register emulation in KVM so
> that we can hide features from guests to cater for cases like this.
>
> There are also a few questions to consider, e.g:
>
> * Should we expose a per-process data key now, to go with the insn key?
> * Should keys be per-thread rather than per-process?
> * Should we expose generic authentication (i.e. APGAKey)?
> * Should the kernel remove PACs when unwinding user stacks?
>
> Thanks,
> Mark.
>
> [1] git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
> [2] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git arm64/pointer-auth
> [3] git://git.kernel.org/pub/scm/linux/kernel/git/mark/boot-wrapper-aarch64.git pointer-auth
>
> Mark Rutland (9):
> asm-generic: mm_hooks: allow hooks to be overridden individually
> arm64: add pointer authentication register bits
> arm64/cpufeature: add ARMv8.3 id_aa64isar1 bits
> arm64/cpufeature: detect pointer authentication
> arm64: Don't trap host pointer auth use to EL2
> arm64: add basic pointer authentication support
> arm64: expose PAC bit positions via ptrace
> arm64/kvm: context-switch PAC registers
> arm64: docs: document pointer authentication
>
> Documentation/arm64/booting.txt | 8 +++
> Documentation/arm64/pointer-authentication.txt | 78 +++++++++++++++++++++
> arch/arm64/Kconfig | 23 ++++++
> arch/arm64/include/asm/cpucaps.h | 4 +-
> arch/arm64/include/asm/esr.h | 3 +-
> arch/arm64/include/asm/kvm_arm.h | 2 +
> arch/arm64/include/asm/kvm_emulate.h | 15 ++++
> arch/arm64/include/asm/kvm_host.h | 12 ++++
> arch/arm64/include/asm/mmu.h | 5 ++
> arch/arm64/include/asm/mmu_context.h | 25 ++++++-
> arch/arm64/include/asm/pointer_auth.h | 96 ++++++++++++++++++++++++++
> arch/arm64/include/asm/sysreg.h | 30 ++++++++
> arch/arm64/include/uapi/asm/hwcap.h | 1 +
> arch/arm64/include/uapi/asm/ptrace.h | 5 ++
> arch/arm64/kernel/cpufeature.c | 39 ++++++++++-
> arch/arm64/kernel/cpuinfo.c | 1 +
> arch/arm64/kernel/head.S | 19 ++++-
> arch/arm64/kernel/ptrace.c | 39 +++++++++++
> arch/arm64/kvm/hyp/sysreg-sr.c | 43 ++++++++++++
> include/asm-generic/mm_hooks.h | 12 ++++
> include/uapi/linux/elf.h | 1 +
> 21 files changed, 454 insertions(+), 7 deletions(-)
> create mode 100644 Documentation/arm64/pointer-authentication.txt
> create mode 100644 arch/arm64/include/asm/pointer_auth.h
>
Tested on a Qualcomm platform with the ARMv8 architecture (without the 8.3
extensions) for backwards compatibility (meaning I did not pass -march=armv8.3-a
to GCC; only -msign-return-address=all). The HINT-space PACIASP/AUTIASP
instructions caused no issues, and no other issues were encountered. Will test
again once a platform with the ARMv8.3-A extensions is available.
Thanks
--
Adam Wallis
Qualcomm Datacenter Technologies as an affiliate of Qualcomm Technologies, Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project.
Hi Mark,
On 03/04/17 16:19, Mark Rutland wrote:
> If we have pointer authentication support, a guest may wish to use it.
> This patch adds the infrastructure to allow it to do so.
>
> This is sufficient for basic testing, but not for real-world usage. A
> guest will still see pointer authentication support advertised in the ID
> registers, and we will need to trap accesses to these to provide
> sanitized values.
>
> Signed-off-by: Mark Rutland <[email protected]>
> Cc: Christoffer Dall <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: [email protected]
> ---
> arch/arm64/include/asm/kvm_emulate.h | 15 +++++++++++++
> arch/arm64/include/asm/kvm_host.h | 12 ++++++++++
> arch/arm64/kvm/hyp/sysreg-sr.c | 43 ++++++++++++++++++++++++++++++++++++
> 3 files changed, 70 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index f5ea0ba..0c3cb43 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -28,6 +28,8 @@
> #include <asm/kvm_arm.h>
> #include <asm/kvm_mmio.h>
> #include <asm/ptrace.h>
> +#include <asm/cpucaps.h>
> +#include <asm/cpufeature.h>
> #include <asm/cputype.h>
> #include <asm/virt.h>
>
> @@ -49,6 +51,19 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
> vcpu->arch.hcr_el2 |= HCR_E2H;
> if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
> vcpu->arch.hcr_el2 &= ~HCR_RW;
> +
> + /*
> + * Address auth and generic auth share the same enable bits, so we have
> + * to ensure both are uniform before we can enable support in a guest.
> + * Until we have the infrastructure to detect uniform absence of a
> + * feature, only permit the case when both are supported.
> + *
> + * Note that a guest will still see the feature in ID_AA64_ISAR1 until
> + * we introduce code to emulate the ID registers.
> + */
> + if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH) &&
> + cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH))
> + vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
Instead of unconditionally allowing the guest to access this feature...
> }
>
> static inline unsigned long vcpu_get_hcr(struct kvm_vcpu *vcpu)
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index e7705e7..b25f710 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -133,6 +133,18 @@ enum vcpu_sysreg {
> PMSWINC_EL0, /* Software Increment Register */
> PMUSERENR_EL0, /* User Enable Register */
>
> + /* Pointer Authentication Registers */
> + APIAKEYLO_EL1,
> + APIAKEYHI_EL1,
> + APIBKEYLO_EL1,
> + APIBKEYHI_EL1,
> + APDAKEYLO_EL1,
> + APDAKEYHI_EL1,
> + APDBKEYLO_EL1,
> + APDBKEYHI_EL1,
> + APGAKEYLO_EL1,
> + APGAKEYHI_EL1,
> +
> /* 32bit specific registers. Keep them at the end of the range */
> DACR32_EL2, /* Domain Access Control Register */
> IFSR32_EL2, /* Instruction Fault Status Register */
> diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
> index 9341376..3440b42 100644
> --- a/arch/arm64/kvm/hyp/sysreg-sr.c
> +++ b/arch/arm64/kvm/hyp/sysreg-sr.c
> @@ -18,6 +18,8 @@
> #include <linux/compiler.h>
> #include <linux/kvm_host.h>
>
> +#include <asm/cpucaps.h>
> +#include <asm/cpufeature.h>
> #include <asm/kvm_asm.h>
> #include <asm/kvm_hyp.h>
>
> @@ -31,6 +33,24 @@ static void __hyp_text __sysreg_do_nothing(struct kvm_cpu_context *ctxt) { }
> * pstate, and guest must save everything.
> */
>
> +#define __save_ap_key(regs, key) \
> + regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1); \
> + regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1)
> +
> +static void __hyp_text __sysreg_save_ap_keys(struct kvm_cpu_context *ctxt)
> +{
> + if (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH)) {
> + __save_ap_key(ctxt->sys_regs, APIA);
> + __save_ap_key(ctxt->sys_regs, APIB);
> + __save_ap_key(ctxt->sys_regs, APDA);
> + __save_ap_key(ctxt->sys_regs, APDB);
> + }
> +
> + if (cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH)) {
> + __save_ap_key(ctxt->sys_regs, APGA);
> + }
> +}
> +
...which immediately translates into quite a bit of sysreg churn on both
host and guest (especially given that these are not banked by exception
level), could we make it a bit lazier instead?
Even an "enable on first use" approach would be good, given that it is likely
that we'll have non-PAC-enabled VMs for quite a while.
Thoughts?
M.
--
Jazz is not dead. It just smells funny...
On Mon, Apr 03, 2017 at 04:19:23PM +0100, Mark Rutland wrote:
> When pointer authentication is in use, data/instruction pointers have a
> number of PAC bits inserted into them. The number and position of these
> bits depends on the configured TCR_ELx.TxSZ and whether tagging is
> enabled. ARMv8.3 allows tagging to differ for instruction and data
> pointers.
>
> For userspace debuggers to unwind the stack and/or to follow pointer
> chains, they need to be able to remove the PAC bits before attempting to
> use a pointer.
>
> This patch adds a new structure with masks describing the location of
> PAC bits in instruction and data pointers, which userspace can query via
> PTRACE_GETREGSET. By clearing these bits from pointers, userspace can
> acquire the PAC-less versions.
>
> This new regset is exposed when the kernel is built with (user) pointer
> authentication support, and the feature is enabled. Otherwise, it is
> hidden.
>
> Note that even if the feature is available and enabled, we cannot
> determine whether userspace is making use of the feature, so debuggers
> need to cope with this case regardless.
>
> Signed-off-by: Mark Rutland <[email protected]>
> Cc: Catalin Marinas <[email protected]>
> Cc: Jiong Wang <[email protected]>
> Cc: Will Deacon <[email protected]>
> ---
> arch/arm64/include/asm/pointer_auth.h | 8 +++++++
> arch/arm64/include/uapi/asm/ptrace.h | 5 +++++
> arch/arm64/kernel/ptrace.c | 39 +++++++++++++++++++++++++++++++++++
> include/uapi/linux/elf.h | 1 +
> 4 files changed, 53 insertions(+)
>
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index 345df24..ed505fe 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -16,9 +16,11 @@
> #ifndef __ASM_POINTER_AUTH_H
> #define __ASM_POINTER_AUTH_H
>
> +#include <linux/bitops.h>
> #include <linux/random.h>
>
> #include <asm/cpufeature.h>
> +#include <asm/memory.h>
> #include <asm/sysreg.h>
>
> #ifdef CONFIG_ARM64_POINTER_AUTHENTICATION
> @@ -70,6 +72,12 @@ static inline void ptrauth_keys_dup(struct ptrauth_keys *old,
> *new = *old;
> }
>
> +/*
> + * The pointer bits used by a pointer authentication code.
> + * If we were to use tagged pointers, bits 63:56 would also apply.
> + */
> +#define ptrauth_pac_mask() GENMASK(54, VA_BITS)
Tagged pointers _are_ enabled for userspace by default, no?
[...]
> diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
> index b59ee07..cae3d1e 100644
> --- a/include/uapi/linux/elf.h
> +++ b/include/uapi/linux/elf.h
> @@ -414,6 +414,7 @@
> #define NT_ARM_HW_BREAK 0x402 /* ARM hardware breakpoint registers */
> #define NT_ARM_HW_WATCH 0x403 /* ARM hardware watchpoint registers */
> #define NT_ARM_SYSTEM_CALL 0x404 /* ARM system call number */
> +#define NT_ARM_PAC_MASK 0x405 /* ARM pointer authentication code masks */
This is the value tentatively assigned to NT_ARM_SVE.
Cheers
---Dave
On Tue, Jul 25, 2017 at 01:11:48PM +0100, Dave Martin wrote:
> On Mon, Apr 03, 2017 at 04:19:23PM +0100, Mark Rutland wrote:
> > +/*
> > + * The pointer bits used by a pointer authentication code.
> > + * If we were to use tagged pointers, bits 63:56 would also apply.
> > + */
> > +#define ptrauth_pac_mask() GENMASK(54, VA_BITS)
>
> Tagged pointers _are_ enabled for userspace by default, no?
Yes; I'd meant s/tagged/untagged/.
I've corrected this to:
/*
* The EL0 pointer bits used by a pointer authentication code.
* This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
*/
> > diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
> > index b59ee07..cae3d1e 100644
> > --- a/include/uapi/linux/elf.h
> > +++ b/include/uapi/linux/elf.h
> > @@ -414,6 +414,7 @@
> > #define NT_ARM_HW_BREAK 0x402 /* ARM hardware breakpoint registers */
> > #define NT_ARM_HW_WATCH 0x403 /* ARM hardware watchpoint registers */
> > #define NT_ARM_SYSTEM_CALL 0x404 /* ARM system call number */
> > +#define NT_ARM_PAC_MASK 0x405 /* ARM pointer authentication code masks */
>
> This is the value tentatively assigned to NT_ARM_SVE.
I must've generated this patch before I corrected this; my local branch
(and kernel.org) have 0x406 here.
Sorry about that.
Mark.
On Tue, Jul 25, 2017 at 03:59:04PM +0100, Mark Rutland wrote:
> On Tue, Jul 25, 2017 at 01:11:48PM +0100, Dave Martin wrote:
> > On Mon, Apr 03, 2017 at 04:19:23PM +0100, Mark Rutland wrote:
> > > +/*
> > > + * The pointer bits used by a pointer authentication code.
> > > + * If we were to use tagged pointers, bits 63:56 would also apply.
> > > + */
> > > +#define ptrauth_pac_mask() GENMASK(54, VA_BITS)
> >
> > Tagged pointers _are_ enabled for userspace by default, no?
>
> Yes; I'd meant s/tagged/untagged/.
>
> I've corrected this to:
>
> /*
> * The EL0 pointer bits used by a pointer authentication code.
> * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
> */
Yes, that's better. If we do enable untagged pointers for userspace at
some point though, this is likely to be missed.
I don't have a good answer to this.
> > > diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
> > > index b59ee07..cae3d1e 100644
> > > --- a/include/uapi/linux/elf.h
> > > +++ b/include/uapi/linux/elf.h
> > > @@ -414,6 +414,7 @@
> > > #define NT_ARM_HW_BREAK 0x402 /* ARM hardware breakpoint registers */
> > > #define NT_ARM_HW_WATCH 0x403 /* ARM hardware watchpoint registers */
> > > #define NT_ARM_SYSTEM_CALL 0x404 /* ARM system call number */
> > > +#define NT_ARM_PAC_MASK 0x405 /* ARM pointer authentication code masks */
> >
> > This is the value tentatively assigned to NT_ARM_SVE.
>
> I must've generated this patch before I corrected this; my local branch
> (and kernel.org) have 0x406 here.
>
> Sorry about that.
Shame, I had a rant about pragmatism prepped and ready ;)
Cheers
---Dave