2018-12-07 18:41:02

by Kristina Martsenko

Subject: [PATCH v6 00/13] ARMv8.3 pointer authentication userspace support

Hi,

This series adds support for the ARMv8.3 pointer authentication extension,
enabling userspace return address protection with GCC 7 and above.

(The previous version also had in-kernel pointer authentication patches
as RFC; these will be updated and sent at a later time.)

Changes since v5 [1]:
- Exposed all 5 keys (not just APIAKey) [Will]
- New prctl for reinitializing keys [Will]
- New ptrace options for getting and setting keys [Will]
- Keys now per-thread instead of per-mm [Catalin]
- Fixed cpufeature detection for late CPUs [Suzuki]
- Added comments for ESR_ELx_EC_* definitions [Will]
- Rebased onto v4.20-rc5

This series is based on v4.20-rc5. The aarch64 bootwrapper [2] does the
necessary EL3 setup.

The patches are also available at:
git://linux-arm.org/linux-km.git ptrauth-user


Extension Overview
==================

The ARMv8.3 pointer authentication extension adds functionality to detect
modification of pointer values, mitigating certain classes of attack such as
stack smashing, and making return-oriented programming attacks harder.

The extension introduces the concept of a pointer authentication code (PAC),
which is stored in some upper bits of pointers. Each PAC is derived from the
original pointer, another 64-bit value (e.g. the stack pointer), and a secret
128-bit key.

New instructions are added which can be used to:

* Insert a PAC into a pointer
* Strip a PAC from a pointer
* Authenticate and strip a PAC from a pointer

If authentication succeeds, the code is removed, yielding the original pointer.
If authentication fails, bits are set in the pointer such that it is guaranteed
to cause a fault if used.

These instructions can make use of four keys:

* APIAKey (A.K.A. Instruction A key)
* APIBKey (A.K.A. Instruction B key)
* APDAKey (A.K.A. Data A key)
* APDBKey (A.K.A. Data B key)

A subset of these instruction encodings has been allocated from the HINT
space, and will operate as NOPs on any ARMv8-A parts which do not feature the
extension (or if purposefully disabled by the kernel). Software using only this
subset of the instructions should function correctly on all ARMv8-A parts.
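
For example, a compiler signing return addresses with only the NOP-compatible
subset (as GCC's -msign-return-address option does) can emit something like
the following; this is a sketch from memory, not the output of any particular
compiler:

```asm
func:
	paciasp                         // sign LR with APIAKey and SP; NOP pre-ARMv8.3
	stp	x29, x30, [sp, #-16]!
	// ... function body ...
	ldp	x29, x30, [sp], #16
	autiasp                         // authenticate LR; NOP pre-ARMv8.3
	ret
```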

Additionally, instructions are added to authenticate small blocks of memory in
similar fashion, using APGAKey (A.K.A. Generic key).


This series
===========

This series enables userspace to use any pointer authentication instructions,
using any of the 5 keys. The keys are initialised and maintained per-process
(shared by all threads).

For the time being, this series hides pointer authentication functionality from
KVM guests. Amit Kachhap is currently looking into supporting pointer
authentication in guests.

Setting uprobes on pointer authentication instructions is not yet supported, and
may cause the application to behave in unexpected ways.

Feedback and comments are welcome.

Thanks,
Kristina

[1] https://lore.kernel.org/lkml/[email protected]/
[2] git://git.kernel.org/pub/scm/linux/kernel/git/mark/boot-wrapper-aarch64.git


Kristina Martsenko (3):
arm64: add comments about EC exception levels
arm64: add prctl control for resetting ptrauth keys
arm64: add ptrace regsets for ptrauth key management

Mark Rutland (10):
arm64: add pointer authentication register bits
arm64/kvm: consistently handle host HCR_EL2 flags
arm64/kvm: hide ptrauth from guests
arm64: Don't trap host pointer auth use to EL2
arm64/cpufeature: detect pointer authentication
arm64: add basic pointer authentication support
arm64: expose user PAC bit positions via ptrace
arm64: perf: strip PAC when unwinding userspace
arm64: enable pointer authentication
arm64: docs: document pointer authentication

Documentation/arm64/booting.txt | 8 ++
Documentation/arm64/cpu-feature-registers.txt | 8 ++
Documentation/arm64/elf_hwcaps.txt | 12 +++
Documentation/arm64/pointer-authentication.txt | 93 +++++++++++++++++++++
arch/arm64/Kconfig | 23 ++++++
arch/arm64/include/asm/cpucaps.h | 8 +-
arch/arm64/include/asm/cpufeature.h | 12 +++
arch/arm64/include/asm/esr.h | 17 ++--
arch/arm64/include/asm/kvm_arm.h | 3 +
arch/arm64/include/asm/pointer_auth.h | 93 +++++++++++++++++++++
arch/arm64/include/asm/processor.h | 4 +
arch/arm64/include/asm/sysreg.h | 30 +++++++
arch/arm64/include/asm/thread_info.h | 4 +
arch/arm64/include/uapi/asm/hwcap.h | 2 +
arch/arm64/include/uapi/asm/ptrace.h | 25 ++++++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/cpufeature.c | 103 +++++++++++++++++++++++
arch/arm64/kernel/cpuinfo.c | 2 +
arch/arm64/kernel/head.S | 5 +-
arch/arm64/kernel/perf_callchain.c | 6 +-
arch/arm64/kernel/pointer_auth.c | 47 +++++++++++
arch/arm64/kernel/process.c | 4 +
arch/arm64/kernel/ptrace.c | 110 +++++++++++++++++++++++++
arch/arm64/kvm/handle_exit.c | 18 ++++
arch/arm64/kvm/hyp/switch.c | 2 +-
arch/arm64/kvm/sys_regs.c | 8 ++
include/uapi/linux/elf.h | 3 +
include/uapi/linux/prctl.h | 8 ++
kernel/sys.c | 8 ++
29 files changed, 653 insertions(+), 14 deletions(-)
create mode 100644 Documentation/arm64/pointer-authentication.txt
create mode 100644 arch/arm64/include/asm/pointer_auth.h
create mode 100644 arch/arm64/kernel/pointer_auth.c

--
2.11.0



2018-12-07 18:41:05

by Kristina Martsenko

Subject: [PATCH v6 01/13] arm64: add comments about EC exception levels

To make it clear which exceptions can't be taken to EL1 or EL2, add
comments next to the ESR_ELx_EC_* macro definitions.

Signed-off-by: Kristina Martsenko <[email protected]>
---
arch/arm64/include/asm/esr.h | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 676de2ec1762..23602a0083ad 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -29,23 +29,23 @@
#define ESR_ELx_EC_CP14_MR (0x05)
#define ESR_ELx_EC_CP14_LS (0x06)
#define ESR_ELx_EC_FP_ASIMD (0x07)
-#define ESR_ELx_EC_CP10_ID (0x08)
+#define ESR_ELx_EC_CP10_ID (0x08) /* EL2 only */
/* Unallocated EC: 0x09 - 0x0B */
#define ESR_ELx_EC_CP14_64 (0x0C)
/* Unallocated EC: 0x0d */
#define ESR_ELx_EC_ILL (0x0E)
/* Unallocated EC: 0x0F - 0x10 */
#define ESR_ELx_EC_SVC32 (0x11)
-#define ESR_ELx_EC_HVC32 (0x12)
-#define ESR_ELx_EC_SMC32 (0x13)
+#define ESR_ELx_EC_HVC32 (0x12) /* EL2 only */
+#define ESR_ELx_EC_SMC32 (0x13) /* EL2 and above */
/* Unallocated EC: 0x14 */
#define ESR_ELx_EC_SVC64 (0x15)
-#define ESR_ELx_EC_HVC64 (0x16)
-#define ESR_ELx_EC_SMC64 (0x17)
+#define ESR_ELx_EC_HVC64 (0x16) /* EL2 and above */
+#define ESR_ELx_EC_SMC64 (0x17) /* EL2 and above */
#define ESR_ELx_EC_SYS64 (0x18)
#define ESR_ELx_EC_SVE (0x19)
/* Unallocated EC: 0x1A - 0x1E */
-#define ESR_ELx_EC_IMP_DEF (0x1f)
+#define ESR_ELx_EC_IMP_DEF (0x1f) /* EL3 only */
#define ESR_ELx_EC_IABT_LOW (0x20)
#define ESR_ELx_EC_IABT_CUR (0x21)
#define ESR_ELx_EC_PC_ALIGN (0x22)
@@ -68,7 +68,7 @@
/* Unallocated EC: 0x36 - 0x37 */
#define ESR_ELx_EC_BKPT32 (0x38)
/* Unallocated EC: 0x39 */
-#define ESR_ELx_EC_VECTOR32 (0x3A)
+#define ESR_ELx_EC_VECTOR32 (0x3A) /* EL2 only */
/* Unallocted EC: 0x3B */
#define ESR_ELx_EC_BRK64 (0x3C)
/* Unallocated EC: 0x3D - 0x3F */
--
2.11.0


2018-12-07 18:41:11

by Kristina Martsenko

Subject: [PATCH v6 02/13] arm64: add pointer authentication register bits

From: Mark Rutland <[email protected]>

The ARMv8.3 pointer authentication extension adds:

* New fields in ID_AA64ISAR1 to report the presence of pointer
authentication functionality.

* New control bits in SCTLR_ELx to enable this functionality.

* New system registers to hold the keys necessary for this
functionality.

* A new ESR_ELx.EC code used when the new instructions are affected by
configurable traps.

This patch adds the relevant definitions to <asm/sysreg.h> and
<asm/esr.h> for these, to be used by subsequent patches.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Kristina Martsenko <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm64/include/asm/esr.h | 3 ++-
arch/arm64/include/asm/sysreg.h | 30 ++++++++++++++++++++++++++++++
2 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 23602a0083ad..52233f00d53d 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -30,7 +30,8 @@
#define ESR_ELx_EC_CP14_LS (0x06)
#define ESR_ELx_EC_FP_ASIMD (0x07)
#define ESR_ELx_EC_CP10_ID (0x08) /* EL2 only */
-/* Unallocated EC: 0x09 - 0x0B */
+#define ESR_ELx_EC_PAC (0x09) /* EL2 and above */
+/* Unallocated EC: 0x0A - 0x0B */
#define ESR_ELx_EC_CP14_64 (0x0C)
/* Unallocated EC: 0x0d */
#define ESR_ELx_EC_ILL (0x0E)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 842fb9572661..cb6d7a2a2316 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -183,6 +183,19 @@
#define SYS_TTBR1_EL1 sys_reg(3, 0, 2, 0, 1)
#define SYS_TCR_EL1 sys_reg(3, 0, 2, 0, 2)

+#define SYS_APIAKEYLO_EL1 sys_reg(3, 0, 2, 1, 0)
+#define SYS_APIAKEYHI_EL1 sys_reg(3, 0, 2, 1, 1)
+#define SYS_APIBKEYLO_EL1 sys_reg(3, 0, 2, 1, 2)
+#define SYS_APIBKEYHI_EL1 sys_reg(3, 0, 2, 1, 3)
+
+#define SYS_APDAKEYLO_EL1 sys_reg(3, 0, 2, 2, 0)
+#define SYS_APDAKEYHI_EL1 sys_reg(3, 0, 2, 2, 1)
+#define SYS_APDBKEYLO_EL1 sys_reg(3, 0, 2, 2, 2)
+#define SYS_APDBKEYHI_EL1 sys_reg(3, 0, 2, 2, 3)
+
+#define SYS_APGAKEYLO_EL1 sys_reg(3, 0, 2, 3, 0)
+#define SYS_APGAKEYHI_EL1 sys_reg(3, 0, 2, 3, 1)
+
#define SYS_ICC_PMR_EL1 sys_reg(3, 0, 4, 6, 0)

#define SYS_AFSR0_EL1 sys_reg(3, 0, 5, 1, 0)
@@ -432,9 +445,13 @@

/* Common SCTLR_ELx flags. */
#define SCTLR_ELx_DSSBS (1UL << 44)
+#define SCTLR_ELx_ENIA (1 << 31)
+#define SCTLR_ELx_ENIB (1 << 30)
+#define SCTLR_ELx_ENDA (1 << 27)
#define SCTLR_ELx_EE (1 << 25)
#define SCTLR_ELx_IESB (1 << 21)
#define SCTLR_ELx_WXN (1 << 19)
+#define SCTLR_ELx_ENDB (1 << 13)
#define SCTLR_ELx_I (1 << 12)
#define SCTLR_ELx_SA (1 << 3)
#define SCTLR_ELx_C (1 << 2)
@@ -528,11 +545,24 @@
#define ID_AA64ISAR0_AES_SHIFT 4

/* id_aa64isar1 */
+#define ID_AA64ISAR1_GPI_SHIFT 28
+#define ID_AA64ISAR1_GPA_SHIFT 24
#define ID_AA64ISAR1_LRCPC_SHIFT 20
#define ID_AA64ISAR1_FCMA_SHIFT 16
#define ID_AA64ISAR1_JSCVT_SHIFT 12
+#define ID_AA64ISAR1_API_SHIFT 8
+#define ID_AA64ISAR1_APA_SHIFT 4
#define ID_AA64ISAR1_DPB_SHIFT 0

+#define ID_AA64ISAR1_APA_NI 0x0
+#define ID_AA64ISAR1_APA_ARCHITECTED 0x1
+#define ID_AA64ISAR1_API_NI 0x0
+#define ID_AA64ISAR1_API_IMP_DEF 0x1
+#define ID_AA64ISAR1_GPA_NI 0x0
+#define ID_AA64ISAR1_GPA_ARCHITECTED 0x1
+#define ID_AA64ISAR1_GPI_NI 0x0
+#define ID_AA64ISAR1_GPI_IMP_DEF 0x1
+
/* id_aa64pfr0 */
#define ID_AA64PFR0_CSV3_SHIFT 60
#define ID_AA64PFR0_CSV2_SHIFT 56
--
2.11.0


2018-12-07 18:41:22

by Kristina Martsenko

Subject: [PATCH v6 03/13] arm64/kvm: consistently handle host HCR_EL2 flags

From: Mark Rutland <[email protected]>

In KVM we define the configuration of HCR_EL2 for a VHE host in
HCR_HOST_VHE_FLAGS, but we don't have a similar definition for the
non-VHE host flags, and open-code HCR_RW. Further, in head.S we
open-code the flags for VHE and non-VHE configurations.

In future, we're going to want to configure more flags for the host, so
let's add a HCR_HOST_NVHE_FLAGS definition, and consistently use both
HCR_HOST_VHE_FLAGS and HCR_HOST_NVHE_FLAGS in the kvm code and head.S.

We now use mov_q to generate the HCR_EL2 value, as we do when
configuring other registers in head.S.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Kristina Martsenko <[email protected]>
Reviewed-by: Christoffer Dall <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
---
arch/arm64/include/asm/kvm_arm.h | 1 +
arch/arm64/kernel/head.S | 5 ++---
arch/arm64/kvm/hyp/switch.c | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 6f602af5263c..c8825c5a8dd0 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -87,6 +87,7 @@
HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
HCR_FMO | HCR_IMO)
#define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
+#define HCR_HOST_NVHE_FLAGS (HCR_RW)
#define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)

/* TCR_EL2 Registers bits */
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 4471f570a295..b207a2ce4bc6 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -496,10 +496,9 @@ ENTRY(el2_setup)
#endif

/* Hyp configuration. */
- mov x0, #HCR_RW // 64-bit EL1
+ mov_q x0, HCR_HOST_NVHE_FLAGS
cbz x2, set_hcr
- orr x0, x0, #HCR_TGE // Enable Host Extensions
- orr x0, x0, #HCR_E2H
+ mov_q x0, HCR_HOST_VHE_FLAGS
set_hcr:
msr hcr_el2, x0
isb
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 7cc175c88a37..f6e02cc4d856 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -157,7 +157,7 @@ static void __hyp_text __deactivate_traps_nvhe(void)
mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;

write_sysreg(mdcr_el2, mdcr_el2);
- write_sysreg(HCR_RW, hcr_el2);
+ write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
}

--
2.11.0


2018-12-07 18:41:33

by Kristina Martsenko

Subject: [PATCH v6 04/13] arm64/kvm: hide ptrauth from guests

From: Mark Rutland <[email protected]>

In subsequent patches we're going to expose ptrauth to the host kernel
and userspace, but things are a bit trickier for guest kernels. For the
time being, let's hide ptrauth from KVM guests.

Regardless of how well-behaved the guest kernel is, guest userspace
could attempt to use ptrauth instructions, triggering a trap to EL2,
resulting in noise from kvm_handle_unknown_ec(). So let's write up a
handler for the PAC trap, which silently injects an UNDEF into the
guest, as if the feature were really missing.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Kristina Martsenko <[email protected]>
Reviewed-by: Andrew Jones <[email protected]>
Reviewed-by: Christoffer Dall <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: [email protected]
---
arch/arm64/kvm/handle_exit.c | 18 ++++++++++++++++++
arch/arm64/kvm/sys_regs.c | 8 ++++++++
2 files changed, 26 insertions(+)

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 35a81bebd02b..ab35929dcb3c 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -173,6 +173,23 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
return 1;
}

+/*
+ * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
+ * a NOP).
+ */
+static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+ /*
+ * We don't currently support ptrauth in a guest, and we mask the ID
+ * registers to prevent well-behaved guests from trying to make use of
+ * it.
+ *
+ * Inject an UNDEF, as if the feature really isn't present.
+ */
+ kvm_inject_undefined(vcpu);
+ return 1;
+}
+
static exit_handle_fn arm_exit_handlers[] = {
[0 ... ESR_ELx_EC_MAX] = kvm_handle_unknown_ec,
[ESR_ELx_EC_WFx] = kvm_handle_wfx,
@@ -195,6 +212,7 @@ static exit_handle_fn arm_exit_handlers[] = {
[ESR_ELx_EC_BKPT32] = kvm_handle_guest_debug,
[ESR_ELx_EC_BRK64] = kvm_handle_guest_debug,
[ESR_ELx_EC_FP_ASIMD] = handle_no_fpsimd,
+ [ESR_ELx_EC_PAC] = kvm_handle_ptrauth,
};

static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 22fbbdbece3c..1ca592d38c3c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1040,6 +1040,14 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
kvm_debug("SVE unsupported for guests, suppressing\n");

val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
+ } else if (id == SYS_ID_AA64ISAR1_EL1) {
+ const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
+ (0xfUL << ID_AA64ISAR1_API_SHIFT) |
+ (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
+ (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
+ if (val & ptrauth_mask)
+ kvm_debug("ptrauth unsupported for guests, suppressing\n");
+ val &= ~ptrauth_mask;
} else if (id == SYS_ID_AA64MMFR1_EL1) {
if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
kvm_debug("LORegions unsupported for guests, suppressing\n");
--
2.11.0


2018-12-07 18:41:38

by Kristina Martsenko

Subject: [PATCH v6 05/13] arm64: Don't trap host pointer auth use to EL2

From: Mark Rutland <[email protected]>

To allow EL0 (and/or EL1) to use pointer authentication functionality,
we must ensure that pointer authentication instructions and accesses to
pointer authentication keys are not trapped to EL2.

This patch ensures that HCR_EL2 is configured appropriately when the
kernel is booted at EL2. For non-VHE kernels we set HCR_EL2.{API,APK},
ensuring that EL1 can access keys and permit EL0 use of instructions.
For VHE kernels host EL0 (TGE && E2H) is unaffected by these settings,
and it doesn't matter how we configure HCR_EL2.{API,APK}, so we don't
bother setting them.

This does not enable support for KVM guests, since KVM manages HCR_EL2
itself when running VMs.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Kristina Martsenko <[email protected]>
Acked-by: Christoffer Dall <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
---
arch/arm64/include/asm/kvm_arm.h | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index c8825c5a8dd0..f9123fe8fcf3 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -24,6 +24,8 @@

/* Hyp Configuration Register (HCR) bits */
#define HCR_FWB (UL(1) << 46)
+#define HCR_API (UL(1) << 41)
+#define HCR_APK (UL(1) << 40)
#define HCR_TEA (UL(1) << 37)
#define HCR_TERR (UL(1) << 36)
#define HCR_TLOR (UL(1) << 35)
@@ -87,7 +89,7 @@
HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
HCR_FMO | HCR_IMO)
#define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
-#define HCR_HOST_NVHE_FLAGS (HCR_RW)
+#define HCR_HOST_NVHE_FLAGS (HCR_RW | HCR_API | HCR_APK)
#define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)

/* TCR_EL2 Registers bits */
--
2.11.0


2018-12-07 18:41:53

by Kristina Martsenko

Subject: [PATCH v6 07/13] arm64: add basic pointer authentication support

From: Mark Rutland <[email protected]>

This patch adds basic support for pointer authentication, allowing
userspace to make use of APIAKey, APIBKey, APDAKey, APDBKey, and
APGAKey. The kernel maintains key values for each process (shared by all
threads within), which are initialised to random values at exec() time.

The ID_AA64ISAR1_EL1.{APA,API,GPA,GPI} fields are exposed to userspace,
to describe that pointer authentication instructions are available and
that the kernel is managing the keys. Two new hwcaps are added for the
same reason: PACA (for address authentication) and PACG (for generic
authentication).

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Kristina Martsenko <[email protected]>
Tested-by: Adam Wallis <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Ramana Radhakrishnan <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm64/include/asm/pointer_auth.h | 75 +++++++++++++++++++++++++++++++++++
arch/arm64/include/asm/thread_info.h | 4 ++
arch/arm64/include/uapi/asm/hwcap.h | 2 +
arch/arm64/kernel/cpufeature.c | 13 ++++++
arch/arm64/kernel/cpuinfo.c | 2 +
arch/arm64/kernel/process.c | 4 ++
6 files changed, 100 insertions(+)
create mode 100644 arch/arm64/include/asm/pointer_auth.h

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
new file mode 100644
index 000000000000..fc7ffe8e326f
--- /dev/null
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -0,0 +1,75 @@
+// SPDX-License-Identifier: GPL-2.0
+#ifndef __ASM_POINTER_AUTH_H
+#define __ASM_POINTER_AUTH_H
+
+#include <linux/random.h>
+
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+/*
+ * Each key is a 128-bit quantity which is split across a pair of 64-bit
+ * registers (Lo and Hi).
+ */
+struct ptrauth_key {
+ unsigned long lo, hi;
+};
+
+/*
+ * We give each process its own keys, which are shared by all threads. The keys
+ * are inherited upon fork(), and reinitialised upon exec*().
+ */
+struct ptrauth_keys {
+ struct ptrauth_key apia;
+ struct ptrauth_key apib;
+ struct ptrauth_key apda;
+ struct ptrauth_key apdb;
+ struct ptrauth_key apga;
+};
+
+static inline void ptrauth_keys_init(struct ptrauth_keys *keys)
+{
+ if (system_supports_address_auth())
+ get_random_bytes(keys, sizeof(struct ptrauth_key) * 4);
+
+ if (system_supports_generic_auth())
+ get_random_bytes(&keys->apga, sizeof(struct ptrauth_key));
+}
+
+#define __ptrauth_key_install(k, v) \
+do { \
+ struct ptrauth_key __pki_v = (v); \
+ write_sysreg_s(__pki_v.lo, SYS_ ## k ## KEYLO_EL1); \
+ write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1); \
+} while (0)
+
+static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
+{
+ if (system_supports_address_auth()) {
+ __ptrauth_key_install(APIA, keys->apia);
+ __ptrauth_key_install(APIB, keys->apib);
+ __ptrauth_key_install(APDA, keys->apda);
+ __ptrauth_key_install(APDB, keys->apdb);
+ }
+
+ if (system_supports_generic_auth())
+ __ptrauth_key_install(APGA, keys->apga);
+}
+
+#define ptrauth_thread_init_user(tsk) \
+do { \
+ struct task_struct *__ptiu_tsk = (tsk); \
+ ptrauth_keys_init(&__ptiu_tsk->thread_info.keys_user); \
+ ptrauth_keys_switch(&__ptiu_tsk->thread_info.keys_user); \
+} while (0)
+
+#define ptrauth_thread_switch(tsk) \
+ ptrauth_keys_switch(&(tsk)->thread_info.keys_user)
+
+#else /* CONFIG_ARM64_PTR_AUTH */
+#define ptrauth_thread_init_user(tsk)
+#define ptrauth_thread_switch(tsk)
+#endif /* CONFIG_ARM64_PTR_AUTH */
+
+#endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index cb2c10a8f0a8..ea9272fb52d4 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -28,6 +28,7 @@
struct task_struct;

#include <asm/memory.h>
+#include <asm/pointer_auth.h>
#include <asm/stack_pointer.h>
#include <asm/types.h>

@@ -43,6 +44,9 @@ struct thread_info {
u64 ttbr0; /* saved TTBR0_EL1 */
#endif
int preempt_count; /* 0 => preemptable, <0 => bug */
+#ifdef CONFIG_ARM64_PTR_AUTH
+ struct ptrauth_keys keys_user;
+#endif
};

#define thread_saved_pc(tsk) \
diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
index 2bcd6e4f3474..22efc70aa0a1 100644
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -49,5 +49,7 @@
#define HWCAP_ILRCPC (1 << 26)
#define HWCAP_FLAGM (1 << 27)
#define HWCAP_SSBS (1 << 28)
+#define HWCAP_PACA (1 << 29)
+#define HWCAP_PACG (1 << 30)

#endif /* _UAPI__ASM_HWCAP_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index f8e3c3568a79..6daa2f451eb9 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1154,6 +1154,12 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
#endif /* CONFIG_ARM64_RAS_EXTN */

#ifdef CONFIG_ARM64_PTR_AUTH
+static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
+{
+ sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB |
+ SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);
+}
+
static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
int __unused)
{
@@ -1431,6 +1437,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.capability = ARM64_HAS_ADDRESS_AUTH,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_address_auth,
+ .cpu_enable = cpu_enable_address_auth,
},
{
.desc = "Generic authentication (architected algorithm)",
@@ -1504,6 +1511,12 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, HWCAP_SVE),
#endif
HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, HWCAP_SSBS),
+#ifdef CONFIG_ARM64_PTR_AUTH
+ { .desc = "HWCAP_PACA", .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = has_address_auth,
+ .hwcap_type = CAP_HWCAP, .hwcap = HWCAP_PACA },
+ { .desc = "HWCAP_PACG", .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = has_generic_auth,
+ .hwcap_type = CAP_HWCAP, .hwcap = HWCAP_PACG },
+#endif
{},
};

diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index bcc2831399cb..e7c7cad8dd85 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -82,6 +82,8 @@ static const char *const hwcap_str[] = {
"ilrcpc",
"flagm",
"ssbs",
+ "paca",
+ "pacg",
NULL
};

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index d9a4c2d6dd8b..17a6b4dd6e46 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -57,6 +57,7 @@
#include <asm/fpsimd.h>
#include <asm/mmu_context.h>
#include <asm/processor.h>
+#include <asm/pointer_auth.h>
#include <asm/stacktrace.h>

#ifdef CONFIG_STACKPROTECTOR
@@ -429,6 +430,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
contextidr_thread_switch(next);
entry_task_switch(next);
uao_thread_switch(next);
+ ptrauth_thread_switch(next);

/*
* Complete any pending TLB or cache maintenance on this CPU in case
@@ -496,4 +498,6 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
void arch_setup_new_exec(void)
{
current->mm->context.flags = is_compat_task() ? MMCF_AARCH32 : 0;
+
+ ptrauth_thread_init_user(current);
}
--
2.11.0


2018-12-07 18:41:55

by Kristina Martsenko

Subject: [PATCH v6 06/13] arm64/cpufeature: detect pointer authentication

From: Mark Rutland <[email protected]>

So that we can dynamically handle the presence of pointer authentication
functionality, wire up probing code in cpufeature.c.

From ARMv8.3 onwards, ID_AA64ISAR1 is no longer entirely RES0, and now
has four fields describing the presence of pointer authentication
functionality:

* APA - address authentication present, using an architected algorithm
* API - address authentication present, using an IMP DEF algorithm
* GPA - generic authentication present, using an architected algorithm
* GPI - generic authentication present, using an IMP DEF algorithm

This patch checks for both address and generic authentication,
separately. It is assumed that if all CPUs support an IMP DEF algorithm,
the same algorithm is used across all CPUs.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Kristina Martsenko <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Suzuki K Poulose <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm64/include/asm/cpucaps.h | 8 +++-
arch/arm64/include/asm/cpufeature.h | 12 +++++
arch/arm64/kernel/cpufeature.c | 90 +++++++++++++++++++++++++++++++++++++
3 files changed, 109 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 6e2d254c09eb..62fc48604263 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -54,7 +54,13 @@
#define ARM64_HAS_CRC32 33
#define ARM64_SSBS 34
#define ARM64_WORKAROUND_1188873 35
+#define ARM64_HAS_ADDRESS_AUTH_ARCH 36
+#define ARM64_HAS_ADDRESS_AUTH_IMP_DEF 37
+#define ARM64_HAS_ADDRESS_AUTH 38
+#define ARM64_HAS_GENERIC_AUTH_ARCH 39
+#define ARM64_HAS_GENERIC_AUTH_IMP_DEF 40
+#define ARM64_HAS_GENERIC_AUTH 41

-#define ARM64_NCAPS 36
+#define ARM64_NCAPS 42

#endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 7e2ec64aa414..1c8393ffabff 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -514,6 +514,18 @@ static inline bool system_supports_cnp(void)
cpus_have_const_cap(ARM64_HAS_CNP);
}

+static inline bool system_supports_address_auth(void)
+{
+ return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
+ cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
+}
+
+static inline bool system_supports_generic_auth(void)
+{
+ return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
+ cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
+}
+
#define ARM64_SSBD_UNKNOWN -1
#define ARM64_SSBD_FORCE_DISABLE 0
#define ARM64_SSBD_KERNEL 1
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index aec5ecb85737..f8e3c3568a79 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -141,9 +141,17 @@ static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
};

static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
+ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_GPI_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_GPA_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
ARM64_FTR_END,
};
@@ -1145,6 +1153,36 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
}
#endif /* CONFIG_ARM64_RAS_EXTN */

+#ifdef CONFIG_ARM64_PTR_AUTH
+static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
+ int __unused)
+{
+ u64 isar1 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR1_EL1);
+ bool api, apa;
+
+ apa = cpuid_feature_extract_unsigned_field(isar1,
+ ID_AA64ISAR1_APA_SHIFT) > 0;
+ api = cpuid_feature_extract_unsigned_field(isar1,
+ ID_AA64ISAR1_API_SHIFT) > 0;
+
+ return apa || api;
+}
+
+static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
+ int __unused)
+{
+ u64 isar1 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR1_EL1);
+ bool gpi, gpa;
+
+ gpa = cpuid_feature_extract_unsigned_field(isar1,
+ ID_AA64ISAR1_GPA_SHIFT) > 0;
+ gpi = cpuid_feature_extract_unsigned_field(isar1,
+ ID_AA64ISAR1_GPI_SHIFT) > 0;
+
+ return gpa || gpi;
+}
+#endif /* CONFIG_ARM64_PTR_AUTH */
+
static const struct arm64_cpu_capabilities arm64_features[] = {
{
.desc = "GIC system register CPU interface",
@@ -1368,6 +1406,58 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.cpu_enable = cpu_enable_cnp,
},
#endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+ {
+ .desc = "Address authentication (architected algorithm)",
+ .capability = ARM64_HAS_ADDRESS_AUTH_ARCH,
+ .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+ .sys_reg = SYS_ID_AA64ISAR1_EL1,
+ .sign = FTR_UNSIGNED,
+ .field_pos = ID_AA64ISAR1_APA_SHIFT,
+ .min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
+ .matches = has_cpuid_feature,
+ },
+ {
+ .desc = "Address authentication (IMP DEF algorithm)",
+ .capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF,
+ .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+ .sys_reg = SYS_ID_AA64ISAR1_EL1,
+ .sign = FTR_UNSIGNED,
+ .field_pos = ID_AA64ISAR1_API_SHIFT,
+ .min_field_value = ID_AA64ISAR1_API_IMP_DEF,
+ .matches = has_cpuid_feature,
+ },
+ {
+ .capability = ARM64_HAS_ADDRESS_AUTH,
+ .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+ .matches = has_address_auth,
+ },
+ {
+ .desc = "Generic authentication (architected algorithm)",
+ .capability = ARM64_HAS_GENERIC_AUTH_ARCH,
+ .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+ .sys_reg = SYS_ID_AA64ISAR1_EL1,
+ .sign = FTR_UNSIGNED,
+ .field_pos = ID_AA64ISAR1_GPA_SHIFT,
+ .min_field_value = ID_AA64ISAR1_GPA_ARCHITECTED,
+ .matches = has_cpuid_feature,
+ },
+ {
+ .desc = "Generic authentication (IMP DEF algorithm)",
+ .capability = ARM64_HAS_GENERIC_AUTH_IMP_DEF,
+ .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+ .sys_reg = SYS_ID_AA64ISAR1_EL1,
+ .sign = FTR_UNSIGNED,
+ .field_pos = ID_AA64ISAR1_GPI_SHIFT,
+ .min_field_value = ID_AA64ISAR1_GPI_IMP_DEF,
+ .matches = has_cpuid_feature,
+ },
+ {
+ .capability = ARM64_HAS_GENERIC_AUTH,
+ .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+ .matches = has_generic_auth,
+ },
+#endif /* CONFIG_ARM64_PTR_AUTH */
{},
};

--
2.11.0
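[Editorial note: the cpufeature checks in the hunk above reduce to extracting
unsigned 4-bit fields from ID_AA64ISAR1_EL1 and testing them for a non-zero
value. A minimal userspace sketch of that extraction logic follows; the field
shifts are taken from the kernel headers, but the helper names here are
hypothetical stand-ins for the in-kernel ones.]

```c
#include <assert.h>
#include <stdint.h>

/* Field positions in ID_AA64ISAR1_EL1, as used by the patch. */
#define ID_AA64ISAR1_APA_SHIFT 4
#define ID_AA64ISAR1_API_SHIFT 8
#define ID_AA64ISAR1_GPA_SHIFT 24
#define ID_AA64ISAR1_GPI_SHIFT 28

/* Extract an unsigned 4-bit ID register field, as
 * cpuid_feature_extract_unsigned_field() does for a width-4 field. */
static uint64_t extract_field(uint64_t reg, unsigned int shift)
{
	return (reg >> shift) & 0xf;
}

/* Mirrors has_address_auth(): either the architected (APA) or the
 * IMP DEF (API) algorithm being present is enough. */
static int sketch_has_address_auth(uint64_t isar1)
{
	return extract_field(isar1, ID_AA64ISAR1_APA_SHIFT) > 0 ||
	       extract_field(isar1, ID_AA64ISAR1_API_SHIFT) > 0;
}

/* Mirrors has_generic_auth() for the GPA/GPI fields. */
static int sketch_has_generic_auth(uint64_t isar1)
{
	return extract_field(isar1, ID_AA64ISAR1_GPA_SHIFT) > 0 ||
	       extract_field(isar1, ID_AA64ISAR1_GPI_SHIFT) > 0;
}
```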


2018-12-07 18:41:58

by Kristina Martsenko

Subject: [PATCH v6 08/13] arm64: expose user PAC bit positions via ptrace

From: Mark Rutland <[email protected]>

When pointer authentication is in use, data/instruction pointers have a
number of PAC bits inserted into them. The number and position of these
bits depends on the configured TCR_ELx.TxSZ and whether tagging is
enabled. ARMv8.3 allows tagging to differ for instruction and data
pointers.

For userspace debuggers to unwind the stack and/or to follow pointer
chains, they need to be able to remove the PAC bits before attempting to
use a pointer.

This patch adds a new structure with masks describing the location of
the PAC bits in userspace instruction and data pointers (i.e. those
addressable via TTBR0), which userspace can query via PTRACE_GETREGSET.
By clearing these bits from pointers (and replacing them with the value
of bit 55), userspace can acquire the PAC-less versions.

This new regset is exposed when the kernel is built with (user) pointer
authentication support, and the address authentication feature is
enabled. Otherwise, the regset is hidden.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Kristina Martsenko <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Ramana Radhakrishnan <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm64/include/asm/pointer_auth.h | 8 ++++++++
arch/arm64/include/uapi/asm/ptrace.h | 7 +++++++
arch/arm64/kernel/ptrace.c | 38 +++++++++++++++++++++++++++++++++++
include/uapi/linux/elf.h | 1 +
4 files changed, 54 insertions(+)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index fc7ffe8e326f..5721228836c1 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -2,9 +2,11 @@
#ifndef __ASM_POINTER_AUTH_H
#define __ASM_POINTER_AUTH_H

+#include <linux/bitops.h>
#include <linux/random.h>

#include <asm/cpufeature.h>
+#include <asm/memory.h>
#include <asm/sysreg.h>

#ifdef CONFIG_ARM64_PTR_AUTH
@@ -57,6 +59,12 @@ static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
__ptrauth_key_install(APGA, keys->apga);
}

+/*
+ * The EL0 pointer bits used by a pointer authentication code.
+ * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
+ */
+#define ptrauth_pac_mask() GENMASK(54, VA_BITS)
+
#define ptrauth_thread_init_user(tsk) \
do { \
struct task_struct *__ptiu_tsk = (tsk); \
diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index a36227fdb084..c2f249bcd829 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -229,6 +229,13 @@ struct user_sve_header {
SVE_PT_SVE_OFFSET + SVE_PT_SVE_SIZE(vq, flags) \
: SVE_PT_FPSIMD_OFFSET + SVE_PT_FPSIMD_SIZE(vq, flags))

+/* pointer authentication masks (NT_ARM_PAC_MASK) */
+
+struct user_pac_mask {
+ __u64 data_mask;
+ __u64 insn_mask;
+};
+
#endif /* __ASSEMBLY__ */

#endif /* _UAPI__ASM_PTRACE_H */
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 1710a2d01669..6c1f63cb6c4e 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -46,6 +46,7 @@
#include <asm/debug-monitors.h>
#include <asm/fpsimd.h>
#include <asm/pgtable.h>
+#include <asm/pointer_auth.h>
#include <asm/stacktrace.h>
#include <asm/syscall.h>
#include <asm/traps.h>
@@ -956,6 +957,30 @@ static int sve_set(struct task_struct *target,

#endif /* CONFIG_ARM64_SVE */

+#ifdef CONFIG_ARM64_PTR_AUTH
+static int pac_mask_get(struct task_struct *target,
+ const struct user_regset *regset,
+ unsigned int pos, unsigned int count,
+ void *kbuf, void __user *ubuf)
+{
+ /*
+ * The PAC bits can differ across data and instruction pointers
+ * depending on TCR_EL1.TBID*, which we may make use of in future, so
+ * we expose separate masks.
+ */
+ unsigned long mask = ptrauth_pac_mask();
+ struct user_pac_mask uregs = {
+ .data_mask = mask,
+ .insn_mask = mask,
+ };
+
+ if (!system_supports_address_auth())
+ return -EINVAL;
+
+ return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &uregs, 0, -1);
+}
+#endif /* CONFIG_ARM64_PTR_AUTH */
+
enum aarch64_regset {
REGSET_GPR,
REGSET_FPR,
@@ -968,6 +993,9 @@ enum aarch64_regset {
#ifdef CONFIG_ARM64_SVE
REGSET_SVE,
#endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+ REGSET_PAC_MASK,
+#endif
};

static const struct user_regset aarch64_regsets[] = {
@@ -1037,6 +1065,16 @@ static const struct user_regset aarch64_regsets[] = {
.get_size = sve_get_size,
},
#endif
+#ifdef CONFIG_ARM64_PTR_AUTH
+ [REGSET_PAC_MASK] = {
+ .core_note_type = NT_ARM_PAC_MASK,
+ .n = sizeof(struct user_pac_mask) / sizeof(u64),
+ .size = sizeof(u64),
+ .align = sizeof(u64),
+ .get = pac_mask_get,
+ /* this cannot be set dynamically */
+ },
+#endif
};

static const struct user_regset_view user_aarch64_view = {
diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
index c5358e0ae7c5..3f23273d690c 100644
--- a/include/uapi/linux/elf.h
+++ b/include/uapi/linux/elf.h
@@ -420,6 +420,7 @@ typedef struct elf64_shdr {
#define NT_ARM_HW_WATCH 0x403 /* ARM hardware watchpoint registers */
#define NT_ARM_SYSTEM_CALL 0x404 /* ARM system call number */
#define NT_ARM_SVE 0x405 /* ARM Scalable Vector Extension registers */
+#define NT_ARM_PAC_MASK 0x406 /* ARM pointer authentication code masks */
#define NT_ARC_V2 0x600 /* ARCv2 accumulator/extra registers */
#define NT_VMCOREDD 0x700 /* Vmcore Device Dump Note */
#define NT_MIPS_DSP 0x800 /* MIPS DSP ASE registers */
--
2.11.0
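[Editorial note: the userspace side of consuming the NT_ARM_PAC_MASK regset
reduces to the arithmetic the commit message describes — clear the masked bits
and replace them with copies of bit 55. A hedged sketch, assuming a 48-bit VA
configuration rather than a mask fetched from a live regset:]

```c
#include <assert.h>
#include <stdint.h>

/* Bits [54:48] for a 48-bit VA, matching the kernel's GENMASK(54, VA_BITS). */
#define PAC_MASK_48 ((((uint64_t)1 << 55) - 1) & ~(((uint64_t)1 << 48) - 1))

/* Clear the PAC bits and replace them with the value of bit 55,
 * recovering the original pointer. For TTBR0 (userspace) addresses
 * bit 55 is 0, so this clears the mask bits. */
static uint64_t strip_pac(uint64_t ptr, uint64_t mask)
{
	if (ptr & ((uint64_t)1 << 55))
		return ptr | mask;
	return ptr & ~mask;
}
```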


2018-12-07 18:42:05

by Kristina Martsenko

Subject: [PATCH v6 09/13] arm64: perf: strip PAC when unwinding userspace

From: Mark Rutland <[email protected]>

When the kernel is unwinding userspace callchains, we can't expect that
the userspace consumer of these callchains has the data necessary to
strip the PAC from the stored LR.

This patch has the kernel strip the PAC from user stackframes when the
in-kernel unwinder is used. This only affects the LR value, and not the
FP.

This only affects the in-kernel unwinder. When userspace performs
unwinding, it is up to userspace to strip PACs as necessary (which can
be determined from DWARF information).

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Kristina Martsenko <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Ramana Radhakrishnan <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm64/include/asm/pointer_auth.h | 7 +++++++
arch/arm64/kernel/perf_callchain.c | 6 +++++-
2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 5721228836c1..89190d93c850 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -65,6 +65,12 @@ static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
*/
#define ptrauth_pac_mask() GENMASK(54, VA_BITS)

+/* Only valid for EL0 TTBR0 instruction pointers */
+static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
+{
+ return ptr & ~ptrauth_pac_mask();
+}
+
#define ptrauth_thread_init_user(tsk) \
do { \
struct task_struct *__ptiu_tsk = (tsk); \
@@ -76,6 +82,7 @@ do { \
ptrauth_keys_switch(&(tsk)->thread_info.keys_user)

#else /* CONFIG_ARM64_PTR_AUTH */
+#define ptrauth_strip_insn_pac(lr) (lr)
#define ptrauth_thread_init_user(tsk)
#define ptrauth_thread_switch(tsk)
#endif /* CONFIG_ARM64_PTR_AUTH */
diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
index bcafd7dcfe8b..94754f07f67a 100644
--- a/arch/arm64/kernel/perf_callchain.c
+++ b/arch/arm64/kernel/perf_callchain.c
@@ -18,6 +18,7 @@
#include <linux/perf_event.h>
#include <linux/uaccess.h>

+#include <asm/pointer_auth.h>
#include <asm/stacktrace.h>

struct frame_tail {
@@ -35,6 +36,7 @@ user_backtrace(struct frame_tail __user *tail,
{
struct frame_tail buftail;
unsigned long err;
+ unsigned long lr;

/* Also check accessibility of one struct frame_tail beyond */
if (!access_ok(VERIFY_READ, tail, sizeof(buftail)))
@@ -47,7 +49,9 @@ user_backtrace(struct frame_tail __user *tail,
if (err)
return NULL;

- perf_callchain_store(entry, buftail.lr);
+ lr = ptrauth_strip_insn_pac(buftail.lr);
+
+ perf_callchain_store(entry, lr);

/*
* Frame pointers should strictly progress back up the stack
--
2.11.0


2018-12-07 18:42:18

by Kristina Martsenko

Subject: [PATCH v6 11/13] arm64: add ptrace regsets for ptrauth key management

Add two new ptrace regsets, which can be used to request and change the
pointer authentication keys of a thread. NT_ARM_PACA_KEYS gives access
to the instruction/data address keys, and NT_ARM_PACG_KEYS to the
generic authentication key.

The regsets are only exposed if the kernel is compiled with
CONFIG_CHECKPOINT_RESTORE=y, as the intended use case is checkpointing
and restoring processes that are using pointer authentication. Normally
applications or debuggers should not need to know the keys (and exposing
the keys is a security risk), so the regsets are not exposed by default.

Signed-off-by: Kristina Martsenko <[email protected]>
---
arch/arm64/include/uapi/asm/ptrace.h | 18 +++++++++
arch/arm64/kernel/ptrace.c | 72 ++++++++++++++++++++++++++++++++++++
include/uapi/linux/elf.h | 2 +
3 files changed, 92 insertions(+)

diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index c2f249bcd829..fafa7f6decf9 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -236,6 +236,24 @@ struct user_pac_mask {
__u64 insn_mask;
};

+/* pointer authentication keys (NT_ARM_PACA_KEYS, NT_ARM_PACG_KEYS) */
+
+struct user_pac_address_keys {
+ __u64 apiakey_lo;
+ __u64 apiakey_hi;
+ __u64 apibkey_lo;
+ __u64 apibkey_hi;
+ __u64 apdakey_lo;
+ __u64 apdakey_hi;
+ __u64 apdbkey_lo;
+ __u64 apdbkey_hi;
+};
+
+struct user_pac_generic_keys {
+ __u64 apgakey_lo;
+ __u64 apgakey_hi;
+};
+
#endif /* __ASSEMBLY__ */

#endif /* _UAPI__ASM_PTRACE_H */
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 6c1f63cb6c4e..f18f14c64d1e 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -979,6 +979,56 @@ static int pac_mask_get(struct task_struct *target,

return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &uregs, 0, -1);
}
+
+#ifdef CONFIG_CHECKPOINT_RESTORE
+static int pac_address_keys_get(struct task_struct *target,
+ const struct user_regset *regset,
+ unsigned int pos, unsigned int count,
+ void *kbuf, void __user *ubuf)
+{
+ if (!system_supports_address_auth())
+ return -EINVAL;
+
+ return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
+ &target->thread_info.keys_user, 0, -1);
+}
+
+static int pac_address_keys_set(struct task_struct *target,
+ const struct user_regset *regset,
+ unsigned int pos, unsigned int count,
+ const void *kbuf, const void __user *ubuf)
+{
+ if (!system_supports_address_auth())
+ return -EINVAL;
+
+ return user_regset_copyin(&pos, &count, &kbuf, &ubuf,
+ &target->thread_info.keys_user, 0, -1);
+}
+
+static int pac_generic_keys_get(struct task_struct *target,
+ const struct user_regset *regset,
+ unsigned int pos, unsigned int count,
+ void *kbuf, void __user *ubuf)
+{
+ if (!system_supports_generic_auth())
+ return -EINVAL;
+
+ return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
+ &target->thread_info.keys_user.apga, 0, -1);
+}
+
+static int pac_generic_keys_set(struct task_struct *target,
+ const struct user_regset *regset,
+ unsigned int pos, unsigned int count,
+ const void *kbuf, const void __user *ubuf)
+{
+ if (!system_supports_generic_auth())
+ return -EINVAL;
+
+ return user_regset_copyin(&pos, &count, &kbuf, &ubuf,
+ &target->thread_info.keys_user.apga, 0, -1);
+}
+#endif /* CONFIG_CHECKPOINT_RESTORE */
#endif /* CONFIG_ARM64_PTR_AUTH */

enum aarch64_regset {
@@ -995,6 +1045,10 @@ enum aarch64_regset {
#endif
#ifdef CONFIG_ARM64_PTR_AUTH
REGSET_PAC_MASK,
+#ifdef CONFIG_CHECKPOINT_RESTORE
+ REGSET_PACA_KEYS,
+ REGSET_PACG_KEYS,
+#endif
#endif
};

@@ -1074,6 +1128,24 @@ static const struct user_regset aarch64_regsets[] = {
.get = pac_mask_get,
/* this cannot be set dynamically */
},
+#ifdef CONFIG_CHECKPOINT_RESTORE
+ [REGSET_PACA_KEYS] = {
+ .core_note_type = NT_ARM_PACA_KEYS,
+ .n = sizeof(struct user_pac_address_keys) / sizeof(u64),
+ .size = sizeof(u64),
+ .align = sizeof(u64),
+ .get = pac_address_keys_get,
+ .set = pac_address_keys_set,
+ },
+ [REGSET_PACG_KEYS] = {
+ .core_note_type = NT_ARM_PACG_KEYS,
+ .n = sizeof(struct user_pac_generic_keys) / sizeof(u64),
+ .size = sizeof(u64),
+ .align = sizeof(u64),
+ .get = pac_generic_keys_get,
+ .set = pac_generic_keys_set,
+ },
+#endif
#endif
};

diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
index 3f23273d690c..c1afbc592531 100644
--- a/include/uapi/linux/elf.h
+++ b/include/uapi/linux/elf.h
@@ -421,6 +421,8 @@ typedef struct elf64_shdr {
#define NT_ARM_SYSTEM_CALL 0x404 /* ARM system call number */
#define NT_ARM_SVE 0x405 /* ARM Scalable Vector Extension registers */
#define NT_ARM_PAC_MASK 0x406 /* ARM pointer authentication code masks */
+#define NT_ARM_PACA_KEYS 0x407 /* ARM pointer authentication address keys */
+#define NT_ARM_PACG_KEYS 0x408 /* ARM pointer authentication generic key */
#define NT_ARC_V2 0x600 /* ARCv2 accumulator/extra registers */
#define NT_VMCOREDD 0x700 /* Vmcore Device Dump Note */
#define NT_MIPS_DSP 0x800 /* MIPS DSP ASE registers */
--
2.11.0
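[Editorial note: as a sketch of how a checkpoint/restore tool might fetch the
address keys, the regset is read via ptrace(PTRACE_GETREGSET, pid,
NT_ARM_PACA_KEYS, &iov). The actual call needs an arm64 kernel with this
series and a traced child, so only the iovec setup is shown; the struct
layout is copied from the uapi header added above, and the helper name is
hypothetical.]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <sys/uio.h>

/* Layout from the new uapi header: four 128-bit address keys. */
struct user_pac_address_keys {
	uint64_t apiakey_lo, apiakey_hi;
	uint64_t apibkey_lo, apibkey_hi;
	uint64_t apdakey_lo, apdakey_hi;
	uint64_t apdbkey_lo, apdbkey_hi;
};

#define NT_ARM_PACA_KEYS 0x407

/* Prepare the iovec for ptrace(PTRACE_GETREGSET, pid,
 * NT_ARM_PACA_KEYS, &iov); on success the kernel fills the keys and
 * updates iov_len to the number of bytes written. */
static struct iovec pac_keys_iov(struct user_pac_address_keys *keys)
{
	struct iovec iov;

	memset(keys, 0, sizeof(*keys));
	iov.iov_base = keys;
	iov.iov_len = sizeof(*keys);
	return iov;
}
```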


2018-12-07 18:43:06

by Kristina Martsenko

Subject: [PATCH v6 10/13] arm64: add prctl control for resetting ptrauth keys

Add an arm64-specific prctl to allow a thread to reinitialize its
pointer authentication keys to random values. This can be useful when
exec() is not used for starting new processes, to ensure that different
processes still have different keys.

Signed-off-by: Kristina Martsenko <[email protected]>
---
arch/arm64/include/asm/pointer_auth.h | 3 +++
arch/arm64/include/asm/processor.h | 4 +++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/pointer_auth.c | 47 +++++++++++++++++++++++++++++++++++
include/uapi/linux/prctl.h | 8 ++++++
kernel/sys.c | 8 ++++++
6 files changed, 71 insertions(+)
create mode 100644 arch/arm64/kernel/pointer_auth.c

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 89190d93c850..7797bc346c6b 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -59,6 +59,8 @@ static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
__ptrauth_key_install(APGA, keys->apga);
}

+extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
+
/*
* The EL0 pointer bits used by a pointer authentication code.
* This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
@@ -82,6 +84,7 @@ do { \
ptrauth_keys_switch(&(tsk)->thread_info.keys_user)

#else /* CONFIG_ARM64_PTR_AUTH */
+#define ptrauth_prctl_reset_keys(tsk, arg) (-EINVAL)
#define ptrauth_strip_insn_pac(lr) (lr)
#define ptrauth_thread_init_user(tsk)
#define ptrauth_thread_switch(tsk)
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 6b0d4dff5012..40ccfb7605b6 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -46,6 +46,7 @@
#include <asm/hw_breakpoint.h>
#include <asm/lse.h>
#include <asm/pgtable-hwdef.h>
+#include <asm/pointer_auth.h>
#include <asm/ptrace.h>
#include <asm/types.h>

@@ -270,6 +271,9 @@ extern void __init minsigstksz_setup(void);
#define SVE_SET_VL(arg) sve_set_current_vl(arg)
#define SVE_GET_VL() sve_get_current_vl()

+/* PR_PAC_RESET_KEYS prctl */
+#define PAC_RESET_KEYS(tsk, arg) ptrauth_prctl_reset_keys(tsk, arg)
+
/*
* For CONFIG_GCC_PLUGIN_STACKLEAK
*
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 4c8b13bede80..096740ab81d2 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -57,6 +57,7 @@ arm64-obj-$(CONFIG_CRASH_DUMP) += crash_dump.o
arm64-obj-$(CONFIG_CRASH_CORE) += crash_core.o
arm64-obj-$(CONFIG_ARM_SDE_INTERFACE) += sdei.o
arm64-obj-$(CONFIG_ARM64_SSBD) += ssbd.o
+arm64-obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o

obj-y += $(arm64-obj-y) vdso/ probes/
obj-m += $(arm64-obj-m)
diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
new file mode 100644
index 000000000000..b9f6f5f3409a
--- /dev/null
+++ b/arch/arm64/kernel/pointer_auth.c
@@ -0,0 +1,47 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/errno.h>
+#include <linux/prctl.h>
+#include <linux/random.h>
+#include <linux/sched.h>
+#include <asm/cpufeature.h>
+#include <asm/pointer_auth.h>
+
+int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
+{
+ struct ptrauth_keys *keys = &tsk->thread_info.keys_user;
+ unsigned long addr_key_mask = PR_PAC_APIAKEY | PR_PAC_APIBKEY |
+ PR_PAC_APDAKEY | PR_PAC_APDBKEY;
+ unsigned long key_mask = addr_key_mask | PR_PAC_APGAKEY;
+
+ if (!system_supports_address_auth() && !system_supports_generic_auth())
+ return -EINVAL;
+
+ if (!arg) {
+ ptrauth_keys_init(keys);
+ ptrauth_keys_switch(keys);
+ return 0;
+ }
+
+ if (arg & ~key_mask)
+ return -EINVAL;
+
+ if (((arg & addr_key_mask) && !system_supports_address_auth()) ||
+ ((arg & PR_PAC_APGAKEY) && !system_supports_generic_auth()))
+ return -EINVAL;
+
+ if (arg & PR_PAC_APIAKEY)
+ get_random_bytes(&keys->apia, sizeof(keys->apia));
+ if (arg & PR_PAC_APIBKEY)
+ get_random_bytes(&keys->apib, sizeof(keys->apib));
+ if (arg & PR_PAC_APDAKEY)
+ get_random_bytes(&keys->apda, sizeof(keys->apda));
+ if (arg & PR_PAC_APDBKEY)
+ get_random_bytes(&keys->apdb, sizeof(keys->apdb));
+ if (arg & PR_PAC_APGAKEY)
+ get_random_bytes(&keys->apga, sizeof(keys->apga));
+
+ ptrauth_keys_switch(keys);
+
+ return 0;
+}
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index b17201edfa09..b4875a93363a 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -220,4 +220,12 @@ struct prctl_mm_map {
# define PR_SPEC_DISABLE (1UL << 2)
# define PR_SPEC_FORCE_DISABLE (1UL << 3)

+/* Reset arm64 pointer authentication keys */
+#define PR_PAC_RESET_KEYS 54
+# define PR_PAC_APIAKEY (1UL << 0)
+# define PR_PAC_APIBKEY (1UL << 1)
+# define PR_PAC_APDAKEY (1UL << 2)
+# define PR_PAC_APDBKEY (1UL << 3)
+# define PR_PAC_APGAKEY (1UL << 4)
+
#endif /* _LINUX_PRCTL_H */
diff --git a/kernel/sys.c b/kernel/sys.c
index 123bd73046ec..64b5a230f38d 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -121,6 +121,9 @@
#ifndef SVE_GET_VL
# define SVE_GET_VL() (-EINVAL)
#endif
+#ifndef PAC_RESET_KEYS
+# define PAC_RESET_KEYS(a, b) (-EINVAL)
+#endif

/*
* this is where the system-wide overflow UID and GID are defined, for
@@ -2476,6 +2479,11 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
return -EINVAL;
error = arch_prctl_spec_ctrl_set(me, arg2, arg3);
break;
+ case PR_PAC_RESET_KEYS:
+ if (arg3 || arg4 || arg5)
+ return -EINVAL;
+ error = PAC_RESET_KEYS(me, arg2);
+ break;
default:
error = -EINVAL;
break;
--
2.11.0
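[Editorial note: from userspace the new interface is invoked as
prctl(PR_PAC_RESET_KEYS, mask, 0, 0, 0); the call only succeeds on hardware
with the relevant authentication support, so this sketch only composes the
arg2 bitmask from the uapi constants added above. The helper is hypothetical.]

```c
#include <assert.h>

/* prctl option and key bits from the patch's uapi additions. */
#define PR_PAC_RESET_KEYS 54
#define PR_PAC_APIAKEY (1UL << 0)
#define PR_PAC_APIBKEY (1UL << 1)
#define PR_PAC_APDAKEY (1UL << 2)
#define PR_PAC_APDBKEY (1UL << 3)
#define PR_PAC_APGAKEY (1UL << 4)

/* Build the arg2 bitmask for prctl(PR_PAC_RESET_KEYS, mask, 0, 0, 0).
 * Passing a zero mask asks the kernel to reinitialize all keys. */
static unsigned long pac_reset_mask(int insn_keys, int data_keys, int generic)
{
	unsigned long mask = 0;

	if (insn_keys)
		mask |= PR_PAC_APIAKEY | PR_PAC_APIBKEY;
	if (data_keys)
		mask |= PR_PAC_APDAKEY | PR_PAC_APDBKEY;
	if (generic)
		mask |= PR_PAC_APGAKEY;
	return mask;
}
```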


2018-12-07 18:43:31

by Kristina Martsenko

Subject: [PATCH v6 12/13] arm64: enable pointer authentication

From: Mark Rutland <[email protected]>

Now that all the necessary bits are in place for userspace, add the
Kconfig logic to allow this to be enabled.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Kristina Martsenko <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm64/Kconfig | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index ea2ab0330e3a..5279a8646fc6 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1188,6 +1188,29 @@ config ARM64_CNP

endmenu

+menu "ARMv8.3 architectural features"
+
+config ARM64_PTR_AUTH
+ bool "Enable support for pointer authentication"
+ default y
+ help
+ Pointer authentication (part of the ARMv8.3 Extensions) provides
+ instructions for signing and authenticating pointers against secret
+ keys, which can be used to mitigate Return Oriented Programming (ROP)
+ and other attacks.
+
+ This option enables these instructions at EL0 (i.e. for userspace).
+
+ Choosing this option will cause the kernel to initialise secret keys
+ for each process at exec() time, with these keys being
+ context-switched along with the process.
+
+ The feature is detected at runtime. If the feature is not present in
+ hardware it will not be advertised to userspace nor will it be
+ enabled.
+
+endmenu
+
config ARM64_SVE
bool "ARM Scalable Vector Extension support"
default y
--
2.11.0


2018-12-07 18:43:33

by Kristina Martsenko

Subject: [PATCH v6 13/13] arm64: docs: document pointer authentication

From: Mark Rutland <[email protected]>

Now that we've added code to support pointer authentication, add some
documentation so that people can figure out if/how to use it.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Kristina Martsenko <[email protected]>
Reviewed-by: Ramana Radhakrishnan <[email protected]>
Cc: Andrew Jones <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Ramana Radhakrishnan <[email protected]>
Cc: Will Deacon <[email protected]>
---
Documentation/arm64/booting.txt | 8 +++
Documentation/arm64/cpu-feature-registers.txt | 8 +++
Documentation/arm64/elf_hwcaps.txt | 12 ++++
Documentation/arm64/pointer-authentication.txt | 93 ++++++++++++++++++++++++++
4 files changed, 121 insertions(+)
create mode 100644 Documentation/arm64/pointer-authentication.txt

diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
index 8d0df62c3fe0..8df9f4658d6f 100644
--- a/Documentation/arm64/booting.txt
+++ b/Documentation/arm64/booting.txt
@@ -205,6 +205,14 @@ Before jumping into the kernel, the following conditions must be met:
ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b0.
- The DT or ACPI tables must describe a GICv2 interrupt controller.

+ For CPUs with pointer authentication functionality:
+ - If EL3 is present:
+ SCR_EL3.APK (bit 16) must be initialised to 0b1
+ SCR_EL3.API (bit 17) must be initialised to 0b1
+ - If the kernel is entered at EL1:
+ HCR_EL2.APK (bit 40) must be initialised to 0b1
+ HCR_EL2.API (bit 41) must be initialised to 0b1
+
The requirements described above for CPU mode, caches, MMUs, architected
timers, coherency and system registers apply to all CPUs. All CPUs must
enter the kernel in the same exception level.
diff --git a/Documentation/arm64/cpu-feature-registers.txt b/Documentation/arm64/cpu-feature-registers.txt
index 7964f03846b1..d4b4dd1fe786 100644
--- a/Documentation/arm64/cpu-feature-registers.txt
+++ b/Documentation/arm64/cpu-feature-registers.txt
@@ -184,12 +184,20 @@ infrastructure:
x--------------------------------------------------x
| Name | bits | visible |
|--------------------------------------------------|
+ | GPI | [31-28] | y |
+ |--------------------------------------------------|
+ | GPA | [27-24] | y |
+ |--------------------------------------------------|
| LRCPC | [23-20] | y |
|--------------------------------------------------|
| FCMA | [19-16] | y |
|--------------------------------------------------|
| JSCVT | [15-12] | y |
|--------------------------------------------------|
+ | API | [11-8] | y |
+ |--------------------------------------------------|
+ | APA | [7-4] | y |
+ |--------------------------------------------------|
| DPB | [3-0] | y |
x--------------------------------------------------x

diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
index ea819ae024dd..13d6691b37be 100644
--- a/Documentation/arm64/elf_hwcaps.txt
+++ b/Documentation/arm64/elf_hwcaps.txt
@@ -182,3 +182,15 @@ HWCAP_FLAGM
HWCAP_SSBS

Functionality implied by ID_AA64PFR1_EL1.SSBS == 0b0010.
+
+HWCAP_PACA
+
+ Functionality implied by ID_AA64ISAR1_EL1.APA == 0b0001 or
+ ID_AA64ISAR1_EL1.API == 0b0001, as described by
+ Documentation/arm64/pointer-authentication.txt.
+
+HWCAP_PACG
+
+ Functionality implied by ID_AA64ISAR1_EL1.GPA == 0b0001 or
+ ID_AA64ISAR1_EL1.GPI == 0b0001, as described by
+ Documentation/arm64/pointer-authentication.txt.
diff --git a/Documentation/arm64/pointer-authentication.txt b/Documentation/arm64/pointer-authentication.txt
new file mode 100644
index 000000000000..5baca42ba146
--- /dev/null
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -0,0 +1,93 @@
+Pointer authentication in AArch64 Linux
+=======================================
+
+Author: Mark Rutland <[email protected]>
+Date: 2017-07-19
+
+This document briefly describes the provision of pointer authentication
+functionality in AArch64 Linux.
+
+
+Architecture overview
+---------------------
+
+The ARMv8.3 Pointer Authentication extension adds primitives that can be
+used to mitigate certain classes of attack where an attacker can corrupt
+the contents of some memory (e.g. the stack).
+
+The extension uses a Pointer Authentication Code (PAC) to determine
+whether pointers have been modified unexpectedly. A PAC is derived from
+a pointer, another value (such as the stack pointer), and a secret key
+held in system registers.
+
+The extension adds instructions to insert a valid PAC into a pointer,
+and to verify/remove the PAC from a pointer. The PAC occupies a number
+of high-order bits of the pointer, which varies dependent on the
+configured virtual address size and whether pointer tagging is in use.
+
+A subset of these instructions have been allocated from the HINT
+encoding space. In the absence of the extension (or when disabled),
+these instructions behave as NOPs. Applications and libraries using
+these instructions operate correctly regardless of the presence of the
+extension.
+
+The extension provides five separate keys to generate PACs - two for
+instruction addresses (APIAKey, APIBKey), two for data addresses
+(APDAKey, APDBKey), and one for generic authentication (APGAKey).
+
+
+Basic support
+-------------
+
+When CONFIG_ARM64_PTR_AUTH is selected, and relevant HW support is
+present, the kernel will assign random key values to each process at
+exec*() time. The keys are shared by all threads within the process, and
+are preserved across fork().
+
+Presence of address authentication functionality is advertised via
+HWCAP_PACA, and generic authentication functionality via HWCAP_PACG.
+
+The number of bits that the PAC occupies in a pointer is 55 minus the
+virtual address size configured by the kernel. For example, with a
+virtual address size of 48, the PAC is 7 bits wide.
+
+Recent versions of GCC can compile code with APIAKey-based return
+address protection when passed the -msign-return-address option. This
+uses instructions in the HINT space (unless -march=armv8.3-a or higher
+is also passed), and such code can run on systems without the pointer
+authentication extension.
+
+In addition to exec(), keys can also be reinitialized to random values
+using the PR_PAC_RESET_KEYS prctl. A bitmask of PR_PAC_APIAKEY,
+PR_PAC_APIBKEY, PR_PAC_APDAKEY, PR_PAC_APDBKEY and PR_PAC_APGAKEY
+specifies which keys are to be reinitialized; specifying 0 means "all
+keys".
+
+
+Debugging
+---------
+
+When CONFIG_ARM64_PTR_AUTH is selected, and HW support for address
+authentication is present, the kernel will expose the position of TTBR0
+PAC bits in the NT_ARM_PAC_MASK regset (struct user_pac_mask), which
+userspace can acquire via PTRACE_GETREGSET.
+
+The regset is exposed only when HWCAP_PACA is set. Separate masks are
+exposed for data pointers and instruction pointers, as the set of PAC
+bits can vary between the two. Note that the masks apply to TTBR0
+addresses, and are not valid to apply to TTBR1 addresses (e.g. kernel
+pointers).
+
+Additionally, when CONFIG_CHECKPOINT_RESTORE is also set, the kernel
+will expose the NT_ARM_PACA_KEYS and NT_ARM_PACG_KEYS regsets (struct
+user_pac_address_keys and struct user_pac_generic_keys). These can be
+used to get and set the keys for a thread.
+
+
+Virtualization
+--------------
+
+Pointer authentication is not currently supported in KVM guests. KVM
+will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
+the feature will result in an UNDEFINED exception being injected into
+the guest.
--
2.11.0
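[Editorial note: the documentation's arithmetic — the PAC occupies bits
[54:VA_BITS] when TBI is enabled, i.e. 55 minus the configured virtual
address size bits wide — can be checked with a small sketch. The helper
names are illustrative, not kernel APIs.]

```c
#include <assert.h>
#include <stdint.h>

/* PAC bit positions for EL0 pointers with TBI enabled:
 * GENMASK(54, va_bits) in the kernel's terms. */
static uint64_t pac_mask(unsigned int va_bits)
{
	return (((((uint64_t)1 << 55) - 1) >> va_bits) << va_bits);
}

/* Number of PAC bits in a pointer: 55 - va_bits. */
static unsigned int pac_width(unsigned int va_bits)
{
	return 55 - va_bits;
}
```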


2018-12-08 10:32:09

by Marc Zyngier

Subject: Re: [PATCH v6 03/13] arm64/kvm: consistently handle host HCR_EL2 flags

On Fri, 07 Dec 2018 18:39:21 +0000,
Kristina Martsenko <[email protected]> wrote:
>
> From: Mark Rutland <[email protected]>
>
> In KVM we define the configuration of HCR_EL2 for a VHE HOST in
> HCR_HOST_VHE_FLAGS, but we don't have a similar definition for the
> non-VHE host flags, and open-code HCR_RW. Further, in head.S we
> open-code the flags for VHE and non-VHE configurations.
>
> In future, we're going to want to configure more flags for the host, so
> let's add a HCR_HOST_NVHE_FLAGS definition, and consistently use both
> HCR_HOST_VHE_FLAGS and HCR_HOST_NVHE_FLAGS in the kvm code and head.S.
>
> We now use mov_q to generate the HCR_EL2 value, as we do when
> configuring other registers in head.S.
>
> Signed-off-by: Mark Rutland <[email protected]>
> Signed-off-by: Kristina Martsenko <[email protected]>
> Reviewed-by: Christoffer Dall <[email protected]>
> Cc: Catalin Marinas <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: [email protected]

Reviewed-by: Marc Zyngier <[email protected]>

M.

--
Jazz is not dead, it just smell funny.

2018-12-08 10:33:33

by Marc Zyngier

Subject: Re: [PATCH v6 04/13] arm64/kvm: hide ptrauth from guests

On Fri, 07 Dec 2018 18:39:22 +0000,
Kristina Martsenko <[email protected]> wrote:
>
> From: Mark Rutland <[email protected]>
>
> In subsequent patches we're going to expose ptrauth to the host kernel
> and userspace, but things are a bit trickier for guest kernels. For the
> time being, let's hide ptrauth from KVM guests.
>
> Regardless of how well-behaved the guest kernel is, guest userspace
> could attempt to use ptrauth instructions, triggering a trap to EL2,
> resulting in noise from kvm_handle_unknown_ec(). So let's write up a
> handler for the PAC trap, which silently injects an UNDEF into the
> guest, as if the feature were really missing.
>
> Signed-off-by: Mark Rutland <[email protected]>
> Signed-off-by: Kristina Martsenko <[email protected]>
> Reviewed-by: Andrew Jones <[email protected]>
> Reviewed-by: Christoffer Dall <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: [email protected]

Reviewed-by: Marc Zyngier <[email protected]>

M.

--
Jazz is not dead, it just smell funny.

2018-12-09 14:25:25

by Richard Henderson

Subject: Re: [PATCH v6 02/13] arm64: add pointer authentication register bits

On 12/7/18 12:39 PM, Kristina Martsenko wrote:
> #define SCTLR_ELx_DSSBS (1UL << 44)
> +#define SCTLR_ELx_ENIA (1 << 31)

1U or 1UL lest you produce signed -0x80000000.

Otherwise,
Reviewed-by: Richard Henderson <[email protected]>


r~

2018-12-09 14:35:58

by Richard Henderson

Subject: Re: [PATCH v6 01/13] arm64: add comments about EC exception levels

On 12/7/18 12:39 PM, Kristina Martsenko wrote:
> To make it clear which exceptions can't be taken to EL1 or EL2, add
> comments next to the ESR_ELx_EC_* macro definitions.
>
> Signed-off-by: Kristina Martsenko <[email protected]>
> ---
> arch/arm64/include/asm/esr.h | 14 +++++++-------
> 1 file changed, 7 insertions(+), 7 deletions(-)

Reviewed-by: Richard Henderson <[email protected]>


r~

2018-12-09 14:36:25

by Richard Henderson

Subject: Re: [PATCH v6 03/13] arm64/kvm: consistently handle host HCR_EL2 flags

On 12/7/18 12:39 PM, Kristina Martsenko wrote:
> From: Mark Rutland <[email protected]>
>
> In KVM we define the configuration of HCR_EL2 for a VHE HOST in
> HCR_HOST_VHE_FLAGS, but we don't have a similar definition for the
> non-VHE host flags, and open-code HCR_RW. Further, in head.S we
> open-code the flags for VHE and non-VHE configurations.
>
> In future, we're going to want to configure more flags for the host, so
> let's add a HCR_HOST_NVHE_FLAGS definition, and consistently use both
> HCR_HOST_VHE_FLAGS and HCR_HOST_NVHE_FLAGS in the kvm code and head.S.
>
> We now use mov_q to generate the HCR_EL2 value, as we use when
> configuring other registers in head.S.
>
> Signed-off-by: Mark Rutland <[email protected]>
> Signed-off-by: Kristina Martsenko <[email protected]>
> Reviewed-by: Christoffer Dall <[email protected]>
> Cc: Catalin Marinas <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: [email protected]
> ---
> arch/arm64/include/asm/kvm_arm.h | 1 +
> arch/arm64/kernel/head.S | 5 ++---
> arch/arm64/kvm/hyp/switch.c | 2 +-
> 3 files changed, 4 insertions(+), 4 deletions(-)

Reviewed-by: Richard Henderson <[email protected]>


r~


2018-12-09 14:54:17

by Richard Henderson

Subject: Re: [PATCH v6 04/13] arm64/kvm: hide ptrauth from guests

On 12/7/18 12:39 PM, Kristina Martsenko wrote:
> From: Mark Rutland <[email protected]>
>
> In subsequent patches we're going to expose ptrauth to the host kernel
> and userspace, but things are a bit trickier for guest kernels. For the
> time being, let's hide ptrauth from KVM guests.
>
> Regardless of how well-behaved the guest kernel is, guest userspace
> could attempt to use ptrauth instructions, triggering a trap to EL2,
> resulting in noise from kvm_handle_unknown_ec(). So let's write up a
> handler for the PAC trap, which silently injects an UNDEF into the
> guest, as if the feature were really missing.

Reviewing the long thread that accompanied v5, I thought we were *not* going to
trap PAuth instructions from the guest.

In particular, the OS distribution may legitimately be built to include
hint-space nops. This includes XPACLRI, which is used by the C++ exception
unwinder and not controlled by SCTLR_EL1.EnI{A,B}.

It seems like the header comment here, and

> +/*
> + * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
> + * a NOP).
> + */
> +static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +

here, need updating.


r~

2018-12-09 14:55:40

by Richard Henderson

Subject: Re: [PATCH v6 05/13] arm64: Don't trap host pointer auth use to EL2

On 12/7/18 12:39 PM, Kristina Martsenko wrote:
> From: Mark Rutland <[email protected]>
>
> To allow EL0 (and/or EL1) to use pointer authentication functionality,
> we must ensure that pointer authentication instructions and accesses to
> pointer authentication keys are not trapped to EL2.
>
> This patch ensures that HCR_EL2 is configured appropriately when the
> kernel is booted at EL2. For non-VHE kernels we set HCR_EL2.{API,APK},
> ensuring that EL1 can access keys and permit EL0 use of instructions.
> For VHE kernels host EL0 (TGE && E2H) is unaffected by these settings,
> and it doesn't matter how we configure HCR_EL2.{API,APK}, so we don't
> bother setting them.
>
> This does not enable support for KVM guests, since KVM manages HCR_EL2
> itself when running VMs.
>
> Signed-off-by: Mark Rutland <[email protected]>
> Signed-off-by: Kristina Martsenko <[email protected]>
> Acked-by: Christoffer Dall <[email protected]>
> Cc: Catalin Marinas <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: [email protected]
> ---
> arch/arm64/include/asm/kvm_arm.h | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)

Reviewed-by: Richard Henderson <[email protected]>


r~

2018-12-09 14:59:04

by Richard Henderson

Subject: Re: [PATCH v6 06/13] arm64/cpufeature: detect pointer authentication

On 12/7/18 12:39 PM, Kristina Martsenko wrote:
> From: Mark Rutland <[email protected]>
>
> So that we can dynamically handle the presence of pointer authentication
> functionality, wire up probing code in cpufeature.c.
>
> From ARMv8.3 onwards, ID_AA64ISAR1 is no longer entirely RES0, and now
> has four fields describing the presence of pointer authentication
> functionality:
>
> * APA - address authentication present, using an architected algorithm
> * API - address authentication present, using an IMP DEF algorithm
> * GPA - generic authentication present, using an architected algorithm
> * GPI - generic authentication present, using an IMP DEF algorithm
>
> This patch checks for both address and generic authentication,
> separately. It is assumed that if all CPUs support an IMP DEF algorithm,
> the same algorithm is used across all CPUs.
>
> Signed-off-by: Mark Rutland <[email protected]>
> Signed-off-by: Kristina Martsenko <[email protected]>
> Cc: Catalin Marinas <[email protected]>
> Cc: Suzuki K Poulose <[email protected]>
> Cc: Will Deacon <[email protected]>
> ---
> arch/arm64/include/asm/cpucaps.h | 8 +++-
> arch/arm64/include/asm/cpufeature.h | 12 +++++
> arch/arm64/kernel/cpufeature.c | 90 +++++++++++++++++++++++++++++++++++++
> 3 files changed, 109 insertions(+), 1 deletion(-)

Reviewed-by: Richard Henderson <[email protected]>


r~


2018-12-09 15:00:42

by Richard Henderson

Subject: Re: [PATCH v6 07/13] arm64: add basic pointer authentication support

On 12/7/18 12:39 PM, Kristina Martsenko wrote:
> From: Mark Rutland <[email protected]>
>
> This patch adds basic support for pointer authentication, allowing
> userspace to make use of APIAKey, APIBKey, APDAKey, APDBKey, and
> APGAKey. The kernel maintains key values for each process (shared by all
> threads within), which are initialised to random values at exec() time.
>
> The ID_AA64ISAR1_EL1.{APA,API,GPA,GPI} fields are exposed to userspace,
> to describe that pointer authentication instructions are available and
> that the kernel is managing the keys. Two new hwcaps are added for the
> same reason: PACA (for address authentication) and PACG (for generic
> authentication).
>
> Signed-off-by: Mark Rutland <[email protected]>
> Signed-off-by: Kristina Martsenko <[email protected]>
> Tested-by: Adam Wallis <[email protected]>
> Cc: Catalin Marinas <[email protected]>
> Cc: Ramana Radhakrishnan <[email protected]>
> Cc: Suzuki K Poulose <[email protected]>
> Cc: Will Deacon <[email protected]>
> ---
> arch/arm64/include/asm/pointer_auth.h | 75 +++++++++++++++++++++++++++++++++++
> arch/arm64/include/asm/thread_info.h | 4 ++
> arch/arm64/include/uapi/asm/hwcap.h | 2 +
> arch/arm64/kernel/cpufeature.c | 13 ++++++
> arch/arm64/kernel/cpuinfo.c | 2 +
> arch/arm64/kernel/process.c | 4 ++
> 6 files changed, 100 insertions(+)
> create mode 100644 arch/arm64/include/asm/pointer_auth.h

Reviewed-by: Richard Henderson <[email protected]>


r~


2018-12-09 15:04:56

by Richard Henderson

Subject: Re: [PATCH v6 08/13] arm64: expose user PAC bit positions via ptrace

On 12/7/18 12:39 PM, Kristina Martsenko wrote:
> From: Mark Rutland <[email protected]>
>
> When pointer authentication is in use, data/instruction pointers have a
> number of PAC bits inserted into them. The number and position of these
> bits depends on the configured TCR_ELx.TxSZ and whether tagging is
> enabled. ARMv8.3 allows tagging to differ for instruction and data
> pointers.
>
> For userspace debuggers to unwind the stack and/or to follow pointer
> chains, they need to be able to remove the PAC bits before attempting to
> use a pointer.
>
> This patch adds a new structure with masks describing the location of
> the PAC bits in userspace instruction and data pointers (i.e. those
> addressable via TTBR0), which userspace can query via PTRACE_GETREGSET.
> By clearing these bits from pointers (and replacing them with the value
> of bit 55), userspace can acquire the PAC-less versions.
>
> This new regset is exposed when the kernel is built with (user) pointer
> authentication support, and the address authentication feature is
> enabled. Otherwise, the regset is hidden.
>
> Signed-off-by: Mark Rutland <[email protected]>
> Signed-off-by: Kristina Martsenko <[email protected]>
> Cc: Catalin Marinas <[email protected]>
> Cc: Ramana Radhakrishnan <[email protected]>
> Cc: Will Deacon <[email protected]>
> ---
> arch/arm64/include/asm/pointer_auth.h | 8 ++++++++
> arch/arm64/include/uapi/asm/ptrace.h | 7 +++++++
> arch/arm64/kernel/ptrace.c | 38 +++++++++++++++++++++++++++++++++++
> include/uapi/linux/elf.h | 1 +
> 4 files changed, 54 insertions(+)

Reviewed-by: Richard Henderson <[email protected]>


r~


2018-12-09 15:43:51

by Richard Henderson

Subject: Re: [PATCH v6 08/13] arm64: expose user PAC bit positions via ptrace

On 12/7/18 12:39 PM, Kristina Martsenko wrote:
> When pointer authentication is in use, data/instruction pointers have a
> number of PAC bits inserted into them. The number and position of these
> bits depends on the configured TCR_ELx.TxSZ and whether tagging is
> enabled. ARMv8.3 allows tagging to differ for instruction and data
> pointers.

At this point I think it's worth starting a discussion about pointer tagging,
and how we can make it controllable and not mandatory.

With this patch set, we are enabling 7 authentication bits: [54:48].

However, it won't be too long before someone implements support for
ARMv8.2-LVA, at which point, without changes to mandatory pointer tagging, we
will only have 3 authentication bits: [54:52]. This seems useless and easily
brute-force-able.

I assume that pointer tagging is primarily used by Android, since I'm not aware
of anything else that uses it at all.

Unfortunately, there is no obvious path to making this optional that does not
break compatibility with Documentation/arm64/tagged-pointers.txt.

I've been thinking that there ought to be some sort of global setting, akin to
/proc/sys/kernel/randomize_va_space, as well as a prctl which an application
could use to selectively enable TBI/TBID for an application that actually uses
tagging.

The global /proc setting allows the default to remain 1, which would let any
application using tagging to continue working. If there are none, the sysadmin
can set the default to 0. Going forward, applications could be updated to use
the prctl, allowing more systems to set the default to 0.

FWIW, pointer authentication continues to work when enabling TBI, but not the
other way around. Thus the prctl could be used to enable TBI at any point, but
if libc is built with PAuth, there's no way to turn it back off again.



r~

2018-12-10 12:51:56

by Catalin Marinas

Subject: Re: [PATCH v6 08/13] arm64: expose user PAC bit positions via ptrace

On Sun, Dec 09, 2018 at 09:41:31AM -0600, Richard Henderson wrote:
> On 12/7/18 12:39 PM, Kristina Martsenko wrote:
> > When pointer authentication is in use, data/instruction pointers have a
> > number of PAC bits inserted into them. The number and position of these
> > bits depends on the configured TCR_ELx.TxSZ and whether tagging is
> > enabled. ARMv8.3 allows tagging to differ for instruction and data
> > pointers.
>
> At this point I think it's worth starting a discussion about pointer tagging,
> and how we can make it controllable and not mandatory.
>
> With this patch set, we are enabling 7 authentication bits: [54:48].
>
> However, it won't be too long before someone implements support for
> ARMv8.2-LVA, at which point, without changes to mandatory pointer tagging, we
> will only have 3 authentication bits: [54:52]. This seems useless and easily
> brute-force-able.

Such support is already here (about to be queued):

https://lore.kernel.org/linux-arm-kernel/[email protected]/

> I assume that pointer tagging is primarily used by Android, since I'm not aware
> of anything else that uses it at all.

I would expect it to be enabled more widely (Linux distros), though only
the support for instructions currently in the NOP space.

> Unfortunately, there is no obvious path to making this optional that does not
> break compatibility with Documentation/arm64/tagged-pointers.txt.

There is also the ARMv8.5 MTE (memory tagging) which relies on tagged
pointers.

> I've been thinking that there ought to be some sort of global setting, akin to
> /proc/sys/kernel/randomize_va_space, as well as a prctl which an application
> could use to selectively enable TBI/TBID for an application that actually uses
> tagging.

An alternative would be to allow the opt-in to 52-bit VA, leaving it at
48-bit by default. However, it has the problem of changing the PAC size
and not being able to return.

> The global /proc setting allows the default to remain 1, which would let any
> application using tagging to continue working. If there are none, the sysadmin
> can set the default to 0. Going forward, applications could be updated to use
> the prctl, allowing more systems to set the default to 0.
>
> FWIW, pointer authentication continues to work when enabling TBI, but not the
> other way around. Thus the prctl could be used to enable TBI at any point, but
> if libc is built with PAuth, there's no way to turn it back off again.

This may work but, as you said, TBI is user ABI at this point, so we can't
take it away now (at the time we didn't foresee pauth).

Talking briefly with Will/Kristina/Mark, I think the best option is to
make 52-bit VA default off in the kernel config. Whoever needs it
enabled (enterprise systems) should be aware of the reduced PAC bits. I
don't really think we have a better solution.

--
Catalin

2018-12-10 14:24:40

by Richard Henderson

Subject: Re: [PATCH v6 08/13] arm64: expose user PAC bit positions via ptrace

On 12/10/18 6:03 AM, Catalin Marinas wrote:
>> However, it won't be too long before someone implements support for
>> ARMv8.2-LVA, at which point, without changes to mandatory pointer tagging, we
>> will only have 3 authentication bits: [54:52]. This seems useless and easily
>> brute-force-able.
>
> Such support is already here (about to be queued):
>
> https://lore.kernel.org/linux-arm-kernel/[email protected]/

Thanks for the pointer.

>> Unfortunately, there is no obvious path to making this optional that does not
>> break compatibility with Documentation/arm64/tagged-pointers.txt.
>
> There is also the ARMv8.5 MTE (memory tagging) which relies on tagged
> pointers.

So it does. I hadn't read through that extension completely before.

> An alternative would be to allow the opt-in to 52-bit VA, leaving it at
> 48-bit by default. However, it has the problem of changing the PAC size
> and not being able to return.

Perhaps the opt-in should be at exec time, with ELF flags (or equivalent) on
the application. Because, as you say, changing the shape of the PAC in the
middle of execution is in general not possible.

It isn't perfect, since old kernels will still exec an application that sets
flags they cannot support. And it requires tooling changes.


r~

2018-12-10 16:31:32

by Catalin Marinas

Subject: Re: [PATCH v6 08/13] arm64: expose user PAC bit positions via ptrace

On Mon, Dec 10, 2018 at 02:29:45PM +0000, Will Deacon wrote:
> On Mon, Dec 10, 2018 at 08:22:06AM -0600, Richard Henderson wrote:
> > On 12/10/18 6:03 AM, Catalin Marinas wrote:
> > >> However, it won't be too long before someone implements support for
> > >> ARMv8.2-LVA, at which point, without changes to mandatory pointer tagging, we
> > >> will only have 3 authentication bits: [54:52]. This seems useless and easily
> > >> brute-force-able.
[...]
> > Perhaps the opt-in should be at exec time, with ELF flags (or equivalent) on
> > the application. Because, as you say, changing the shape of the PAC in the
> > middle of execution is in general not possible.
>
> I think we'd still have a potential performance problem with that approach,
> since we'd end up having to context-switch TCR.T0SZ, which is permitted to
> be cached in a TLB and would therefore force us to introduce TLB
> invalidation when context-switching between tasks using 52-bit VAs and tasks
> using 48-bit VAs.
>
> There's a chance we could get the architecture tightened here, but it's
> not something we've pushed for so far and it depends on what's already been
> built.

Just a quick summary of our internal discussion:

ARMv8.3 also comes with a new bit, TCR_EL1.TBIDx, which practically
disables TBI for code pointers. This bit allows us to use 11 bits for
code PtrAuth with 52-bit VA.

Now, the problem is that TBI for code pointers is user ABI, so we can't
simply disable it. We may be able to do this with memory tagging since
that's an opt-in feature (prctl) where the user is aware that the top
byte of a pointer is no longer ignored. However, that's probably for a
future discussion.

--
Catalin

2018-12-10 16:43:55

by Will Deacon

Subject: Re: [PATCH v6 08/13] arm64: expose user PAC bit positions via ptrace

On Mon, Dec 10, 2018 at 08:22:06AM -0600, Richard Henderson wrote:
> On 12/10/18 6:03 AM, Catalin Marinas wrote:
> >> However, it won't be too long before someone implements support for
> >> ARMv8.2-LVA, at which point, without changes to mandatory pointer tagging, we
> >> will only have 3 authentication bits: [54:52]. This seems useless and easily
> >> brute-force-able.
> >
> > Such support is already here (about to be queued):
> >
> > https://lore.kernel.org/linux-arm-kernel/[email protected]/
>
> Thanks for the pointer.
>
> >> Unfortunately, there is no obvious path to making this optional that does not
> >> break compatibility with Documentation/arm64/tagged-pointers.txt.
> >
> > There is also the ARMv8.5 MTE (memory tagging) which relies on tagged
> > pointers.
>
> So it does. I hadn't read through that extension completely before.
>
> > An alternative would be to allow the opt-in to 52-bit VA, leaving it at
> > 48-bit by default. However, it has the problem of changing the PAC size
> > and not being able to return.
>
> Perhaps the opt-in should be at exec time, with ELF flags (or equivalent) on
> the application. Because, as you say, changing the shape of the PAC in the
> middle of execution is in general not possible.

I think we'd still have a potential performance problem with that approach,
since we'd end up having to context-switch TCR.T0SZ, which is permitted to
be cached in a TLB and would therefore force us to introduce TLB
invalidation when context-switching between tasks using 52-bit VAs and tasks
using 48-bit VAs.

There's a chance we could get the architecture tightened here, but it's
not something we've pushed for so far and it depends on what's already been
built.

Will

2018-12-10 20:37:00

by Kristina Martsenko

Subject: Re: [PATCH v6 02/13] arm64: add pointer authentication register bits

On 09/12/2018 14:24, Richard Henderson wrote:
> On 12/7/18 12:39 PM, Kristina Martsenko wrote:
>> #define SCTLR_ELx_DSSBS (1UL << 44)
>> +#define SCTLR_ELx_ENIA (1 << 31)
>
> 1U or 1UL lest you produce signed -0x80000000.

Thanks, this was setting all SCTLR bits above 31 as well... Now fixed.

> Otherwise,
> Reviewed-by: Richard Henderson <[email protected]>

Thanks for all the review!

Kristina

2018-12-10 21:43:15

by Kristina Martsenko

Subject: Re: [PATCH v6 04/13] arm64/kvm: hide ptrauth from guests

On 09/12/2018 14:53, Richard Henderson wrote:
> On 12/7/18 12:39 PM, Kristina Martsenko wrote:
>> From: Mark Rutland <[email protected]>
>>
>> In subsequent patches we're going to expose ptrauth to the host kernel
>> and userspace, but things are a bit trickier for guest kernels. For the
>> time being, let's hide ptrauth from KVM guests.
>>
>> Regardless of how well-behaved the guest kernel is, guest userspace
>> could attempt to use ptrauth instructions, triggering a trap to EL2,
>> resulting in noise from kvm_handle_unknown_ec(). So let's write up a
>> handler for the PAC trap, which silently injects an UNDEF into the
>> guest, as if the feature were really missing.
>
> Reviewing the long thread that accompanied v5, I thought we were *not* going to
> trap PAuth instructions from the guest.
>
> In particular, the OS distribution may legitimately be built to include
> hint-space nops. This includes XPACLRI, which is used by the C++ exception
> unwinder and not controlled by SCTLR_EL1.EnI{A,B}.

The plan was to disable trapping, yes. However, after that thread there
was a retrospective change applied to the architecture, such that the
XPACLRI (and XPACD/XPACI) instructions are no longer trapped by
HCR_EL2.API. (The public documentation on this has not been updated
yet.) This means that no HINT-space instructions should trap anymore.
(The guest is expected to not set SCTLR_EL1.EnI{A,B} since
ID_AA64ISAR1_EL1.{APA,API} read as 0.)

> It seems like the header comment here, and

Sorry, which header comment?

>> +/*
>> + * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
>> + * a NOP).
>> + */
>> +static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
>> +
>
> here, need updating.

Changed it to "a trapped ptrauth instruction".

Kristina

2018-12-10 23:34:10

by Richard Henderson

Subject: Re: [PATCH v6 04/13] arm64/kvm: hide ptrauth from guests

On 12/10/18 2:12 PM, Kristina Martsenko wrote:
> The plan was to disable trapping, yes. However, after that thread there
> was a retrospective change applied to the architecture, such that the
> XPACLRI (and XPACD/XPACI) instructions are no longer trapped by
> HCR_EL2.API. (The public documentation on this has not been updated
> yet.) This means that no HINT-space instructions should trap anymore.

Ah, thanks for the update. I'll update my QEMU patch set.

>> It seems like the header comment here, and
> Sorry, which header comment?

Sorry, the patch commit message.


r~

2018-12-10 23:34:26

by Kristina Martsenko

Subject: Re: [PATCH v6 04/13] arm64/kvm: hide ptrauth from guests

On 10/12/2018 20:22, Richard Henderson wrote:
> On 12/10/18 2:12 PM, Kristina Martsenko wrote:
>> The plan was to disable trapping, yes. However, after that thread there
>> was a retrospective change applied to the architecture, such that the
>> XPACLRI (and XPACD/XPACI) instructions are no longer trapped by
>> HCR_EL2.API. (The public documentation on this has not been updated
>> yet.) This means that no HINT-space instructions should trap anymore.
>
> Ah, thanks for the update. I'll update my QEMU patch set.
>
>>> It seems like the header comment here, and
>> Sorry, which header comment?
>
> Sorry, the patch commit message.

Ah ok. Still seems correct.

Kristina

2018-12-11 20:10:28

by Will Deacon

Subject: Re: [PATCH v6 02/13] arm64: add pointer authentication register bits

On Mon, Dec 10, 2018 at 07:54:25PM +0000, Kristina Martsenko wrote:
> On 09/12/2018 14:24, Richard Henderson wrote:
> > On 12/7/18 12:39 PM, Kristina Martsenko wrote:
> >> #define SCTLR_ELx_DSSBS (1UL << 44)
> >> +#define SCTLR_ELx_ENIA (1 << 31)
> >
> > 1U or 1UL lest you produce signed -0x80000000.
>
> Thanks, this was setting all SCTLR bits above 31 as well... Now fixed.

Ouch, that's subtle and a mistake that we're likely to keep making in
the future, I fear. I've bitten the bullet and replaced all these
definitions with the _BITUL() macro instead. That also means we don't
have to worry about these constants being used in assembly files when
using older versions of binutils.

Will

--->8

From 25f3852cd8912174c3410414115783799357230a Mon Sep 17 00:00:00 2001
From: Will Deacon <[email protected]>
Date: Tue, 11 Dec 2018 16:42:31 +0000
Subject: [PATCH] arm64: sysreg: Use _BITUL() when defining register bits

Using shifts directly is error-prone and can cause inadvertent sign
extensions or build problems with older versions of binutils.

Consistent use of the _BITUL() macro makes these problems disappear.

Signed-off-by: Will Deacon <[email protected]>
---
arch/arm64/include/asm/sysreg.h | 81 +++++++++++++++++++++--------------------
1 file changed, 41 insertions(+), 40 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index cea9e53be729..8310cc58d50c 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -20,6 +20,7 @@
#ifndef __ASM_SYSREG_H
#define __ASM_SYSREG_H

+#include <linux/const.h>
#include <linux/stringify.h>

/*
@@ -444,31 +445,31 @@
#define SYS_ICH_LR15_EL2 __SYS__LR8_EL2(7)

/* Common SCTLR_ELx flags. */
-#define SCTLR_ELx_DSSBS (1UL << 44)
-#define SCTLR_ELx_ENIA (1 << 31)
-#define SCTLR_ELx_ENIB (1 << 30)
-#define SCTLR_ELx_ENDA (1 << 27)
-#define SCTLR_ELx_EE (1 << 25)
-#define SCTLR_ELx_IESB (1 << 21)
-#define SCTLR_ELx_WXN (1 << 19)
-#define SCTLR_ELx_ENDB (1 << 13)
-#define SCTLR_ELx_I (1 << 12)
-#define SCTLR_ELx_SA (1 << 3)
-#define SCTLR_ELx_C (1 << 2)
-#define SCTLR_ELx_A (1 << 1)
-#define SCTLR_ELx_M 1
+#define SCTLR_ELx_DSSBS (_BITUL(44))
+#define SCTLR_ELx_ENIA (_BITUL(31))
+#define SCTLR_ELx_ENIB (_BITUL(30))
+#define SCTLR_ELx_ENDA (_BITUL(27))
+#define SCTLR_ELx_EE (_BITUL(25))
+#define SCTLR_ELx_IESB (_BITUL(21))
+#define SCTLR_ELx_WXN (_BITUL(19))
+#define SCTLR_ELx_ENDB (_BITUL(13))
+#define SCTLR_ELx_I (_BITUL(12))
+#define SCTLR_ELx_SA (_BITUL(3))
+#define SCTLR_ELx_C (_BITUL(2))
+#define SCTLR_ELx_A (_BITUL(1))
+#define SCTLR_ELx_M (_BITUL(0))

#define SCTLR_ELx_FLAGS (SCTLR_ELx_M | SCTLR_ELx_A | SCTLR_ELx_C | \
SCTLR_ELx_SA | SCTLR_ELx_I | SCTLR_ELx_IESB)

/* SCTLR_EL2 specific flags. */
-#define SCTLR_EL2_RES1 ((1 << 4) | (1 << 5) | (1 << 11) | (1 << 16) | \
- (1 << 18) | (1 << 22) | (1 << 23) | (1 << 28) | \
- (1 << 29))
-#define SCTLR_EL2_RES0 ((1 << 6) | (1 << 7) | (1 << 8) | (1 << 9) | \
- (1 << 10) | (1 << 13) | (1 << 14) | (1 << 15) | \
- (1 << 17) | (1 << 20) | (1 << 24) | (1 << 26) | \
- (1 << 27) | (1 << 30) | (1 << 31) | \
+#define SCTLR_EL2_RES1 ((_BITUL(4)) | (_BITUL(5)) | (_BITUL(11)) | (_BITUL(16)) | \
+ (_BITUL(18)) | (_BITUL(22)) | (_BITUL(23)) | (_BITUL(28)) | \
+ (_BITUL(29)))
+#define SCTLR_EL2_RES0 ((_BITUL(6)) | (_BITUL(7)) | (_BITUL(8)) | (_BITUL(9)) | \
+ (_BITUL(10)) | (_BITUL(13)) | (_BITUL(14)) | (_BITUL(15)) | \
+ (_BITUL(17)) | (_BITUL(20)) | (_BITUL(24)) | (_BITUL(26)) | \
+ (_BITUL(27)) | (_BITUL(30)) | (_BITUL(31)) | \
(0xffffefffUL << 32))

#ifdef CONFIG_CPU_BIG_ENDIAN
@@ -490,23 +491,23 @@
#endif

/* SCTLR_EL1 specific flags. */
-#define SCTLR_EL1_UCI (1 << 26)
-#define SCTLR_EL1_E0E (1 << 24)
-#define SCTLR_EL1_SPAN (1 << 23)
-#define SCTLR_EL1_NTWE (1 << 18)
-#define SCTLR_EL1_NTWI (1 << 16)
-#define SCTLR_EL1_UCT (1 << 15)
-#define SCTLR_EL1_DZE (1 << 14)
-#define SCTLR_EL1_UMA (1 << 9)
-#define SCTLR_EL1_SED (1 << 8)
-#define SCTLR_EL1_ITD (1 << 7)
-#define SCTLR_EL1_CP15BEN (1 << 5)
-#define SCTLR_EL1_SA0 (1 << 4)
-
-#define SCTLR_EL1_RES1 ((1 << 11) | (1 << 20) | (1 << 22) | (1 << 28) | \
- (1 << 29))
-#define SCTLR_EL1_RES0 ((1 << 6) | (1 << 10) | (1 << 13) | (1 << 17) | \
- (1 << 27) | (1 << 30) | (1 << 31) | \
+#define SCTLR_EL1_UCI (_BITUL(26))
+#define SCTLR_EL1_E0E (_BITUL(24))
+#define SCTLR_EL1_SPAN (_BITUL(23))
+#define SCTLR_EL1_NTWE (_BITUL(18))
+#define SCTLR_EL1_NTWI (_BITUL(16))
+#define SCTLR_EL1_UCT (_BITUL(15))
+#define SCTLR_EL1_DZE (_BITUL(14))
+#define SCTLR_EL1_UMA (_BITUL(9))
+#define SCTLR_EL1_SED (_BITUL(8))
+#define SCTLR_EL1_ITD (_BITUL(7))
+#define SCTLR_EL1_CP15BEN (_BITUL(5))
+#define SCTLR_EL1_SA0 (_BITUL(4))
+
+#define SCTLR_EL1_RES1 ((_BITUL(11)) | (_BITUL(20)) | (_BITUL(22)) | (_BITUL(28)) | \
+ (_BITUL(29)))
+#define SCTLR_EL1_RES0 ((_BITUL(6)) | (_BITUL(10)) | (_BITUL(13)) | (_BITUL(17)) | \
+ (_BITUL(27)) | (_BITUL(30)) | (_BITUL(31)) | \
(0xffffefffUL << 32))

#ifdef CONFIG_CPU_BIG_ENDIAN
@@ -706,13 +707,13 @@
#define ZCR_ELx_LEN_SIZE 9
#define ZCR_ELx_LEN_MASK 0x1ff

-#define CPACR_EL1_ZEN_EL1EN (1 << 16) /* enable EL1 access */
-#define CPACR_EL1_ZEN_EL0EN (1 << 17) /* enable EL0 access, if EL1EN set */
+#define CPACR_EL1_ZEN_EL1EN (_BITUL(16)) /* enable EL1 access */
+#define CPACR_EL1_ZEN_EL0EN (_BITUL(17)) /* enable EL0 access, if EL1EN set */
#define CPACR_EL1_ZEN (CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN)


/* Safe value for MPIDR_EL1: Bit31:RES1, Bit30:U:0, Bit24:MT:0 */
-#define SYS_MPIDR_SAFE_VAL (1UL << 31)
+#define SYS_MPIDR_SAFE_VAL (_BITUL(31))

#ifdef __ASSEMBLY__

--
2.1.4


2018-12-12 15:23:40

by Dave Martin

Subject: Re: [PATCH v6 10/13] arm64: add prctl control for resetting ptrauth keys

On Fri, Dec 07, 2018 at 06:39:28PM +0000, Kristina Martsenko wrote:
> Add an arm64-specific prctl to allow a thread to reinitialize its
> pointer authentication keys to random values. This can be useful when
> exec() is not used for starting new processes, to ensure that different
> processes still have different keys.
>
> Signed-off-by: Kristina Martsenko <[email protected]>
> ---
> arch/arm64/include/asm/pointer_auth.h | 3 +++
> arch/arm64/include/asm/processor.h | 4 +++
> arch/arm64/kernel/Makefile | 1 +
> arch/arm64/kernel/pointer_auth.c | 47 +++++++++++++++++++++++++++++++++++
> include/uapi/linux/prctl.h | 8 ++++++
> kernel/sys.c | 8 ++++++
> 6 files changed, 71 insertions(+)
> create mode 100644 arch/arm64/kernel/pointer_auth.c
>
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index 89190d93c850..7797bc346c6b 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -59,6 +59,8 @@ static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
> __ptrauth_key_install(APGA, keys->apga);
> }
>
> +extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
> +
> /*
> * The EL0 pointer bits used by a pointer authentication code.
> * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
> @@ -82,6 +84,7 @@ do { \
> ptrauth_keys_switch(&(tsk)->thread_info.keys_user)
>
> #else /* CONFIG_ARM64_PTR_AUTH */
> +#define ptrauth_prctl_reset_keys(tsk, arg) (-EINVAL)
> #define ptrauth_strip_insn_pac(lr) (lr)
> #define ptrauth_thread_init_user(tsk)
> #define ptrauth_thread_switch(tsk)
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index 6b0d4dff5012..40ccfb7605b6 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -46,6 +46,7 @@
> #include <asm/hw_breakpoint.h>
> #include <asm/lse.h>
> #include <asm/pgtable-hwdef.h>
> +#include <asm/pointer_auth.h>
> #include <asm/ptrace.h>
> #include <asm/types.h>
>
> @@ -270,6 +271,9 @@ extern void __init minsigstksz_setup(void);
> #define SVE_SET_VL(arg) sve_set_current_vl(arg)
> #define SVE_GET_VL() sve_get_current_vl()
>
> +/* PR_PAC_RESET_KEYS prctl */
> +#define PAC_RESET_KEYS(tsk, arg) ptrauth_prctl_reset_keys(tsk, arg)
> +
> /*
> * For CONFIG_GCC_PLUGIN_STACKLEAK
> *
> diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> index 4c8b13bede80..096740ab81d2 100644
> --- a/arch/arm64/kernel/Makefile
> +++ b/arch/arm64/kernel/Makefile
> @@ -57,6 +57,7 @@ arm64-obj-$(CONFIG_CRASH_DUMP) += crash_dump.o
> arm64-obj-$(CONFIG_CRASH_CORE) += crash_core.o
> arm64-obj-$(CONFIG_ARM_SDE_INTERFACE) += sdei.o
> arm64-obj-$(CONFIG_ARM64_SSBD) += ssbd.o
> +arm64-obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o
>
> obj-y += $(arm64-obj-y) vdso/ probes/
> obj-m += $(arm64-obj-m)
> diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
> new file mode 100644
> index 000000000000..b9f6f5f3409a
> --- /dev/null
> +++ b/arch/arm64/kernel/pointer_auth.c
> @@ -0,0 +1,47 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <linux/errno.h>
> +#include <linux/prctl.h>
> +#include <linux/random.h>
> +#include <linux/sched.h>
> +#include <asm/cpufeature.h>
> +#include <asm/pointer_auth.h>
> +
> +int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
> +{
> + struct ptrauth_keys *keys = &tsk->thread_info.keys_user;
> + unsigned long addr_key_mask = PR_PAC_APIAKEY | PR_PAC_APIBKEY |
> + PR_PAC_APDAKEY | PR_PAC_APDBKEY;
> + unsigned long key_mask = addr_key_mask | PR_PAC_APGAKEY;
> +
> + if (!system_supports_address_auth() && !system_supports_generic_auth())
> + return -EINVAL;
> +
> + if (!arg) {
> + ptrauth_keys_init(keys);
> + ptrauth_keys_switch(keys);
> + return 0;
> + }
> +
> + if (arg & ~key_mask)
> + return -EINVAL;
> +
> + if (((arg & addr_key_mask) && !system_supports_address_auth()) ||
> + ((arg & PR_PAC_APGAKEY) && !system_supports_generic_auth()))
> + return -EINVAL;
> +
> + if (arg & PR_PAC_APIAKEY)
> + get_random_bytes(&keys->apia, sizeof(keys->apia));
> + if (arg & PR_PAC_APIBKEY)
> + get_random_bytes(&keys->apib, sizeof(keys->apib));
> + if (arg & PR_PAC_APDAKEY)
> + get_random_bytes(&keys->apda, sizeof(keys->apda));
> + if (arg & PR_PAC_APDBKEY)
> + get_random_bytes(&keys->apdb, sizeof(keys->apdb));
> + if (arg & PR_PAC_APGAKEY)
> + get_random_bytes(&keys->apga, sizeof(keys->apga));
> +
> + ptrauth_keys_switch(keys);
> +
> + return 0;
> +}
> diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
> index b17201edfa09..b4875a93363a 100644
> --- a/include/uapi/linux/prctl.h
> +++ b/include/uapi/linux/prctl.h
> @@ -220,4 +220,12 @@ struct prctl_mm_map {
> # define PR_SPEC_DISABLE (1UL << 2)
> # define PR_SPEC_FORCE_DISABLE (1UL << 3)
>
> +/* Reset arm64 pointer authentication keys */
> +#define PR_PAC_RESET_KEYS 54
> +# define PR_PAC_APIAKEY (1UL << 0)
> +# define PR_PAC_APIBKEY (1UL << 1)
> +# define PR_PAC_APDAKEY (1UL << 2)
> +# define PR_PAC_APDBKEY (1UL << 3)
> +# define PR_PAC_APGAKEY (1UL << 4)
> +
> #endif /* _LINUX_PRCTL_H */
> diff --git a/kernel/sys.c b/kernel/sys.c
> index 123bd73046ec..64b5a230f38d 100644
> --- a/kernel/sys.c
> +++ b/kernel/sys.c
> @@ -121,6 +121,9 @@
> #ifndef SVE_GET_VL
> # define SVE_GET_VL() (-EINVAL)
> #endif
> +#ifndef PAC_RESET_KEYS
> +# define PAC_RESET_KEYS(a, b) (-EINVAL)
> +#endif
>
> /*
> * this is where the system-wide overflow UID and GID are defined, for
> @@ -2476,6 +2479,11 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
> return -EINVAL;
> error = arch_prctl_spec_ctrl_set(me, arg2, arg3);
> break;
> + case PR_PAC_RESET_KEYS:
> + if (arg3 || arg4 || arg5)
> + return -EINVAL;
> + error = PAC_RESET_KEYS(me, arg2);
> + break;

If this only ever operates on current, can we drop the task argument?

(Last time I looked, the task argument is useless for all existing
prctls -- I have some outstanding refactoring to get rid of it entirely.)

Since arg2 contains unused flag bits and we already return -EINVAL if
any of those are set, we can define a new flag in arg2 in the future if
we want to extend this interface.

So I think we can drop the checks on arg3..arg5 and avoid the need
for the caller to supply , 0, 0, 0) in every call to this prctl.

(But if someone else objects to this approach I'm happy to concede on
this point).

Cheers
---Dave

2018-12-12 15:26:26

by Dave Martin

Subject: Re: [PATCH v6 11/13] arm64: add ptrace regsets for ptrauth key management

On Fri, Dec 07, 2018 at 06:39:29PM +0000, Kristina Martsenko wrote:
> Add two new ptrace regsets, which can be used to request and change the
> pointer authentication keys of a thread. NT_ARM_PACA_KEYS gives access
> to the instruction/data address keys, and NT_ARM_PACG_KEYS to the
> generic authentication key.
>
> The regsets are only exposed if the kernel is compiled with
> CONFIG_CHECKPOINT_RESTORE=y, as the intended use case is checkpointing
> and restoring processes that are using pointer authentication. Normally
> applications or debuggers should not need to know the keys (and exposing
> the keys is a security risk), so the regsets are not exposed by default.

If CONFIG_CHECKPOINT_RESTORE is a useful feature, it will be =y on a
wide variety of systems. So I think making the ptrace interface depend
on it may just add potentially untested config variations with little
real security benefit.

If there is perceived to be a security issue here, we would need some
mechanism to control ptrace visibility of the keys on a finer-grained
basis, and then #ifdeffing the regsets out becomes pointless.


There are already mechanisms to restrict ptrace at runtime though --
are those not sufficient for us?

(For example, without CAP_PTRACE_ATTACH, other users' or setuid
processes are not accessible via ptrace. Some security modules, Yama
for example, add additional, runtime controllable restrictions, such
as forbidding a process from tracing a task that is not one of its
children.)

In my opinion if a process is ptraceable at all then the tracer can
compromise it trivially in a wide variety of ways, even in the presence
of ptrauth.

So we should keep things simple and expose the keys unconditionally.

(Others' views might differ of course, but I can't see a convincing
counterargument right now. I haven't looked at historical posts, so
maybe there was discussion already...)

>
> Signed-off-by: Kristina Martsenko <[email protected]>
> ---
> arch/arm64/include/uapi/asm/ptrace.h | 18 +++++++++
> arch/arm64/kernel/ptrace.c | 72 ++++++++++++++++++++++++++++++++++++
> include/uapi/linux/elf.h | 2 +
> 3 files changed, 92 insertions(+)
>
> diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
> index c2f249bcd829..fafa7f6decf9 100644
> --- a/arch/arm64/include/uapi/asm/ptrace.h
> +++ b/arch/arm64/include/uapi/asm/ptrace.h
> @@ -236,6 +236,24 @@ struct user_pac_mask {
> __u64 insn_mask;
> };
>
> +/* pointer authentication keys (NT_ARM_PACA_KEYS, NT_ARM_PACG_KEYS) */
> +
> +struct user_pac_address_keys {
> + __u64 apiakey_lo;
> + __u64 apiakey_hi;
> + __u64 apibkey_lo;
> + __u64 apibkey_hi;
> + __u64 apdakey_lo;
> + __u64 apdakey_hi;
> + __u64 apdbkey_lo;
> + __u64 apdbkey_hi;
> +};
> +
> +struct user_pac_generic_keys {
> + __u64 apgakey_lo;
> + __u64 apgakey_hi;
> +};
> +

Are these intentionally different from the kernel's struct ptrauth_keys?

> #endif /* __ASSEMBLY__ */
>
> #endif /* _UAPI__ASM_PTRACE_H */
> diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
> index 6c1f63cb6c4e..f18f14c64d1e 100644
> --- a/arch/arm64/kernel/ptrace.c
> +++ b/arch/arm64/kernel/ptrace.c
> @@ -979,6 +979,56 @@ static int pac_mask_get(struct task_struct *target,
>
> return user_regset_copyout(&pos, &count, &kbuf, &ubuf, &uregs, 0, -1);
> }
> +
> +#ifdef CONFIG_CHECKPOINT_RESTORE
> +static int pac_address_keys_get(struct task_struct *target,
> + const struct user_regset *regset,
> + unsigned int pos, unsigned int count,
> + void *kbuf, void __user *ubuf)
> +{
> + if (!system_supports_address_auth())
> + return -EINVAL;
> +
> + return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
> + &target->thread_info.keys_user, 0, -1);

How does this interact with CONFIG_HARDENED_USERCOPY?
(I haven't really played with this myself, but the issue was reported
by someone else when I was working on the SVE regset implementation.)

Because thread_info is in task_struct for arm64, I think it will be
subject to the arch_thread_struct_whitelist() (see <asm/processor.h>.)
This may cause failures reading/writing the ptrauth regsets when this
option is enabled. (It seems =n in our defconfig today.)

The usercopy hardening code seems to cope with a contiguous whitelisted
region only, so it probably couldn't easily include the ptrauth keys.

(Possibly this is a non-issue for reasons I'm not seeing -- I haven't
tried this configuration recently.)


If we cannot avoid the use of incompatible types for the user and kernel
views of the ptrauth keys, then it may be more straightforward to simply
declare a local struct user_pac_address_keys here and populate it field
by field from thread_info, then do the _copyout on that.

I'm not too keen on the type mismatch and the "-1" here. That means we
rely on regset->n and regset->size being set correctly elsewhere in
order to guard against buffer overruns in thread_info, but in this
case the regset size and the sizeof keys_user are not even the same.

This is a potential pitfall for future maintenance that it would be
preferable to avoid: if the regset definition and kernel structures
go out of sync in some way in the future, we could be vulnerable to
kernel buffer overruns, rather than just userspace seeing wrong
behaviour.

> +}
> +
> +static int pac_address_keys_set(struct task_struct *target,
> + const struct user_regset *regset,
> + unsigned int pos, unsigned int count,
> + const void *kbuf, const void __user *ubuf)
> +{
> + if (!system_supports_address_auth())
> + return -EINVAL;
> +
> + return user_regset_copyin(&pos, &count, &kbuf, &ubuf,
> + &target->thread_info.keys_user, 0, -1);

The same comments apply here.

Note, if using a local struct, you need to be careful to avoid leaking
uninitialised kernel stack into the regset, so the struct must be fully
initialised and must not have any implicit tail-padding or padding
between fields. (user_pac_address_keys and user_pac_generic_keys look
OK on this point.)

The most straightforward way to do this is to populate your struct from
thread_info, do the _copyin(), then transfer the fields of the modified
local struct back to thread_info.

You'll have to do these copies in a couple of places, so if you go
down this route it may be worth wrapping them in helpers.

> +}
> +
> +static int pac_generic_keys_get(struct task_struct *target,
> + const struct user_regset *regset,
> + unsigned int pos, unsigned int count,
> + void *kbuf, void __user *ubuf)
> +{
> + if (!system_supports_generic_auth())
> + return -EINVAL;
> +
> + return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
> + &target->thread_info.keys_user.apga, 0, -1);
> +}
> +
> +static int pac_generic_keys_set(struct task_struct *target,
> + const struct user_regset *regset,
> + unsigned int pos, unsigned int count,
> + const void *kbuf, const void __user *ubuf)
> +{
> + if (!system_supports_generic_auth())
> + return -EINVAL;
> +
> + return user_regset_copyin(&pos, &count, &kbuf, &ubuf,
> + &target->thread_info.keys_user.apga, 0, -1);
> +}
> +#endif /* CONFIG_CHECKPOINT_RESTORE */
> #endif /* CONFIG_ARM64_PTR_AUTH */

[...]

Cheers
---Dave

2018-12-12 19:38:06

by Will Deacon

Subject: Re: [PATCH v6 00/13] ARMv8.3 pointer authentication userspace support

Hi Kristina,

On Fri, Dec 07, 2018 at 06:39:18PM +0000, Kristina Martsenko wrote:
> This series adds support for the ARMv8.3 pointer authentication extension,
> enabling userspace return address protection with GCC 7 and above.

I've pushed this out to for-next/ptr-auth with the following changes:

* Rebased onto for-next/core
* Dropped the GET/SET ptrace requests for now, due to fragile regset usage
* Moved the user key state from thread_info to thread_struct
* Removed use of VA_BITS for the PAC mask (doesn't work with 52-bit user VAs)
* Added a patch to use _BITUL for the bit definitions in sysreg.h
* Tidied up the HWCAP generation [need to run this past Suzuki]
* Reduced the number of CPU caps from 6 to 4 [also need to run past Suzuki]
* Added Reviewed-by tags

I've not merged it into for-next/core yet, because it could use some testing
and review first.

Cheers,

Will

2018-12-13 18:02:55

by Will Deacon

Subject: Re: [PATCH v6 00/13] ARMv8.3 pointer authentication userspace support

On Wed, Dec 12, 2018 at 07:35:44PM +0000, Will Deacon wrote:
> On Fri, Dec 07, 2018 at 06:39:18PM +0000, Kristina Martsenko wrote:
> > This series adds support for the ARMv8.3 pointer authentication extension,
> > enabling userspace return address protection with GCC 7 and above.
>
> I've pushed this out to for-next/ptr-auth with the following changes:
>
> * Rebased onto for-next/core
> * Dropped the GET/SET ptrace requests for now, due to fragile regset usage
> * Moved the user key state from thread_info to thread_struct
> * Removed use of VA_BITS for the PAC mask (doesn't work with 52-bit user VAs)
> * Added a patch to use _BITUL for the bit definitions in sysreg.h
> * Tidied up the HWCAP generation [need to run this past Suzuki]
> * Reduced the number of CPU caps from 6 to 4 [also need to run past Suzuki]
> * Added Reviewed-by tags
>
> I've not merged it into for-next/core yet, because it could use some testing
> and review first.

This is now in for-next/core, so I've removed the for-next/ptr-auth staging
branch.

Will

2018-12-19 16:03:07

by Peter Maydell

Subject: Re: [PATCH v6 04/13] arm64/kvm: hide ptrauth from guests

On Mon, 10 Dec 2018 at 20:22, Richard Henderson
<[email protected]> wrote:
>
> On 12/10/18 2:12 PM, Kristina Martsenko wrote:
> > The plan was to disable trapping, yes. However, after that thread there
> > was a retrospective change applied to the architecture, such that the
> > XPACLRI (and XPACD/XPACI) instructions are no longer trapped by
> > HCR_EL2.API. (The public documentation on this has not been updated
> > yet.) This means that no HINT-space instructions should trap anymore.
>
> Ah, thanks for the update. I'll update my QEMU patch set.

Just to follow up on this loose end, this change to HCR_EL2.API
trap behaviour is documented in the 00bet9 release of the system
register XML which came out today:
https://developer.arm.com/products/architecture/cpu-architecture/a-profile/exploration-tools/system-registers-for-armv8-a

thanks
-- PMM

2019-01-04 02:38:12

by Pavel Machek

Subject: Re: [PATCH v6 07/13] arm64: add basic pointer authentication support

On Fri 2018-12-07 18:39:25, Kristina Martsenko wrote:
> From: Mark Rutland <[email protected]>
>
> This patch adds basic support for pointer authentication, allowing
> userspace to make use of APIAKey, APIBKey, APDAKey, APDBKey, and
> APGAKey. The kernel maintains key values for each process (shared by all
> threads within), which are initialised to random values at exec()
time.

...

> +/*
> + * We give each process its own keys, which are shared by all threads. The keys
> + * are inherited upon fork(), and reinitialised upon exec*().
> + */
> +struct ptrauth_keys {
> + struct ptrauth_key apia;
> + struct ptrauth_key apib;
> + struct ptrauth_key apda;
> + struct ptrauth_key apdb;
> + struct ptrauth_key apga;
> +};

instruction_a, data_a, generic_a? Should be easier to understand than
"apdb" ...

Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html



2019-01-04 11:39:55

by Pavel Machek

Subject: Re: [PATCH v6 07/13] arm64: add basic pointer authentication support

On Fri 2019-01-04 09:21:30, Marc Zyngier wrote:
> On 03/01/2019 20:29, Pavel Machek wrote:
> > On Fri 2018-12-07 18:39:25, Kristina Martsenko wrote:
> >> From: Mark Rutland <[email protected]>
> >>
> >> This patch adds basic support for pointer authentication,
> >> allowing userspace to make use of APIAKey, APIBKey, APDAKey,
> >> APDBKey, and APGAKey. The kernel maintains key values for each
> >> process (shared by all threads within), which are initialised to
> >> random values at exec() time.
> >
> > ...
> >
> >> +/*
> >> + * We give each process its own keys, which are shared by all threads. The keys
> >> + * are inherited upon fork(), and reinitialised upon exec*().
> >> + */
> >> +struct ptrauth_keys {
> >> +	struct ptrauth_key apia;
> >> +	struct ptrauth_key apib;
> >> +	struct ptrauth_key apda;
> >> +	struct ptrauth_key apdb;
> >> +	struct ptrauth_key apga;
> >> +};
> >
> > instruction_a, data_a, generic_a? Should be easier to understand
> > than "apdb" ...
>
> ... until you realize that these names do match the documentation,
> which makes it even easier to understand how the code uses the
> architecture.

See how not even the commit log matches the documentation then?

Naming something "apdb" is just bad... Just because the documentation
is evil does not mean it should be followed...

Pavel

--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html



2019-01-04 11:41:41

by Marc Zyngier

Subject: Re: [PATCH v6 07/13] arm64: add basic pointer authentication support

On 03/01/2019 20:29, Pavel Machek wrote:
> On Fri 2018-12-07 18:39:25, Kristina Martsenko wrote:
>> From: Mark Rutland <[email protected]>
>>
>> This patch adds basic support for pointer authentication,
>> allowing userspace to make use of APIAKey, APIBKey, APDAKey,
>> APDBKey, and APGAKey. The kernel maintains key values for each
>> process (shared by all threads within), which are initialised to
>> random values at exec() time.
>
> ...
>
>> +/*
>> + * We give each process its own keys, which are shared by all threads. The keys
>> + * are inherited upon fork(), and reinitialised upon exec*().
>> + */
>> +struct ptrauth_keys {
>> +	struct ptrauth_key apia;
>> +	struct ptrauth_key apib;
>> +	struct ptrauth_key apda;
>> +	struct ptrauth_key apdb;
>> +	struct ptrauth_key apga;
>> +};
>
> instruction_a, data_a, generic_a? Should be easier to understand
> than "apdb" ...

... until you realize that these names do match the documentation,
which makes it even easier to understand how the code uses the
architecture.

M.
--
Jazz is not dead. It just smells funny...

2019-01-04 19:05:13

by Mark Rutland

Subject: Re: [PATCH v6 07/13] arm64: add basic pointer authentication support

On Fri, Jan 04, 2019 at 10:33:40AM +0100, Pavel Machek wrote:
> On Fri 2019-01-04 09:21:30, Marc Zyngier wrote:
> > On 03/01/2019 20:29, Pavel Machek wrote:
> > > On Fri 2018-12-07 18:39:25, Kristina Martsenko wrote:
> > >> From: Mark Rutland <[email protected]>
> > >>
> > >> This patch adds basic support for pointer authentication,
> > >> allowing userspace to make use of APIAKey, APIBKey, APDAKey,
> > >> APDBKey, and APGAKey. The kernel maintains key values for each
> > >> process (shared by all threads within), which are initialised to
> > >> random values at exec() time.
> > >
> > > ...
> > >
> > >> +/*
> > >> + * We give each process its own keys, which are shared by all threads. The keys
> > >> + * are inherited upon fork(), and reinitialised upon exec*().
> > >> + */
> > >> +struct ptrauth_keys {
> > >> +	struct ptrauth_key apia;
> > >> +	struct ptrauth_key apib;
> > >> +	struct ptrauth_key apda;
> > >> +	struct ptrauth_key apdb;
> > >> +	struct ptrauth_key apga;
> > >> +};
> > >
> > > instruction_a, data_a, generic_a? Should be easier to understand
> > > than "apdb" ...
> >
> > ... until you realize that these names do match the documentation,
> > which makes it even easier to understand how the code uses the
> > architecture.
>
> See how not even the commit log matches the documentation then?

The commit message exactly matches the documentation, as it refers to:

APIAKey, APIBKey, APDAKey, APDBKey, and APGAKey

... which are the architected names for those registers, in all the
documentation.

Searching "apga" in the ARM ARM finds all of the relevant information on
APGAKey_EL1. Searching "generic_a" finds precisely nothing, as it's a
term which you invented, that no-one else has previously used.

Likewise for the other key names.

> Naming something "apdb" is just bad... Just because the documentation
> is evil does not mean it should be followed...

It is in no way evil to use the documented names for things.

It is unhelpful to make up terminology that no-one else uses.

Mark.