Memory protection keys enable an application to protect its
address space from inadvertent access or corruption by
itself.
The overall idea:
A process allocates a key and associates it with
an address range within its address space.
The process can then dynamically set read/write
permissions on the key without involving the
kernel. Any code that violates the permissions
of the address range, as defined by its associated
key, will receive a segmentation fault.
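As an illustration, the flow looks roughly like this in userspace
(a minimal sketch; it assumes the generic pkey_alloc(2) and
pkey_mprotect(2) wrappers plus a pkey_set() helper like the one in
the selftests -- the exact entry points are an assumption here, not
part of this series):

	#include <sys/mman.h>

	/* allocate a key; nothing is denied initially */
	int pkey = pkey_alloc(0, 0);

	/* associate the key with an address range */
	void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	pkey_mprotect(buf, 4096, PROT_READ | PROT_WRITE, pkey);

	/* deny writes through the key -- no kernel involvement */
	pkey_set(pkey, PKEY_DISABLE_WRITE);

	*(int *)buf = 1;	/* key violation: SIGSEGV */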
This patch series enables the feature on the PPC64 HPTE
platform.
ISA 3.0, section 5.7.13, describes the detailed specification.
Testing:
This patch series has passed all the protection key
tests available in the selftests directory.
The tests are updated to work on both x86 and powerpc.
version v5:
(1) reverted to the old design -- store the
key in the pte instead of bypassing it.
The v4 design slowed down the hash page path.
(2) detect key violations when the kernel is asked to
access user pages.
(3) further refined the patches into smaller consumable
units.
(4) page fault handlers capture the faulting key
from the pte instead of the vma. This closes a
race between a key update in the vma and
a key fault caused by the key programmed
in the pte.
(5) a key created with access denied should
also deny write. Fixed.
(6) the protection-key number is displayed in smaps
the x86 way.
version v4:
(1) patches no longer depend on the pte bits to program
the hpte -- comment by Balbir.
(2) documentation updates.
(3) fixed a bug in the selftest.
(4) unlike x86, powerpc lets the signal handler change key
permission bits; the change will persist across
signal handler boundaries. Earlier we allowed
the signal handler to modify a field in the siginfo
structure, which would then be used by the kernel
to program the key protection register (AMR)
-- resolves an issue raised by Ben:
"Calls to sys_swapcontext with a made-up context
will end up with a crap AMR if done by code who
didn't know about that register".
(5) these changes enable protection keys on 4k-page
kernels as well.
version v3:
(1) split the patches into smaller consumable
patches.
(2) added the ability to disable execute permission
on a key at creation.
(3) renamed calc_pte_to_hpte_pkey_bits() to
pte_to_hpte_pkey_bits() -- suggested by Anshuman.
(4) some code optimization and clarity in
do_page_fault().
(5) fixed a bug when invalidating an hpte slot in
__hash_page_4K() -- noticed by Aneesh.
version v2:
(1) documentation and selftest added.
(2) fixed a bug in the 4k-hpte-backed 64k pte case, where page
invalidation was not done correctly, and
initialization of the second part of the pte was not
done correctly if the pte was not yet hashed
with an hpte. Reported by Aneesh.
(3) fixed the ABI breakage caused in the siginfo
structure. Reported by Anshuman.
version v1: Initial version
Ram Pai (38):
powerpc: Free up four 64K PTE bits in 4K backed HPTE pages
powerpc: Free up four 64K PTE bits in 64K backed HPTE pages
powerpc: introduce pte_set_hash_slot() helper
powerpc: introduce pte_get_hash_gslot() helper
powerpc: capture the PTE format changes in the dump pte report
powerpc: use helper functions in __hash_page_64K() for 64K PTE
powerpc: use helper functions in __hash_page_huge() for 64K PTE
powerpc: use helper functions in __hash_page_4K() for 64K PTE
powerpc: use helper functions in __hash_page_4K() for 4K PTE
powerpc: use helper functions in flush_hash_page()
mm: introduce an additional vma bit for powerpc pkey
mm: ability to disable execute permission on a key at creation
x86: disallow pkey creation with PKEY_DISABLE_EXECUTE
powerpc: initial plumbing for key management
powerpc: helper function to read,write AMR,IAMR,UAMOR registers
powerpc: implementation for arch_set_user_pkey_access()
powerpc: sys_pkey_alloc() and sys_pkey_free() system calls
powerpc: store and restore the pkey state across context switches
powerpc: introduce execute-only pkey
powerpc: ability to associate pkey to a vma
powerpc: implementation for arch_override_mprotect_pkey()
powerpc: map vma key-protection bits to pte key bits.
powerpc: sys_pkey_mprotect() system call
powerpc: Program HPTE key protection bits
powerpc: helper to validate key-access permissions of a pte
powerpc: check key protection for user page access
powerpc: Macro the mask used for checking DSI exception
powerpc: implementation for arch_vma_access_permitted()
powerpc: Handle exceptions caused by pkey violation
powerpc: capture AMR register content on pkey violation
powerpc: introduce get_pte_pkey() helper
powerpc: capture the violated protection key on fault
powerpc: Deliver SEGV signal on pkey violation
procfs: display the protection-key number associated with a vma
selftest: Move protection key selftest to arch neutral directory
selftest: PowerPC specific test updates to memory protection keys
Documentation: Move protection key documentation to arch neutral
directory
Documentation: PowerPC specific updates to memory protection keys
Documentation/vm/protection-keys.txt | 130 +++
Documentation/x86/protection-keys.txt | 85 --
arch/powerpc/Kconfig | 16 +
arch/powerpc/include/asm/book3s/64/hash-4k.h | 20 +
arch/powerpc/include/asm/book3s/64/hash-64k.h | 60 +-
arch/powerpc/include/asm/book3s/64/hash.h | 7 +-
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 10 +
arch/powerpc/include/asm/book3s/64/mmu.h | 10 +
arch/powerpc/include/asm/book3s/64/pgtable.h | 96 ++-
arch/powerpc/include/asm/mman.h | 16 +-
arch/powerpc/include/asm/mmu_context.h | 5 +
arch/powerpc/include/asm/paca.h | 4 +
arch/powerpc/include/asm/pkeys.h | 159 +++
arch/powerpc/include/asm/processor.h | 5 +
arch/powerpc/include/asm/reg.h | 7 +-
arch/powerpc/include/asm/systbl.h | 3 +
arch/powerpc/include/asm/unistd.h | 6 +-
arch/powerpc/include/uapi/asm/ptrace.h | 3 +-
arch/powerpc/include/uapi/asm/unistd.h | 3 +
arch/powerpc/kernel/asm-offsets.c | 6 +
arch/powerpc/kernel/exceptions-64s.S | 2 +-
arch/powerpc/kernel/process.c | 18 +
arch/powerpc/kernel/setup_64.c | 8 +
arch/powerpc/kernel/signal_32.c | 5 +
arch/powerpc/kernel/signal_64.c | 4 +
arch/powerpc/kernel/traps.c | 14 +
arch/powerpc/mm/Makefile | 1 +
arch/powerpc/mm/dump_linuxpagetables.c | 3 +-
arch/powerpc/mm/fault.c | 26 +
arch/powerpc/mm/hash64_4k.c | 14 +-
arch/powerpc/mm/hash64_64k.c | 124 ++-
arch/powerpc/mm/hash_utils_64.c | 68 +-
arch/powerpc/mm/hugetlbpage-hash64.c | 16 +-
arch/powerpc/mm/mmu_context_book3s64.c | 5 +
arch/powerpc/mm/pkeys.c | 243 ++++
arch/x86/kernel/fpu/xstate.c | 3 +
fs/proc/task_mmu.c | 6 +-
include/linux/mm.h | 18 +-
include/uapi/asm-generic/mman-common.h | 4 +-
tools/testing/selftests/vm/Makefile | 1 +
tools/testing/selftests/vm/pkey-helpers.h | 365 ++++++
tools/testing/selftests/vm/protection_keys.c | 1488 +++++++++++++++++++++++++
tools/testing/selftests/x86/Makefile | 2 +-
tools/testing/selftests/x86/pkey-helpers.h | 219 ----
tools/testing/selftests/x86/protection_keys.c | 1395 -----------------------
45 files changed, 2872 insertions(+), 1831 deletions(-)
create mode 100644 Documentation/vm/protection-keys.txt
delete mode 100644 Documentation/x86/protection-keys.txt
create mode 100644 arch/powerpc/include/asm/pkeys.h
create mode 100644 arch/powerpc/mm/pkeys.c
create mode 100644 tools/testing/selftests/vm/pkey-helpers.h
create mode 100644 tools/testing/selftests/vm/protection_keys.c
delete mode 100644 tools/testing/selftests/x86/pkey-helpers.h
delete mode 100644 tools/testing/selftests/x86/protection_keys.c
Rearrange 64K PTE bits to free up bits 3, 4, 5 and 6
in the 64K backed HPTE pages. This, along with the earlier
patch, entirely frees up those four bits in the 64K PTE.
The bit numbers are big-endian, as defined in ISA 3.0.
This patch makes the following changes to a 64K PTE backed
by a 64K HPTE:
H_PAGE_F_SECOND (S), which occupied bit 4, moves to bit 60
in the second part of the pte.
H_PAGE_F_GIX (G,I,X), which occupied bits 5, 6 and 7, also
moves to the second part of the pte, to bits 61,
62 and 63 respectively.
Since bit 7 is now freed up, we move H_PAGE_BUSY (B) from
bit 9 to bit 7.
The second part of the PTE will hold
(H_PAGE_F_SECOND|H_PAGE_F_GIX) at bits 60, 61, 62 and 63.
Before the patch, the 64K HPTE backed 64K PTE format was
as follows:
0 1 2 3 4 5 6 7 8 9 10...........................63
: : : : : : : : : : : :
v v v v v v v v v v v v
,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
|x|x|x| |S |G |I |X |x|B|x|x|x|................|.|.|.|.| <- primary pte
'_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
| | | | | | | | | | | | |..................| | | | | <- secondary pte
'_'_'_'_'__'__'__'__'_'_'_'_'__________________'_'_'_'_'
After the patch, the 64K HPTE backed 64K PTE format is
as follows:
0 1 2 3 4 5 6 7 8 9 10...........................63
: : : : : : : : : : : :
v v v v v v v v v v v v
,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
|x|x|x| | | | |B |x|x|x|x|x|................|.|.|.|.| <- primary pte
'_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
| | | | | | | | | | | | |..................|S|G|I|X| <- secondary pte
'_'_'_'_'__'__'__'__'_'_'_'_'__________________'_'_'_'_'
The above PTE changes apply to hugetlb pages as well.
The patch makes the following code changes:
a) moves H_PAGE_F_SECOND and H_PAGE_F_GIX to the 4k PTE
header, since they are no longer needed by the 64k PTEs.
b) abstracts out __real_pte() and __rpte_to_hidx() so the
caller need not know the bit location of the slot.
c) moves the slot bits to the secondary pte.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/book3s/64/hash-4k.h | 3 ++
arch/powerpc/include/asm/book3s/64/hash-64k.h | 29 ++++++++++-------------
arch/powerpc/include/asm/book3s/64/hash.h | 3 --
arch/powerpc/mm/hash64_64k.c | 30 ++++++++++++++++++------
arch/powerpc/mm/hugetlbpage-hash64.c | 22 ++++++++++++++----
5 files changed, 55 insertions(+), 32 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
index a306c0a..1e60099 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
@@ -16,6 +16,9 @@
#define H_PUD_TABLE_SIZE (sizeof(pud_t) << H_PUD_INDEX_SIZE)
#define H_PGD_TABLE_SIZE (sizeof(pgd_t) << H_PGD_INDEX_SIZE)
+#define H_PAGE_F_GIX_SHIFT 56
+#define H_PAGE_F_SECOND _RPAGE_RSV2 /* HPTE is in 2ndary HPTEG */
+#define H_PAGE_F_GIX (_RPAGE_RSV3 | _RPAGE_RSV4 | _RPAGE_RPN44)
#define H_PAGE_BUSY _RPAGE_RSV1 /* software: PTE & hash are busy */
/* PTE flags to conserve for HPTE identification */
diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h
index 62e580c..c281f18 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
@@ -12,7 +12,7 @@
*/
#define H_PAGE_COMBO _RPAGE_RPN0 /* this is a combo 4k page */
#define H_PAGE_4K_PFN _RPAGE_RPN1 /* PFN is for a single 4k page */
-#define H_PAGE_BUSY _RPAGE_RPN42 /* software: PTE & hash are busy */
+#define H_PAGE_BUSY _RPAGE_RPN44 /* software: PTE & hash are busy */
/*
* We need to differentiate between explicit huge page and THP huge
@@ -21,8 +21,7 @@
#define H_PAGE_THP_HUGE H_PAGE_4K_PFN
/* PTE flags to conserve for HPTE identification */
-#define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_F_SECOND | \
- H_PAGE_F_GIX | H_PAGE_HASHPTE | H_PAGE_COMBO)
+#define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_HASHPTE | H_PAGE_COMBO)
/*
* we support 16 fragments per PTE page of 64K size.
*/
@@ -50,24 +49,22 @@ static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep)
unsigned long *hidxp;
rpte.pte = pte;
- rpte.hidx = 0;
- if (pte_val(pte) & H_PAGE_COMBO) {
- /*
- * Make sure we order the hidx load against the H_PAGE_COMBO
- * check. The store side ordering is done in __hash_page_4K
- */
- smp_rmb();
- hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
- rpte.hidx = *hidxp;
- }
+ /*
+ * Ensure that we do not read the hidx before we read
+ * the pte. Because the writer side is expected
+ * to finish writing the hidx first followed by the pte,
+ * by using smp_wmb().
+ * pte_set_hash_slot() ensures that.
+ */
+ smp_rmb();
+ hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
+ rpte.hidx = *hidxp;
return rpte;
}
static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index)
{
- if ((pte_val(rpte.pte) & H_PAGE_COMBO))
- return (rpte.hidx >> (index<<2)) & 0xf;
- return (pte_val(rpte.pte) >> H_PAGE_F_GIX_SHIFT) & 0xf;
+ return ((rpte.hidx >> (index<<2)) & 0xfUL);
}
#define __rpte_to_pte(r) ((r).pte)
diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index 2d72964..d27f885 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -8,9 +8,6 @@
*
*/
#define H_PTE_NONE_MASK _PAGE_HPTEFLAGS
-#define H_PAGE_F_GIX_SHIFT 56
-#define H_PAGE_F_SECOND _RPAGE_RSV2 /* HPTE is in 2ndary HPTEG */
-#define H_PAGE_F_GIX (_RPAGE_RSV3 | _RPAGE_RSV4 | _RPAGE_RPN44)
#define H_PAGE_HASHPTE _RPAGE_RPN43 /* PTE has associated HPTE */
#ifdef CONFIG_PPC_64K_PAGES
diff --git a/arch/powerpc/mm/hash64_64k.c b/arch/powerpc/mm/hash64_64k.c
index e573bd3..0012618 100644
--- a/arch/powerpc/mm/hash64_64k.c
+++ b/arch/powerpc/mm/hash64_64k.c
@@ -243,6 +243,8 @@ int __hash_page_64K(unsigned long ea, unsigned long access,
unsigned long vsid, pte_t *ptep, unsigned long trap,
unsigned long flags, int ssize)
{
+ real_pte_t rpte;
+ unsigned long *hidxp;
unsigned long hpte_group;
unsigned long rflags, pa;
unsigned long old_pte, new_pte;
@@ -279,6 +281,7 @@ int __hash_page_64K(unsigned long ea, unsigned long access,
} while (!pte_xchg(ptep, __pte(old_pte), __pte(new_pte)));
rflags = htab_convert_pte_flags(new_pte);
+ rpte = __real_pte(__pte(old_pte), ptep);
if (cpu_has_feature(CPU_FTR_NOEXECUTE) &&
!cpu_has_feature(CPU_FTR_COHERENT_ICACHE))
@@ -286,15 +289,17 @@ int __hash_page_64K(unsigned long ea, unsigned long access,
vpn = hpt_vpn(ea, vsid, ssize);
if (unlikely(old_pte & H_PAGE_HASHPTE)) {
- /*
- * There MIGHT be an HPTE for this pte
- */
+ unsigned long hash, slot, hidx;
+
hash = hpt_hash(vpn, shift, ssize);
- if (old_pte & H_PAGE_F_SECOND)
+ hidx = __rpte_to_hidx(rpte, 0);
+ if (hidx & _PTEIDX_SECONDARY)
hash = ~hash;
slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
- slot += (old_pte & H_PAGE_F_GIX) >> H_PAGE_F_GIX_SHIFT;
-
+ slot += hidx & _PTEIDX_GROUP_IX;
+ /*
+ * There MIGHT be an HPTE for this pte
+ */
if (mmu_hash_ops.hpte_updatepp(slot, rflags, vpn, MMU_PAGE_64K,
MMU_PAGE_64K, ssize,
flags) == -1)
@@ -344,9 +349,18 @@ int __hash_page_64K(unsigned long ea, unsigned long access,
MMU_PAGE_64K, MMU_PAGE_64K, old_pte);
return -1;
}
+
+ /*
+ * Insert slot number & secondary bit in PTE second half.
+ */
+ hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
+ rpte.hidx &= ~(0xfUL);
+ *hidxp = rpte.hidx | (slot & 0xfUL);
+ /*
+ * check __real_pte for details on matching smp_rmb()
+ */
+ smp_wmb();
new_pte = (new_pte & ~_PAGE_HPTEFLAGS) | H_PAGE_HASHPTE;
- new_pte |= (slot << H_PAGE_F_GIX_SHIFT) &
- (H_PAGE_F_SECOND | H_PAGE_F_GIX);
}
*ptep = __pte(new_pte & ~H_PAGE_BUSY);
return 0;
diff --git a/arch/powerpc/mm/hugetlbpage-hash64.c b/arch/powerpc/mm/hugetlbpage-hash64.c
index a84bb44..6f7aee3 100644
--- a/arch/powerpc/mm/hugetlbpage-hash64.c
+++ b/arch/powerpc/mm/hugetlbpage-hash64.c
@@ -22,6 +22,8 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
pte_t *ptep, unsigned long trap, unsigned long flags,
int ssize, unsigned int shift, unsigned int mmu_psize)
{
+ real_pte_t rpte;
+ unsigned long *hidxp;
unsigned long vpn;
unsigned long old_pte, new_pte;
unsigned long rflags, pa, sz;
@@ -61,6 +63,7 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
} while(!pte_xchg(ptep, __pte(old_pte), __pte(new_pte)));
rflags = htab_convert_pte_flags(new_pte);
+ rpte = __real_pte(__pte(old_pte), ptep);
sz = ((1UL) << shift);
if (!cpu_has_feature(CPU_FTR_COHERENT_ICACHE))
@@ -71,13 +74,14 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
/* Check if pte already has an hpte (case 2) */
if (unlikely(old_pte & H_PAGE_HASHPTE)) {
/* There MIGHT be an HPTE for this pte */
- unsigned long hash, slot;
+ unsigned long hash, slot, hidx;
hash = hpt_hash(vpn, shift, ssize);
- if (old_pte & H_PAGE_F_SECOND)
+ hidx = __rpte_to_hidx(rpte, 0);
+ if (hidx & _PTEIDX_SECONDARY)
hash = ~hash;
slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
- slot += (old_pte & H_PAGE_F_GIX) >> H_PAGE_F_GIX_SHIFT;
+ slot += hidx & _PTEIDX_GROUP_IX;
if (mmu_hash_ops.hpte_updatepp(slot, rflags, vpn, mmu_psize,
mmu_psize, ssize, flags) == -1)
@@ -106,8 +110,16 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
return -1;
}
- new_pte |= (slot << H_PAGE_F_GIX_SHIFT) &
- (H_PAGE_F_SECOND | H_PAGE_F_GIX);
+ /*
+ * Insert slot number & secondary bit in PTE second half.
+ */
+ hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
+ rpte.hidx &= ~(0xfUL);
+ *hidxp = rpte.hidx | (slot & 0xfUL);
+ /*
+ * check __real_pte for details on matching smp_rmb()
+ */
+ smp_wmb();
}
/*
--
1.7.1
Introduce pte_get_hash_gslot(), which returns the slot number of the
HPTE in the global hash table.
This function will come in handy as we work towards rearranging the
PTE bits in later patches.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/book3s/64/hash.h | 3 +++
arch/powerpc/mm/hash_utils_64.c | 18 ++++++++++++++++++
2 files changed, 21 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index d27f885..277158c 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -156,6 +156,9 @@ static inline int hash__pte_none(pte_t pte)
return (pte_val(pte) & ~H_PTE_NONE_MASK) == 0;
}
+unsigned long pte_get_hash_gslot(unsigned long vpn, unsigned long shift,
+ int ssize, real_pte_t rpte, unsigned int subpg_index);
+
/* This low level function performs the actual PTE insertion
* Setting the PTE depends on the MMU type and other factors. It's
* an horrible mess that I'm not going to try to clean up now but
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 1b494d0..d3604da 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -1591,6 +1591,24 @@ static inline void tm_flush_hash_page(int local)
}
#endif
+/*
+ * return the global hash slot, corresponding to the given
+ * pte, which contains the hpte.
+ */
+unsigned long pte_get_hash_gslot(unsigned long vpn, unsigned long shift,
+ int ssize, real_pte_t rpte, unsigned int subpg_index)
+{
+ unsigned long hash, slot, hidx;
+
+ hash = hpt_hash(vpn, shift, ssize);
+ hidx = __rpte_to_hidx(rpte, subpg_index);
+ if (hidx & _PTEIDX_SECONDARY)
+ hash = ~hash;
+ slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
+ slot += hidx & _PTEIDX_GROUP_IX;
+ return slot;
+}
+
/* WARNING: This is called from hash_low_64.S, if you change this prototype,
* do not forget to update the assembly call site !
*/
--
1.7.1
Replace redundant code in __hash_page_huge() with the helper
functions pte_get_hash_gslot() and pte_set_hash_slot().
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/mm/hugetlbpage-hash64.c | 24 ++++--------------------
1 files changed, 4 insertions(+), 20 deletions(-)
diff --git a/arch/powerpc/mm/hugetlbpage-hash64.c b/arch/powerpc/mm/hugetlbpage-hash64.c
index 6f7aee3..e6dcd50 100644
--- a/arch/powerpc/mm/hugetlbpage-hash64.c
+++ b/arch/powerpc/mm/hugetlbpage-hash64.c
@@ -23,7 +23,6 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
int ssize, unsigned int shift, unsigned int mmu_psize)
{
real_pte_t rpte;
- unsigned long *hidxp;
unsigned long vpn;
unsigned long old_pte, new_pte;
unsigned long rflags, pa, sz;
@@ -74,16 +73,10 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
/* Check if pte already has an hpte (case 2) */
if (unlikely(old_pte & H_PAGE_HASHPTE)) {
/* There MIGHT be an HPTE for this pte */
- unsigned long hash, slot, hidx;
+ unsigned long gslot;
- hash = hpt_hash(vpn, shift, ssize);
- hidx = __rpte_to_hidx(rpte, 0);
- if (hidx & _PTEIDX_SECONDARY)
- hash = ~hash;
- slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
- slot += hidx & _PTEIDX_GROUP_IX;
-
- if (mmu_hash_ops.hpte_updatepp(slot, rflags, vpn, mmu_psize,
+ gslot = pte_get_hash_gslot(vpn, shift, ssize, rpte, 0);
+ if (mmu_hash_ops.hpte_updatepp(gslot, rflags, vpn, mmu_psize,
mmu_psize, ssize, flags) == -1)
old_pte &= ~_PAGE_HPTEFLAGS;
}
@@ -110,16 +103,7 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
return -1;
}
- /*
- * Insert slot number & secondary bit in PTE second half.
- */
- hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
- rpte.hidx &= ~(0xfUL);
- *hidxp = rpte.hidx | (slot & 0xfUL);
- /*
- * check __real_pte for details on matching smp_rmb()
- */
- smp_wmb();
+ new_pte |= pte_set_hash_slot(ptep, rpte, 0, slot);
}
/*
--
1.7.1
Currently sys_pkey_alloc() provides the ability to disable read
and write permission on a key at creation time. powerpc has the
hardware support to disable execute on a pkey as well. This patch
enhances the interface to allow disabling execute at key creation
time. x86 does not allow this; hence the next patch will add the
ability in x86 to return an error if PKEY_DISABLE_EXECUTE is
specified.
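For example, with this change a caller can ask for a key that denies
execute from the moment it is allocated (a sketch, assuming the
pkey_alloc(2) wrapper; on x86 this request fails per the next patch):

	/* request execute-deny at key creation */
	int pkey = pkey_alloc(0, PKEY_DISABLE_EXECUTE);
	if (pkey < 0)
		perror("pkey_alloc");	/* e.g. rejected on x86 */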
Signed-off-by: Ram Pai <[email protected]>
---
include/uapi/asm-generic/mman-common.h | 4 +++-
1 files changed, 3 insertions(+), 1 deletions(-)
diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h
index 8c27db0..bf4fa07 100644
--- a/include/uapi/asm-generic/mman-common.h
+++ b/include/uapi/asm-generic/mman-common.h
@@ -74,7 +74,9 @@
#define PKEY_DISABLE_ACCESS 0x1
#define PKEY_DISABLE_WRITE 0x2
+#define PKEY_DISABLE_EXECUTE 0x4
#define PKEY_ACCESS_MASK (PKEY_DISABLE_ACCESS |\
- PKEY_DISABLE_WRITE)
+ PKEY_DISABLE_WRITE |\
+ PKEY_DISABLE_EXECUTE)
#endif /* __ASM_GENERIC_MMAN_COMMON_H */
--
1.7.1
Arch-independent code calls arch_override_mprotect_pkey()
to obtain a pkey that best matches the requested protection.
This patch provides the powerpc implementation.
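For clarity, the behavior implemented below can be summarized from
the caller's point of view (an illustrative comment, not code taken
from the patch):

	/*
	 * pkey == -1 means a plain mprotect(); anything else came
	 * from sys_pkey_mprotect():
	 *
	 *   explicit pkey                      -> honored unchanged
	 *   PROT_EXEC only                     -> execute-only pkey,
	 *                                         if one is available
	 *   PROT_READ/PROT_WRITE on a vma that
	 *   holds the execute-only pkey        -> default pkey 0
	 *   anything else                      -> vma's current pkey
	 */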
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/pkeys.h | 10 ++++++-
arch/powerpc/mm/pkeys.c | 47 ++++++++++++++++++++++++++++++++++++++
2 files changed, 55 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index f148e84..20846c2 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -13,6 +13,11 @@ static inline u64 pkey_to_vmflag_bits(u16 pkey)
((pkey & 0x10UL) ? VM_PKEY_BIT4 : 0x0UL));
}
+static inline int vma_pkey(struct vm_area_struct *vma)
+{
+ return (vma->vm_flags & ARCH_VM_PKEY_FLAGS) >> VM_PKEY_SHIFT;
+}
+
#define arch_max_pkey() 32
#define AMR_AD_BIT 0x1UL
#define AMR_WD_BIT 0x2UL
@@ -102,11 +107,12 @@ static inline int execute_only_pkey(struct mm_struct *mm)
return __execute_only_pkey(mm);
}
-
+extern int __arch_override_mprotect_pkey(struct vm_area_struct *vma,
+ int prot, int pkey);
static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma,
int prot, int pkey)
{
- return 0;
+ return __arch_override_mprotect_pkey(vma, prot, pkey);
}
extern int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
index 6c90317..c60a045 100644
--- a/arch/powerpc/mm/pkeys.c
+++ b/arch/powerpc/mm/pkeys.c
@@ -123,3 +123,50 @@ int __execute_only_pkey(struct mm_struct *mm)
mm->context.execute_only_pkey = execute_only_pkey;
return execute_only_pkey;
}
+
+static inline bool vma_is_pkey_exec_only(struct vm_area_struct *vma)
+{
+ /* Do this check first since the vm_flags should be hot */
+ if ((vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC)) != VM_EXEC)
+ return false;
+
+ return (vma_pkey(vma) == vma->vm_mm->context.execute_only_pkey);
+}
+
+/*
+ * This should only be called for *plain* mprotect calls.
+ */
+int __arch_override_mprotect_pkey(struct vm_area_struct *vma, int prot,
+ int pkey)
+{
+ /*
+ * Is this an mprotect_pkey() call? If so, never
+ * override the value that came from the user.
+ */
+ if (pkey != -1)
+ return pkey;
+
+ /*
+ * If the currently associated pkey is execute-only,
+ * but the requested protection requires read or write,
+ * move it back to the default pkey.
+ */
+ if (vma_is_pkey_exec_only(vma) &&
+ (prot & (PROT_READ|PROT_WRITE)))
+ return 0;
+
+ /*
+ * the requested protection is execute-only. Hence
+ * lets use a execute-only pkey.
+ */
+ if (prot == PROT_EXEC) {
+ pkey = execute_only_pkey(vma->vm_mm);
+ if (pkey > 0)
+ return pkey;
+ }
+
+ /*
+ * nothing to override.
+ */
+ return vma_pkey(vma);
+}
--
1.7.1
Replace the magic number used to check for a DSI exception
with a meaningful value.
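(The old literal was not arbitrary: 0x80000000 | 0x20000000 |
0x04000000 | 0x00100000 = 0xa4100000, i.e. DSISR_BIT32 |
DSISR_PAGEATTR_CONFLT | DSISR_BADACCESS | DSISR_BIT43, whose high
halfword is exactly the 0xa410 being replaced below.)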
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/reg.h | 7 ++++++-
arch/powerpc/kernel/exceptions-64s.S | 2 +-
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index 7e50e47..ba110dd 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -272,16 +272,21 @@
#define SPRN_DAR 0x013 /* Data Address Register */
#define SPRN_DBCR 0x136 /* e300 Data Breakpoint Control Reg */
#define SPRN_DSISR 0x012 /* Data Storage Interrupt Status Register */
+#define DSISR_BIT32 0x80000000 /* not defined */
#define DSISR_NOHPTE 0x40000000 /* no translation found */
+#define DSISR_PAGEATTR_CONFLT 0x20000000 /* page attribute conflict */
+#define DSISR_BIT35 0x10000000 /* not defined */
#define DSISR_PROTFAULT 0x08000000 /* protection fault */
#define DSISR_BADACCESS 0x04000000 /* bad access to CI or G */
#define DSISR_ISSTORE 0x02000000 /* access was a store */
#define DSISR_DABRMATCH 0x00400000 /* hit data breakpoint */
-#define DSISR_NOSEGMENT 0x00200000 /* SLB miss */
#define DSISR_KEYFAULT 0x00200000 /* Key fault */
+#define DSISR_BIT43 0x00100000 /* not defined */
#define DSISR_UNSUPP_MMU 0x00080000 /* Unsupported MMU config */
#define DSISR_SET_RC 0x00040000 /* Failed setting of R/C bits */
#define DSISR_PGDIRFAULT 0x00020000 /* Fault on page directory */
+#define DSISR_PAGE_FAULT_MASK (DSISR_BIT32 | DSISR_PAGEATTR_CONFLT | \
+ DSISR_BADACCESS | DSISR_BIT43)
#define SPRN_TBRL 0x10C /* Time Base Read Lower Register (user, R/O) */
#define SPRN_TBRU 0x10D /* Time Base Read Upper Register (user, R/O) */
#define SPRN_CIR 0x11B /* Chip Information Register (hyper, R/0) */
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index ae418b8..3fd0528 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1411,7 +1411,7 @@ USE_TEXT_SECTION()
.balign IFETCH_ALIGN_BYTES
do_hash_page:
#ifdef CONFIG_PPC_STD_MMU_64
- andis. r0,r4,0xa410 /* weird error? */
+ andis. r0,r4,DSISR_PAGE_FAULT_MASK@h
bne- handle_page_fault /* if not, try to insert a HPTE */
andis. r0,r4,DSISR_DABRMATCH@h
bne- handle_dabr_fault
--
1.7.1
Handle Data and Instruction exceptions caused by memory
protection keys.
The CPU detects the key fault if the HPTE is already
programmed with the key.
However, if the HPTE is not yet hashed in, the hardware will
not detect the key fault. Software detects the
pkey violation in such a case.
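For reference, a userspace sketch of how such a violation surfaces,
assuming the SEGV_PKUERR si_code delivery added later in this series:

	#include <signal.h>
	#include <unistd.h>

	static void segv_handler(int sig, siginfo_t *si, void *ctx)
	{
		if (si->si_code == SEGV_PKUERR)
			write(2, "pkey violation\n", 15);
		_exit(1);
	}

	/* in main(): */
	struct sigaction sa = {
		.sa_sigaction	= segv_handler,
		.sa_flags	= SA_SIGINFO,
	};
	sigaction(SIGSEGV, &sa, NULL);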
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/reg.h | 2 +-
arch/powerpc/mm/fault.c | 21 +++++++++++++++++++++
2 files changed, 22 insertions(+), 1 deletions(-)
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index ba110dd..6e2a860 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -286,7 +286,7 @@
#define DSISR_SET_RC 0x00040000 /* Failed setting of R/C bits */
#define DSISR_PGDIRFAULT 0x00020000 /* Fault on page directory */
#define DSISR_PAGE_FAULT_MASK (DSISR_BIT32 | DSISR_PAGEATTR_CONFLT | \
- DSISR_BADACCESS | DSISR_BIT43)
+ DSISR_BADACCESS | DSISR_KEYFAULT | DSISR_BIT43)
#define SPRN_TBRL 0x10C /* Time Base Read Lower Register (user, R/O) */
#define SPRN_TBRU 0x10D /* Time Base Read Upper Register (user, R/O) */
#define SPRN_CIR 0x11B /* Chip Information Register (hyper, R/0) */
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 3a7d580..ea74fe2 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -261,6 +261,13 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
}
#endif
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ if (error_code & DSISR_KEYFAULT) {
+ code = SEGV_PKUERR;
+ goto bad_area_nosemaphore;
+ }
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
+
/* We restore the interrupt state now */
if (!arch_irq_disabled_regs(regs))
local_irq_enable();
@@ -441,6 +448,20 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
WARN_ON_ONCE(error_code & DSISR_PROTFAULT);
#endif /* CONFIG_PPC_STD_MMU */
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
+ is_exec, 0)) {
+ code = SEGV_PKUERR;
+ goto bad_area;
+ }
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
+
+
+ /* handle_mm_fault() needs to know if it is an instruction
+ * access fault.
+ */
+ if (is_exec)
+ flags |= FAULT_FLAG_INSTRUCTION;
/*
* If for any reason at all we couldn't handle the fault,
* make sure we exit gracefully rather than endlessly redo
--
1.7.1
Capture the AMR register contents and save them in the paca
whenever a pkey violation is detected.
This value will be needed to deliver the pkey-violation
signal to the task.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/paca.h | 3 +++
arch/powerpc/kernel/asm-offsets.c | 5 +++++
arch/powerpc/mm/fault.c | 2 ++
3 files changed, 10 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index 1c09f8f..c8bd1fc 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -92,6 +92,9 @@ struct paca_struct {
struct dtl_entry *dispatch_log_end;
#endif /* CONFIG_PPC_STD_MMU_64 */
u64 dscr_default; /* per-CPU default DSCR */
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ u64 paca_amr; /* value of amr at exception */
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
#ifdef CONFIG_PPC_STD_MMU_64
/*
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 709e234..17f5d8a 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -241,6 +241,11 @@ int main(void)
OFFSET(PACAHWCPUID, paca_struct, hw_cpu_id);
OFFSET(PACAKEXECSTATE, paca_struct, kexec_state);
OFFSET(PACA_DSCR_DEFAULT, paca_struct, dscr_default);
+
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ OFFSET(PACA_AMR, paca_struct, paca_amr);
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
+
OFFSET(ACCOUNT_STARTTIME, paca_struct, accounting.starttime);
OFFSET(ACCOUNT_STARTTIME_USER, paca_struct, accounting.starttime_user);
OFFSET(ACCOUNT_USER_TIME, paca_struct, accounting.utime);
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index ea74fe2..a6710f5 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -264,6 +264,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
if (error_code & DSISR_KEYFAULT) {
code = SEGV_PKUERR;
+ get_paca()->paca_amr = read_amr();
goto bad_area_nosemaphore;
}
#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
@@ -451,6 +452,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
is_exec, 0)) {
+ get_paca()->paca_amr = read_amr();
code = SEGV_PKUERR;
goto bad_area;
}
--
1.7.1
Map the PTE protection key bits to the HPTE key protection bits
while creating HPTE entries.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 5 +++++
arch/powerpc/include/asm/pkeys.h | 9 +++++++++
arch/powerpc/mm/hash_utils_64.c | 5 +++++
3 files changed, 19 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
index 6981a52..f7a6ed3 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
@@ -90,6 +90,8 @@
#define HPTE_R_PP0 ASM_CONST(0x8000000000000000)
#define HPTE_R_TS ASM_CONST(0x4000000000000000)
#define HPTE_R_KEY_HI ASM_CONST(0x3000000000000000)
+#define HPTE_R_KEY_BIT0 ASM_CONST(0x2000000000000000)
+#define HPTE_R_KEY_BIT1 ASM_CONST(0x1000000000000000)
#define HPTE_R_RPN_SHIFT 12
#define HPTE_R_RPN ASM_CONST(0x0ffffffffffff000)
#define HPTE_R_RPN_3_0 ASM_CONST(0x01fffffffffff000)
@@ -104,6 +106,9 @@
#define HPTE_R_C ASM_CONST(0x0000000000000080)
#define HPTE_R_R ASM_CONST(0x0000000000000100)
#define HPTE_R_KEY_LO ASM_CONST(0x0000000000000e00)
+#define HPTE_R_KEY_BIT2 ASM_CONST(0x0000000000000800)
+#define HPTE_R_KEY_BIT3 ASM_CONST(0x0000000000000400)
+#define HPTE_R_KEY_BIT4 ASM_CONST(0x0000000000000200)
#define HPTE_V_1TB_SEG ASM_CONST(0x4000000000000000)
#define HPTE_V_VRMA_MASK ASM_CONST(0x4001ffffff000000)
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index c681de9..6477b87 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -22,6 +22,15 @@ static inline u64 vmflag_to_page_pkey_bits(u64 vm_flags)
((vm_flags & VM_PKEY_BIT4) ? H_PAGE_PKEY_BIT0 : 0x0UL));
}
+static inline u64 pte_to_hpte_pkey_bits(u64 pteflags)
+{
+ return (((pteflags & H_PAGE_PKEY_BIT0) ? HPTE_R_KEY_BIT0 : 0x0UL) |
+ ((pteflags & H_PAGE_PKEY_BIT1) ? HPTE_R_KEY_BIT1 : 0x0UL) |
+ ((pteflags & H_PAGE_PKEY_BIT2) ? HPTE_R_KEY_BIT2 : 0x0UL) |
+ ((pteflags & H_PAGE_PKEY_BIT3) ? HPTE_R_KEY_BIT3 : 0x0UL) |
+ ((pteflags & H_PAGE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0x0UL));
+}
+
static inline int vma_pkey(struct vm_area_struct *vma)
{
return (vma->vm_flags & ARCH_VM_PKEY_FLAGS) >> VM_PKEY_SHIFT;
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index d863696..1e74529 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -35,6 +35,7 @@
#include <linux/memblock.h>
#include <linux/context_tracking.h>
#include <linux/libfdt.h>
+#include <linux/pkeys.h>
#include <asm/debugfs.h>
#include <asm/processor.h>
@@ -230,6 +231,10 @@ unsigned long htab_convert_pte_flags(unsigned long pteflags)
*/
rflags |= HPTE_R_M;
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ rflags |= pte_to_hpte_pkey_bits(pteflags);
+#endif
+
return rflags;
}
--
1.7.1
Move the protection key selftest to the arch-neutral vm directory,
since it is no longer x86-only.

Signed-off-by: Ram Pai <[email protected]>
---
tools/testing/selftests/vm/Makefile | 1 +
tools/testing/selftests/vm/pkey-helpers.h | 219 ++++
tools/testing/selftests/vm/protection_keys.c | 1395 +++++++++++++++++++++++++
tools/testing/selftests/x86/Makefile | 2 +-
tools/testing/selftests/x86/pkey-helpers.h | 219 ----
tools/testing/selftests/x86/protection_keys.c | 1395 -------------------------
6 files changed, 1616 insertions(+), 1615 deletions(-)
create mode 100644 tools/testing/selftests/vm/pkey-helpers.h
create mode 100644 tools/testing/selftests/vm/protection_keys.c
delete mode 100644 tools/testing/selftests/x86/pkey-helpers.h
delete mode 100644 tools/testing/selftests/x86/protection_keys.c
diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
index cbb29e4..1d32f78 100644
--- a/tools/testing/selftests/vm/Makefile
+++ b/tools/testing/selftests/vm/Makefile
@@ -17,6 +17,7 @@ TEST_GEN_FILES += transhuge-stress
TEST_GEN_FILES += userfaultfd
TEST_GEN_FILES += mlock-random-test
TEST_GEN_FILES += virtual_address_range
+TEST_GEN_FILES += protection_keys
TEST_PROGS := run_vmtests
diff --git a/tools/testing/selftests/vm/pkey-helpers.h b/tools/testing/selftests/vm/pkey-helpers.h
new file mode 100644
index 0000000..b202939
--- /dev/null
+++ b/tools/testing/selftests/vm/pkey-helpers.h
@@ -0,0 +1,219 @@
+#ifndef _PKEYS_HELPER_H
+#define _PKEYS_HELPER_H
+#define _GNU_SOURCE
+#include <string.h>
+#include <stdarg.h>
+#include <stdio.h>
+#include <stdint.h>
+#include <stdbool.h>
+#include <signal.h>
+#include <assert.h>
+#include <stdlib.h>
+#include <ucontext.h>
+#include <sys/mman.h>
+
+#define NR_PKEYS 16
+#define PKRU_BITS_PER_PKEY 2
+
+#ifndef DEBUG_LEVEL
+#define DEBUG_LEVEL 0
+#endif
+#define DPRINT_IN_SIGNAL_BUF_SIZE 4096
+extern int dprint_in_signal;
+extern char dprint_in_signal_buffer[DPRINT_IN_SIGNAL_BUF_SIZE];
+static inline void sigsafe_printf(const char *format, ...)
+{
+ va_list ap;
+
+ va_start(ap, format);
+ if (!dprint_in_signal) {
+ vprintf(format, ap);
+ } else {
+ int len = vsnprintf(dprint_in_signal_buffer,
+ DPRINT_IN_SIGNAL_BUF_SIZE,
+ format, ap);
+ /*
+ * len is amount that would have been printed,
+ * but actual write is truncated at BUF_SIZE.
+ */
+ if (len > DPRINT_IN_SIGNAL_BUF_SIZE)
+ len = DPRINT_IN_SIGNAL_BUF_SIZE;
+ write(1, dprint_in_signal_buffer, len);
+ }
+ va_end(ap);
+}
+#define dprintf_level(level, args...) do { \
+ if (level <= DEBUG_LEVEL) \
+ sigsafe_printf(args); \
+ fflush(NULL); \
+} while (0)
+#define dprintf0(args...) dprintf_level(0, args)
+#define dprintf1(args...) dprintf_level(1, args)
+#define dprintf2(args...) dprintf_level(2, args)
+#define dprintf3(args...) dprintf_level(3, args)
+#define dprintf4(args...) dprintf_level(4, args)
+
+extern unsigned int shadow_pkru;
+static inline unsigned int __rdpkru(void)
+{
+ unsigned int eax, edx;
+ unsigned int ecx = 0;
+ unsigned int pkru;
+
+ asm volatile(".byte 0x0f,0x01,0xee\n\t"
+ : "=a" (eax), "=d" (edx)
+ : "c" (ecx));
+ pkru = eax;
+ return pkru;
+}
+
+static inline unsigned int _rdpkru(int line)
+{
+ unsigned int pkru = __rdpkru();
+
+ dprintf4("rdpkru(line=%d) pkru: %x shadow: %x\n",
+ line, pkru, shadow_pkru);
+ assert(pkru == shadow_pkru);
+
+ return pkru;
+}
+
+#define rdpkru() _rdpkru(__LINE__)
+
+static inline void __wrpkru(unsigned int pkru)
+{
+ unsigned int eax = pkru;
+ unsigned int ecx = 0;
+ unsigned int edx = 0;
+
+ dprintf4("%s() changing %08x to %08x\n", __func__, __rdpkru(), pkru);
+ asm volatile(".byte 0x0f,0x01,0xef\n\t"
+ : : "a" (eax), "c" (ecx), "d" (edx));
+ assert(pkru == __rdpkru());
+}
+
+static inline void wrpkru(unsigned int pkru)
+{
+ dprintf4("%s() changing %08x to %08x\n", __func__, __rdpkru(), pkru);
+ /* will do the shadow check for us: */
+ rdpkru();
+ __wrpkru(pkru);
+ shadow_pkru = pkru;
+ dprintf4("%s(%08x) pkru: %08x\n", __func__, pkru, __rdpkru());
+}
+
+/*
+ * These are technically racy, since something could
+ * change PKRU between the read and the write.
+ */
+static inline void __pkey_access_allow(int pkey, int do_allow)
+{
+ unsigned int pkru = rdpkru();
+ int bit = pkey * 2;
+
+ if (do_allow)
+ pkru &= (1<<bit);
+ else
+ pkru |= (1<<bit);
+
+ dprintf4("pkru now: %08x\n", rdpkru());
+ wrpkru(pkru);
+}
+
+static inline void __pkey_write_allow(int pkey, int do_allow_write)
+{
+ long pkru = rdpkru();
+ int bit = pkey * 2 + 1;
+
+ if (do_allow_write)
+ pkru &= (1<<bit);
+ else
+ pkru |= (1<<bit);
+
+ wrpkru(pkru);
+ dprintf4("pkru now: %08x\n", rdpkru());
+}
+
+#define PROT_PKEY0 0x10 /* protection key value (bit 0) */
+#define PROT_PKEY1 0x20 /* protection key value (bit 1) */
+#define PROT_PKEY2 0x40 /* protection key value (bit 2) */
+#define PROT_PKEY3 0x80 /* protection key value (bit 3) */
+
+#define PAGE_SIZE 4096
+#define MB (1<<20)
+
+static inline void __cpuid(unsigned int *eax, unsigned int *ebx,
+ unsigned int *ecx, unsigned int *edx)
+{
+ /* ecx is often an input as well as an output. */
+ asm volatile(
+ "cpuid;"
+ : "=a" (*eax),
+ "=b" (*ebx),
+ "=c" (*ecx),
+ "=d" (*edx)
+ : "0" (*eax), "2" (*ecx));
+}
+
+/* Intel-defined CPU features, CPUID level 0x00000007:0 (ecx) */
+#define X86_FEATURE_PKU (1<<3) /* Protection Keys for Userspace */
+#define X86_FEATURE_OSPKE (1<<4) /* OS Protection Keys Enable */
+
+static inline int cpu_has_pku(void)
+{
+ unsigned int eax;
+ unsigned int ebx;
+ unsigned int ecx;
+ unsigned int edx;
+
+ eax = 0x7;
+ ecx = 0x0;
+ __cpuid(&eax, &ebx, &ecx, &edx);
+
+ if (!(ecx & X86_FEATURE_PKU)) {
+ dprintf2("cpu does not have PKU\n");
+ return 0;
+ }
+ if (!(ecx & X86_FEATURE_OSPKE)) {
+ dprintf2("cpu does not have OSPKE\n");
+ return 0;
+ }
+ return 1;
+}
+
+#define XSTATE_PKRU_BIT (9)
+#define XSTATE_PKRU 0x200
+
+int pkru_xstate_offset(void)
+{
+ unsigned int eax;
+ unsigned int ebx;
+ unsigned int ecx;
+ unsigned int edx;
+ int xstate_offset;
+ int xstate_size;
+ unsigned long XSTATE_CPUID = 0xd;
+ int leaf;
+
+ /* assume that XSTATE_PKRU is set in XCR0 */
+ leaf = XSTATE_PKRU_BIT;
+ {
+ eax = XSTATE_CPUID;
+ ecx = leaf;
+ __cpuid(&eax, &ebx, &ecx, &edx);
+
+ if (leaf == XSTATE_PKRU_BIT) {
+ xstate_offset = ebx;
+ xstate_size = eax;
+ }
+ }
+
+ if (xstate_size == 0) {
+ printf("could not find size/offset of PKRU in xsave state\n");
+ return 0;
+ }
+
+ return xstate_offset;
+}
+
+#endif /* _PKEYS_HELPER_H */
diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c
new file mode 100644
index 0000000..3237bc0
--- /dev/null
+++ b/tools/testing/selftests/vm/protection_keys.c
@@ -0,0 +1,1395 @@
+/*
+ * Tests x86 Memory Protection Keys (see Documentation/x86/protection-keys.txt)
+ *
+ * There are examples in here of:
+ * * how to set protection keys on memory
+ * * how to set/clear bits in PKRU (the rights register)
+ * * how to handle SEGV_PKRU signals and extract pkey-relevant
+ * information from the siginfo
+ *
+ * Things to add:
+ * make sure KSM and KSM COW breaking works
+ * prefault pages in at malloc, or not
+ * protect MPX bounds tables with protection keys?
+ * make sure VMA splitting/merging is working correctly
+ * OOMs can destroy mm->mmap (see exit_mmap()), so make sure it is immune to pkeys
+ * look for pkey "leaks" where it is still set on a VMA but "freed" back to the kernel
+ * do a plain mprotect() to a mprotect_pkey() area and make sure the pkey sticks
+ *
+ * Compile like this:
+ * gcc -o protection_keys -O2 -g -std=gnu99 -pthread -Wall protection_keys.c -lrt -ldl -lm
+ * gcc -m32 -o protection_keys_32 -O2 -g -std=gnu99 -pthread -Wall protection_keys.c -lrt -ldl -lm
+ */
+#define _GNU_SOURCE
+#include <errno.h>
+#include <linux/futex.h>
+#include <sys/time.h>
+#include <sys/syscall.h>
+#include <string.h>
+#include <stdio.h>
+#include <stdint.h>
+#include <stdbool.h>
+#include <signal.h>
+#include <assert.h>
+#include <stdlib.h>
+#include <ucontext.h>
+#include <sys/mman.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/ptrace.h>
+#include <setjmp.h>
+
+#include "pkey-helpers.h"
+
+int iteration_nr = 1;
+int test_nr;
+
+unsigned int shadow_pkru;
+
+#define HPAGE_SIZE (1UL<<21)
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof(*(x)))
+#define ALIGN_UP(x, align_to) (((x) + ((align_to)-1)) & ~((align_to)-1))
+#define ALIGN_DOWN(x, align_to) ((x) & ~((align_to)-1))
+#define ALIGN_PTR_UP(p, ptr_align_to) ((typeof(p))ALIGN_UP((unsigned long)(p), ptr_align_to))
+#define ALIGN_PTR_DOWN(p, ptr_align_to) ((typeof(p))ALIGN_DOWN((unsigned long)(p), ptr_align_to))
+#define __stringify_1(x...) #x
+#define __stringify(x...) __stringify_1(x)
+
+#define PTR_ERR_ENOTSUP ((void *)-ENOTSUP)
+
+int dprint_in_signal;
+char dprint_in_signal_buffer[DPRINT_IN_SIGNAL_BUF_SIZE];
+
+extern void abort_hooks(void);
+#define pkey_assert(condition) do { \
+ if (!(condition)) { \
+ dprintf0("assert() at %s::%d test_nr: %d iteration: %d\n", \
+ __FILE__, __LINE__, \
+ test_nr, iteration_nr); \
+ dprintf0("errno at assert: %d", errno); \
+ abort_hooks(); \
+ assert(condition); \
+ } \
+} while (0)
+#define raw_assert(cond) assert(cond)
+
+void cat_into_file(char *str, char *file)
+{
+ int fd = open(file, O_RDWR);
+ int ret;
+
+ dprintf2("%s(): writing '%s' to '%s'\n", __func__, str, file);
+ /*
+ * these need to be raw because they are called under
+ * pkey_assert()
+ */
+ raw_assert(fd >= 0);
+ ret = write(fd, str, strlen(str));
+ if (ret != strlen(str)) {
+ perror("write to file failed");
+ fprintf(stderr, "filename: '%s' str: '%s'\n", file, str);
+ raw_assert(0);
+ }
+ close(fd);
+}
+
+#if CONTROL_TRACING > 0
+static int warned_tracing;
+int tracing_root_ok(void)
+{
+ if (geteuid() != 0) {
+ if (!warned_tracing)
+ fprintf(stderr, "WARNING: not run as root, "
+ "can not do tracing control\n");
+ warned_tracing = 1;
+ return 0;
+ }
+ return 1;
+}
+#endif
+
+void tracing_on(void)
+{
+#if CONTROL_TRACING > 0
+#define TRACEDIR "/sys/kernel/debug/tracing"
+ char pidstr[32];
+
+ if (!tracing_root_ok())
+ return;
+
+ sprintf(pidstr, "%d", getpid());
+ cat_into_file("0", TRACEDIR "/tracing_on");
+ cat_into_file("\n", TRACEDIR "/trace");
+ if (1) {
+ cat_into_file("function_graph", TRACEDIR "/current_tracer");
+ cat_into_file("1", TRACEDIR "/options/funcgraph-proc");
+ } else {
+ cat_into_file("nop", TRACEDIR "/current_tracer");
+ }
+ cat_into_file(pidstr, TRACEDIR "/set_ftrace_pid");
+ cat_into_file("1", TRACEDIR "/tracing_on");
+ dprintf1("enabled tracing\n");
+#endif
+}
+
+void tracing_off(void)
+{
+#if CONTROL_TRACING > 0
+ if (!tracing_root_ok())
+ return;
+ cat_into_file("0", "/sys/kernel/debug/tracing/tracing_on");
+#endif
+}
+
+void abort_hooks(void)
+{
+ fprintf(stderr, "running %s()...\n", __func__);
+ tracing_off();
+#ifdef SLEEP_ON_ABORT
+ sleep(SLEEP_ON_ABORT);
+#endif
+}
+
+static inline void __page_o_noops(void)
+{
+ /* 8 bytes of instruction * 512 repeats = 1 page (4096 bytes) */
+ asm(".rept 512 ; nopl 0x7eeeeeee(%eax) ; .endr");
+}
+
+/*
+ * This attempts to have roughly a page of instructions followed by a few
+ * instructions that do a write, and another page of instructions. That
+ * way, we are pretty sure that the write is in the second page of
+ * instructions and has at least a page of padding behind it.
+ *
+ * *That* lets us be sure to madvise() away the write instruction, which
+ * will then fault, which makes sure that the fault code handles
+ * execute-only memory properly.
+ */
+__attribute__((__aligned__(PAGE_SIZE)))
+void lots_o_noops_around_write(int *write_to_me)
+{
+ dprintf3("running %s()\n", __func__);
+ __page_o_noops();
+ /* Assume this happens in the second page of instructions: */
+ *write_to_me = __LINE__;
+ /* pad out by another page: */
+ __page_o_noops();
+ dprintf3("%s() done\n", __func__);
+}
+
+/* Define some kernel-like types */
+#define u8 uint8_t
+#define u16 uint16_t
+#define u32 uint32_t
+#define u64 uint64_t
+
+#ifdef __i386__
+#define SYS_mprotect_key 380
+#define SYS_pkey_alloc 381
+#define SYS_pkey_free 382
+#define REG_IP_IDX REG_EIP
+#define si_pkey_offset 0x14
+#else
+#define SYS_mprotect_key 329
+#define SYS_pkey_alloc 330
+#define SYS_pkey_free 331
+#define REG_IP_IDX REG_RIP
+#define si_pkey_offset 0x20
+#endif
+
+void dump_mem(void *dumpme, int len_bytes)
+{
+ char *c = (void *)dumpme;
+ int i;
+
+ for (i = 0; i < len_bytes; i += sizeof(u64)) {
+ u64 *ptr = (u64 *)(c + i);
+ dprintf1("dump[%03d][@%p]: %016jx\n", i, ptr, *ptr);
+ }
+}
+
+#define __SI_FAULT (3 << 16)
+#define SEGV_BNDERR (__SI_FAULT|3) /* failed address bound checks */
+#define SEGV_PKUERR (__SI_FAULT|4)
+
+static char *si_code_str(int si_code)
+{
+ if (si_code & SEGV_MAPERR)
+ return "SEGV_MAPERR";
+ if (si_code & SEGV_ACCERR)
+ return "SEGV_ACCERR";
+ if (si_code & SEGV_BNDERR)
+ return "SEGV_BNDERR";
+ if (si_code & SEGV_PKUERR)
+ return "SEGV_PKUERR";
+ return "UNKNOWN";
+}
+
+int pkru_faults;
+int last_si_pkey = -1;
+void signal_handler(int signum, siginfo_t *si, void *vucontext)
+{
+ ucontext_t *uctxt = vucontext;
+ int trapno;
+ unsigned long ip;
+ char *fpregs;
+ u32 *pkru_ptr;
+ u64 si_pkey;
+ u32 *si_pkey_ptr;
+ int pkru_offset;
+ fpregset_t fpregset;
+
+ dprint_in_signal = 1;
+ dprintf1(">>>>===============SIGSEGV============================\n");
+ dprintf1("%s()::%d, pkru: 0x%x shadow: %x\n", __func__, __LINE__,
+ __rdpkru(), shadow_pkru);
+
+ trapno = uctxt->uc_mcontext.gregs[REG_TRAPNO];
+ ip = uctxt->uc_mcontext.gregs[REG_IP_IDX];
+ fpregset = uctxt->uc_mcontext.fpregs;
+ fpregs = (void *)fpregset;
+
+ dprintf2("%s() trapno: %d ip: 0x%lx info->si_code: %s/%d\n", __func__,
+ trapno, ip, si_code_str(si->si_code), si->si_code);
+#ifdef __i386__
+ /*
+ * 32-bit has some extra padding so that userspace can tell whether
+ * the XSTATE header is present in addition to the "legacy" FPU
+ * state. We just assume that it is here.
+ */
+ fpregs += 0x70;
+#endif
+ pkru_offset = pkru_xstate_offset();
+ pkru_ptr = (void *)(&fpregs[pkru_offset]);
+
+ dprintf1("siginfo: %p\n", si);
+ dprintf1(" fpregs: %p\n", fpregs);
+ /*
+ * If we got a PKRU fault, we *HAVE* to have at least one bit set in
+ * here.
+ */
+ dprintf1("pkru_xstate_offset: %d\n", pkru_xstate_offset());
+ if (DEBUG_LEVEL > 4)
+ dump_mem(pkru_ptr - 128, 256);
+ pkey_assert(*pkru_ptr);
+
+ si_pkey_ptr = (u32 *)(((u8 *)si) + si_pkey_offset);
+ dprintf1("si_pkey_ptr: %p\n", si_pkey_ptr);
+ dump_mem(si_pkey_ptr - 8, 24);
+ si_pkey = *si_pkey_ptr;
+ pkey_assert(si_pkey < NR_PKEYS);
+ last_si_pkey = si_pkey;
+
+ if ((si->si_code == SEGV_MAPERR) ||
+ (si->si_code == SEGV_ACCERR) ||
+ (si->si_code == SEGV_BNDERR)) {
+ printf("non-PK si_code, exiting...\n");
+ exit(4);
+ }
+
+ dprintf1("signal pkru from xsave: %08x\n", *pkru_ptr);
+ /* need __rdpkru() version so we do not do shadow_pkru checking */
+ dprintf1("signal pkru from pkru: %08x\n", __rdpkru());
+ dprintf1("si_pkey from siginfo: %jx\n", si_pkey);
+ *(u64 *)pkru_ptr = 0x00000000;
+ dprintf1("WARNING: set PRKU=0 to allow faulting instruction to continue\n");
+ pkru_faults++;
+ dprintf1("<<<<==================================================\n");
+ return;
+ if (trapno == 14) {
+ fprintf(stderr,
+ "ERROR: In signal handler, page fault, trapno = %d, ip = %016lx\n",
+ trapno, ip);
+ fprintf(stderr, "si_addr %p\n", si->si_addr);
+ fprintf(stderr, "REG_ERR: %lx\n",
+ (unsigned long)uctxt->uc_mcontext.gregs[REG_ERR]);
+ exit(1);
+ } else {
+ fprintf(stderr, "unexpected trap %d! at 0x%lx\n", trapno, ip);
+ fprintf(stderr, "si_addr %p\n", si->si_addr);
+ fprintf(stderr, "REG_ERR: %lx\n",
+ (unsigned long)uctxt->uc_mcontext.gregs[REG_ERR]);
+ exit(2);
+ }
+ dprint_in_signal = 0;
+}
+
+int wait_all_children(void)
+{
+ int status;
+ return waitpid(-1, &status, 0);
+}
+
+void sig_chld(int x)
+{
+ dprint_in_signal = 1;
+ dprintf2("[%d] SIGCHLD: %d\n", getpid(), x);
+ dprint_in_signal = 0;
+}
+
+void setup_sigsegv_handler(void)
+{
+ int r, rs;
+ struct sigaction newact;
+ struct sigaction oldact;
+
+ /* #PF is mapped to sigsegv */
+ int signum = SIGSEGV;
+
+ newact.sa_handler = 0;
+ newact.sa_sigaction = signal_handler;
+
+ /*sigset_t - signals to block while in the handler */
+ /* get the old signal mask. */
+ rs = sigprocmask(SIG_SETMASK, 0, &newact.sa_mask);
+ pkey_assert(rs == 0);
+
+ /* call sa_sigaction, not sa_handler*/
+ newact.sa_flags = SA_SIGINFO;
+
+ newact.sa_restorer = 0; /* void(*)(), obsolete */
+ r = sigaction(signum, &newact, &oldact);
+ r = sigaction(SIGALRM, &newact, &oldact);
+ pkey_assert(r == 0);
+}
+
+void setup_handlers(void)
+{
+ signal(SIGCHLD, &sig_chld);
+ setup_sigsegv_handler();
+}
+
+pid_t fork_lazy_child(void)
+{
+ pid_t forkret;
+
+ forkret = fork();
+ pkey_assert(forkret >= 0);
+ dprintf3("[%d] fork() ret: %d\n", getpid(), forkret);
+
+ if (!forkret) {
+ /* in the child */
+ while (1) {
+ dprintf1("child sleeping...\n");
+ sleep(30);
+ }
+ }
+ return forkret;
+}
+
+void davecmp(void *_a, void *_b, int len)
+{
+ int i;
+ unsigned long *a = _a;
+ unsigned long *b = _b;
+
+ for (i = 0; i < len / sizeof(*a); i++) {
+ if (a[i] == b[i])
+ continue;
+
+ dprintf3("[%3d]: a: %016lx b: %016lx\n", i, a[i], b[i]);
+ }
+}
+
+void dumpit(char *f)
+{
+ int fd = open(f, O_RDONLY);
+ char buf[100];
+ int nr_read;
+
+ dprintf2("maps fd: %d\n", fd);
+ do {
+ nr_read = read(fd, &buf[0], sizeof(buf));
+ write(1, buf, nr_read);
+ } while (nr_read > 0);
+ close(fd);
+}
+
+#define PKEY_DISABLE_ACCESS 0x1
+#define PKEY_DISABLE_WRITE 0x2
+
+u32 pkey_get(int pkey, unsigned long flags)
+{
+ u32 mask = (PKEY_DISABLE_ACCESS|PKEY_DISABLE_WRITE);
+ u32 pkru = __rdpkru();
+ u32 shifted_pkru;
+ u32 masked_pkru;
+
+ dprintf1("%s(pkey=%d, flags=%lx) = %x / %d\n",
+ __func__, pkey, flags, 0, 0);
+ dprintf2("%s() raw pkru: %x\n", __func__, pkru);
+
+ shifted_pkru = (pkru >> (pkey * PKRU_BITS_PER_PKEY));
+ dprintf2("%s() shifted_pkru: %x\n", __func__, shifted_pkru);
+ masked_pkru = shifted_pkru & mask;
+ dprintf2("%s() masked pkru: %x\n", __func__, masked_pkru);
+ /*
+ * shift down the relevant bits to the lowest two, then
+ * mask off all the other high bits.
+ */
+ return masked_pkru;
+}
+
+int pkey_set(int pkey, unsigned long rights, unsigned long flags)
+{
+ u32 mask = (PKEY_DISABLE_ACCESS|PKEY_DISABLE_WRITE);
+ u32 old_pkru = __rdpkru();
+ u32 new_pkru;
+
+ /* make sure that 'rights' only contains the bits we expect: */
+ assert(!(rights & ~mask));
+
+ /* copy old pkru */
+ new_pkru = old_pkru;
+ /* mask out bits from pkey in old value: */
+ new_pkru &= ~(mask << (pkey * PKRU_BITS_PER_PKEY));
+ /* OR in new bits for pkey: */
+ new_pkru |= (rights << (pkey * PKRU_BITS_PER_PKEY));
+
+ __wrpkru(new_pkru);
+
+ dprintf3("%s(pkey=%d, rights=%lx, flags=%lx) = %x pkru now: %x old_pkru: %x\n",
+ __func__, pkey, rights, flags, 0, __rdpkru(), old_pkru);
+ return 0;
+}
+
+void pkey_disable_set(int pkey, int flags)
+{
+ unsigned long syscall_flags = 0;
+ int ret;
+ int pkey_rights;
+ u32 orig_pkru = rdpkru();
+
+ dprintf1("START->%s(%d, 0x%x)\n", __func__,
+ pkey, flags);
+ pkey_assert(flags & (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE));
+
+ pkey_rights = pkey_get(pkey, syscall_flags);
+
+ dprintf1("%s(%d) pkey_get(%d): %x\n", __func__,
+ pkey, pkey, pkey_rights);
+ pkey_assert(pkey_rights >= 0);
+
+ pkey_rights |= flags;
+
+ ret = pkey_set(pkey, pkey_rights, syscall_flags);
+ assert(!ret);
+ /*pkru and flags have the same format */
+ shadow_pkru |= flags << (pkey * 2);
+ dprintf1("%s(%d) shadow: 0x%x\n", __func__, pkey, shadow_pkru);
+
+ pkey_assert(ret >= 0);
+
+ pkey_rights = pkey_get(pkey, syscall_flags);
+ dprintf1("%s(%d) pkey_get(%d): %x\n", __func__,
+ pkey, pkey, pkey_rights);
+
+ dprintf1("%s(%d) pkru: 0x%x\n", __func__, pkey, rdpkru());
+ if (flags)
+ pkey_assert(rdpkru() > orig_pkru);
+ dprintf1("END<---%s(%d, 0x%x)\n", __func__,
+ pkey, flags);
+}
+
+void pkey_disable_clear(int pkey, int flags)
+{
+ unsigned long syscall_flags = 0;
+ int ret;
+ int pkey_rights = pkey_get(pkey, syscall_flags);
+ u32 orig_pkru = rdpkru();
+
+ pkey_assert(flags & (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE));
+
+ dprintf1("%s(%d) pkey_get(%d): %x\n", __func__,
+ pkey, pkey, pkey_rights);
+ pkey_assert(pkey_rights >= 0);
+
+ pkey_rights |= flags;
+
+ ret = pkey_set(pkey, pkey_rights, 0);
+ /* pkru and flags have the same format */
+ shadow_pkru &= ~(flags << (pkey * 2));
+ pkey_assert(ret >= 0);
+
+ pkey_rights = pkey_get(pkey, syscall_flags);
+ dprintf1("%s(%d) pkey_get(%d): %x\n", __func__,
+ pkey, pkey, pkey_rights);
+
+ dprintf1("%s(%d) pkru: 0x%x\n", __func__, pkey, rdpkru());
+ if (flags)
+ assert(rdpkru() > orig_pkru);
+}
+
+void pkey_write_allow(int pkey)
+{
+ pkey_disable_clear(pkey, PKEY_DISABLE_WRITE);
+}
+void pkey_write_deny(int pkey)
+{
+ pkey_disable_set(pkey, PKEY_DISABLE_WRITE);
+}
+void pkey_access_allow(int pkey)
+{
+ pkey_disable_clear(pkey, PKEY_DISABLE_ACCESS);
+}
+void pkey_access_deny(int pkey)
+{
+ pkey_disable_set(pkey, PKEY_DISABLE_ACCESS);
+}
+
+int sys_mprotect_pkey(void *ptr, size_t size, unsigned long orig_prot,
+ unsigned long pkey)
+{
+ int sret;
+
+ dprintf2("%s(0x%p, %zx, prot=%lx, pkey=%lx)\n", __func__,
+ ptr, size, orig_prot, pkey);
+
+ errno = 0;
+ sret = syscall(SYS_mprotect_key, ptr, size, orig_prot, pkey);
+ if (errno) {
+ dprintf2("SYS_mprotect_key sret: %d\n", sret);
+ dprintf2("SYS_mprotect_key prot: 0x%lx\n", orig_prot);
+ dprintf2("SYS_mprotect_key failed, errno: %d\n", errno);
+ if (DEBUG_LEVEL >= 2)
+ perror("SYS_mprotect_pkey");
+ }
+ return sret;
+}
+
+int sys_pkey_alloc(unsigned long flags, unsigned long init_val)
+{
+ int ret = syscall(SYS_pkey_alloc, flags, init_val);
+ dprintf1("%s(flags=%lx, init_val=%lx) syscall ret: %d errno: %d\n",
+ __func__, flags, init_val, ret, errno);
+ return ret;
+}
+
+int alloc_pkey(void)
+{
+ int ret;
+ unsigned long init_val = 0x0;
+
+ dprintf1("alloc_pkey()::%d, pkru: 0x%x shadow: %x\n",
+ __LINE__, __rdpkru(), shadow_pkru);
+ ret = sys_pkey_alloc(0, init_val);
+ /*
+ * pkey_alloc() sets PKRU, so we need to reflect it in
+ * shadow_pkru:
+ */
+ dprintf4("alloc_pkey()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n",
+ __LINE__, ret, __rdpkru(), shadow_pkru);
+ if (ret) {
+ /* clear both the bits: */
+ shadow_pkru &= ~(0x3 << (ret * 2));
+ dprintf4("alloc_pkey()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n",
+ __LINE__, ret, __rdpkru(), shadow_pkru);
+ /*
+ * move the new state in from init_val
+ * (remember, we cheated and init_val == pkru format)
+ */
+ shadow_pkru |= (init_val << (ret * 2));
+ }
+ dprintf4("alloc_pkey()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n",
+ __LINE__, ret, __rdpkru(), shadow_pkru);
+ dprintf1("alloc_pkey()::%d errno: %d\n", __LINE__, errno);
+ /* for shadow checking: */
+ rdpkru();
+ dprintf4("alloc_pkey()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n",
+ __LINE__, ret, __rdpkru(), shadow_pkru);
+ return ret;
+}
+
+int sys_pkey_free(unsigned long pkey)
+{
+ int ret = syscall(SYS_pkey_free, pkey);
+ dprintf1("%s(pkey=%ld) syscall ret: %d\n", __func__, pkey, ret);
+ return ret;
+}
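[ Editor's aside: taken together, the three thin wrappers above cover the
whole syscall lifecycle. A minimal usage sketch, assuming the surrounding
file's helpers (pkey_assert, PAGE_SIZE) and a pkey-capable kernel; the
function name is illustrative only: ]

	void pkey_lifecycle_demo(void)
	{
		int pkey = sys_pkey_alloc(0, 0);
		void *buf = mmap(NULL, PAGE_SIZE, PROT_READ|PROT_WRITE,
				 MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);

		pkey_assert(pkey > 0);
		pkey_assert(buf != MAP_FAILED);
		/* tag the mapping with the new key */
		pkey_assert(!sys_mprotect_pkey(buf, PAGE_SIZE,
					       PROT_READ|PROT_WRITE, pkey));
		/* ... exercise buf under the key's PKRU bits ... */
		pkey_assert(!munmap(buf, PAGE_SIZE));
		pkey_assert(!sys_pkey_free(pkey));
	}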
+
+/*
+ * I had a bug where pkey bits could be set by mprotect() but
+ * not cleared. This ensures we get lots of random bit sets
+ * and clears on the vma and pte pkey bits.
+ */
+int alloc_random_pkey(void)
+{
+ int max_nr_pkey_allocs;
+ int ret;
+ int i;
+ int alloced_pkeys[NR_PKEYS];
+ int nr_alloced = 0;
+ int random_index;
+ memset(alloced_pkeys, 0, sizeof(alloced_pkeys));
+
+ /*
+ * allocate every possible key and make a note of which ones we got
+ * (currently pared down to a single key per call):
+ */
+ max_nr_pkey_allocs = NR_PKEYS;
+ max_nr_pkey_allocs = 1;
+ for (i = 0; i < max_nr_pkey_allocs; i++) {
+ int new_pkey = alloc_pkey();
+ if (new_pkey < 0)
+ break;
+ alloced_pkeys[nr_alloced++] = new_pkey;
+ }
+
+ pkey_assert(nr_alloced > 0);
+ /* select a random one out of the allocated ones */
+ random_index = rand() % nr_alloced;
+ ret = alloced_pkeys[random_index];
+ /* now zero it out so we don't free it next */
+ alloced_pkeys[random_index] = 0;
+
+ /* go through the allocated ones that we did not want and free them */
+ for (i = 0; i < nr_alloced; i++) {
+ int free_ret;
+ if (!alloced_pkeys[i])
+ continue;
+ free_ret = sys_pkey_free(alloced_pkeys[i]);
+ pkey_assert(!free_ret);
+ }
+ dprintf1("%s()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n", __func__,
+ __LINE__, ret, __rdpkru(), shadow_pkru);
+ return ret;
+}
+
+int mprotect_pkey(void *ptr, size_t size, unsigned long orig_prot,
+ unsigned long pkey)
+{
+ int nr_iterations = random() % 100;
+ int ret;
+
+ /* this stress loop is deliberately compiled out (note the while (0)) */
+ while (0) {
+ int rpkey = alloc_random_pkey();
+ ret = sys_mprotect_pkey(ptr, size, orig_prot, pkey);
+ dprintf1("sys_mprotect_pkey(%p, %zx, prot=0x%lx, pkey=%ld) ret: %d\n",
+ ptr, size, orig_prot, pkey, ret);
+ if (nr_iterations-- < 0)
+ break;
+
+ dprintf1("%s()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n", __func__,
+ __LINE__, ret, __rdpkru(), shadow_pkru);
+ sys_pkey_free(rpkey);
+ dprintf1("%s()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n", __func__,
+ __LINE__, ret, __rdpkru(), shadow_pkru);
+ }
+ pkey_assert(pkey < NR_PKEYS);
+
+ ret = sys_mprotect_pkey(ptr, size, orig_prot, pkey);
+ dprintf1("mprotect_pkey(%p, %zx, prot=0x%lx, pkey=%ld) ret: %d\n",
+ ptr, size, orig_prot, pkey, ret);
+ pkey_assert(!ret);
+ dprintf1("%s()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n", __func__,
+ __LINE__, ret, __rdpkru(), shadow_pkru);
+ return ret;
+}
+
+struct pkey_malloc_record {
+ void *ptr;
+ long size;
+};
+struct pkey_malloc_record *pkey_malloc_records;
+long nr_pkey_malloc_records;
+void record_pkey_malloc(void *ptr, long size)
+{
+ long i;
+ struct pkey_malloc_record *rec = NULL;
+
+ for (i = 0; i < nr_pkey_malloc_records; i++) {
+ rec = &pkey_malloc_records[i];
+ /* find a free (previously munmap()ed) record */
+ if (!rec->ptr)
+ break;
+ rec = NULL;
+ }
+ if (!rec) {
+ /* every record is full */
+ size_t old_nr_records = nr_pkey_malloc_records;
+ size_t new_nr_records = (nr_pkey_malloc_records * 2 + 1);
+ size_t new_size = new_nr_records * sizeof(struct pkey_malloc_record);
+ dprintf2("new_nr_records: %zd\n", new_nr_records);
+ dprintf2("new_size: %zd\n", new_size);
+ pkey_malloc_records = realloc(pkey_malloc_records, new_size);
+ pkey_assert(pkey_malloc_records != NULL);
+ rec = &pkey_malloc_records[nr_pkey_malloc_records];
+ /*
+ * realloc() does not initialize memory, so zero it from
+ * the first new record all the way to the end.
+ */
+ for (i = 0; i < new_nr_records - old_nr_records; i++)
+ memset(rec + i, 0, sizeof(*rec));
+ }
+ dprintf3("filling malloc record[%d/%p]: {%p, %ld}\n",
+ (int)(rec - pkey_malloc_records), rec, ptr, size);
+ rec->ptr = ptr;
+ rec->size = size;
+ nr_pkey_malloc_records++;
+}
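[ Editor's aside: the n -> 2n + 1 growth above keeps the record capacity
at 2^k - 1 (1, 3, 7, 15, ...), i.e. it doubles in the usual
dynamic-array fashion. A quick self-check of that recurrence, as a
sketch with an illustrative function name: ]

	static void growth_demo(void)
	{
		long cap = 0;
		int k;

		for (k = 1; k <= 4; k++) {
			cap = cap * 2 + 1;
			assert(cap == (1L << k) - 1);	/* 1, 3, 7, 15 */
		}
	}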
+
+void free_pkey_malloc(void *ptr)
+{
+ long i;
+ int ret;
+ dprintf3("%s(%p)\n", __func__, ptr);
+ for (i = 0; i < nr_pkey_malloc_records; i++) {
+ struct pkey_malloc_record *rec = &pkey_malloc_records[i];
+ dprintf4("looking for ptr %p at record[%ld/%p]: {%p, %ld}\n",
+ ptr, i, rec, rec->ptr, rec->size);
+ if ((ptr < rec->ptr) ||
+ (ptr >= rec->ptr + rec->size))
+ continue;
+
+ dprintf3("found ptr %p at record[%ld/%p]: {%p, %ld}\n",
+ ptr, i, rec, rec->ptr, rec->size);
+ nr_pkey_malloc_records--;
+ ret = munmap(rec->ptr, rec->size);
+ dprintf3("munmap ret: %d\n", ret);
+ pkey_assert(!ret);
+ dprintf3("clearing rec->ptr, rec: %p\n", rec);
+ rec->ptr = NULL;
+ dprintf3("done clearing rec->ptr, rec: %p\n", rec);
+ return;
+ }
+ pkey_assert(false);
+}
+
+
+void *malloc_pkey_with_mprotect(long size, int prot, u16 pkey)
+{
+ void *ptr;
+ int ret;
+
+ rdpkru();
+ dprintf1("doing %s(size=%ld, prot=0x%x, pkey=%d)\n", __func__,
+ size, prot, pkey);
+ pkey_assert(pkey < NR_PKEYS);
+ ptr = mmap(NULL, size, prot, MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
+ pkey_assert(ptr != (void *)-1);
+ ret = mprotect_pkey((void *)ptr, PAGE_SIZE, prot, pkey);
+ pkey_assert(!ret);
+ record_pkey_malloc(ptr, size);
+ rdpkru();
+
+ dprintf1("%s() for pkey %d @ %p\n", __func__, pkey, ptr);
+ return ptr;
+}
+
+void *malloc_pkey_anon_huge(long size, int prot, u16 pkey)
+{
+ int ret;
+ void *ptr;
+
+ dprintf1("doing %s(size=%ld, prot=0x%x, pkey=%d)\n", __func__,
+ size, prot, pkey);
+ /*
+ * Guarantee we can fit at least one huge page in the resulting
+ * allocation by allocating space for 2:
+ */
+ size = ALIGN_UP(size, HPAGE_SIZE * 2);
+ ptr = mmap(NULL, size, PROT_NONE, MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
+ pkey_assert(ptr != (void *)-1);
+ record_pkey_malloc(ptr, size);
+ mprotect_pkey(ptr, size, prot, pkey);
+
+ dprintf1("unaligned ptr: %p\n", ptr);
+ ptr = ALIGN_PTR_UP(ptr, HPAGE_SIZE);
+ dprintf1(" aligned ptr: %p\n", ptr);
+ ret = madvise(ptr, HPAGE_SIZE, MADV_HUGEPAGE);
+ dprintf1("MADV_HUGEPAGE ret: %d\n", ret);
+ ret = madvise(ptr, HPAGE_SIZE, MADV_WILLNEED);
+ dprintf1("MADV_WILLNEED ret: %d\n", ret);
+ memset(ptr, 0, HPAGE_SIZE);
+
+ dprintf1("mmap()'d thp for pkey %d @ %p\n", pkey, ptr);
+ return ptr;
+}
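[ Editor's aside: the 2 * HPAGE_SIZE rounding above is exactly what makes
the later ALIGN_PTR_UP() safe: aligning the start forward discards at
most HPAGE_SIZE - 1 bytes, so one aligned huge page always still fits.
A small sketch of that bound, reusing the file's ALIGN_UP(): ]

	static void alignment_bound_demo(void)
	{
		unsigned long base;

		for (base = 0; base < HPAGE_SIZE; base += PAGE_SIZE) {
			unsigned long aligned = ALIGN_UP(base, HPAGE_SIZE);

			/* aligned huge page fits in [base, base + 2*HPAGE) */
			assert(aligned + HPAGE_SIZE <= base + 2 * HPAGE_SIZE);
		}
	}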
+
+int hugetlb_setup_ok;
+#define GET_NR_HUGE_PAGES 10
+void setup_hugetlbfs(void)
+{
+ int err;
+ int fd;
+ char buf[] = "123";
+
+ if (geteuid() != 0) {
+ fprintf(stderr, "WARNING: not run as root, can not do hugetlb test\n");
+ return;
+ }
+
+ cat_into_file(__stringify(GET_NR_HUGE_PAGES), "/proc/sys/vm/nr_hugepages");
+
+ /*
+ * Now go make sure that we got the pages and that they
+ * are 2M pages. Someone might have made 1G the default.
+ */
+ fd = open("/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages", O_RDONLY);
+ if (fd < 0) {
+ perror("opening sysfs 2M hugetlb config");
+ return;
+ }
+
+ /* -1 to guarantee leaving the trailing \0 */
+ err = read(fd, buf, sizeof(buf)-1);
+ close(fd);
+ if (err <= 0) {
+ perror("reading sysfs 2M hugetlb config");
+ return;
+ }
+
+ if (atoi(buf) != GET_NR_HUGE_PAGES) {
+ fprintf(stderr, "could not confirm 2M pages, got: '%s' expected %d\n",
+ buf, GET_NR_HUGE_PAGES);
+ return;
+ }
+
+ hugetlb_setup_ok = 1;
+}
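[ Editor's aside: one non-obvious bit above is __stringify(): it expands
the GET_NR_HUGE_PAGES macro first and then stringizes it, so the literal
text "10" is what gets written into procfs. A one-line sanity check of
that expansion (sketch): ]

	static void stringify_demo(void)
	{
		assert(!strcmp(__stringify(GET_NR_HUGE_PAGES), "10"));
	}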
+
+void *malloc_pkey_hugetlb(long size, int prot, u16 pkey)
+{
+ void *ptr;
+ int flags = MAP_ANONYMOUS|MAP_PRIVATE|MAP_HUGETLB;
+
+ if (!hugetlb_setup_ok)
+ return PTR_ERR_ENOTSUP;
+
+ dprintf1("doing %s(%ld, %x, %x)\n", __func__, size, prot, pkey);
+ size = ALIGN_UP(size, HPAGE_SIZE * 2);
+ pkey_assert(pkey < NR_PKEYS);
+ ptr = mmap(NULL, size, PROT_NONE, flags, -1, 0);
+ pkey_assert(ptr != (void *)-1);
+ mprotect_pkey(ptr, size, prot, pkey);
+
+ record_pkey_malloc(ptr, size);
+
+ dprintf1("mmap()'d hugetlbfs for pkey %d @ %p\n", pkey, ptr);
+ return ptr;
+}
+
+void *malloc_pkey_mmap_dax(long size, int prot, u16 pkey)
+{
+ void *ptr;
+ int fd;
+
+ dprintf1("doing %s(size=%ld, prot=0x%x, pkey=%d)\n", __func__,
+ size, prot, pkey);
+ pkey_assert(pkey < NR_PKEYS);
+ fd = open("/dax/foo", O_RDWR);
+ pkey_assert(fd >= 0);
+
+ ptr = mmap(0, size, prot, MAP_SHARED, fd, 0);
+ pkey_assert(ptr != (void *)-1);
+
+ mprotect_pkey(ptr, size, prot, pkey);
+
+ record_pkey_malloc(ptr, size);
+
+ dprintf1("mmap()'d for pkey %d @ %p\n", pkey, ptr);
+ close(fd);
+ return ptr;
+}
+
+void *(*pkey_malloc[])(long size, int prot, u16 pkey) = {
+ malloc_pkey_with_mprotect,
+ malloc_pkey_anon_huge,
+ malloc_pkey_hugetlb
+/* these cannot be used directly with the pkey_mprotect() API:
+ malloc_pkey_mmap_direct,
+ malloc_pkey_mmap_dax,
+*/
+};
+
+void *malloc_pkey(long size, int prot, u16 pkey)
+{
+ void *ret;
+ static int malloc_type;
+ int nr_malloc_types = ARRAY_SIZE(pkey_malloc);
+
+ pkey_assert(pkey < NR_PKEYS);
+
+ while (1) {
+ pkey_assert(malloc_type < nr_malloc_types);
+
+ ret = pkey_malloc[malloc_type](size, prot, pkey);
+ pkey_assert(ret != (void *)-1);
+
+ malloc_type++;
+ if (malloc_type >= nr_malloc_types)
+ malloc_type = (random()%nr_malloc_types);
+
+ /* try again if the malloc_type we tried is unsupported */
+ if (ret == PTR_ERR_ENOTSUP)
+ continue;
+
+ break;
+ }
+
+ dprintf3("%s(%ld, prot=%x, pkey=%x) returning: %p\n", __func__,
+ size, prot, pkey, ret);
+ return ret;
+}
+
+int last_pkru_faults;
+void expected_pk_fault(int pkey)
+{
+ dprintf2("%s(): last_pkru_faults: %d pkru_faults: %d\n",
+ __func__, last_pkru_faults, pkru_faults);
+ dprintf2("%s(%d): last_si_pkey: %d\n", __func__, pkey, last_si_pkey);
+ pkey_assert(last_pkru_faults + 1 == pkru_faults);
+ pkey_assert(last_si_pkey == pkey);
+ /*
+ * The signal handler should have cleared out PKRU to let the
+ * test program continue. We now have to restore it.
+ */
+ if (__rdpkru() != 0)
+ pkey_assert(0);
+
+ __wrpkru(shadow_pkru);
+ dprintf1("%s() set PKRU=%x to restore state after signal nuked it\n",
+ __func__, shadow_pkru);
+ last_pkru_faults = pkru_faults;
+ last_si_pkey = -1;
+}
+
+void do_not_expect_pk_fault(void)
+{
+ pkey_assert(last_pkru_faults == pkru_faults);
+}
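[ Editor's aside: every test below brackets its faulting access between
these two bookkeeping calls. The pattern as a compact sketch, ptr being
any mapping covered by pkey (the demo name is illustrative): ]

	static void fault_bracket_demo(int *ptr, int pkey)
	{
		pkey_access_deny(pkey);		/* arm the key */
		*(volatile int *)ptr;		/* handler zeroes PKRU, counts one fault */
		expected_pk_fault(pkey);	/* check the count, restore shadow_pkru */
		do_not_expect_pk_fault();	/* and confirm nothing further fired */
	}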
+
+int test_fds[10] = { -1 };
+int nr_test_fds;
+void __save_test_fd(int fd)
+{
+ pkey_assert(fd >= 0);
+ pkey_assert(nr_test_fds < ARRAY_SIZE(test_fds));
+ test_fds[nr_test_fds] = fd;
+ nr_test_fds++;
+}
+
+int get_test_read_fd(void)
+{
+ int test_fd = open("/etc/passwd", O_RDONLY);
+ __save_test_fd(test_fd);
+ return test_fd;
+}
+
+void close_test_fds(void)
+{
+ int i;
+
+ for (i = 0; i < nr_test_fds; i++) {
+ if (test_fds[i] < 0)
+ continue;
+ close(test_fds[i]);
+ test_fds[i] = -1;
+ }
+ nr_test_fds = 0;
+}
+
+#define barrier() __asm__ __volatile__("": : :"memory")
+__attribute__((noinline)) int read_ptr(int *ptr)
+{
+ /*
+ * Keep GCC from optimizing this away somehow
+ */
+ barrier();
+ return *ptr;
+}
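[ Editor's aside: the empty asm with a "memory" clobber is what stops GCC
from constant-folding the load away; the read has to hit the
PKRU-protected page at runtime for the test to mean anything. An
equivalent sketch using a volatile access instead of the barrier: ]

	__attribute__((noinline)) int read_ptr_volatile(int *ptr)
	{
		/* volatile forces a real load, same effect as barrier() */
		return *(volatile int *)ptr;
	}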
+
+void test_read_of_write_disabled_region(int *ptr, u16 pkey)
+{
+ int ptr_contents;
+
+ dprintf1("disabling write access to PKEY[1], doing read\n");
+ pkey_write_deny(pkey);
+ ptr_contents = read_ptr(ptr);
+ dprintf1("*ptr: %d\n", ptr_contents);
+ dprintf1("\n");
+}
+void test_read_of_access_disabled_region(int *ptr, u16 pkey)
+{
+ int ptr_contents;
+
+ dprintf1("disabling access to PKEY[%02d], doing read @ %p\n", pkey, ptr);
+ rdpkru();
+ pkey_access_deny(pkey);
+ ptr_contents = read_ptr(ptr);
+ dprintf1("*ptr: %d\n", ptr_contents);
+ expected_pk_fault(pkey);
+}
+void test_write_of_write_disabled_region(int *ptr, u16 pkey)
+{
+ dprintf1("disabling write access to PKEY[%02d], doing write\n", pkey);
+ pkey_write_deny(pkey);
+ *ptr = __LINE__;
+ expected_pk_fault(pkey);
+}
+void test_write_of_access_disabled_region(int *ptr, u16 pkey)
+{
+ dprintf1("disabling access to PKEY[%02d], doing write\n", pkey);
+ pkey_access_deny(pkey);
+ *ptr = __LINE__;
+ expected_pk_fault(pkey);
+}
+void test_kernel_write_of_access_disabled_region(int *ptr, u16 pkey)
+{
+ int ret;
+ int test_fd = get_test_read_fd();
+
+ dprintf1("disabling access to PKEY[%02d], "
+ "having kernel read() to buffer\n", pkey);
+ pkey_access_deny(pkey);
+ ret = read(test_fd, ptr, 1);
+ dprintf1("read ret: %d\n", ret);
+ pkey_assert(ret);
+}
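[ Editor's aside: note the deliberately loose pkey_assert(ret) above;
whether the kernel's own copy into the buffer is blocked is
architecture- and version-dependent here, so the test only demands a
nonzero return. A stricter sketch, assuming a kernel that does enforce
the key on copy-out: ]

	static void expect_kernel_efault(int fd, void *ptr)
	{
		errno = 0;
		pkey_assert(read(fd, ptr, 1) == -1);
		pkey_assert(errno == EFAULT);
	}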
+void test_kernel_write_of_write_disabled_region(int *ptr, u16 pkey)
+{
+ int ret;
+ int test_fd = get_test_read_fd();
+
+ pkey_write_deny(pkey);
+ ret = read(test_fd, ptr, 100);
+ dprintf1("read ret: %d\n", ret);
+ if (ret < 0 && (DEBUG_LEVEL > 0))
+ perror("verbose read result (OK for this to be bad)");
+ pkey_assert(ret);
+}
+
+void test_kernel_gup_of_access_disabled_region(int *ptr, u16 pkey)
+{
+ int pipe_ret, vmsplice_ret;
+ struct iovec iov;
+ int pipe_fds[2];
+
+ pipe_ret = pipe(pipe_fds);
+
+ pkey_assert(pipe_ret == 0);
+ dprintf1("disabling access to PKEY[%02d], "
+ "having kernel vmsplice from buffer\n", pkey);
+ pkey_access_deny(pkey);
+ iov.iov_base = ptr;
+ iov.iov_len = PAGE_SIZE;
+ vmsplice_ret = vmsplice(pipe_fds[1], &iov, 1, SPLICE_F_GIFT);
+ dprintf1("vmsplice() ret: %d\n", vmsplice_ret);
+ pkey_assert(vmsplice_ret == -1);
+
+ close(pipe_fds[0]);
+ close(pipe_fds[1]);
+}
+
+void test_kernel_gup_write_to_write_disabled_region(int *ptr, u16 pkey)
+{
+ int ignored = 0xdada;
+ int futex_ret;
+ int some_int = __LINE__;
+
+ dprintf1("disabling write to PKEY[%02d], "
+ "doing futex gunk in buffer\n", pkey);
+ *ptr = some_int;
+ pkey_write_deny(pkey);
+ futex_ret = syscall(SYS_futex, ptr, FUTEX_WAIT, some_int-1, NULL,
+ &ignored, ignored);
+ if (DEBUG_LEVEL > 0)
+ perror("futex");
+ dprintf1("futex() ret: %d\n", futex_ret);
+}
+
+/* Assumes that all pkeys other than 'pkey' are unallocated */
+void test_pkey_syscalls_on_non_allocated_pkey(int *ptr, u16 pkey)
+{
+ int err;
+ int i;
+
+ /* Note: 0 is the default pkey, so don't mess with it */
+ for (i = 1; i < NR_PKEYS; i++) {
+ if (pkey == i)
+ continue;
+
+ dprintf1("trying get/set/free to non-allocated pkey: %2d\n", i);
+ err = sys_pkey_free(i);
+ pkey_assert(err);
+
+ err = sys_pkey_free(i);
+ pkey_assert(err);
+
+ err = sys_mprotect_pkey(ptr, PAGE_SIZE, PROT_READ, i);
+ pkey_assert(err);
+ }
+}
+
+/* Assumes that all pkeys other than 'pkey' are unallocated */
+void test_pkey_syscalls_bad_args(int *ptr, u16 pkey)
+{
+ int err;
+ int bad_pkey = NR_PKEYS+99;
+
+ /* pass a known-invalid pkey in: */
+ err = sys_mprotect_pkey(ptr, PAGE_SIZE, PROT_READ, bad_pkey);
+ pkey_assert(err);
+}
+
+/* Assumes that all pkeys other than 'pkey' are unallocated */
+void test_pkey_alloc_exhaust(int *ptr, u16 pkey)
+{
+ int err;
+ int allocated_pkeys[NR_PKEYS] = {0};
+ int nr_allocated_pkeys = 0;
+ int i;
+
+ for (i = 0; i < NR_PKEYS*2; i++) {
+ int new_pkey;
+ dprintf1("%s() alloc loop: %d\n", __func__, i);
+ new_pkey = alloc_pkey();
+ dprintf4("%s()::%d, new_pkey: %d pkru: 0x%x shadow: 0x%x\n", __func__,
+ __LINE__, new_pkey, __rdpkru(), shadow_pkru);
+ rdpkru(); /* for shadow checking */
+ dprintf2("%s() errno: %d ENOSPC: %d\n", __func__, errno, ENOSPC);
+ if ((new_pkey == -1) && (errno == ENOSPC)) {
+ dprintf2("%s() failed to allocate pkey after %d tries\n",
+ __func__, nr_allocated_pkeys);
+ break;
+ }
+ pkey_assert(nr_allocated_pkeys < NR_PKEYS);
+ allocated_pkeys[nr_allocated_pkeys++] = new_pkey;
+ }
+
+ dprintf3("%s()::%d\n", __func__, __LINE__);
+
+ /*
+ * ensure it did not reach the end of the loop without
+ * failure:
+ */
+ pkey_assert(i < NR_PKEYS*2);
+
+ /*
+ * There are 16 pkeys supported in hardware. One is taken
+ * up for the default (0) and another can be taken up by
+ * an execute-only mapping. Ensure that we can allocate
+ * at least 14 (16-2).
+ */
+ pkey_assert(i >= NR_PKEYS-2);
+
+ for (i = 0; i < nr_allocated_pkeys; i++) {
+ err = sys_pkey_free(allocated_pkeys[i]);
+ pkey_assert(!err);
+ rdpkru(); /* for shadow checking */
+ }
+}
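[ Editor's aside: the arithmetic behind the final bound, spelled out as a
sketch: ]

	static void exhaustion_bound_demo(void)
	{
		int reserved_default = 1;	/* pkey 0, on every mapping */
		int reserved_exec_only = 1;	/* may be claimed by the kernel */

		/* 16 - 2 = 14 keys guaranteed to userspace */
		assert(NR_PKEYS - reserved_default - reserved_exec_only == 14);
	}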
+
+void test_ptrace_of_child(int *ptr, u16 pkey)
+{
+ __attribute__((__unused__)) int peek_result;
+ pid_t child_pid;
+ void *ignored = 0;
+ long ret;
+ int status;
+ /*
+ * This is the "control" for our little expermient. Make sure
+ * we can always access it when ptracing.
+ */
+ int *plain_ptr_unaligned = malloc(HPAGE_SIZE);
+ int *plain_ptr = ALIGN_PTR_UP(plain_ptr_unaligned, PAGE_SIZE);
+
+ /*
+ * Fork a child which is an exact copy of this process, of course.
+ * That means we can do all of our tests via ptrace() and then plain
+ * memory access and ensure they work differently.
+ */
+ child_pid = fork_lazy_child();
+ dprintf1("[%d] child pid: %d\n", getpid(), child_pid);
+
+ ret = ptrace(PTRACE_ATTACH, child_pid, ignored, ignored);
+ if (ret)
+ perror("attach");
+ dprintf1("[%d] attach ret: %ld %d\n", getpid(), ret, __LINE__);
+ pkey_assert(ret != -1);
+ ret = waitpid(child_pid, &status, WUNTRACED);
+ if ((ret != child_pid) || !(WIFSTOPPED(status))) {
+ fprintf(stderr, "weird waitpid result %ld stat %x\n",
+ ret, status);
+ pkey_assert(0);
+ }
+ dprintf2("waitpid ret: %ld\n", ret);
+ dprintf2("waitpid status: %d\n", status);
+
+ pkey_access_deny(pkey);
+ pkey_write_deny(pkey);
+
+ /* Write access, untested for now:
+ ret = ptrace(PTRACE_POKEDATA, child_pid, peek_at, data);
+ pkey_assert(ret != -1);
+ dprintf1("poke at %p: %ld\n", peek_at, ret);
+ */
+
+ /*
+ * Try to access the pkey-protected "ptr" via ptrace:
+ */
+ ret = ptrace(PTRACE_PEEKDATA, child_pid, ptr, ignored);
+ /* expect it to work, without an error: */
+ pkey_assert(ret != -1);
+ /* Now access from the current task, and expect an exception: */
+ peek_result = read_ptr(ptr);
+ expected_pk_fault(pkey);
+
+ /*
+ * Try to access the NON-pkey-protected "plain_ptr" via ptrace:
+ */
+ ret = ptrace(PTRACE_PEEKDATA, child_pid, plain_ptr, ignored);
+ /* expect it to work, without an error: */
+ pkey_assert(ret != -1);
+ /* Now access from the current task, and expect NO exception: */
+ peek_result = read_ptr(plain_ptr);
+ do_not_expect_pk_fault();
+
+ ret = ptrace(PTRACE_DETACH, child_pid, ignored, 0);
+ pkey_assert(ret != -1);
+
+ ret = kill(child_pid, SIGKILL);
+ pkey_assert(ret != -1);
+
+ wait(&status);
+
+ free(plain_ptr_unaligned);
+}
+
+void test_executing_on_unreadable_memory(int *ptr, u16 pkey)
+{
+ void *p1;
+ int scratch;
+ int ptr_contents;
+ int ret;
+
+ p1 = ALIGN_PTR_UP(&lots_o_noops_around_write, PAGE_SIZE);
+ dprintf3("&lots_o_noops: %p\n", &lots_o_noops_around_write);
+ /* lots_o_noops_around_write should be page-aligned already */
+ assert(p1 == &lots_o_noops_around_write);
+
+ /* Point 'p1' at the *second* page of the function: */
+ p1 += PAGE_SIZE;
+
+ madvise(p1, PAGE_SIZE, MADV_DONTNEED);
+ lots_o_noops_around_write(&scratch);
+ ptr_contents = read_ptr(p1);
+ dprintf2("ptr (%p) contents@%d: %x\n", p1, __LINE__, ptr_contents);
+
+ ret = mprotect_pkey(p1, PAGE_SIZE, PROT_EXEC, (u64)pkey);
+ pkey_assert(!ret);
+ pkey_access_deny(pkey);
+
+ dprintf2("pkru: %x\n", rdpkru());
+
+ /*
+ * Make sure this is an *instruction* fault
+ */
+ madvise(p1, PAGE_SIZE, MADV_DONTNEED);
+ lots_o_noops_around_write(&scratch);
+ do_not_expect_pk_fault();
+ ptr_contents = read_ptr(p1);
+ dprintf2("ptr (%p) contents@%d: %x\n", p1, __LINE__, ptr_contents);
+ expected_pk_fault(pkey);
+}
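[ Editor's aside: what makes this test work is that PKRU gates data
accesses only, never instruction fetches, so an access-denied key on
PROT_EXEC memory yields true execute-only memory. The recipe in
isolation, as a sketch built on the helpers above: ]

	static void make_execute_only(void *page, int pkey)
	{
		pkey_assert(!mprotect_pkey(page, PAGE_SIZE, PROT_EXEC, pkey));
		pkey_access_deny(pkey);
		/* calls into 'page' still run; data reads of it now fault */
	}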
+
+void test_mprotect_pkey_on_unsupported_cpu(int *ptr, u16 pkey)
+{
+ int size = PAGE_SIZE;
+ int sret;
+
+ if (cpu_has_pku()) {
+ dprintf1("SKIP: %s: no CPU support\n", __func__);
+ return;
+ }
+
+ sret = syscall(SYS_mprotect_key, ptr, size, PROT_READ, pkey);
+ pkey_assert(sret < 0);
+}
+
+void (*pkey_tests[])(int *ptr, u16 pkey) = {
+ test_read_of_write_disabled_region,
+ test_read_of_access_disabled_region,
+ test_write_of_write_disabled_region,
+ test_write_of_access_disabled_region,
+ test_kernel_write_of_access_disabled_region,
+ test_kernel_write_of_write_disabled_region,
+ test_kernel_gup_of_access_disabled_region,
+ test_kernel_gup_write_to_write_disabled_region,
+ test_executing_on_unreadable_memory,
+ test_ptrace_of_child,
+ test_pkey_syscalls_on_non_allocated_pkey,
+ test_pkey_syscalls_bad_args,
+ test_pkey_alloc_exhaust,
+};
+
+void run_tests_once(void)
+{
+ int *ptr;
+ int prot = PROT_READ|PROT_WRITE;
+
+ for (test_nr = 0; test_nr < ARRAY_SIZE(pkey_tests); test_nr++) {
+ int pkey;
+ int orig_pkru_faults = pkru_faults;
+
+ dprintf1("======================\n");
+ dprintf1("test %d preparing...\n", test_nr);
+
+ tracing_on();
+ pkey = alloc_random_pkey();
+ dprintf1("test %d starting with pkey: %d\n", test_nr, pkey);
+ ptr = malloc_pkey(PAGE_SIZE, prot, pkey);
+ dprintf1("test %d starting...\n", test_nr);
+ pkey_tests[test_nr](ptr, pkey);
+ dprintf1("freeing test memory: %p\n", ptr);
+ free_pkey_malloc(ptr);
+ sys_pkey_free(pkey);
+
+ dprintf1("pkru_faults: %d\n", pkru_faults);
+ dprintf1("orig_pkru_faults: %d\n", orig_pkru_faults);
+
+ tracing_off();
+ close_test_fds();
+
+ printf("test %2d PASSED (iteration %d)\n", test_nr, iteration_nr);
+ dprintf1("======================\n\n");
+ }
+ iteration_nr++;
+}
+
+void pkey_setup_shadow(void)
+{
+ shadow_pkru = __rdpkru();
+}
+
+int main(void)
+{
+ int nr_iterations = 22;
+
+ setup_handlers();
+
+ printf("has pku: %d\n", cpu_has_pku());
+
+ if (!cpu_has_pku()) {
+ int size = PAGE_SIZE;
+ int *ptr;
+
+ printf("running PKEY tests for unsupported CPU/OS\n");
+
+ ptr = mmap(NULL, size, PROT_NONE, MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
+ assert(ptr != (void *)-1);
+ test_mprotect_pkey_on_unsupported_cpu(ptr, 1);
+ exit(0);
+ }
+
+ pkey_setup_shadow();
+ printf("startup pkru: %x\n", rdpkru());
+ setup_hugetlbfs();
+
+ while (nr_iterations-- > 0)
+ run_tests_once();
+
+ printf("done (all tests OK)\n");
+ return 0;
+}
diff --git a/tools/testing/selftests/x86/Makefile b/tools/testing/selftests/x86/Makefile
index 97f187e..fee6181 100644
--- a/tools/testing/selftests/x86/Makefile
+++ b/tools/testing/selftests/x86/Makefile
@@ -6,7 +6,7 @@ include ../lib.mk
TARGETS_C_BOTHBITS := single_step_syscall sysret_ss_attrs syscall_nt ptrace_syscall test_mremap_vdso \
check_initial_reg_state sigreturn ldt_gdt iopl mpx-mini-test ioperm \
- protection_keys test_vdso
+ test_vdso
TARGETS_C_32BIT_ONLY := entry_from_vm86 syscall_arg_fault test_syscall_vdso unwind_vdso \
test_FCMOV test_FCOMI test_FISTTP \
vdso_restorer
diff --git a/tools/testing/selftests/x86/pkey-helpers.h b/tools/testing/selftests/x86/pkey-helpers.h
deleted file mode 100644
index b202939..0000000
--- a/tools/testing/selftests/x86/pkey-helpers.h
+++ /dev/null
@@ -1,219 +0,0 @@
-#ifndef _PKEYS_HELPER_H
-#define _PKEYS_HELPER_H
-#define _GNU_SOURCE
-#include <string.h>
-#include <stdarg.h>
-#include <stdio.h>
-#include <stdint.h>
-#include <stdbool.h>
-#include <signal.h>
-#include <assert.h>
-#include <stdlib.h>
-#include <ucontext.h>
-#include <sys/mman.h>
-
-#define NR_PKEYS 16
-#define PKRU_BITS_PER_PKEY 2
-
-#ifndef DEBUG_LEVEL
-#define DEBUG_LEVEL 0
-#endif
-#define DPRINT_IN_SIGNAL_BUF_SIZE 4096
-extern int dprint_in_signal;
-extern char dprint_in_signal_buffer[DPRINT_IN_SIGNAL_BUF_SIZE];
-static inline void sigsafe_printf(const char *format, ...)
-{
- va_list ap;
-
- va_start(ap, format);
- if (!dprint_in_signal) {
- vprintf(format, ap);
- } else {
- int len = vsnprintf(dprint_in_signal_buffer,
- DPRINT_IN_SIGNAL_BUF_SIZE,
- format, ap);
- /*
- * len is amount that would have been printed,
- * but actual write is truncated at BUF_SIZE.
- */
- if (len > DPRINT_IN_SIGNAL_BUF_SIZE)
- len = DPRINT_IN_SIGNAL_BUF_SIZE;
- write(1, dprint_in_signal_buffer, len);
- }
- va_end(ap);
-}
-#define dprintf_level(level, args...) do { \
- if (level <= DEBUG_LEVEL) \
- sigsafe_printf(args); \
- fflush(NULL); \
-} while (0)
-#define dprintf0(args...) dprintf_level(0, args)
-#define dprintf1(args...) dprintf_level(1, args)
-#define dprintf2(args...) dprintf_level(2, args)
-#define dprintf3(args...) dprintf_level(3, args)
-#define dprintf4(args...) dprintf_level(4, args)
-
-extern unsigned int shadow_pkru;
-static inline unsigned int __rdpkru(void)
-{
- unsigned int eax, edx;
- unsigned int ecx = 0;
- unsigned int pkru;
-
- asm volatile(".byte 0x0f,0x01,0xee\n\t"
- : "=a" (eax), "=d" (edx)
- : "c" (ecx));
- pkru = eax;
- return pkru;
-}
-
-static inline unsigned int _rdpkru(int line)
-{
- unsigned int pkru = __rdpkru();
-
- dprintf4("rdpkru(line=%d) pkru: %x shadow: %x\n",
- line, pkru, shadow_pkru);
- assert(pkru == shadow_pkru);
-
- return pkru;
-}
-
-#define rdpkru() _rdpkru(__LINE__)
-
-static inline void __wrpkru(unsigned int pkru)
-{
- unsigned int eax = pkru;
- unsigned int ecx = 0;
- unsigned int edx = 0;
-
- dprintf4("%s() changing %08x to %08x\n", __func__, __rdpkru(), pkru);
- asm volatile(".byte 0x0f,0x01,0xef\n\t"
- : : "a" (eax), "c" (ecx), "d" (edx));
- assert(pkru == __rdpkru());
-}
-
-static inline void wrpkru(unsigned int pkru)
-{
- dprintf4("%s() changing %08x to %08x\n", __func__, __rdpkru(), pkru);
- /* will do the shadow check for us: */
- rdpkru();
- __wrpkru(pkru);
- shadow_pkru = pkru;
- dprintf4("%s(%08x) pkru: %08x\n", __func__, pkru, __rdpkru());
-}
-
-/*
- * These are technically racy. since something could
- * change PKRU between the read and the write.
- */
-static inline void __pkey_access_allow(int pkey, int do_allow)
-{
- unsigned int pkru = rdpkru();
- int bit = pkey * 2;
-
- if (do_allow)
- pkru &= (1<<bit);
- else
- pkru |= (1<<bit);
-
- dprintf4("pkru now: %08x\n", rdpkru());
- wrpkru(pkru);
-}
-
-static inline void __pkey_write_allow(int pkey, int do_allow_write)
-{
- long pkru = rdpkru();
- int bit = pkey * 2 + 1;
-
- if (do_allow_write)
- pkru &= (1<<bit);
- else
- pkru |= (1<<bit);
-
- wrpkru(pkru);
- dprintf4("pkru now: %08x\n", rdpkru());
-}
-
-#define PROT_PKEY0 0x10 /* protection key value (bit 0) */
-#define PROT_PKEY1 0x20 /* protection key value (bit 1) */
-#define PROT_PKEY2 0x40 /* protection key value (bit 2) */
-#define PROT_PKEY3 0x80 /* protection key value (bit 3) */
-
-#define PAGE_SIZE 4096
-#define MB (1<<20)
-
-static inline void __cpuid(unsigned int *eax, unsigned int *ebx,
- unsigned int *ecx, unsigned int *edx)
-{
- /* ecx is often an input as well as an output. */
- asm volatile(
- "cpuid;"
- : "=a" (*eax),
- "=b" (*ebx),
- "=c" (*ecx),
- "=d" (*edx)
- : "0" (*eax), "2" (*ecx));
-}
-
-/* Intel-defined CPU features, CPUID level 0x00000007:0 (ecx) */
-#define X86_FEATURE_PKU (1<<3) /* Protection Keys for Userspace */
-#define X86_FEATURE_OSPKE (1<<4) /* OS Protection Keys Enable */
-
-static inline int cpu_has_pku(void)
-{
- unsigned int eax;
- unsigned int ebx;
- unsigned int ecx;
- unsigned int edx;
-
- eax = 0x7;
- ecx = 0x0;
- __cpuid(&eax, &ebx, &ecx, &edx);
-
- if (!(ecx & X86_FEATURE_PKU)) {
- dprintf2("cpu does not have PKU\n");
- return 0;
- }
- if (!(ecx & X86_FEATURE_OSPKE)) {
- dprintf2("cpu does not have OSPKE\n");
- return 0;
- }
- return 1;
-}
-
-#define XSTATE_PKRU_BIT (9)
-#define XSTATE_PKRU 0x200
-
-int pkru_xstate_offset(void)
-{
- unsigned int eax;
- unsigned int ebx;
- unsigned int ecx;
- unsigned int edx;
- int xstate_offset;
- int xstate_size;
- unsigned long XSTATE_CPUID = 0xd;
- int leaf;
-
- /* assume that XSTATE_PKRU is set in XCR0 */
- leaf = XSTATE_PKRU_BIT;
- {
- eax = XSTATE_CPUID;
- ecx = leaf;
- __cpuid(&eax, &ebx, &ecx, &edx);
-
- if (leaf == XSTATE_PKRU_BIT) {
- xstate_offset = ebx;
- xstate_size = eax;
- }
- }
-
- if (xstate_size == 0) {
- printf("could not find size/offset of PKRU in xsave state\n");
- return 0;
- }
-
- return xstate_offset;
-}
-
-#endif /* _PKEYS_HELPER_H */
diff --git a/tools/testing/selftests/x86/protection_keys.c b/tools/testing/selftests/x86/protection_keys.c
deleted file mode 100644
index 3237bc0..0000000
--- a/tools/testing/selftests/x86/protection_keys.c
+++ /dev/null
@@ -1,1395 +0,0 @@
-/*
- * Tests x86 Memory Protection Keys (see Documentation/x86/protection-keys.txt)
- *
- * There are examples in here of:
- * * how to set protection keys on memory
- * * how to set/clear bits in PKRU (the rights register)
- * * how to handle SEGV_PKRU signals and extract pkey-relevant
- * information from the siginfo
- *
- * Things to add:
- * make sure KSM and KSM COW breaking works
- * prefault pages in at malloc, or not
- * protect MPX bounds tables with protection keys?
- * make sure VMA splitting/merging is working correctly
- * OOMs can destroy mm->mmap (see exit_mmap()), so make sure it is immune to pkeys
- * look for pkey "leaks" where it is still set on a VMA but "freed" back to the kernel
- * do a plain mprotect() to a mprotect_pkey() area and make sure the pkey sticks
- *
- * Compile like this:
- * gcc -o protection_keys -O2 -g -std=gnu99 -pthread -Wall protection_keys.c -lrt -ldl -lm
- * gcc -m32 -o protection_keys_32 -O2 -g -std=gnu99 -pthread -Wall protection_keys.c -lrt -ldl -lm
- */
-#define _GNU_SOURCE
-#include <errno.h>
-#include <linux/futex.h>
-#include <sys/time.h>
-#include <sys/syscall.h>
-#include <string.h>
-#include <stdio.h>
-#include <stdint.h>
-#include <stdbool.h>
-#include <signal.h>
-#include <assert.h>
-#include <stdlib.h>
-#include <ucontext.h>
-#include <sys/mman.h>
-#include <sys/types.h>
-#include <sys/wait.h>
-#include <sys/stat.h>
-#include <fcntl.h>
-#include <unistd.h>
-#include <sys/ptrace.h>
-#include <setjmp.h>
-
-#include "pkey-helpers.h"
-
-int iteration_nr = 1;
-int test_nr;
-
-unsigned int shadow_pkru;
-
-#define HPAGE_SIZE (1UL<<21)
-#define ARRAY_SIZE(x) (sizeof(x) / sizeof(*(x)))
-#define ALIGN_UP(x, align_to) (((x) + ((align_to)-1)) & ~((align_to)-1))
-#define ALIGN_DOWN(x, align_to) ((x) & ~((align_to)-1))
-#define ALIGN_PTR_UP(p, ptr_align_to) ((typeof(p))ALIGN_UP((unsigned long)(p), ptr_align_to))
-#define ALIGN_PTR_DOWN(p, ptr_align_to) ((typeof(p))ALIGN_DOWN((unsigned long)(p), ptr_align_to))
-#define __stringify_1(x...) #x
-#define __stringify(x...) __stringify_1(x)
-
-#define PTR_ERR_ENOTSUP ((void *)-ENOTSUP)
-
-int dprint_in_signal;
-char dprint_in_signal_buffer[DPRINT_IN_SIGNAL_BUF_SIZE];
-
-extern void abort_hooks(void);
-#define pkey_assert(condition) do { \
- if (!(condition)) { \
- dprintf0("assert() at %s::%d test_nr: %d iteration: %d\n", \
- __FILE__, __LINE__, \
- test_nr, iteration_nr); \
- dprintf0("errno at assert: %d", errno); \
- abort_hooks(); \
- assert(condition); \
- } \
-} while (0)
-#define raw_assert(cond) assert(cond)
-
-void cat_into_file(char *str, char *file)
-{
- int fd = open(file, O_RDWR);
- int ret;
-
- dprintf2("%s(): writing '%s' to '%s'\n", __func__, str, file);
- /*
- * these need to be raw because they are called under
- * pkey_assert()
- */
- raw_assert(fd >= 0);
- ret = write(fd, str, strlen(str));
- if (ret != strlen(str)) {
- perror("write to file failed");
- fprintf(stderr, "filename: '%s' str: '%s'\n", file, str);
- raw_assert(0);
- }
- close(fd);
-}
-
-#if CONTROL_TRACING > 0
-static int warned_tracing;
-int tracing_root_ok(void)
-{
- if (geteuid() != 0) {
- if (!warned_tracing)
- fprintf(stderr, "WARNING: not run as root, "
- "can not do tracing control\n");
- warned_tracing = 1;
- return 0;
- }
- return 1;
-}
-#endif
-
-void tracing_on(void)
-{
-#if CONTROL_TRACING > 0
-#define TRACEDIR "/sys/kernel/debug/tracing"
- char pidstr[32];
-
- if (!tracing_root_ok())
- return;
-
- sprintf(pidstr, "%d", getpid());
- cat_into_file("0", TRACEDIR "/tracing_on");
- cat_into_file("\n", TRACEDIR "/trace");
- if (1) {
- cat_into_file("function_graph", TRACEDIR "/current_tracer");
- cat_into_file("1", TRACEDIR "/options/funcgraph-proc");
- } else {
- cat_into_file("nop", TRACEDIR "/current_tracer");
- }
- cat_into_file(pidstr, TRACEDIR "/set_ftrace_pid");
- cat_into_file("1", TRACEDIR "/tracing_on");
- dprintf1("enabled tracing\n");
-#endif
-}
-
-void tracing_off(void)
-{
-#if CONTROL_TRACING > 0
- if (!tracing_root_ok())
- return;
- cat_into_file("0", "/sys/kernel/debug/tracing/tracing_on");
-#endif
-}
-
-void abort_hooks(void)
-{
- fprintf(stderr, "running %s()...\n", __func__);
- tracing_off();
-#ifdef SLEEP_ON_ABORT
- sleep(SLEEP_ON_ABORT);
-#endif
-}
-
-static inline void __page_o_noops(void)
-{
- /* 8-bytes of instruction * 512 bytes = 1 page */
- asm(".rept 512 ; nopl 0x7eeeeeee(%eax) ; .endr");
-}
-
-/*
- * This attempts to have roughly a page of instructions followed by a few
- * instructions that do a write, and another page of instructions. That
- * way, we are pretty sure that the write is in the second page of
- * instructions and has at least a page of padding behind it.
- *
- * *That* lets us be sure to madvise() away the write instruction, which
- * will then fault, which makes sure that the fault code handles
- * execute-only memory properly.
- */
-__attribute__((__aligned__(PAGE_SIZE)))
-void lots_o_noops_around_write(int *write_to_me)
-{
- dprintf3("running %s()\n", __func__);
- __page_o_noops();
- /* Assume this happens in the second page of instructions: */
- *write_to_me = __LINE__;
- /* pad out by another page: */
- __page_o_noops();
- dprintf3("%s() done\n", __func__);
-}
-
-/* Define some kernel-like types */
-#define u8 uint8_t
-#define u16 uint16_t
-#define u32 uint32_t
-#define u64 uint64_t
-
-#ifdef __i386__
-#define SYS_mprotect_key 380
-#define SYS_pkey_alloc 381
-#define SYS_pkey_free 382
-#define REG_IP_IDX REG_EIP
-#define si_pkey_offset 0x14
-#else
-#define SYS_mprotect_key 329
-#define SYS_pkey_alloc 330
-#define SYS_pkey_free 331
-#define REG_IP_IDX REG_RIP
-#define si_pkey_offset 0x20
-#endif
-
-void dump_mem(void *dumpme, int len_bytes)
-{
- char *c = (void *)dumpme;
- int i;
-
- for (i = 0; i < len_bytes; i += sizeof(u64)) {
- u64 *ptr = (u64 *)(c + i);
- dprintf1("dump[%03d][@%p]: %016jx\n", i, ptr, *ptr);
- }
-}
-
-#define __SI_FAULT (3 << 16)
-#define SEGV_BNDERR (__SI_FAULT|3) /* failed address bound checks */
-#define SEGV_PKUERR (__SI_FAULT|4)
-
-static char *si_code_str(int si_code)
-{
- if (si_code & SEGV_MAPERR)
- return "SEGV_MAPERR";
- if (si_code & SEGV_ACCERR)
- return "SEGV_ACCERR";
- if (si_code & SEGV_BNDERR)
- return "SEGV_BNDERR";
- if (si_code & SEGV_PKUERR)
- return "SEGV_PKUERR";
- return "UNKNOWN";
-}
-
-int pkru_faults;
-int last_si_pkey = -1;
-void signal_handler(int signum, siginfo_t *si, void *vucontext)
-{
- ucontext_t *uctxt = vucontext;
- int trapno;
- unsigned long ip;
- char *fpregs;
- u32 *pkru_ptr;
- u64 si_pkey;
- u32 *si_pkey_ptr;
- int pkru_offset;
- fpregset_t fpregset;
-
- dprint_in_signal = 1;
- dprintf1(">>>>===============SIGSEGV============================\n");
- dprintf1("%s()::%d, pkru: 0x%x shadow: %x\n", __func__, __LINE__,
- __rdpkru(), shadow_pkru);
-
- trapno = uctxt->uc_mcontext.gregs[REG_TRAPNO];
- ip = uctxt->uc_mcontext.gregs[REG_IP_IDX];
- fpregset = uctxt->uc_mcontext.fpregs;
- fpregs = (void *)fpregset;
-
- dprintf2("%s() trapno: %d ip: 0x%lx info->si_code: %s/%d\n", __func__,
- trapno, ip, si_code_str(si->si_code), si->si_code);
-#ifdef __i386__
- /*
- * 32-bit has some extra padding so that userspace can tell whether
- * the XSTATE header is present in addition to the "legacy" FPU
- * state. We just assume that it is here.
- */
- fpregs += 0x70;
-#endif
- pkru_offset = pkru_xstate_offset();
- pkru_ptr = (void *)(&fpregs[pkru_offset]);
-
- dprintf1("siginfo: %p\n", si);
- dprintf1(" fpregs: %p\n", fpregs);
- /*
- * If we got a PKRU fault, we *HAVE* to have at least one bit set in
- * here.
- */
- dprintf1("pkru_xstate_offset: %d\n", pkru_xstate_offset());
- if (DEBUG_LEVEL > 4)
- dump_mem(pkru_ptr - 128, 256);
- pkey_assert(*pkru_ptr);
-
- si_pkey_ptr = (u32 *)(((u8 *)si) + si_pkey_offset);
- dprintf1("si_pkey_ptr: %p\n", si_pkey_ptr);
- dump_mem(si_pkey_ptr - 8, 24);
- si_pkey = *si_pkey_ptr;
- pkey_assert(si_pkey < NR_PKEYS);
- last_si_pkey = si_pkey;
-
- if ((si->si_code == SEGV_MAPERR) ||
- (si->si_code == SEGV_ACCERR) ||
- (si->si_code == SEGV_BNDERR)) {
- printf("non-PK si_code, exiting...\n");
- exit(4);
- }
-
- dprintf1("signal pkru from xsave: %08x\n", *pkru_ptr);
- /* need __rdpkru() version so we do not do shadow_pkru checking */
- dprintf1("signal pkru from pkru: %08x\n", __rdpkru());
- dprintf1("si_pkey from siginfo: %jx\n", si_pkey);
- *(u64 *)pkru_ptr = 0x00000000;
- dprintf1("WARNING: set PRKU=0 to allow faulting instruction to continue\n");
- pkru_faults++;
- dprintf1("<<<<==================================================\n");
- return;
- if (trapno == 14) {
- fprintf(stderr,
- "ERROR: In signal handler, page fault, trapno = %d, ip = %016lx\n",
- trapno, ip);
- fprintf(stderr, "si_addr %p\n", si->si_addr);
- fprintf(stderr, "REG_ERR: %lx\n",
- (unsigned long)uctxt->uc_mcontext.gregs[REG_ERR]);
- exit(1);
- } else {
- fprintf(stderr, "unexpected trap %d! at 0x%lx\n", trapno, ip);
- fprintf(stderr, "si_addr %p\n", si->si_addr);
- fprintf(stderr, "REG_ERR: %lx\n",
- (unsigned long)uctxt->uc_mcontext.gregs[REG_ERR]);
- exit(2);
- }
- dprint_in_signal = 0;
-}
-
-int wait_all_children(void)
-{
- int status;
- return waitpid(-1, &status, 0);
-}
-
-void sig_chld(int x)
-{
- dprint_in_signal = 1;
- dprintf2("[%d] SIGCHLD: %d\n", getpid(), x);
- dprint_in_signal = 0;
-}
-
-void setup_sigsegv_handler(void)
-{
- int r, rs;
- struct sigaction newact;
- struct sigaction oldact;
-
- /* #PF is mapped to sigsegv */
- int signum = SIGSEGV;
-
- newact.sa_handler = 0;
- newact.sa_sigaction = signal_handler;
-
- /*sigset_t - signals to block while in the handler */
- /* get the old signal mask. */
- rs = sigprocmask(SIG_SETMASK, 0, &newact.sa_mask);
- pkey_assert(rs == 0);
-
- /* call sa_sigaction, not sa_handler*/
- newact.sa_flags = SA_SIGINFO;
-
- newact.sa_restorer = 0; /* void(*)(), obsolete */
- r = sigaction(signum, &newact, &oldact);
- r = sigaction(SIGALRM, &newact, &oldact);
- pkey_assert(r == 0);
-}
-
-void setup_handlers(void)
-{
- signal(SIGCHLD, &sig_chld);
- setup_sigsegv_handler();
-}
-
-pid_t fork_lazy_child(void)
-{
- pid_t forkret;
-
- forkret = fork();
- pkey_assert(forkret >= 0);
- dprintf3("[%d] fork() ret: %d\n", getpid(), forkret);
-
- if (!forkret) {
- /* in the child */
- while (1) {
- dprintf1("child sleeping...\n");
- sleep(30);
- }
- }
- return forkret;
-}
-
-void davecmp(void *_a, void *_b, int len)
-{
- int i;
- unsigned long *a = _a;
- unsigned long *b = _b;
-
- for (i = 0; i < len / sizeof(*a); i++) {
- if (a[i] == b[i])
- continue;
-
- dprintf3("[%3d]: a: %016lx b: %016lx\n", i, a[i], b[i]);
- }
-}
-
-void dumpit(char *f)
-{
- int fd = open(f, O_RDONLY);
- char buf[100];
- int nr_read;
-
- dprintf2("maps fd: %d\n", fd);
- do {
- nr_read = read(fd, &buf[0], sizeof(buf));
- write(1, buf, nr_read);
- } while (nr_read > 0);
- close(fd);
-}
-
-#define PKEY_DISABLE_ACCESS 0x1
-#define PKEY_DISABLE_WRITE 0x2
-
-u32 pkey_get(int pkey, unsigned long flags)
-{
- u32 mask = (PKEY_DISABLE_ACCESS|PKEY_DISABLE_WRITE);
- u32 pkru = __rdpkru();
- u32 shifted_pkru;
- u32 masked_pkru;
-
- dprintf1("%s(pkey=%d, flags=%lx) = %x / %d\n",
- __func__, pkey, flags, 0, 0);
- dprintf2("%s() raw pkru: %x\n", __func__, pkru);
-
- shifted_pkru = (pkru >> (pkey * PKRU_BITS_PER_PKEY));
- dprintf2("%s() shifted_pkru: %x\n", __func__, shifted_pkru);
- masked_pkru = shifted_pkru & mask;
- dprintf2("%s() masked pkru: %x\n", __func__, masked_pkru);
- /*
- * shift down the relevant bits to the lowest two, then
- * mask off all the other high bits.
- */
- return masked_pkru;
-}
-
-int pkey_set(int pkey, unsigned long rights, unsigned long flags)
-{
- u32 mask = (PKEY_DISABLE_ACCESS|PKEY_DISABLE_WRITE);
- u32 old_pkru = __rdpkru();
- u32 new_pkru;
-
- /* make sure that 'rights' only contains the bits we expect: */
- assert(!(rights & ~mask));
-
- /* copy old pkru */
- new_pkru = old_pkru;
- /* mask out bits from pkey in old value: */
- new_pkru &= ~(mask << (pkey * PKRU_BITS_PER_PKEY));
- /* OR in new bits for pkey: */
- new_pkru |= (rights << (pkey * PKRU_BITS_PER_PKEY));
-
- __wrpkru(new_pkru);
-
- dprintf3("%s(pkey=%d, rights=%lx, flags=%lx) = %x pkru now: %x old_pkru: %x\n",
- __func__, pkey, rights, flags, 0, __rdpkru(), old_pkru);
- return 0;
-}
-
-void pkey_disable_set(int pkey, int flags)
-{
- unsigned long syscall_flags = 0;
- int ret;
- int pkey_rights;
- u32 orig_pkru = rdpkru();
-
- dprintf1("START->%s(%d, 0x%x)\n", __func__,
- pkey, flags);
- pkey_assert(flags & (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE));
-
- pkey_rights = pkey_get(pkey, syscall_flags);
-
- dprintf1("%s(%d) pkey_get(%d): %x\n", __func__,
- pkey, pkey, pkey_rights);
- pkey_assert(pkey_rights >= 0);
-
- pkey_rights |= flags;
-
- ret = pkey_set(pkey, pkey_rights, syscall_flags);
- assert(!ret);
- /*pkru and flags have the same format */
- shadow_pkru |= flags << (pkey * 2);
- dprintf1("%s(%d) shadow: 0x%x\n", __func__, pkey, shadow_pkru);
-
- pkey_assert(ret >= 0);
-
- pkey_rights = pkey_get(pkey, syscall_flags);
- dprintf1("%s(%d) pkey_get(%d): %x\n", __func__,
- pkey, pkey, pkey_rights);
-
- dprintf1("%s(%d) pkru: 0x%x\n", __func__, pkey, rdpkru());
- if (flags)
- pkey_assert(rdpkru() > orig_pkru);
- dprintf1("END<---%s(%d, 0x%x)\n", __func__,
- pkey, flags);
-}
-
-void pkey_disable_clear(int pkey, int flags)
-{
- unsigned long syscall_flags = 0;
- int ret;
- int pkey_rights = pkey_get(pkey, syscall_flags);
- u32 orig_pkru = rdpkru();
-
- pkey_assert(flags & (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE));
-
- dprintf1("%s(%d) pkey_get(%d): %x\n", __func__,
- pkey, pkey, pkey_rights);
- pkey_assert(pkey_rights >= 0);
-
- pkey_rights |= flags;
-
- ret = pkey_set(pkey, pkey_rights, 0);
- /* pkru and flags have the same format */
- shadow_pkru &= ~(flags << (pkey * 2));
- pkey_assert(ret >= 0);
-
- pkey_rights = pkey_get(pkey, syscall_flags);
- dprintf1("%s(%d) pkey_get(%d): %x\n", __func__,
- pkey, pkey, pkey_rights);
-
- dprintf1("%s(%d) pkru: 0x%x\n", __func__, pkey, rdpkru());
- if (flags)
- assert(rdpkru() > orig_pkru);
-}
-
-void pkey_write_allow(int pkey)
-{
- pkey_disable_clear(pkey, PKEY_DISABLE_WRITE);
-}
-void pkey_write_deny(int pkey)
-{
- pkey_disable_set(pkey, PKEY_DISABLE_WRITE);
-}
-void pkey_access_allow(int pkey)
-{
- pkey_disable_clear(pkey, PKEY_DISABLE_ACCESS);
-}
-void pkey_access_deny(int pkey)
-{
- pkey_disable_set(pkey, PKEY_DISABLE_ACCESS);
-}
-
-int sys_mprotect_pkey(void *ptr, size_t size, unsigned long orig_prot,
- unsigned long pkey)
-{
- int sret;
-
- dprintf2("%s(0x%p, %zx, prot=%lx, pkey=%lx)\n", __func__,
- ptr, size, orig_prot, pkey);
-
- errno = 0;
- sret = syscall(SYS_mprotect_key, ptr, size, orig_prot, pkey);
- if (errno) {
- dprintf2("SYS_mprotect_key sret: %d\n", sret);
- dprintf2("SYS_mprotect_key prot: 0x%lx\n", orig_prot);
- dprintf2("SYS_mprotect_key failed, errno: %d\n", errno);
- if (DEBUG_LEVEL >= 2)
- perror("SYS_mprotect_pkey");
- }
- return sret;
-}
-
-int sys_pkey_alloc(unsigned long flags, unsigned long init_val)
-{
- int ret = syscall(SYS_pkey_alloc, flags, init_val);
- dprintf1("%s(flags=%lx, init_val=%lx) syscall ret: %d errno: %d\n",
- __func__, flags, init_val, ret, errno);
- return ret;
-}
-
-int alloc_pkey(void)
-{
- int ret;
- unsigned long init_val = 0x0;
-
- dprintf1("alloc_pkey()::%d, pkru: 0x%x shadow: %x\n",
- __LINE__, __rdpkru(), shadow_pkru);
- ret = sys_pkey_alloc(0, init_val);
- /*
- * pkey_alloc() sets PKRU, so we need to reflect it in
- * shadow_pkru:
- */
- dprintf4("alloc_pkey()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n",
- __LINE__, ret, __rdpkru(), shadow_pkru);
- if (ret) {
- /* clear both the bits: */
- shadow_pkru &= ~(0x3 << (ret * 2));
- dprintf4("alloc_pkey()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n",
- __LINE__, ret, __rdpkru(), shadow_pkru);
- /*
- * move the new state in from init_val
- * (remember, we cheated and init_val == pkru format)
- */
- shadow_pkru |= (init_val << (ret * 2));
- }
- dprintf4("alloc_pkey()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n",
- __LINE__, ret, __rdpkru(), shadow_pkru);
- dprintf1("alloc_pkey()::%d errno: %d\n", __LINE__, errno);
- /* for shadow checking: */
- rdpkru();
- dprintf4("alloc_pkey()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n",
- __LINE__, ret, __rdpkru(), shadow_pkru);
- return ret;
-}
-
-int sys_pkey_free(unsigned long pkey)
-{
- int ret = syscall(SYS_pkey_free, pkey);
- dprintf1("%s(pkey=%ld) syscall ret: %d\n", __func__, pkey, ret);
- return ret;
-}
-
-/*
- * I had a bug where pkey bits could be set by mprotect() but
- * not cleared. This ensures we get lots of random bit sets
- * and clears on the vma and pte pkey bits.
- */
-int alloc_random_pkey(void)
-{
- int max_nr_pkey_allocs;
- int ret;
- int i;
- int alloced_pkeys[NR_PKEYS];
- int nr_alloced = 0;
- int random_index;
- memset(alloced_pkeys, 0, sizeof(alloced_pkeys));
-
- /* allocate every possible key and make a note of which ones we got */
- max_nr_pkey_allocs = NR_PKEYS;
- max_nr_pkey_allocs = 1;
- for (i = 0; i < max_nr_pkey_allocs; i++) {
- int new_pkey = alloc_pkey();
- if (new_pkey < 0)
- break;
- alloced_pkeys[nr_alloced++] = new_pkey;
- }
-
- pkey_assert(nr_alloced > 0);
- /* select a random one out of the allocated ones */
- random_index = rand() % nr_alloced;
- ret = alloced_pkeys[random_index];
- /* now zero it out so we don't free it next */
- alloced_pkeys[random_index] = 0;
-
- /* go through the allocated ones that we did not want and free them */
- for (i = 0; i < nr_alloced; i++) {
- int free_ret;
- if (!alloced_pkeys[i])
- continue;
- free_ret = sys_pkey_free(alloced_pkeys[i]);
- pkey_assert(!free_ret);
- }
- dprintf1("%s()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n", __func__,
- __LINE__, ret, __rdpkru(), shadow_pkru);
- return ret;
-}
-
-int mprotect_pkey(void *ptr, size_t size, unsigned long orig_prot,
- unsigned long pkey)
-{
- int nr_iterations = random() % 100;
- int ret;
-
- while (0) {
- int rpkey = alloc_random_pkey();
- ret = sys_mprotect_pkey(ptr, size, orig_prot, pkey);
- dprintf1("sys_mprotect_pkey(%p, %zx, prot=0x%lx, pkey=%ld) ret: %d\n",
- ptr, size, orig_prot, pkey, ret);
- if (nr_iterations-- < 0)
- break;
-
- dprintf1("%s()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n", __func__,
- __LINE__, ret, __rdpkru(), shadow_pkru);
- sys_pkey_free(rpkey);
- dprintf1("%s()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n", __func__,
- __LINE__, ret, __rdpkru(), shadow_pkru);
- }
- pkey_assert(pkey < NR_PKEYS);
-
- ret = sys_mprotect_pkey(ptr, size, orig_prot, pkey);
- dprintf1("mprotect_pkey(%p, %zx, prot=0x%lx, pkey=%ld) ret: %d\n",
- ptr, size, orig_prot, pkey, ret);
- pkey_assert(!ret);
- dprintf1("%s()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n", __func__,
- __LINE__, ret, __rdpkru(), shadow_pkru);
- return ret;
-}
-
-struct pkey_malloc_record {
- void *ptr;
- long size;
-};
-struct pkey_malloc_record *pkey_malloc_records;
-long nr_pkey_malloc_records;
-void record_pkey_malloc(void *ptr, long size)
-{
- long i;
- struct pkey_malloc_record *rec = NULL;
-
- for (i = 0; i < nr_pkey_malloc_records; i++) {
- rec = &pkey_malloc_records[i];
- /* find a free record */
- if (rec)
- break;
- }
- if (!rec) {
- /* every record is full */
- size_t old_nr_records = nr_pkey_malloc_records;
- size_t new_nr_records = (nr_pkey_malloc_records * 2 + 1);
- size_t new_size = new_nr_records * sizeof(struct pkey_malloc_record);
- dprintf2("new_nr_records: %zd\n", new_nr_records);
- dprintf2("new_size: %zd\n", new_size);
- pkey_malloc_records = realloc(pkey_malloc_records, new_size);
- pkey_assert(pkey_malloc_records != NULL);
- rec = &pkey_malloc_records[nr_pkey_malloc_records];
- /*
- * realloc() does not initialize memory, so zero it from
- * the first new record all the way to the end.
- */
- for (i = 0; i < new_nr_records - old_nr_records; i++)
- memset(rec + i, 0, sizeof(*rec));
- }
- dprintf3("filling malloc record[%d/%p]: {%p, %ld}\n",
- (int)(rec - pkey_malloc_records), rec, ptr, size);
- rec->ptr = ptr;
- rec->size = size;
- nr_pkey_malloc_records++;
-}
-
-void free_pkey_malloc(void *ptr)
-{
- long i;
- int ret;
- dprintf3("%s(%p)\n", __func__, ptr);
- for (i = 0; i < nr_pkey_malloc_records; i++) {
- struct pkey_malloc_record *rec = &pkey_malloc_records[i];
- dprintf4("looking for ptr %p at record[%ld/%p]: {%p, %ld}\n",
- ptr, i, rec, rec->ptr, rec->size);
- if ((ptr < rec->ptr) ||
- (ptr >= rec->ptr + rec->size))
- continue;
-
- dprintf3("found ptr %p at record[%ld/%p]: {%p, %ld}\n",
- ptr, i, rec, rec->ptr, rec->size);
- nr_pkey_malloc_records--;
- ret = munmap(rec->ptr, rec->size);
- dprintf3("munmap ret: %d\n", ret);
- pkey_assert(!ret);
- dprintf3("clearing rec->ptr, rec: %p\n", rec);
- rec->ptr = NULL;
- dprintf3("done clearing rec->ptr, rec: %p\n", rec);
- return;
- }
- pkey_assert(false);
-}
-
-
-void *malloc_pkey_with_mprotect(long size, int prot, u16 pkey)
-{
- void *ptr;
- int ret;
-
- rdpkru();
- dprintf1("doing %s(size=%ld, prot=0x%x, pkey=%d)\n", __func__,
- size, prot, pkey);
- pkey_assert(pkey < NR_PKEYS);
- ptr = mmap(NULL, size, prot, MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
- pkey_assert(ptr != (void *)-1);
- ret = mprotect_pkey((void *)ptr, PAGE_SIZE, prot, pkey);
- pkey_assert(!ret);
- record_pkey_malloc(ptr, size);
- rdpkru();
-
- dprintf1("%s() for pkey %d @ %p\n", __func__, pkey, ptr);
- return ptr;
-}
-
-void *malloc_pkey_anon_huge(long size, int prot, u16 pkey)
-{
- int ret;
- void *ptr;
-
- dprintf1("doing %s(size=%ld, prot=0x%x, pkey=%d)\n", __func__,
- size, prot, pkey);
- /*
- * Guarantee we can fit at least one huge page in the resulting
- * allocation by allocating space for 2:
- */
- size = ALIGN_UP(size, HPAGE_SIZE * 2);
- ptr = mmap(NULL, size, PROT_NONE, MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
- pkey_assert(ptr != (void *)-1);
- record_pkey_malloc(ptr, size);
- mprotect_pkey(ptr, size, prot, pkey);
-
- dprintf1("unaligned ptr: %p\n", ptr);
- ptr = ALIGN_PTR_UP(ptr, HPAGE_SIZE);
- dprintf1(" aligned ptr: %p\n", ptr);
- ret = madvise(ptr, HPAGE_SIZE, MADV_HUGEPAGE);
- dprintf1("MADV_HUGEPAGE ret: %d\n", ret);
- ret = madvise(ptr, HPAGE_SIZE, MADV_WILLNEED);
- dprintf1("MADV_WILLNEED ret: %d\n", ret);
- memset(ptr, 0, HPAGE_SIZE);
-
- dprintf1("mmap()'d thp for pkey %d @ %p\n", pkey, ptr);
- return ptr;
-}
-
-int hugetlb_setup_ok;
-#define GET_NR_HUGE_PAGES 10
-void setup_hugetlbfs(void)
-{
- int err;
- int fd;
- char buf[] = "123";
-
- if (geteuid() != 0) {
- fprintf(stderr, "WARNING: not run as root, can not do hugetlb test\n");
- return;
- }
-
- cat_into_file(__stringify(GET_NR_HUGE_PAGES), "/proc/sys/vm/nr_hugepages");
-
- /*
- * Now go make sure that we got the pages and that they
- * are 2M pages. Someone might have made 1G the default.
- */
- fd = open("/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages", O_RDONLY);
- if (fd < 0) {
- perror("opening sysfs 2M hugetlb config");
- return;
- }
-
- /* -1 to guarantee leaving the trailing \0 */
- err = read(fd, buf, sizeof(buf)-1);
- close(fd);
- if (err <= 0) {
- perror("reading sysfs 2M hugetlb config");
- return;
- }
-
- if (atoi(buf) != GET_NR_HUGE_PAGES) {
- fprintf(stderr, "could not confirm 2M pages, got: '%s' expected %d\n",
- buf, GET_NR_HUGE_PAGES);
- return;
- }
-
- hugetlb_setup_ok = 1;
-}
-
-void *malloc_pkey_hugetlb(long size, int prot, u16 pkey)
-{
- void *ptr;
- int flags = MAP_ANONYMOUS|MAP_PRIVATE|MAP_HUGETLB;
-
- if (!hugetlb_setup_ok)
- return PTR_ERR_ENOTSUP;
-
- dprintf1("doing %s(%ld, %x, %x)\n", __func__, size, prot, pkey);
- size = ALIGN_UP(size, HPAGE_SIZE * 2);
- pkey_assert(pkey < NR_PKEYS);
- ptr = mmap(NULL, size, PROT_NONE, flags, -1, 0);
- pkey_assert(ptr != (void *)-1);
- mprotect_pkey(ptr, size, prot, pkey);
-
- record_pkey_malloc(ptr, size);
-
- dprintf1("mmap()'d hugetlbfs for pkey %d @ %p\n", pkey, ptr);
- return ptr;
-}
-
-void *malloc_pkey_mmap_dax(long size, int prot, u16 pkey)
-{
- void *ptr;
- int fd;
-
- dprintf1("doing %s(size=%ld, prot=0x%x, pkey=%d)\n", __func__,
- size, prot, pkey);
- pkey_assert(pkey < NR_PKEYS);
- fd = open("/dax/foo", O_RDWR);
- pkey_assert(fd >= 0);
-
- ptr = mmap(0, size, prot, MAP_SHARED, fd, 0);
- pkey_assert(ptr != (void *)-1);
-
- mprotect_pkey(ptr, size, prot, pkey);
-
- record_pkey_malloc(ptr, size);
-
- dprintf1("mmap()'d for pkey %d @ %p\n", pkey, ptr);
- close(fd);
- return ptr;
-}
-
-void *(*pkey_malloc[])(long size, int prot, u16 pkey) = {
-
- malloc_pkey_with_mprotect,
- malloc_pkey_anon_huge,
- malloc_pkey_hugetlb
-/* can not do direct with the pkey_mprotect() API:
- malloc_pkey_mmap_direct,
- malloc_pkey_mmap_dax,
-*/
-};
-
-void *malloc_pkey(long size, int prot, u16 pkey)
-{
- void *ret;
- static int malloc_type;
- int nr_malloc_types = ARRAY_SIZE(pkey_malloc);
-
- pkey_assert(pkey < NR_PKEYS);
-
- while (1) {
- pkey_assert(malloc_type < nr_malloc_types);
-
- ret = pkey_malloc[malloc_type](size, prot, pkey);
- pkey_assert(ret != (void *)-1);
-
- malloc_type++;
- if (malloc_type >= nr_malloc_types)
- malloc_type = (random()%nr_malloc_types);
-
- /* try again if the malloc_type we tried is unsupported */
- if (ret == PTR_ERR_ENOTSUP)
- continue;
-
- break;
- }
-
- dprintf3("%s(%ld, prot=%x, pkey=%x) returning: %p\n", __func__,
- size, prot, pkey, ret);
- return ret;
-}
-
-int last_pkru_faults;
-void expected_pk_fault(int pkey)
-{
- dprintf2("%s(): last_pkru_faults: %d pkru_faults: %d\n",
- __func__, last_pkru_faults, pkru_faults);
- dprintf2("%s(%d): last_si_pkey: %d\n", __func__, pkey, last_si_pkey);
- pkey_assert(last_pkru_faults + 1 == pkru_faults);
- pkey_assert(last_si_pkey == pkey);
- /*
- * The signal handler shold have cleared out PKRU to let the
- * test program continue. We now have to restore it.
- */
- if (__rdpkru() != 0)
- pkey_assert(0);
-
- __wrpkru(shadow_pkru);
- dprintf1("%s() set PKRU=%x to restore state after signal nuked it\n",
- __func__, shadow_pkru);
- last_pkru_faults = pkru_faults;
- last_si_pkey = -1;
-}
-
-void do_not_expect_pk_fault(void)
-{
- pkey_assert(last_pkru_faults == pkru_faults);
-}
-
-int test_fds[10] = { -1 };
-int nr_test_fds;
-void __save_test_fd(int fd)
-{
- pkey_assert(fd >= 0);
- pkey_assert(nr_test_fds < ARRAY_SIZE(test_fds));
- test_fds[nr_test_fds] = fd;
- nr_test_fds++;
-}
-
-int get_test_read_fd(void)
-{
- int test_fd = open("/etc/passwd", O_RDONLY);
- __save_test_fd(test_fd);
- return test_fd;
-}
-
-void close_test_fds(void)
-{
- int i;
-
- for (i = 0; i < nr_test_fds; i++) {
- if (test_fds[i] < 0)
- continue;
- close(test_fds[i]);
- test_fds[i] = -1;
- }
- nr_test_fds = 0;
-}
-
-#define barrier() __asm__ __volatile__("": : :"memory")
-__attribute__((noinline)) int read_ptr(int *ptr)
-{
- /*
- * Keep GCC from optimizing this away somehow
- */
- barrier();
- return *ptr;
-}
-
-void test_read_of_write_disabled_region(int *ptr, u16 pkey)
-{
- int ptr_contents;
-
- dprintf1("disabling write access to PKEY[1], doing read\n");
- pkey_write_deny(pkey);
- ptr_contents = read_ptr(ptr);
- dprintf1("*ptr: %d\n", ptr_contents);
- dprintf1("\n");
-}
-void test_read_of_access_disabled_region(int *ptr, u16 pkey)
-{
- int ptr_contents;
-
- dprintf1("disabling access to PKEY[%02d], doing read @ %p\n", pkey, ptr);
- rdpkru();
- pkey_access_deny(pkey);
- ptr_contents = read_ptr(ptr);
- dprintf1("*ptr: %d\n", ptr_contents);
- expected_pk_fault(pkey);
-}
-void test_write_of_write_disabled_region(int *ptr, u16 pkey)
-{
- dprintf1("disabling write access to PKEY[%02d], doing write\n", pkey);
- pkey_write_deny(pkey);
- *ptr = __LINE__;
- expected_pk_fault(pkey);
-}
-void test_write_of_access_disabled_region(int *ptr, u16 pkey)
-{
- dprintf1("disabling access to PKEY[%02d], doing write\n", pkey);
- pkey_access_deny(pkey);
- *ptr = __LINE__;
- expected_pk_fault(pkey);
-}
-void test_kernel_write_of_access_disabled_region(int *ptr, u16 pkey)
-{
- int ret;
- int test_fd = get_test_read_fd();
-
- dprintf1("disabling access to PKEY[%02d], "
- "having kernel read() to buffer\n", pkey);
- pkey_access_deny(pkey);
- ret = read(test_fd, ptr, 1);
- dprintf1("read ret: %d\n", ret);
- pkey_assert(ret);
-}
-void test_kernel_write_of_write_disabled_region(int *ptr, u16 pkey)
-{
- int ret;
- int test_fd = get_test_read_fd();
-
- pkey_write_deny(pkey);
- ret = read(test_fd, ptr, 100);
- dprintf1("read ret: %d\n", ret);
- if (ret < 0 && (DEBUG_LEVEL > 0))
- perror("verbose read result (OK for this to be bad)");
- pkey_assert(ret);
-}
-
-void test_kernel_gup_of_access_disabled_region(int *ptr, u16 pkey)
-{
- int pipe_ret, vmsplice_ret;
- struct iovec iov;
- int pipe_fds[2];
-
- pipe_ret = pipe(pipe_fds);
-
- pkey_assert(pipe_ret == 0);
- dprintf1("disabling access to PKEY[%02d], "
- "having kernel vmsplice from buffer\n", pkey);
- pkey_access_deny(pkey);
- iov.iov_base = ptr;
- iov.iov_len = PAGE_SIZE;
- vmsplice_ret = vmsplice(pipe_fds[1], &iov, 1, SPLICE_F_GIFT);
- dprintf1("vmsplice() ret: %d\n", vmsplice_ret);
- pkey_assert(vmsplice_ret == -1);
-
- close(pipe_fds[0]);
- close(pipe_fds[1]);
-}
-
-void test_kernel_gup_write_to_write_disabled_region(int *ptr, u16 pkey)
-{
- int ignored = 0xdada;
- int futex_ret;
- int some_int = __LINE__;
-
- dprintf1("disabling write to PKEY[%02d], "
- "doing futex gunk in buffer\n", pkey);
- *ptr = some_int;
- pkey_write_deny(pkey);
- futex_ret = syscall(SYS_futex, ptr, FUTEX_WAIT, some_int-1, NULL,
- &ignored, ignored);
- if (DEBUG_LEVEL > 0)
- perror("futex");
- dprintf1("futex() ret: %d\n", futex_ret);
-}
-
-/* Assumes that all pkeys other than 'pkey' are unallocated */
-void test_pkey_syscalls_on_non_allocated_pkey(int *ptr, u16 pkey)
-{
- int err;
- int i;
-
- /* Note: 0 is the default pkey, so don't mess with it */
- for (i = 1; i < NR_PKEYS; i++) {
- if (pkey == i)
- continue;
-
- dprintf1("trying get/set/free to non-allocated pkey: %2d\n", i);
- err = sys_pkey_free(i);
- pkey_assert(err);
-
- err = sys_pkey_free(i);
- pkey_assert(err);
-
- err = sys_mprotect_pkey(ptr, PAGE_SIZE, PROT_READ, i);
- pkey_assert(err);
- }
-}
-
-/* Assumes that all pkeys other than 'pkey' are unallocated */
-void test_pkey_syscalls_bad_args(int *ptr, u16 pkey)
-{
- int err;
- int bad_pkey = NR_PKEYS+99;
-
- /* pass a known-invalid pkey in: */
- err = sys_mprotect_pkey(ptr, PAGE_SIZE, PROT_READ, bad_pkey);
- pkey_assert(err);
-}
-
-/* Assumes that all pkeys other than 'pkey' are unallocated */
-void test_pkey_alloc_exhaust(int *ptr, u16 pkey)
-{
- int err;
- int allocated_pkeys[NR_PKEYS] = {0};
- int nr_allocated_pkeys = 0;
- int i;
-
- for (i = 0; i < NR_PKEYS*2; i++) {
- int new_pkey;
- dprintf1("%s() alloc loop: %d\n", __func__, i);
- new_pkey = alloc_pkey();
- dprintf4("%s()::%d, err: %d pkru: 0x%x shadow: 0x%x\n", __func__,
- __LINE__, err, __rdpkru(), shadow_pkru);
- rdpkru(); /* for shadow checking */
- dprintf2("%s() errno: %d ENOSPC: %d\n", __func__, errno, ENOSPC);
- if ((new_pkey == -1) && (errno == ENOSPC)) {
- dprintf2("%s() failed to allocate pkey after %d tries\n",
- __func__, nr_allocated_pkeys);
- break;
- }
- pkey_assert(nr_allocated_pkeys < NR_PKEYS);
- allocated_pkeys[nr_allocated_pkeys++] = new_pkey;
- }
-
- dprintf3("%s()::%d\n", __func__, __LINE__);
-
- /*
- * ensure it did not reach the end of the loop without
- * failure:
- */
- pkey_assert(i < NR_PKEYS*2);
-
- /*
- * There are 16 pkeys supported in hardware. One is taken
- * up for the default (0) and another can be taken up by
- * an execute-only mapping. Ensure that we can allocate
- * at least 14 (16-2).
- */
- pkey_assert(i >= NR_PKEYS-2);
-
- for (i = 0; i < nr_allocated_pkeys; i++) {
- err = sys_pkey_free(allocated_pkeys[i]);
- pkey_assert(!err);
- rdpkru(); /* for shadow checking */
- }
-}
-
-void test_ptrace_of_child(int *ptr, u16 pkey)
-{
- __attribute__((__unused__)) int peek_result;
- pid_t child_pid;
- void *ignored = 0;
- long ret;
- int status;
- /*
- * This is the "control" for our little expermient. Make sure
- * we can always access it when ptracing.
- */
- int *plain_ptr_unaligned = malloc(HPAGE_SIZE);
- int *plain_ptr = ALIGN_PTR_UP(plain_ptr_unaligned, PAGE_SIZE);
-
- /*
- * Fork a child which is an exact copy of this process, of course.
- * That means we can do all of our tests via ptrace() and then plain
- * memory access and ensure they work differently.
- */
- child_pid = fork_lazy_child();
- dprintf1("[%d] child pid: %d\n", getpid(), child_pid);
-
- ret = ptrace(PTRACE_ATTACH, child_pid, ignored, ignored);
- if (ret)
- perror("attach");
- dprintf1("[%d] attach ret: %ld %d\n", getpid(), ret, __LINE__);
- pkey_assert(ret != -1);
- ret = waitpid(child_pid, &status, WUNTRACED);
- if ((ret != child_pid) || !(WIFSTOPPED(status))) {
- fprintf(stderr, "weird waitpid result %ld stat %x\n",
- ret, status);
- pkey_assert(0);
- }
- dprintf2("waitpid ret: %ld\n", ret);
- dprintf2("waitpid status: %d\n", status);
-
- pkey_access_deny(pkey);
- pkey_write_deny(pkey);
-
- /* Write access, untested for now:
- ret = ptrace(PTRACE_POKEDATA, child_pid, peek_at, data);
- pkey_assert(ret != -1);
- dprintf1("poke at %p: %ld\n", peek_at, ret);
- */
-
- /*
- * Try to access the pkey-protected "ptr" via ptrace:
- */
- ret = ptrace(PTRACE_PEEKDATA, child_pid, ptr, ignored);
- /* expect it to work, without an error: */
- pkey_assert(ret != -1);
- /* Now access from the current task, and expect an exception: */
- peek_result = read_ptr(ptr);
- expected_pk_fault(pkey);
-
- /*
- * Try to access the NON-pkey-protected "plain_ptr" via ptrace:
- */
- ret = ptrace(PTRACE_PEEKDATA, child_pid, plain_ptr, ignored);
- /* expect it to work, without an error: */
- pkey_assert(ret != -1);
- /* Now access from the current task, and expect NO exception: */
- peek_result = read_ptr(plain_ptr);
- do_not_expect_pk_fault();
-
- ret = ptrace(PTRACE_DETACH, child_pid, ignored, 0);
- pkey_assert(ret != -1);
-
- ret = kill(child_pid, SIGKILL);
- pkey_assert(ret != -1);
-
- wait(&status);
-
- free(plain_ptr_unaligned);
-}
-
-void test_executing_on_unreadable_memory(int *ptr, u16 pkey)
-{
- void *p1;
- int scratch;
- int ptr_contents;
- int ret;
-
- p1 = ALIGN_PTR_UP(&lots_o_noops_around_write, PAGE_SIZE);
- dprintf3("&lots_o_noops: %p\n", &lots_o_noops_around_write);
- /* lots_o_noops_around_write should be page-aligned already */
- assert(p1 == &lots_o_noops_around_write);
-
- /* Point 'p1' at the *second* page of the function: */
- p1 += PAGE_SIZE;
-
- madvise(p1, PAGE_SIZE, MADV_DONTNEED);
- lots_o_noops_around_write(&scratch);
- ptr_contents = read_ptr(p1);
- dprintf2("ptr (%p) contents@%d: %x\n", p1, __LINE__, ptr_contents);
-
- ret = mprotect_pkey(p1, PAGE_SIZE, PROT_EXEC, (u64)pkey);
- pkey_assert(!ret);
- pkey_access_deny(pkey);
-
- dprintf2("pkru: %x\n", rdpkru());
-
- /*
- * Make sure this is an *instruction* fault
- */
- madvise(p1, PAGE_SIZE, MADV_DONTNEED);
- lots_o_noops_around_write(&scratch);
- do_not_expect_pk_fault();
- ptr_contents = read_ptr(p1);
- dprintf2("ptr (%p) contents@%d: %x\n", p1, __LINE__, ptr_contents);
- expected_pk_fault(pkey);
-}
-
-void test_mprotect_pkey_on_unsupported_cpu(int *ptr, u16 pkey)
-{
- int size = PAGE_SIZE;
- int sret;
-
- if (cpu_has_pku()) {
- dprintf1("SKIP: %s: no CPU support\n", __func__);
- return;
- }
-
- sret = syscall(SYS_mprotect_key, ptr, size, PROT_READ, pkey);
- pkey_assert(sret < 0);
-}
-
-void (*pkey_tests[])(int *ptr, u16 pkey) = {
- test_read_of_write_disabled_region,
- test_read_of_access_disabled_region,
- test_write_of_write_disabled_region,
- test_write_of_access_disabled_region,
- test_kernel_write_of_access_disabled_region,
- test_kernel_write_of_write_disabled_region,
- test_kernel_gup_of_access_disabled_region,
- test_kernel_gup_write_to_write_disabled_region,
- test_executing_on_unreadable_memory,
- test_ptrace_of_child,
- test_pkey_syscalls_on_non_allocated_pkey,
- test_pkey_syscalls_bad_args,
- test_pkey_alloc_exhaust,
-};
-
-void run_tests_once(void)
-{
- int *ptr;
- int prot = PROT_READ|PROT_WRITE;
-
- for (test_nr = 0; test_nr < ARRAY_SIZE(pkey_tests); test_nr++) {
- int pkey;
- int orig_pkru_faults = pkru_faults;
-
- dprintf1("======================\n");
- dprintf1("test %d preparing...\n", test_nr);
-
- tracing_on();
- pkey = alloc_random_pkey();
- dprintf1("test %d starting with pkey: %d\n", test_nr, pkey);
- ptr = malloc_pkey(PAGE_SIZE, prot, pkey);
- dprintf1("test %d starting...\n", test_nr);
- pkey_tests[test_nr](ptr, pkey);
- dprintf1("freeing test memory: %p\n", ptr);
- free_pkey_malloc(ptr);
- sys_pkey_free(pkey);
-
- dprintf1("pkru_faults: %d\n", pkru_faults);
- dprintf1("orig_pkru_faults: %d\n", orig_pkru_faults);
-
- tracing_off();
- close_test_fds();
-
- printf("test %2d PASSED (iteration %d)\n", test_nr, iteration_nr);
- dprintf1("======================\n\n");
- }
- iteration_nr++;
-}
-
-void pkey_setup_shadow(void)
-{
- shadow_pkru = __rdpkru();
-}
-
-int main(void)
-{
- int nr_iterations = 22;
-
- setup_handlers();
-
- printf("has pku: %d\n", cpu_has_pku());
-
- if (!cpu_has_pku()) {
- int size = PAGE_SIZE;
- int *ptr;
-
- printf("running PKEY tests for unsupported CPU/OS\n");
-
- ptr = mmap(NULL, size, PROT_NONE, MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
- assert(ptr != (void *)-1);
- test_mprotect_pkey_on_unsupported_cpu(ptr, 1);
- exit(0);
- }
-
- pkey_setup_shadow();
- printf("startup pkru: %x\n", rdpkru());
- setup_hugetlbfs();
-
- while (nr_iterations-- > 0)
- run_tests_once();
-
- printf("done (all tests OK)\n");
- return 0;
-}
--
1.7.1
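For reference, the core pattern the selftest above exercises is: deny a
permission through the per-thread pkey register, touch the memory, and
count exactly one key fault. A minimal self-contained sketch of that
pattern follows. It assumes the pkey_alloc()/pkey_mprotect()/pkey_set()
wrappers that later glibc (2.27+) ships; the selftest itself issues raw
syscalls. It also escapes the handler with siglongjmp() instead of
resuming the faulting instruction, which sidesteps the signal-frame
behaviour discussed elsewhere in this series:

/* build: gcc -O2 -Wall pkey_fault_count.c -o pkey_fault_count */
#define _GNU_SOURCE
#include <assert.h>
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static sigjmp_buf fault_jmp;
static volatile int pkey_faults;

static void segv_handler(int signum, siginfo_t *si, void *uctxt)
{
        /* SEGV_PKUERR distinguishes a key fault from a plain SEGV */
        if (si->si_code == SEGV_PKUERR)
                pkey_faults++;
        siglongjmp(fault_jmp, 1);       /* do not retry the access */
}

int main(void)
{
        struct sigaction sa = { .sa_sigaction = segv_handler,
                                .sa_flags = SA_SIGINFO };
        long page = sysconf(_SC_PAGESIZE);
        volatile int *ptr;
        int pkey;

        sigaction(SIGSEGV, &sa, NULL);

        pkey = pkey_alloc(0, 0);
        assert(pkey > 0);
        ptr = mmap(NULL, page, PROT_READ|PROT_WRITE,
                   MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
        assert(ptr != MAP_FAILED);
        assert(!pkey_mprotect((void *)ptr, page,
                              PROT_READ|PROT_WRITE, pkey));

        pkey_set(pkey, PKEY_DISABLE_ACCESS);    /* deny via the register */
        if (!sigsetjmp(fault_jmp, 1))
                (void)*ptr;                     /* must fault */
        pkey_set(pkey, 0);                      /* restore access */

        printf("pkey faults seen: %d (expected 1)\n", pkey_faults);
        assert(pkey_faults == 1);
        assert(!pkey_free(pkey));
        return 0;
}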
Add documentation updates that capture the PowerPC-specific changes.
Signed-off-by: Ram Pai <[email protected]>
---
Documentation/vm/protection-keys.txt | 85 ++++++++++++++++++++++++++--------
1 files changed, 65 insertions(+), 20 deletions(-)
diff --git a/Documentation/vm/protection-keys.txt b/Documentation/vm/protection-keys.txt
index b643045..d50b6ab 100644
--- a/Documentation/vm/protection-keys.txt
+++ b/Documentation/vm/protection-keys.txt
@@ -1,21 +1,46 @@
-Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature
-which will be found on future Intel CPUs.
+Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature found in
+newer generations of Intel CPUs and on POWER7 and later PowerPC CPUs.
Memory Protection Keys provides a mechanism for enforcing page-based
-protections, but without requiring modification of the page tables
-when an application changes protection domains. It works by
-dedicating 4 previously ignored bits in each page table entry to a
-"protection key", giving 16 possible keys.
-
-There is also a new user-accessible register (PKRU) with two separate
-bits (Access Disable and Write Disable) for each key. Being a CPU
-register, PKRU is inherently thread-local, potentially giving each
-thread a different set of protections from every other thread.
-
-There are two new instructions (RDPKRU/WRPKRU) for reading and writing
-to the new register. The feature is only available in 64-bit mode,
-even though there is theoretically space in the PAE PTEs. These
-permissions are enforced on data access only and have no effect on
+protections, but without requiring modification of the page tables when an
+application changes protection domains.
+
+
+On Intel:
+
+ It works by dedicating 4 previously ignored bits in each page table
+ entry to a "protection key", giving 16 possible keys.
+
+ There is also a new user-accessible register (PKRU) with two separate
+ bits (Access Disable and Write Disable) for each key. Being a CPU
+ register, PKRU is inherently thread-local, potentially giving each
+ thread a different set of protections from every other thread.
+
+ There are two new instructions (RDPKRU/WRPKRU) for reading and writing
+ to the new register. The feature is only available in 64-bit mode,
+ even though there is theoretically space in the PAE PTEs. These
+ permissions are enforced on data access only and have no effect on
+ instruction fetches.
+
+
+On PowerPC:
+
+ It works by dedicating 5 page table entry bits to a "protection key",
+ giving 32 possible keys.
+
+ There is a user-accessible register (AMR) with two separate bits
+ (Access Disable and Write Disable) for each key. Being a CPU
+ register, AMR is inherently thread-local, potentially giving each
+ thread a different set of protections from every other thread. NOTE:
+ Disabling read permission does not disable write and vice-versa.
+
+ The feature is available in 64-bit HPTE mode only.
+ 'mfspr mem, 0xd' reads the AMR register
+ 'mtspr 0xd, mem' writes into the AMR register.
+
+
+
+Permissions are enforced on data access only and have no effect on
instruction fetches.
=========================== Syscalls ===========================
@@ -28,9 +53,9 @@ There are 3 system calls which directly interact with pkeys:
unsigned long prot, int pkey);
Before a pkey can be used, it must first be allocated with
-pkey_alloc(). An application calls the WRPKRU instruction
+pkey_alloc(). An application uses the WRPKRU instruction or AMR register
directly in order to change access permissions to memory covered
-with a key. In this example WRPKRU is wrapped by a C function
+with a key. In this example the WRPKRU/AMR access is wrapped by a C function
called pkey_set().
int real_prot = PROT_READ|PROT_WRITE;
@@ -52,11 +77,11 @@ is no longer in use:
munmap(ptr, PAGE_SIZE);
pkey_free(pkey);
-(Note: pkey_set() is a wrapper for the RDPKRU and WRPKRU instructions.
+(Note: pkey_set() is a wrapper for the RDPKRU/WRPKRU or AMR accesses.
An example implementation can be found in
tools/testing/selftests/x86/protection_keys.c)
-=========================== Behavior ===========================
+=========================== Behavior =================================
The kernel attempts to make protection keys consistent with the
behavior of a plain mprotect(). For instance if you do this:
@@ -83,3 +108,23 @@ with a read():
The kernel will send a SIGSEGV in both cases, but si_code will be set
to SEGV_PKERR when violating protection keys versus SEGV_ACCERR when
the plain mprotect() permissions are violated.
+
+
+====================================================================
+ Semantic differences
+
+The following semantic differences exist between x86 and power.
+
+a) powerpc allows creation of a key with execute disabled. The following
+ is allowed on powerpc.
+ pkey = pkey_alloc(0, PKEY_DISABLE_WRITE | PKEY_DISABLE_ACCESS |
+ PKEY_DISABLE_EXECUTE);
+ x86 disallows PKEY_DISABLE_EXECUTE during key creation.
+
+b) changing the permission bits of a key from a signal handler does not
+ persist on x86. The PKRU specific fpregs entry needs to be modified
+ for it to persist. On powerpc the permission bits of the key can be
+ modified by programming the AMR register from the signal handler.
+ The changes persist across signal boundaries.
+
+=====================================================================
--
1.7.1
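To make semantic difference (a) concrete, the sketch below tries to
allocate an execute-disabled key through the raw syscall. Note the
assumptions: PKEY_DISABLE_EXECUTE is not in released uapi headers at
this point, so the value 0x4 is taken from this series, and
SYS_pkey_alloc is assumed to be defined by your libc headers (the
selftests hard-code the syscall numbers instead):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef PKEY_DISABLE_WRITE
#define PKEY_DISABLE_WRITE   0x2
#endif
#ifndef PKEY_DISABLE_EXECUTE
#define PKEY_DISABLE_EXECUTE 0x4   /* assumed value from this series */
#endif

int main(void)
{
        /* expected: succeeds on powerpc with this series applied,
         * fails with EINVAL on x86, which rejects the flag */
        long pkey = syscall(SYS_pkey_alloc, 0UL,
                            (unsigned long)(PKEY_DISABLE_WRITE |
                                            PKEY_DISABLE_EXECUTE));
        if (pkey < 0)
                perror("pkey_alloc");
        else
                printf("execute-disabled pkey: %ld\n", pkey);
        return 0;
}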
Since PowerPC and Intel both support memory protection keys, move
the documentation to an arch-neutral directory.
Signed-off-by: Ram Pai <[email protected]>
---
Documentation/vm/protection-keys.txt | 85 +++++++++++++++++++++++++++++++++
Documentation/x86/protection-keys.txt | 85 ---------------------------------
2 files changed, 85 insertions(+), 85 deletions(-)
create mode 100644 Documentation/vm/protection-keys.txt
delete mode 100644 Documentation/x86/protection-keys.txt
diff --git a/Documentation/vm/protection-keys.txt b/Documentation/vm/protection-keys.txt
new file mode 100644
index 0000000..b643045
--- /dev/null
+++ b/Documentation/vm/protection-keys.txt
@@ -0,0 +1,85 @@
+Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature
+which will be found on future Intel CPUs.
+
+Memory Protection Keys provides a mechanism for enforcing page-based
+protections, but without requiring modification of the page tables
+when an application changes protection domains. It works by
+dedicating 4 previously ignored bits in each page table entry to a
+"protection key", giving 16 possible keys.
+
+There is also a new user-accessible register (PKRU) with two separate
+bits (Access Disable and Write Disable) for each key. Being a CPU
+register, PKRU is inherently thread-local, potentially giving each
+thread a different set of protections from every other thread.
+
+There are two new instructions (RDPKRU/WRPKRU) for reading and writing
+to the new register. The feature is only available in 64-bit mode,
+even though there is theoretically space in the PAE PTEs. These
+permissions are enforced on data access only and have no effect on
+instruction fetches.
+
+=========================== Syscalls ===========================
+
+There are 3 system calls which directly interact with pkeys:
+
+ int pkey_alloc(unsigned long flags, unsigned long init_access_rights)
+ int pkey_free(int pkey);
+ int pkey_mprotect(unsigned long start, size_t len,
+ unsigned long prot, int pkey);
+
+Before a pkey can be used, it must first be allocated with
+pkey_alloc(). An application calls the WRPKRU instruction
+directly in order to change access permissions to memory covered
+with a key. In this example WRPKRU is wrapped by a C function
+called pkey_set().
+
+ int real_prot = PROT_READ|PROT_WRITE;
+ pkey = pkey_alloc(0, PKEY_DENY_WRITE);
+ ptr = mmap(NULL, PAGE_SIZE, PROT_NONE, MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
+ ret = pkey_mprotect(ptr, PAGE_SIZE, real_prot, pkey);
+ ... application runs here
+
+Now, if the application needs to update the data at 'ptr', it can
+gain access, do the update, then remove its write access:
+
+ pkey_set(pkey, 0); // clear PKEY_DENY_WRITE
+ *ptr = foo; // assign something
+ pkey_set(pkey, PKEY_DENY_WRITE); // set PKEY_DENY_WRITE again
+
+Now when it frees the memory, it will also free the pkey since it
+is no longer in use:
+
+ munmap(ptr, PAGE_SIZE);
+ pkey_free(pkey);
+
+(Note: pkey_set() is a wrapper for the RDPKRU and WRPKRU instructions.
+ An example implementation can be found in
+ tools/testing/selftests/x86/protection_keys.c)
+
+=========================== Behavior ===========================
+
+The kernel attempts to make protection keys consistent with the
+behavior of a plain mprotect(). For instance if you do this:
+
+ mprotect(ptr, size, PROT_NONE);
+ something(ptr);
+
+you can expect the same effects with protection keys when doing this:
+
+ pkey = pkey_alloc(0, PKEY_DISABLE_WRITE | PKEY_DISABLE_READ);
+ pkey_mprotect(ptr, size, PROT_READ|PROT_WRITE, pkey);
+ something(ptr);
+
+That should be true whether something() is a direct access to 'ptr'
+like:
+
+ *ptr = foo;
+
+or when the kernel does the access on the application's behalf like
+with a read():
+
+ read(fd, ptr, 1);
+
+The kernel will send a SIGSEGV in both cases, but si_code will be set
+to SEGV_PKERR when violating protection keys versus SEGV_ACCERR when
+the plain mprotect() permissions are violated.
diff --git a/Documentation/x86/protection-keys.txt b/Documentation/x86/protection-keys.txt
deleted file mode 100644
index b643045..0000000
--- a/Documentation/x86/protection-keys.txt
+++ /dev/null
@@ -1,85 +0,0 @@
-Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature
-which will be found on future Intel CPUs.
-
-Memory Protection Keys provides a mechanism for enforcing page-based
-protections, but without requiring modification of the page tables
-when an application changes protection domains. It works by
-dedicating 4 previously ignored bits in each page table entry to a
-"protection key", giving 16 possible keys.
-
-There is also a new user-accessible register (PKRU) with two separate
-bits (Access Disable and Write Disable) for each key. Being a CPU
-register, PKRU is inherently thread-local, potentially giving each
-thread a different set of protections from every other thread.
-
-There are two new instructions (RDPKRU/WRPKRU) for reading and writing
-to the new register. The feature is only available in 64-bit mode,
-even though there is theoretically space in the PAE PTEs. These
-permissions are enforced on data access only and have no effect on
-instruction fetches.
-
-=========================== Syscalls ===========================
-
-There are 3 system calls which directly interact with pkeys:
-
- int pkey_alloc(unsigned long flags, unsigned long init_access_rights)
- int pkey_free(int pkey);
- int pkey_mprotect(unsigned long start, size_t len,
- unsigned long prot, int pkey);
-
-Before a pkey can be used, it must first be allocated with
-pkey_alloc(). An application calls the WRPKRU instruction
-directly in order to change access permissions to memory covered
-with a key. In this example WRPKRU is wrapped by a C function
-called pkey_set().
-
- int real_prot = PROT_READ|PROT_WRITE;
- pkey = pkey_alloc(0, PKEY_DENY_WRITE);
- ptr = mmap(NULL, PAGE_SIZE, PROT_NONE, MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
- ret = pkey_mprotect(ptr, PAGE_SIZE, real_prot, pkey);
- ... application runs here
-
-Now, if the application needs to update the data at 'ptr', it can
-gain access, do the update, then remove its write access:
-
- pkey_set(pkey, 0); // clear PKEY_DENY_WRITE
- *ptr = foo; // assign something
- pkey_set(pkey, PKEY_DENY_WRITE); // set PKEY_DENY_WRITE again
-
-Now when it frees the memory, it will also free the pkey since it
-is no longer in use:
-
- munmap(ptr, PAGE_SIZE);
- pkey_free(pkey);
-
-(Note: pkey_set() is a wrapper for the RDPKRU and WRPKRU instructions.
- An example implementation can be found in
- tools/testing/selftests/x86/protection_keys.c)
-
-=========================== Behavior ===========================
-
-The kernel attempts to make protection keys consistent with the
-behavior of a plain mprotect(). For instance if you do this:
-
- mprotect(ptr, size, PROT_NONE);
- something(ptr);
-
-you can expect the same effects with protection keys when doing this:
-
- pkey = pkey_alloc(0, PKEY_DISABLE_WRITE | PKEY_DISABLE_READ);
- pkey_mprotect(ptr, size, PROT_READ|PROT_WRITE, pkey);
- something(ptr);
-
-That should be true whether something() is a direct access to 'ptr'
-like:
-
- *ptr = foo;
-
-or when the kernel does the access on the application's behalf like
-with a read():
-
- read(fd, ptr, 1);
-
-The kernel will send a SIGSEGV in both cases, but si_code will be set
-to SEGV_PKERR when violating protection keys versus SEGV_ACCERR when
-the plain mprotect() permissions are violated.
--
1.7.1
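The document above points at the selftest for a pkey_set()
implementation; for convenience, here is a compact x86 sketch of that
wrapper built on the RDPKRU/WRPKRU encodings the document describes
(a powerpc equivalent would read and write the AMR with mfspr/mtspr
instead):

/* x86-only sketch of the pkey_set() wrapper mentioned above */
static inline unsigned int rdpkru(void)
{
        unsigned int eax, edx;
        unsigned int ecx = 0;                   /* rdpkru needs ECX == 0 */

        asm volatile(".byte 0x0f,0x01,0xee"     /* rdpkru */
                     : "=a" (eax), "=d" (edx)
                     : "c" (ecx));
        return eax;
}

static inline void wrpkru(unsigned int pkru)
{
        asm volatile(".byte 0x0f,0x01,0xef"     /* wrpkru; ECX = EDX = 0 */
                     : : "a" (pkru), "c" (0), "d" (0));
}

static inline void pkey_set(int pkey, unsigned int rights)
{
        unsigned int mask = 0x3u << (2 * pkey); /* AD|WD bits of this key */
        unsigned int pkru = rdpkru();

        pkru = (pkru & ~mask) | (rights << (2 * pkey));
        wrpkru(pkru);
}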
Display the pkey number associated with the vma in the smaps output
of a task. The key will be seen as below:
ProtectionKey: 0
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/kernel/setup_64.c | 8 ++++++++
1 files changed, 8 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index f35ff9d..ebc82b3 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -37,6 +37,7 @@
#include <linux/memblock.h>
#include <linux/memory.h>
#include <linux/nmi.h>
+#include <linux/pkeys.h>
#include <asm/io.h>
#include <asm/kdump.h>
@@ -745,3 +746,10 @@ static int __init disable_hardlockup_detector(void)
}
early_initcall(disable_hardlockup_detector);
#endif
+
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+void arch_show_smap(struct seq_file *m, struct vm_area_struct *vma)
+{
+ seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
+}
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
--
1.7.1
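A userspace consumer of this change needs nothing more than a scan of
smaps; a minimal sketch (the field name matches what arch_show_smap()
prints above):

#include <stdio.h>
#include <string.h>

int main(void)
{
        FILE *f = fopen("/proc/self/smaps", "r");
        char line[256];

        if (!f) {
                perror("/proc/self/smaps");
                return 1;
        }
        /* one "ProtectionKey:" line per vma on pkey-enabled kernels */
        while (fgets(line, sizeof(line), f))
                if (!strncmp(line, "ProtectionKey:", 14))
                        fputs(line, stdout);
        fclose(f);
        return 0;
}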
Capture in the paca the protection key that got violated.
This value will later be used to inform the signal
handler.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/paca.h | 1 +
arch/powerpc/kernel/asm-offsets.c | 1 +
arch/powerpc/mm/fault.c | 3 +++
3 files changed, 5 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index c8bd1fc..0c06188 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -94,6 +94,7 @@ struct paca_struct {
u64 dscr_default; /* per-CPU default DSCR */
#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
u64 paca_amr; /* value of amr at exception */
+ u16 paca_pkey; /* exception causing pkey */
#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
#ifdef CONFIG_PPC_STD_MMU_64
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 17f5d8a..7dff862 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -244,6 +244,7 @@ int main(void)
#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
OFFSET(PACA_AMR, paca_struct, paca_amr);
+ OFFSET(PACA_PKEY, paca_struct, paca_pkey);
#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
OFFSET(ACCOUNT_STARTTIME, paca_struct, accounting.starttime);
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index a6710f5..c8674a7 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -265,6 +265,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
if (error_code & DSISR_KEYFAULT) {
code = SEGV_PKUERR;
get_paca()->paca_amr = read_amr();
+ get_paca()->paca_pkey = get_pte_pkey(current->mm, address);
goto bad_area_nosemaphore;
}
#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
@@ -290,6 +291,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
+
/*
* We want to do this outside mmap_sem, because reading code around nip
* can result in fault, which will cause a deadlock when called with
@@ -453,6 +455,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
is_exec, 0)) {
get_paca()->paca_amr = read_amr();
+ get_paca()->paca_pkey = vma_pkey(vma);
code = SEGV_PKUERR;
goto bad_area;
}
--
1.7.1
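The value stashed in the paca above is what ultimately lands in siginfo
as the faulting key. Below is a sketch of a handler consuming it,
assuming a glibc new enough to expose the si_pkey member (the selftests
instead peek at a hard-coded si_pkey_offset because glibc did not
expose it at the time):

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void segv_handler(int signum, siginfo_t *si, void *uctxt)
{
        if (si->si_code == SEGV_PKUERR)
                /* fprintf() is not async-signal-safe; fine for a demo */
                fprintf(stderr, "key fault: pkey=%u addr=%p\n",
                        (unsigned int)si->si_pkey, si->si_addr);
        _exit(1);       /* no recovery attempted in this sketch */
}

int main(void)
{
        struct sigaction sa = { .sa_sigaction = segv_handler,
                                .sa_flags = SA_SIGINFO };

        sigaction(SIGSEGV, &sa, NULL);
        /* ... set up a pkey-protected mapping and violate it here ... */
        return 0;
}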
Abstracted out the arch-specific code into the header file, and
added powerpc-specific changes:
a) added a powerpc-specific memory allocator for 4k-backed hpte pages.
b) added three test cases where the key is associated after the page is
accessed/allocated/mapped.
c) cleaned up the code to make checkpatch.pl happy.
Signed-off-by: Ram Pai <[email protected]>
---
tools/testing/selftests/vm/pkey-helpers.h | 230 +++++++++--
tools/testing/selftests/vm/protection_keys.c | 567 +++++++++++++++-----------
2 files changed, 518 insertions(+), 279 deletions(-)
diff --git a/tools/testing/selftests/vm/pkey-helpers.h b/tools/testing/selftests/vm/pkey-helpers.h
index b202939..69bfa89 100644
--- a/tools/testing/selftests/vm/pkey-helpers.h
+++ b/tools/testing/selftests/vm/pkey-helpers.h
@@ -12,13 +12,72 @@
#include <ucontext.h>
#include <sys/mman.h>
-#define NR_PKEYS 16
-#define PKRU_BITS_PER_PKEY 2
+/* Define some kernel-like types */
+#define u8 uint8_t
+#define u16 uint16_t
+#define u32 uint32_t
+#define u64 uint64_t
+
+#ifdef __i386__ /* arch */
+
+#define SYS_mprotect_key 380
+#define SYS_pkey_alloc 381
+#define SYS_pkey_free 382
+#define REG_IP_IDX REG_EIP
+#define si_pkey_offset 0x14
+
+#define NR_PKEYS 16
+#define NR_RESERVED_PKEYS 1
+#define PKRU_BITS_PER_PKEY 2
+#define PKEY_DISABLE_ACCESS 0x1
+#define PKEY_DISABLE_WRITE 0x2
+#define HPAGE_SIZE (1UL<<21)
+
+#define INIT_PRKU 0x0UL
+
+#elif __powerpc64__ /* arch */
+
+#define SYS_mprotect_key 386
+#define SYS_pkey_alloc 384
+#define SYS_pkey_free 385
+#define si_pkey_offset 0x20
+#define REG_IP_IDX PT_NIP
+#define REG_TRAPNO PT_TRAP
+#define REG_AMR 45
+#define gregs gp_regs
+#define fpregs fp_regs
+
+#define NR_PKEYS 32
+#define NR_RESERVED_PKEYS 3
+#define PKRU_BITS_PER_PKEY 2
+#define PKEY_DISABLE_ACCESS 0x3 /* disable read and write */
+#define PKEY_DISABLE_WRITE 0x2
+#define HPAGE_SIZE (1UL<<24)
+
+#define INIT_PRKU 0x3UL
+#else /* arch */
+
+ NOT SUPPORTED
+
+#endif /* arch */
+
#ifndef DEBUG_LEVEL
#define DEBUG_LEVEL 0
#endif
#define DPRINT_IN_SIGNAL_BUF_SIZE 4096
+
+
+static inline u32 pkey_to_shift(int pkey)
+{
+#ifdef __i386__ /* arch */
+ return pkey * PKRU_BITS_PER_PKEY;
+#elif __powerpc64__ /* arch */
+ return (NR_PKEYS - pkey - 1) * PKRU_BITS_PER_PKEY;
+#endif /* arch */
+}
+
+
extern int dprint_in_signal;
extern char dprint_in_signal_buffer[DPRINT_IN_SIGNAL_BUF_SIZE];
static inline void sigsafe_printf(const char *format, ...)
@@ -53,53 +112,76 @@ static inline void sigsafe_printf(const char *format, ...)
#define dprintf3(args...) dprintf_level(3, args)
#define dprintf4(args...) dprintf_level(4, args)
-extern unsigned int shadow_pkru;
-static inline unsigned int __rdpkru(void)
+extern u64 shadow_pkey_reg;
+
+static inline u64 __rdpkey_reg(void)
{
+#ifdef __i386__ /* arch */
unsigned int eax, edx;
unsigned int ecx = 0;
- unsigned int pkru;
+ unsigned int pkey_reg;
asm volatile(".byte 0x0f,0x01,0xee\n\t"
: "=a" (eax), "=d" (edx)
: "c" (ecx));
- pkru = eax;
- return pkru;
+#elif __powerpc64__ /* arch */
+ u64 eax;
+ u64 pkey_reg;
+
+ asm volatile("mfspr %0, 0xd" : "=r" ((u64)(eax)));
+#endif /* arch */
+ pkey_reg = (u64)eax;
+ return pkey_reg;
}
-static inline unsigned int _rdpkru(int line)
+static inline u64 _rdpkey_reg(int line)
{
- unsigned int pkru = __rdpkru();
+ u64 pkey_reg = __rdpkey_reg();
- dprintf4("rdpkru(line=%d) pkru: %x shadow: %x\n",
- line, pkru, shadow_pkru);
- assert(pkru == shadow_pkru);
+ dprintf4("rdpkey_reg(line=%d) pkey_reg: %lx shadow: %lx\n",
+ line, pkey_reg, shadow_pkey_reg);
+ assert(pkey_reg == shadow_pkey_reg);
- return pkru;
+ return pkey_reg;
}
-#define rdpkru() _rdpkru(__LINE__)
+#define rdpkey_reg() _rdpkey_reg(__LINE__)
-static inline void __wrpkru(unsigned int pkru)
+static inline void __wrpkey_reg(u64 pkey_reg)
{
- unsigned int eax = pkru;
+#ifdef __i386__ /* arch */
+ unsigned int eax = pkey_reg;
unsigned int ecx = 0;
unsigned int edx = 0;
- dprintf4("%s() changing %08x to %08x\n", __func__, __rdpkru(), pkru);
+ dprintf4("%s() changing %lx to %lx\n",
+ __func__, __rdpkey_reg(), pkey_reg);
asm volatile(".byte 0x0f,0x01,0xef\n\t"
: : "a" (eax), "c" (ecx), "d" (edx));
- assert(pkru == __rdpkru());
+ dprintf4("%s() PKRUP after changing %lx to %lx\n",
+ __func__, __rdpkey_reg(), pkey_reg);
+#else /* arch */
+ u64 eax = pkey_reg;
+
+ dprintf4("%s() changing %llx to %llx\n",
+ __func__, __rdpkey_reg(), pkey_reg);
+ asm volatile("mtspr 0xd, %0" : : "r" ((unsigned long)(eax)) : "memory");
+ dprintf4("%s() PKRUP after changing %llx to %llx\n",
+ __func__, __rdpkey_reg(), pkey_reg);
+#endif /* arch */
+ assert(pkey_reg == __rdpkey_reg());
}
-static inline void wrpkru(unsigned int pkru)
+static inline void wrpkey_reg(u64 pkey_reg)
{
- dprintf4("%s() changing %08x to %08x\n", __func__, __rdpkru(), pkru);
+ dprintf4("%s() changing %lx to %lx\n",
+ __func__, __rdpkey_reg(), pkey_reg);
/* will do the shadow check for us: */
- rdpkru();
- __wrpkru(pkru);
- shadow_pkru = pkru;
- dprintf4("%s(%08x) pkru: %08x\n", __func__, pkru, __rdpkru());
+ rdpkey_reg();
+ __wrpkey_reg(pkey_reg);
+ shadow_pkey_reg = pkey_reg;
+ dprintf4("%s(%lx) pkey_reg: %lx\n",
+ __func__, pkey_reg, __rdpkey_reg());
}
/*
@@ -108,40 +190,37 @@ static inline void wrpkru(unsigned int pkru)
*/
static inline void __pkey_access_allow(int pkey, int do_allow)
{
- unsigned int pkru = rdpkru();
+ u64 pkey_reg = rdpkey_reg();
int bit = pkey * 2;
if (do_allow)
- pkru &= (1<<bit);
+ pkey_reg &= (1<<bit);
else
- pkru |= (1<<bit);
+ pkey_reg |= (1<<bit);
- dprintf4("pkru now: %08x\n", rdpkru());
- wrpkru(pkru);
+ dprintf4("pkey_reg now: %lx\n", rdpkey_reg());
+ wrpkey_reg(pkey_reg);
}
static inline void __pkey_write_allow(int pkey, int do_allow_write)
{
- long pkru = rdpkru();
+ u64 pkey_reg = rdpkey_reg();
int bit = pkey * 2 + 1;
if (do_allow_write)
- pkru &= (1<<bit);
+ pkey_reg &= (1<<bit);
else
- pkru |= (1<<bit);
+ pkey_reg |= (1<<bit);
- wrpkru(pkru);
- dprintf4("pkru now: %08x\n", rdpkru());
+ wrpkey_reg(pkey_reg);
+ dprintf4("pkey_reg now: %lx\n", rdpkey_reg());
}
-#define PROT_PKEY0 0x10 /* protection key value (bit 0) */
-#define PROT_PKEY1 0x20 /* protection key value (bit 1) */
-#define PROT_PKEY2 0x40 /* protection key value (bit 2) */
-#define PROT_PKEY3 0x80 /* protection key value (bit 3) */
-
-#define PAGE_SIZE 4096
#define MB (1<<20)
+#ifdef __i386__ /* arch */
+
+#define PAGE_SIZE 4096
static inline void __cpuid(unsigned int *eax, unsigned int *ebx,
unsigned int *ecx, unsigned int *edx)
{
@@ -159,7 +238,7 @@ static inline void __cpuid(unsigned int *eax, unsigned int *ebx,
#define X86_FEATURE_PKU (1<<3) /* Protection Keys for Userspace */
#define X86_FEATURE_OSPKE (1<<4) /* OS Protection Keys Enable */
-static inline int cpu_has_pku(void)
+static inline int cpu_has_pkey(void)
{
unsigned int eax;
unsigned int ebx;
@@ -183,7 +262,6 @@ static inline int cpu_has_pku(void)
#define XSTATE_PKRU_BIT (9)
#define XSTATE_PKRU 0x200
-
int pkru_xstate_offset(void)
{
unsigned int eax;
@@ -216,4 +294,72 @@ int pkru_xstate_offset(void)
return xstate_offset;
}
+/* 8-bytes of instruction * 512 bytes = 1 page */
+#define __page_o_noops() asm(".rept 512 ; nopl 0x7eeeeeee(%eax) ; .endr")
+
+#elif __powerpc64__ /* arch */
+
+#define PAGE_SIZE (0x1UL << 16)
+static inline int cpu_has_pkey(void)
+{
+ return 1;
+}
+
+/* 4-byte instructions * 16384 = one 64K page */
+#define __page_o_noops() asm(".rept 16384 ; nop; .endr")
+
+#endif /* arch */
+
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof(*(x)))
+#define ALIGN_UP(x, align_to) (((x) + ((align_to)-1)) & ~((align_to)-1))
+#define ALIGN_DOWN(x, align_to) ((x) & ~((align_to)-1))
+#define ALIGN_PTR_UP(p, ptr_align_to) \
+ ((typeof(p))ALIGN_UP((unsigned long)(p), ptr_align_to))
+#define ALIGN_PTR_DOWN(p, ptr_align_to) \
+ ((typeof(p))ALIGN_DOWN((unsigned long)(p), ptr_align_to))
+#define __stringify_1(x...) #x
+#define __stringify(x...) __stringify_1(x)
+
+#define PTR_ERR_ENOTSUP ((void *)-ENOTSUP)
+
+extern void abort_hooks(void);
+#define pkey_assert(condition) do { \
+ if (!(condition)) { \
+ dprintf0("assert() at %s::%d test_nr: %d iteration: %d\n", \
+ __FILE__, __LINE__, \
+ test_nr, iteration_nr); \
+ dprintf0("errno at assert: %d", errno); \
+ abort_hooks(); \
+ assert(condition); \
+ } \
+} while (0)
+#define raw_assert(cond) assert(cond)
+
+
+static inline int open_hugepage_file(int flag)
+{
+ int fd;
+#ifdef __i386__ /* arch */
+ fd = open("/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages",
+ O_RDONLY);
+#elif __powerpc64__ /* arch */
+ fd = open("/sys/kernel/mm/hugepages/hugepages-16384kB/nr_hugepages",
+ O_RDONLY);
+#else /* arch */
+ NOT SUPPORTED
+#endif /* arch */
+ return fd;
+}
+
+static inline int get_start_key(void)
+{
+#ifdef __i386__ /* arch */
+ return 1;
+#elif __powerpc64__ /* arch */
+ return 0;
+#else /* arch */
+ NOT SUPPORTED
+#endif /* arch */
+}
+
#endif /* _PKEYS_HELPER_H */
diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c
index 3237bc0..7210ca7 100644
--- a/tools/testing/selftests/vm/protection_keys.c
+++ b/tools/testing/selftests/vm/protection_keys.c
@@ -1,10 +1,10 @@
/*
- * Tests x86 Memory Protection Keys (see Documentation/x86/protection-keys.txt)
+ * Tests Memory Protection Keys (see Documentation/vm/protection-keys.txt)
*
* There are examples in here of:
* * how to set protection keys on memory
- * * how to set/clear bits in PKRU (the rights register)
- * * how to handle SEGV_PKRU signals and extract pkey-relevant
+ * * how to set/clear bits in Protection Key registers (the rights register)
+ * * how to handle SEGV_PKUERR signals and extract pkey-relevant
* information from the siginfo
*
* Things to add:
@@ -12,17 +12,23 @@
* prefault pages in at malloc, or not
* protect MPX bounds tables with protection keys?
* make sure VMA splitting/merging is working correctly
- * OOMs can destroy mm->mmap (see exit_mmap()), so make sure it is immune to pkeys
- * look for pkey "leaks" where it is still set on a VMA but "freed" back to the kernel
- * do a plain mprotect() to a mprotect_pkey() area and make sure the pkey sticks
+ * OOMs can destroy mm->mmap (see exit_mmap()),
+ * so make sure it is immune to pkeys
+ * look for pkey "leaks" where it is still set on a VMA
+ * but "freed" back to the kernel
+ * do a plain mprotect() to a mprotect_pkey() area and make
+ * sure the pkey sticks
*
* Compile like this:
- * gcc -o protection_keys -O2 -g -std=gnu99 -pthread -Wall protection_keys.c -lrt -ldl -lm
- * gcc -m32 -o protection_keys_32 -O2 -g -std=gnu99 -pthread -Wall protection_keys.c -lrt -ldl -lm
+ * gcc -o protection_keys -O2 -g -std=gnu99
+ * -pthread -Wall protection_keys.c -lrt -ldl -lm
+ * gcc -m32 -o protection_keys_32 -O2 -g -std=gnu99
+ * -pthread -Wall protection_keys.c -lrt -ldl -lm
*/
#define _GNU_SOURCE
#include <errno.h>
#include <linux/futex.h>
+#include <time.h>
#include <sys/time.h>
#include <sys/syscall.h>
#include <string.h>
@@ -46,36 +52,11 @@
int iteration_nr = 1;
int test_nr;
-
-unsigned int shadow_pkru;
-
-#define HPAGE_SIZE (1UL<<21)
-#define ARRAY_SIZE(x) (sizeof(x) / sizeof(*(x)))
-#define ALIGN_UP(x, align_to) (((x) + ((align_to)-1)) & ~((align_to)-1))
-#define ALIGN_DOWN(x, align_to) ((x) & ~((align_to)-1))
-#define ALIGN_PTR_UP(p, ptr_align_to) ((typeof(p))ALIGN_UP((unsigned long)(p), ptr_align_to))
-#define ALIGN_PTR_DOWN(p, ptr_align_to) ((typeof(p))ALIGN_DOWN((unsigned long)(p), ptr_align_to))
-#define __stringify_1(x...) #x
-#define __stringify(x...) __stringify_1(x)
-
-#define PTR_ERR_ENOTSUP ((void *)-ENOTSUP)
+u64 shadow_pkey_reg;
int dprint_in_signal;
char dprint_in_signal_buffer[DPRINT_IN_SIGNAL_BUF_SIZE];
-extern void abort_hooks(void);
-#define pkey_assert(condition) do { \
- if (!(condition)) { \
- dprintf0("assert() at %s::%d test_nr: %d iteration: %d\n", \
- __FILE__, __LINE__, \
- test_nr, iteration_nr); \
- dprintf0("errno at assert: %d", errno); \
- abort_hooks(); \
- assert(condition); \
- } \
-} while (0)
-#define raw_assert(cond) assert(cond)
-
void cat_into_file(char *str, char *file)
{
int fd = open(file, O_RDWR);
@@ -153,11 +134,6 @@ void abort_hooks(void)
#endif
}
-static inline void __page_o_noops(void)
-{
- /* 8-bytes of instruction * 512 bytes = 1 page */
- asm(".rept 512 ; nopl 0x7eeeeeee(%eax) ; .endr");
-}
/*
* This attempts to have roughly a page of instructions followed by a few
@@ -181,26 +157,6 @@ void lots_o_noops_around_write(int *write_to_me)
dprintf3("%s() done\n", __func__);
}
-/* Define some kernel-like types */
-#define u8 uint8_t
-#define u16 uint16_t
-#define u32 uint32_t
-#define u64 uint64_t
-
-#ifdef __i386__
-#define SYS_mprotect_key 380
-#define SYS_pkey_alloc 381
-#define SYS_pkey_free 382
-#define REG_IP_IDX REG_EIP
-#define si_pkey_offset 0x14
-#else
-#define SYS_mprotect_key 329
-#define SYS_pkey_alloc 330
-#define SYS_pkey_free 331
-#define REG_IP_IDX REG_RIP
-#define si_pkey_offset 0x20
-#endif
-
void dump_mem(void *dumpme, int len_bytes)
{
char *c = (void *)dumpme;
@@ -208,6 +164,7 @@ void dump_mem(void *dumpme, int len_bytes)
for (i = 0; i < len_bytes; i += sizeof(u64)) {
u64 *ptr = (u64 *)(c + i);
+
dprintf1("dump[%03d][@%p]: %016jx\n", i, ptr, *ptr);
}
}
@@ -229,29 +186,49 @@ void dump_mem(void *dumpme, int len_bytes)
return "UNKNOWN";
}
-int pkru_faults;
+int pkey_faults;
int last_si_pkey = -1;
+
+u64 reset_bits(int pkey, u64 bits)
+{
+ u32 shift = pkey_to_shift(pkey);
+
+ return ~(bits << shift);
+}
+
+u64 left_shift_bits(int pkey, u64 bits)
+{
+ u32 shift = pkey_to_shift(pkey);
+
+ return (bits << shift);
+}
+
+u64 right_shift_bits(int pkey, u64 bits)
+{
+ u32 shift = pkey_to_shift(pkey);
+
+ return (bits >> shift);
+}
+
+void pkey_access_allow(int pkey);
void signal_handler(int signum, siginfo_t *si, void *vucontext)
{
ucontext_t *uctxt = vucontext;
int trapno;
unsigned long ip;
char *fpregs;
- u32 *pkru_ptr;
+ u64 *pkey_reg_ptr;
u64 si_pkey;
u32 *si_pkey_ptr;
- int pkru_offset;
- fpregset_t fpregset;
dprint_in_signal = 1;
dprintf1(">>>>===============SIGSEGV============================\n");
- dprintf1("%s()::%d, pkru: 0x%x shadow: %x\n", __func__, __LINE__,
- __rdpkru(), shadow_pkru);
+ dprintf1("%s()::%d, pkey_reg: 0x%lx shadow: %lx\n", __func__, __LINE__,
+ __rdpkey_reg(), shadow_pkey_reg);
trapno = uctxt->uc_mcontext.gregs[REG_TRAPNO];
ip = uctxt->uc_mcontext.gregs[REG_IP_IDX];
- fpregset = uctxt->uc_mcontext.fpregs;
- fpregs = (void *)fpregset;
+ fpregs = (char *) uctxt->uc_mcontext.fpregs;
dprintf2("%s() trapno: %d ip: 0x%lx info->si_code: %s/%d\n", __func__,
trapno, ip, si_code_str(si->si_code), si->si_code);
@@ -262,20 +239,22 @@ void signal_handler(int signum, siginfo_t *si, void *vucontext)
* state. We just assume that it is here.
*/
fpregs += 0x70;
-#endif
- pkru_offset = pkru_xstate_offset();
- pkru_ptr = (void *)(&fpregs[pkru_offset]);
-
- dprintf1("siginfo: %p\n", si);
- dprintf1(" fpregs: %p\n", fpregs);
+ pkey_reg_ptr = (void *)(&fpregs[pkru_xstate_offset()]);
/*
- * If we got a PKRU fault, we *HAVE* to have at least one bit set in
+ * If we got a key fault, we *HAVE* to have at least one bit set in
* here.
*/
dprintf1("pkru_xstate_offset: %d\n", pkru_xstate_offset());
if (DEBUG_LEVEL > 4)
- dump_mem(pkru_ptr - 128, 256);
- pkey_assert(*pkru_ptr);
+ dump_mem(pkey_reg_ptr - 128, 256);
+#elif __powerpc64__
+ pkey_reg_ptr = &uctxt->uc_mcontext.gregs[REG_AMR];
+#endif
+
+
+ dprintf1("siginfo: %p\n", si);
+ dprintf1(" fpregs: %p\n", fpregs);
+ pkey_assert(*pkey_reg_ptr);
si_pkey_ptr = (u32 *)(((u8 *)si) + si_pkey_offset);
dprintf1("si_pkey_ptr: %p\n", si_pkey_ptr);
@@ -291,36 +270,29 @@ void signal_handler(int signum, siginfo_t *si, void *vucontext)
exit(4);
}
- dprintf1("signal pkru from xsave: %08x\n", *pkru_ptr);
- /* need __rdpkru() version so we do not do shadow_pkru checking */
- dprintf1("signal pkru from pkru: %08x\n", __rdpkru());
+ dprintf1("signal pkey_reg : %08x\n", *pkey_reg_ptr);
+ /*
+ * need __rdpkey_reg() version so we do not do
+ * shadow_pkey_reg checking
+ */
+ dprintf1("signal pkey_reg from pkey_reg: %08x\n", __rdpkey_reg());
dprintf1("si_pkey from siginfo: %jx\n", si_pkey);
- *(u64 *)pkru_ptr = 0x00000000;
- dprintf1("WARNING: set PRKU=0 to allow faulting instruction to continue\n");
- pkru_faults++;
+#ifdef __i386__
+ *(u64 *)pkey_reg_ptr &= reset_bits(si_pkey, PKEY_DISABLE_ACCESS);
+#elif __powerpc64__
+ pkey_access_allow(si_pkey);
+#endif
+ shadow_pkey_reg &= reset_bits(si_pkey, PKEY_DISABLE_ACCESS);
+ dprintf1("WARNING: set PRKU=0 to allow faulting instruction "
+ "to continue\n");
+ pkey_faults++;
dprintf1("<<<<==================================================\n");
- return;
- if (trapno == 14) {
- fprintf(stderr,
- "ERROR: In signal handler, page fault, trapno = %d, ip = %016lx\n",
- trapno, ip);
- fprintf(stderr, "si_addr %p\n", si->si_addr);
- fprintf(stderr, "REG_ERR: %lx\n",
- (unsigned long)uctxt->uc_mcontext.gregs[REG_ERR]);
- exit(1);
- } else {
- fprintf(stderr, "unexpected trap %d! at 0x%lx\n", trapno, ip);
- fprintf(stderr, "si_addr %p\n", si->si_addr);
- fprintf(stderr, "REG_ERR: %lx\n",
- (unsigned long)uctxt->uc_mcontext.gregs[REG_ERR]);
- exit(2);
- }
- dprint_in_signal = 0;
}
int wait_all_children(void)
{
int status;
+
return waitpid(-1, &status, 0);
}
@@ -409,51 +381,50 @@ void dumpit(char *f)
close(fd);
}
-#define PKEY_DISABLE_ACCESS 0x1
-#define PKEY_DISABLE_WRITE 0x2
-
-u32 pkey_get(int pkey, unsigned long flags)
+u64 pkey_get(int pkey, unsigned long flags)
{
- u32 mask = (PKEY_DISABLE_ACCESS|PKEY_DISABLE_WRITE);
- u32 pkru = __rdpkru();
- u32 shifted_pkru;
- u32 masked_pkru;
+ u64 mask = (PKEY_DISABLE_ACCESS|PKEY_DISABLE_WRITE);
+ u64 pkey_reg = __rdpkey_reg();
+ u64 shifted_pkey_reg;
+ u64 masked_pkey_reg;
dprintf1("%s(pkey=%d, flags=%lx) = %x / %d\n",
__func__, pkey, flags, 0, 0);
- dprintf2("%s() raw pkru: %x\n", __func__, pkru);
+ dprintf2("%s() raw pkey_reg: %lx\n", __func__, pkey_reg);
- shifted_pkru = (pkru >> (pkey * PKRU_BITS_PER_PKEY));
- dprintf2("%s() shifted_pkru: %x\n", __func__, shifted_pkru);
- masked_pkru = shifted_pkru & mask;
- dprintf2("%s() masked pkru: %x\n", __func__, masked_pkru);
+ shifted_pkey_reg = right_shift_bits(pkey, pkey_reg);
+ dprintf2("%s() shifted_pkey_reg: %lx\n", __func__, shifted_pkey_reg);
+ masked_pkey_reg = shifted_pkey_reg & mask;
+ dprintf2("%s() masked pkey_reg: %lx\n", __func__, masked_pkey_reg);
/*
* shift down the relevant bits to the lowest two, then
* mask off all the other high bits.
*/
- return masked_pkru;
+ return masked_pkey_reg;
}
int pkey_set(int pkey, unsigned long rights, unsigned long flags)
{
- u32 mask = (PKEY_DISABLE_ACCESS|PKEY_DISABLE_WRITE);
- u32 old_pkru = __rdpkru();
- u32 new_pkru;
+ u64 mask = (PKEY_DISABLE_ACCESS|PKEY_DISABLE_WRITE);
+ u64 old_pkey_reg = __rdpkey_reg();
+ u64 new_pkey_reg;
/* make sure that 'rights' only contains the bits we expect: */
assert(!(rights & ~mask));
- /* copy old pkru */
- new_pkru = old_pkru;
+ /* copy old pkey_reg */
+ new_pkey_reg = old_pkey_reg;
/* mask out bits from pkey in old value: */
- new_pkru &= ~(mask << (pkey * PKRU_BITS_PER_PKEY));
+ new_pkey_reg &= reset_bits(pkey, mask);
/* OR in new bits for pkey: */
- new_pkru |= (rights << (pkey * PKRU_BITS_PER_PKEY));
+ new_pkey_reg |= left_shift_bits(pkey, rights);
- __wrpkru(new_pkru);
+ __wrpkey_reg(new_pkey_reg);
- dprintf3("%s(pkey=%d, rights=%lx, flags=%lx) = %x pkru now: %x old_pkru: %x\n",
- __func__, pkey, rights, flags, 0, __rdpkru(), old_pkru);
+ dprintf3("%s(pkey=%d, rights=%lx, flags=%lx) = %x "
+ "pkey_reg now: %x old_pkey_reg: %x\n",
+ __func__, pkey, rights, flags,
+ 0, __rdpkey_reg(), old_pkey_reg);
return 0;
}
@@ -461,8 +432,8 @@ void pkey_disable_set(int pkey, int flags)
{
unsigned long syscall_flags = 0;
int ret;
- int pkey_rights;
- u32 orig_pkru = rdpkru();
+ u64 pkey_rights;
+ u64 orig_pkey_reg = rdpkey_reg();
dprintf1("START->%s(%d, 0x%x)\n", __func__,
pkey, flags);
@@ -474,23 +445,28 @@ void pkey_disable_set(int pkey, int flags)
pkey, pkey, pkey_rights);
pkey_assert(pkey_rights >= 0);
- pkey_rights |= flags;
+ /* process flags only if they have some new bits enabled */
+ if (flags && !(pkey_rights & flags)) {
+ pkey_rights |= flags;
- ret = pkey_set(pkey, pkey_rights, syscall_flags);
- assert(!ret);
- /*pkru and flags have the same format */
- shadow_pkru |= flags << (pkey * 2);
- dprintf1("%s(%d) shadow: 0x%x\n", __func__, pkey, shadow_pkru);
+ ret = pkey_set(pkey, pkey_rights, syscall_flags);
+ assert(!ret);
+ /*pkey_reg and flags have the same format */
+ shadow_pkey_reg |= left_shift_bits(pkey, flags);
+ dprintf1("%s(%d) shadow: 0x%x\n",
+ __func__, pkey, shadow_pkey_reg);
- pkey_assert(ret >= 0);
+ pkey_assert(ret >= 0);
- pkey_rights = pkey_get(pkey, syscall_flags);
- dprintf1("%s(%d) pkey_get(%d): %x\n", __func__,
- pkey, pkey, pkey_rights);
+ pkey_rights = pkey_get(pkey, syscall_flags);
+ dprintf1("%s(%d) pkey_get(%d): %x\n", __func__,
+ pkey, pkey, pkey_rights);
- dprintf1("%s(%d) pkru: 0x%x\n", __func__, pkey, rdpkru());
- if (flags)
- pkey_assert(rdpkru() > orig_pkru);
+ dprintf1("%s(%d) pkey_reg: 0x%lx\n",
+ __func__, pkey, rdpkey_reg());
+ if (flags)
+ pkey_assert(rdpkey_reg() > orig_pkey_reg);
+ }
dprintf1("END<---%s(%d, 0x%x)\n", __func__,
pkey, flags);
}
@@ -499,8 +475,8 @@ void pkey_disable_clear(int pkey, int flags)
{
unsigned long syscall_flags = 0;
int ret;
- int pkey_rights = pkey_get(pkey, syscall_flags);
- u32 orig_pkru = rdpkru();
+ u64 pkey_rights = pkey_get(pkey, syscall_flags);
+ u64 orig_pkey_reg = rdpkey_reg();
pkey_assert(flags & (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE));
@@ -508,20 +484,21 @@ void pkey_disable_clear(int pkey, int flags)
pkey, pkey, pkey_rights);
pkey_assert(pkey_rights >= 0);
- pkey_rights |= flags;
+ pkey_rights &= ~flags;
ret = pkey_set(pkey, pkey_rights, 0);
- /* pkru and flags have the same format */
- shadow_pkru &= ~(flags << (pkey * 2));
+ /* pkey_reg and flags have the same format */
+ shadow_pkey_reg &= reset_bits(pkey, flags);
pkey_assert(ret >= 0);
pkey_rights = pkey_get(pkey, syscall_flags);
dprintf1("%s(%d) pkey_get(%d): %x\n", __func__,
pkey, pkey, pkey_rights);
- dprintf1("%s(%d) pkru: 0x%x\n", __func__, pkey, rdpkru());
+ dprintf1("%s(%d) pkey_reg: 0x%x\n",
+ __func__, pkey, rdpkey_reg());
if (flags)
- assert(rdpkru() > orig_pkru);
+ assert(rdpkey_reg() < orig_pkey_reg);
}
void pkey_write_allow(int pkey)
@@ -564,49 +541,72 @@ int sys_mprotect_pkey(void *ptr, size_t size, unsigned long orig_prot,
int sys_pkey_alloc(unsigned long flags, unsigned long init_val)
{
int ret = syscall(SYS_pkey_alloc, flags, init_val);
+
dprintf1("%s(flags=%lx, init_val=%lx) syscall ret: %d errno: %d\n",
__func__, flags, init_val, ret, errno);
return ret;
}
+void pkey_setup_shadow(void)
+{
+ shadow_pkey_reg = __rdpkey_reg();
+}
+
+void pkey_reset_shadow(u32 key)
+{
+ shadow_pkey_reg &= reset_bits(key, 0x3);
+}
+
+void pkey_set_shadow(u32 key, u64 init_val)
+{
+ shadow_pkey_reg |= left_shift_bits(key, init_val);
+}
+
int alloc_pkey(void)
{
int ret;
- unsigned long init_val = 0x0;
+ u64 init_val = 0x0;
- dprintf1("alloc_pkey()::%d, pkru: 0x%x shadow: %x\n",
- __LINE__, __rdpkru(), shadow_pkru);
+ dprintf1("%s()::%d, pkey_reg: 0x%x shadow: %x\n",
+ __func__, __LINE__, __rdpkey_reg(),
+ shadow_pkey_reg);
ret = sys_pkey_alloc(0, init_val);
/*
- * pkey_alloc() sets PKRU, so we need to reflect it in
- * shadow_pkru:
+ * pkey_alloc() sets pkey register, so we need to reflect it in
+ * shadow_pkey_reg:
*/
- dprintf4("alloc_pkey()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n",
- __LINE__, ret, __rdpkru(), shadow_pkru);
+ dprintf4("%s()::%d, ret: %d pkey_reg: 0x%x shadow: 0x%x\n",
+ __func__, __LINE__, ret, __rdpkey_reg(),
+ shadow_pkey_reg);
if (ret) {
/* clear both the bits: */
- shadow_pkru &= ~(0x3 << (ret * 2));
- dprintf4("alloc_pkey()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n",
- __LINE__, ret, __rdpkru(), shadow_pkru);
+ pkey_reset_shadow(ret);
+ dprintf4("%s()::%d, ret: %d pkey_reg: 0x%x shadow:"
+ " 0x%x\n",
+ __func__, __LINE__, ret,
+ __rdpkey_reg(), shadow_pkey_reg);
/*
* move the new state in from init_val
- * (remember, we cheated and init_val == pkru format)
+ * (remember, we cheated and init_val == pkey_reg format)
*/
- shadow_pkru |= (init_val << (ret * 2));
+ pkey_set_shadow(ret, init_val);
}
- dprintf4("alloc_pkey()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n",
- __LINE__, ret, __rdpkru(), shadow_pkru);
- dprintf1("alloc_pkey()::%d errno: %d\n", __LINE__, errno);
+ dprintf4("%s()::%d, ret: %d pkey_reg: 0x%x shadow: 0x%x\n",
+ __func__, __LINE__, ret, __rdpkey_reg(),
+ shadow_pkey_reg);
+ dprintf1("%s()::%d errno: %d\n", __func__, __LINE__, errno);
/* for shadow checking: */
- rdpkru();
- dprintf4("alloc_pkey()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n",
- __LINE__, ret, __rdpkru(), shadow_pkru);
+ rdpkey_reg();
+ dprintf4("%s()::%d, ret: %d pkey_reg: 0x%x shadow: 0x%x\n",
+ __func__, __LINE__, ret, __rdpkey_reg(),
+ shadow_pkey_reg);
return ret;
}
int sys_pkey_free(unsigned long pkey)
{
int ret = syscall(SYS_pkey_free, pkey);
+
dprintf1("%s(pkey=%ld) syscall ret: %d\n", __func__, pkey, ret);
return ret;
}
@@ -624,13 +624,15 @@ int alloc_random_pkey(void)
int alloced_pkeys[NR_PKEYS];
int nr_alloced = 0;
int random_index;
+
memset(alloced_pkeys, 0, sizeof(alloced_pkeys));
+ srand((unsigned int)time(NULL));
/* allocate every possible key and make a note of which ones we got */
max_nr_pkey_allocs = NR_PKEYS;
- max_nr_pkey_allocs = 1;
for (i = 0; i < max_nr_pkey_allocs; i++) {
int new_pkey = alloc_pkey();
+
if (new_pkey < 0)
break;
alloced_pkeys[nr_alloced++] = new_pkey;
@@ -646,13 +648,14 @@ int alloc_random_pkey(void)
/* go through the allocated ones that we did not want and free them */
for (i = 0; i < nr_alloced; i++) {
int free_ret;
+
if (!alloced_pkeys[i])
continue;
free_ret = sys_pkey_free(alloced_pkeys[i]);
pkey_assert(!free_ret);
}
- dprintf1("%s()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n", __func__,
- __LINE__, ret, __rdpkru(), shadow_pkru);
+ dprintf1("%s()::%d, ret: %d pkey_reg: 0x%x shadow: 0x%x\n", __func__,
+ __LINE__, ret, __rdpkey_reg(), shadow_pkey_reg);
return ret;
}
@@ -664,17 +667,22 @@ int mprotect_pkey(void *ptr, size_t size, unsigned long orig_prot,
while (0) {
int rpkey = alloc_random_pkey();
+
ret = sys_mprotect_pkey(ptr, size, orig_prot, pkey);
- dprintf1("sys_mprotect_pkey(%p, %zx, prot=0x%lx, pkey=%ld) ret: %d\n",
+
+ dprintf1("sys_mprotect_pkey(%p, %zx, prot=0x%lx, pkey=%ld) "
+ "ret: %d\n",
ptr, size, orig_prot, pkey, ret);
if (nr_iterations-- < 0)
break;
- dprintf1("%s()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n", __func__,
- __LINE__, ret, __rdpkru(), shadow_pkru);
+ dprintf1("%s()::%d, ret: %d pkey_reg: 0x%x shadow: 0x%x\n",
+ __func__, __LINE__, ret, __rdpkey_reg(),
+ shadow_pkey_reg);
sys_pkey_free(rpkey);
- dprintf1("%s()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n", __func__,
- __LINE__, ret, __rdpkru(), shadow_pkru);
+ dprintf1("%s()::%d, ret: %d pkey_reg: 0x%x shadow: 0x%x\n",
+ __func__, __LINE__, ret, __rdpkey_reg(),
+ shadow_pkey_reg);
}
pkey_assert(pkey < NR_PKEYS);
@@ -682,8 +690,8 @@ int mprotect_pkey(void *ptr, size_t size, unsigned long orig_prot,
dprintf1("mprotect_pkey(%p, %zx, prot=0x%lx, pkey=%ld) ret: %d\n",
ptr, size, orig_prot, pkey, ret);
pkey_assert(!ret);
- dprintf1("%s()::%d, ret: %d pkru: 0x%x shadow: 0x%x\n", __func__,
- __LINE__, ret, __rdpkru(), shadow_pkru);
+ dprintf1("%s()::%d, ret: %d pkey_reg: 0x%x shadow: 0x%x\n", __func__,
+ __LINE__, ret, __rdpkey_reg(), shadow_pkey_reg);
return ret;
}
@@ -708,7 +716,9 @@ void record_pkey_malloc(void *ptr, long size)
/* every record is full */
size_t old_nr_records = nr_pkey_malloc_records;
size_t new_nr_records = (nr_pkey_malloc_records * 2 + 1);
- size_t new_size = new_nr_records * sizeof(struct pkey_malloc_record);
+ size_t new_size = new_nr_records *
+ sizeof(struct pkey_malloc_record);
+
dprintf2("new_nr_records: %zd\n", new_nr_records);
dprintf2("new_size: %zd\n", new_size);
pkey_malloc_records = realloc(pkey_malloc_records, new_size);
@@ -732,9 +742,11 @@ void free_pkey_malloc(void *ptr)
{
long i;
int ret;
+
dprintf3("%s(%p)\n", __func__, ptr);
for (i = 0; i < nr_pkey_malloc_records; i++) {
struct pkey_malloc_record *rec = &pkey_malloc_records[i];
+
dprintf4("looking for ptr %p at record[%ld/%p]: {%p, %ld}\n",
ptr, i, rec, rec->ptr, rec->size);
if ((ptr < rec->ptr) ||
@@ -761,16 +773,46 @@ void free_pkey_malloc(void *ptr)
void *ptr;
int ret;
- rdpkru();
+ rdpkey_reg();
+ dprintf1("doing %s(size=%ld, prot=0x%x, pkey=%d)\n", __func__,
+ size, prot, pkey);
+ pkey_assert(pkey < NR_PKEYS);
+ ptr = mmap(NULL, size, prot, MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
+ pkey_assert(ptr != (void *)-1);
+ ret = mprotect_pkey((void *)ptr, PAGE_SIZE, prot, pkey);
+ pkey_assert(!ret);
+ record_pkey_malloc(ptr, size);
+ rdpkey_reg();
+
+ dprintf1("%s() for pkey %d @ %p\n", __func__, pkey, ptr);
+ return ptr;
+}
+
+void *malloc_pkey_with_mprotect_subpage(long size, int prot, u16 pkey)
+{
+ void *ptr;
+ int ret;
+
+#ifndef __powerpc64__
+ return PTR_ERR_ENOTSUP;
+#endif /* __powerpc64__ */
+ rdpkey_reg();
dprintf1("doing %s(size=%ld, prot=0x%x, pkey=%d)\n", __func__,
size, prot, pkey);
pkey_assert(pkey < NR_PKEYS);
ptr = mmap(NULL, size, prot, MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
pkey_assert(ptr != (void *)-1);
+
+ ret = syscall(__NR_subpage_prot, ptr, size, NULL);
+ if (ret) {
+ perror("subpage_perm");
+ return PTR_ERR_ENOTSUP;
+ }
+
ret = mprotect_pkey((void *)ptr, PAGE_SIZE, prot, pkey);
pkey_assert(!ret);
record_pkey_malloc(ptr, size);
- rdpkru();
+ rdpkey_reg();
dprintf1("%s() for pkey %d @ %p\n", __func__, pkey, ptr);
return ptr;
@@ -815,17 +857,19 @@ void setup_hugetlbfs(void)
char buf[] = "123";
if (geteuid() != 0) {
- fprintf(stderr, "WARNING: not run as root, can not do hugetlb test\n");
+ fprintf(stderr,
+ "WARNING: not run as root, can not do hugetlb test\n");
return;
}
- cat_into_file(__stringify(GET_NR_HUGE_PAGES), "/proc/sys/vm/nr_hugepages");
+ cat_into_file(__stringify(GET_NR_HUGE_PAGES),
+ "/proc/sys/vm/nr_hugepages");
/*
* Now go make sure that we got the pages and that they
* are 2M pages. Someone might have made 1G the default.
*/
- fd = open("/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages", O_RDONLY);
+ fd = open_hugepage_file(O_RDONLY);
if (fd < 0) {
perror("opening sysfs 2M hugetlb config");
return;
@@ -840,7 +884,8 @@ void setup_hugetlbfs(void)
}
if (atoi(buf) != GET_NR_HUGE_PAGES) {
- fprintf(stderr, "could not confirm 2M pages, got: '%s' expected %d\n",
+ fprintf(stderr, "could not confirm 2M pages, got:"
+ " '%s' expected %d\n",
buf, GET_NR_HUGE_PAGES);
return;
}
@@ -895,12 +940,13 @@ void setup_hugetlbfs(void)
void *(*pkey_malloc[])(long size, int prot, u16 pkey) = {
malloc_pkey_with_mprotect,
+ malloc_pkey_with_mprotect_subpage,
malloc_pkey_anon_huge,
malloc_pkey_hugetlb
/* can not do direct with the pkey_mprotect() API:
- malloc_pkey_mmap_direct,
- malloc_pkey_mmap_dax,
-*/
+ * malloc_pkey_mmap_direct,
+ * malloc_pkey_mmap_dax,
+ */
};
void *malloc_pkey(long size, int prot, u16 pkey)
@@ -933,31 +979,32 @@ void setup_hugetlbfs(void)
return ret;
}
-int last_pkru_faults;
-void expected_pk_fault(int pkey)
+int last_pkey_faults;
+void expected_pkey_faults(int pkey)
{
- dprintf2("%s(): last_pkru_faults: %d pkru_faults: %d\n",
- __func__, last_pkru_faults, pkru_faults);
+ dprintf2("%s(): last_pkey_faults: %d pkey_faults: %d\n",
+ __func__, last_pkey_faults, pkey_faults);
dprintf2("%s(%d): last_si_pkey: %d\n", __func__, pkey, last_si_pkey);
- pkey_assert(last_pkru_faults + 1 == pkru_faults);
+ pkey_assert(last_pkey_faults + 1 == pkey_faults);
pkey_assert(last_si_pkey == pkey);
/*
- * The signal handler shold have cleared out PKRU to let the
+ * The signal handler should have cleared out pkey-register to let the
* test program continue. We now have to restore it.
*/
- if (__rdpkru() != 0)
+ if (__rdpkey_reg() != shadow_pkey_reg)
pkey_assert(0);
- __wrpkru(shadow_pkru);
- dprintf1("%s() set PKRU=%x to restore state after signal nuked it\n",
- __func__, shadow_pkru);
- last_pkru_faults = pkru_faults;
+ __wrpkey_reg(shadow_pkey_reg);
+ dprintf1("%s() set pkey-register=%x to restore state "
+ " after signal nuked it\n",
+ __func__, shadow_pkey_reg);
+ last_pkey_faults = pkey_faults;
last_si_pkey = -1;
}
void do_not_expect_pk_fault(void)
{
- pkey_assert(last_pkru_faults == pkru_faults);
+ pkey_assert(last_pkey_faults == pkey_faults);
}
int test_fds[10] = { -1 };
@@ -973,6 +1020,7 @@ void __save_test_fd(int fd)
int get_test_read_fd(void)
{
int test_fd = open("/etc/passwd", O_RDONLY);
+
__save_test_fd(test_fd);
return test_fd;
}
@@ -1009,32 +1057,76 @@ void test_read_of_write_disabled_region(int *ptr, u16 pkey)
ptr_contents = read_ptr(ptr);
dprintf1("*ptr: %d\n", ptr_contents);
dprintf1("\n");
+ do_not_expect_pk_fault();
}
+
void test_read_of_access_disabled_region(int *ptr, u16 pkey)
{
int ptr_contents;
- dprintf1("disabling access to PKEY[%02d], doing read @ %p\n", pkey, ptr);
- rdpkru();
+ dprintf1("disabling access to PKEY[%02d], doing read @ %p\n",
+ pkey, ptr);
+ rdpkey_reg();
+ pkey_access_deny(pkey);
+ ptr_contents = read_ptr(ptr);
+ dprintf1("*ptr: %d\n", ptr_contents);
+ expected_pkey_faults(pkey);
+}
+
+void test_read_of_access_disabled_region_with_page_already_mapped(int *ptr,
+ u16 pkey)
+{
+ int ptr_contents;
+
+ dprintf1("disabling access to PKEY[%02d], doing read @ %p\n",
+ pkey, ptr);
+ ptr_contents = read_ptr(ptr);
+ dprintf1("reading ptr before disabling the read : %d\n",
+ ptr_contents);
+ rdpkey_reg();
pkey_access_deny(pkey);
ptr_contents = read_ptr(ptr);
dprintf1("*ptr: %d\n", ptr_contents);
- expected_pk_fault(pkey);
+ expected_pkey_faults(pkey);
}
+
+void test_write_of_write_disabled_region_with_page_already_mapped(int *ptr,
+ u16 pkey)
+{
+ *ptr = __LINE__;
+ dprintf1("disabling write access; after accessing the page, "
+ "to PKEY[%02d], doing write\n", pkey);
+ pkey_write_deny(pkey);
+ *ptr = __LINE__;
+ expected_pkey_faults(pkey);
+}
+
void test_write_of_write_disabled_region(int *ptr, u16 pkey)
{
dprintf1("disabling write access to PKEY[%02d], doing write\n", pkey);
pkey_write_deny(pkey);
*ptr = __LINE__;
- expected_pk_fault(pkey);
+ expected_pkey_faults(pkey);
}
void test_write_of_access_disabled_region(int *ptr, u16 pkey)
{
dprintf1("disabling access to PKEY[%02d], doing write\n", pkey);
pkey_access_deny(pkey);
*ptr = __LINE__;
- expected_pk_fault(pkey);
+ expected_pkey_faults(pkey);
+}
+
+void test_write_of_access_disabled_region_with_page_already_mapped(int *ptr,
+ u16 pkey)
+{
+ *ptr = __LINE__;
+ dprintf1("disabling access; after accessing the page, "
+ " to PKEY[%02d], doing write\n", pkey);
+ pkey_access_deny(pkey);
+ *ptr = __LINE__;
+ expected_pkey_faults(pkey);
}
+
void test_kernel_write_of_access_disabled_region(int *ptr, u16 pkey)
{
int ret;
@@ -1103,10 +1195,10 @@ void test_kernel_gup_write_to_write_disabled_region(int *ptr, u16 pkey)
void test_pkey_syscalls_on_non_allocated_pkey(int *ptr, u16 pkey)
{
int err;
- int i;
+ int i = get_start_key();
/* Note: 0 is the default pkey, so don't mess with it */
- for (i = 1; i < NR_PKEYS; i++) {
+ for (; i < NR_PKEYS; i++) {
if (pkey == i)
continue;
@@ -1126,7 +1218,7 @@ void test_pkey_syscalls_on_non_allocated_pkey(int *ptr, u16 pkey)
void test_pkey_syscalls_bad_args(int *ptr, u16 pkey)
{
int err;
- int bad_pkey = NR_PKEYS+99;
+ int bad_pkey = NR_PKEYS+pkey;
/* pass a known-invalid pkey in: */
err = sys_mprotect_pkey(ptr, PAGE_SIZE, PROT_READ, bad_pkey);
@@ -1136,21 +1228,24 @@ void test_pkey_syscalls_bad_args(int *ptr, u16 pkey)
/* Assumes that all pkeys other than 'pkey' are unallocated */
void test_pkey_alloc_exhaust(int *ptr, u16 pkey)
{
- int err;
+ int err = 0;
int allocated_pkeys[NR_PKEYS] = {0};
int nr_allocated_pkeys = 0;
int i;
for (i = 0; i < NR_PKEYS*2; i++) {
int new_pkey;
+
dprintf1("%s() alloc loop: %d\n", __func__, i);
new_pkey = alloc_pkey();
- dprintf4("%s()::%d, err: %d pkru: 0x%x shadow: 0x%x\n", __func__,
- __LINE__, err, __rdpkru(), shadow_pkru);
- rdpkru(); /* for shadow checking */
- dprintf2("%s() errno: %d ENOSPC: %d\n", __func__, errno, ENOSPC);
+ dprintf4("%s()::%d, err: %d pkey_reg: 0x%x shadow: 0x%x\n",
+ __func__, __LINE__, err, __rdpkey_reg(),
+ shadow_pkey_reg);
+ rdpkey_reg(); /* for shadow checking */
+ dprintf2("%s() errno: %d ENOSPC: %d\n", __func__, errno,
+ ENOSPC);
if ((new_pkey == -1) && (errno == ENOSPC)) {
- dprintf2("%s() failed to allocate pkey after %d tries\n",
+ dprintf2("%s() allocate failed pkey after %d tries\n",
__func__, nr_allocated_pkeys);
break;
}
@@ -1165,19 +1260,17 @@ void test_pkey_alloc_exhaust(int *ptr, u16 pkey)
* failure:
*/
pkey_assert(i < NR_PKEYS*2);
-
/*
- * There are 16 pkeys supported in hardware. One is taken
- * up for the default (0) and another can be taken up by
- * an execute-only mapping. Ensure that we can allocate
- * at least 14 (16-2).
+ * There are NR_PKEYS pkeys supported in hardware. NR_RESERVED_PKEYS
+ * are reserved. One can be taken up by an execute-only mapping.
+ * Ensure that we can allocate at least the remaining.
*/
- pkey_assert(i >= NR_PKEYS-2);
+ pkey_assert(i >= (NR_PKEYS-NR_RESERVED_PKEYS-1));
for (i = 0; i < nr_allocated_pkeys; i++) {
err = sys_pkey_free(allocated_pkeys[i]);
pkey_assert(!err);
- rdpkru(); /* for shadow checking */
+ rdpkey_reg(); /* for shadow checking */
}
}
@@ -1221,10 +1314,10 @@ void test_ptrace_of_child(int *ptr, u16 pkey)
pkey_write_deny(pkey);
/* Write access, untested for now:
- ret = ptrace(PTRACE_POKEDATA, child_pid, peek_at, data);
- pkey_assert(ret != -1);
- dprintf1("poke at %p: %ld\n", peek_at, ret);
- */
+ * ret = ptrace(PTRACE_POKEDATA, child_pid, peek_at, data);
+ * pkey_assert(ret != -1);
+ * dprintf1("poke at %p: %ld\n", peek_at, ret);
+ */
/*
* Try to access the pkey-protected "ptr" via ptrace:
@@ -1234,7 +1327,7 @@ void test_ptrace_of_child(int *ptr, u16 pkey)
pkey_assert(ret != -1);
/* Now access from the current task, and expect an exception: */
peek_result = read_ptr(ptr);
- expected_pk_fault(pkey);
+ expected_pkey_faults(pkey);
/*
* Try to access the NON-pkey-protected "plain_ptr" via ptrace:
@@ -1281,7 +1374,7 @@ void test_executing_on_unreadable_memory(int *ptr, u16 pkey)
pkey_assert(!ret);
pkey_access_deny(pkey);
- dprintf2("pkru: %x\n", rdpkru());
+ dprintf2("pkey_reg: %x\n", rdpkey_reg());
/*
* Make sure this is an *instruction* fault
@@ -1291,7 +1384,7 @@ void test_executing_on_unreadable_memory(int *ptr, u16 pkey)
do_not_expect_pk_fault();
ptr_contents = read_ptr(p1);
dprintf2("ptr (%p) contents@%d: %x\n", p1, __LINE__, ptr_contents);
- expected_pk_fault(pkey);
+ expected_pkey_faults(pkey);
}
void test_mprotect_pkey_on_unsupported_cpu(int *ptr, u16 pkey)
@@ -1299,7 +1392,7 @@ void test_mprotect_pkey_on_unsupported_cpu(int *ptr, u16 pkey)
int size = PAGE_SIZE;
int sret;
- if (cpu_has_pku()) {
+ if (cpu_has_pkey()) {
dprintf1("SKIP: %s: no CPU support\n", __func__);
return;
}
@@ -1311,8 +1404,11 @@ void test_mprotect_pkey_on_unsupported_cpu(int *ptr, u16 pkey)
void (*pkey_tests[])(int *ptr, u16 pkey) = {
test_read_of_write_disabled_region,
test_read_of_access_disabled_region,
+ test_read_of_access_disabled_region_with_page_already_mapped,
test_write_of_write_disabled_region,
+ test_write_of_write_disabled_region_with_page_already_mapped,
test_write_of_access_disabled_region,
+ test_write_of_access_disabled_region_with_page_already_mapped,
test_kernel_write_of_access_disabled_region,
test_kernel_write_of_write_disabled_region,
test_kernel_gup_of_access_disabled_region,
@@ -1331,7 +1427,7 @@ void run_tests_once(void)
for (test_nr = 0; test_nr < ARRAY_SIZE(pkey_tests); test_nr++) {
int pkey;
- int orig_pkru_faults = pkru_faults;
+ int orig_pkey_faults = pkey_faults;
dprintf1("======================\n");
dprintf1("test %d preparing...\n", test_nr);
@@ -1346,45 +1442,42 @@ void run_tests_once(void)
free_pkey_malloc(ptr);
sys_pkey_free(pkey);
- dprintf1("pkru_faults: %d\n", pkru_faults);
- dprintf1("orig_pkru_faults: %d\n", orig_pkru_faults);
+ dprintf1("pkey_faults: %d\n", pkey_faults);
+ dprintf1("orig_pkey_faults: %d\n", orig_pkey_faults);
tracing_off();
close_test_fds();
- printf("test %2d PASSED (iteration %d)\n", test_nr, iteration_nr);
+ printf("test %2d PASSED (iteration %d)\n",
+ test_nr, iteration_nr);
dprintf1("======================\n\n");
}
iteration_nr++;
}
-void pkey_setup_shadow(void)
-{
- shadow_pkru = __rdpkru();
-}
-
int main(void)
{
int nr_iterations = 22;
setup_handlers();
- printf("has pku: %d\n", cpu_has_pku());
+ printf("has pkey support: %d\n", cpu_has_pkey());
- if (!cpu_has_pku()) {
+ if (!cpu_has_pkey()) {
int size = PAGE_SIZE;
int *ptr;
printf("running PKEY tests for unsupported CPU/OS\n");
- ptr = mmap(NULL, size, PROT_NONE, MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
+ ptr = mmap(NULL, size, PROT_NONE,
+ MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
assert(ptr != (void *)-1);
test_mprotect_pkey_on_unsupported_cpu(ptr, 1);
exit(0);
}
pkey_setup_shadow();
- printf("startup pkru: %x\n", rdpkru());
+ printf("startup pkey_reg: %lx\n", rdpkey_reg());
setup_hugetlbfs();
while (nr_iterations-- > 0)
--
1.7.1
This patch provides the implementation for
arch_vma_access_permitted(). It returns true if the
requested access is allowed by the pkey associated
with the vma.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/mmu_context.h | 5 ++++
arch/powerpc/mm/pkeys.c | 40 ++++++++++++++++++++++++++++++++
2 files changed, 45 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index da7e943..bf69ff9 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -175,11 +175,16 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
{
}
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+bool arch_vma_access_permitted(struct vm_area_struct *vma,
+ bool write, bool execute, bool foreign);
+#else /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
bool write, bool execute, bool foreign)
{
/* by default, allow everything */
return true;
}
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
#endif /* __KERNEL__ */
#endif /* __ASM_POWERPC_MMU_CONTEXT_H */
diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
index 044a17d..f89a048 100644
--- a/arch/powerpc/mm/pkeys.c
+++ b/arch/powerpc/mm/pkeys.c
@@ -201,3 +201,43 @@ bool arch_pte_access_permitted(u64 pte, bool write, bool execute)
return pkey_access_permitted(pte_to_pkey_bits(pte),
write, execute);
}
+
+/*
+ * We only want to enforce protection keys on the current process
+ * because we effectively have no access to AMR/IAMR for other
+ * processes or any way to tell *which * AMR/IAMR in a threaded
+ * process we could use.
+ *
+ * So do not enforce things if the VMA is not from the current
+ * mm, or if we are in a kernel thread.
+ */
+static inline bool vma_is_foreign(struct vm_area_struct *vma)
+{
+ if (!current->mm)
+ return true;
+ /*
+ * if the VMA is from another process, then AMR/IAMR has no
+ * relevance and should not be enforced.
+ */
+ if (current->mm != vma->vm_mm)
+ return true;
+
+ return false;
+}
+
+bool arch_vma_access_permitted(struct vm_area_struct *vma,
+ bool write, bool execute, bool foreign)
+{
+ int pkey;
+
+ /* allow access if the VMA is not one from this process */
+ if (foreign || vma_is_foreign(vma))
+ return true;
+
+ pkey = vma_pkey(vma);
+
+ if (!pkey)
+ return true;
+
+ return pkey_access_permitted(pkey, write, execute);
+}
--
1.7.1
The get_pte_pkey() helper returns the pkey associated with
an address in a given mm_struct.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 5 ++++
arch/powerpc/mm/hash_utils_64.c | 28 +++++++++++++++++++++++++
2 files changed, 33 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
index f7a6ed3..369f9ff 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
@@ -450,6 +450,11 @@ extern int hash_page(unsigned long ea, unsigned long access, unsigned long trap,
int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
pte_t *ptep, unsigned long trap, unsigned long flags,
int ssize, unsigned int shift, unsigned int mmu_psize);
+
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+u16 get_pte_pkey(struct mm_struct *mm, unsigned long address);
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
+
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
extern int __hash_page_thp(unsigned long ea, unsigned long access,
unsigned long vsid, pmd_t *pmdp, unsigned long trap,
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 1e74529..591990c 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -1573,6 +1573,34 @@ void hash_preload(struct mm_struct *mm, unsigned long ea,
local_irq_restore(flags);
}
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+/*
+ * return the protection key associated with the given address
+ * and the mm_struct.
+ */
+u16 get_pte_pkey(struct mm_struct *mm, unsigned long address)
+{
+ pte_t *ptep;
+ u16 pkey = 0;
+ unsigned long flags;
+
+ if (REGION_ID(address) == VMALLOC_REGION_ID)
+ mm = &init_mm;
+
+ if (!mm || !mm->pgd)
+ return 0;
+
+ local_irq_save(flags);
+ ptep = find_linux_pte_or_hugepte(mm->pgd, address,
+ NULL, NULL);
+ if (ptep)
+ pkey = pte_to_pkey_bits(pte_val(READ_ONCE(*ptep)));
+ local_irq_restore(flags);
+
+ return pkey;
+}
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
+
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
static inline void tm_flush_hash_page(int local)
{
--
1.7.1
The value of the AMR register at the time of the exception
is made available in gp_regs[PT_AMR] of the signal context.
The value of the pkey whose protection got violated
is made available in the si_pkey field of the siginfo structure.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/uapi/asm/ptrace.h | 3 ++-
arch/powerpc/kernel/signal_32.c | 5 +++++
arch/powerpc/kernel/signal_64.c | 4 ++++
arch/powerpc/kernel/traps.c | 14 ++++++++++++++
4 files changed, 25 insertions(+), 1 deletions(-)
diff --git a/arch/powerpc/include/uapi/asm/ptrace.h b/arch/powerpc/include/uapi/asm/ptrace.h
index 8036b38..7ec2428 100644
--- a/arch/powerpc/include/uapi/asm/ptrace.h
+++ b/arch/powerpc/include/uapi/asm/ptrace.h
@@ -108,8 +108,9 @@ struct pt_regs {
#define PT_DAR 41
#define PT_DSISR 42
#define PT_RESULT 43
-#define PT_DSCR 44
#define PT_REGS_COUNT 44
+#define PT_DSCR 44
+#define PT_AMR 45
#define PT_FPR0 48 /* each FP reg occupies 2 slots in this space */
diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c
index 97bb138..9c4a7f3 100644
--- a/arch/powerpc/kernel/signal_32.c
+++ b/arch/powerpc/kernel/signal_32.c
@@ -500,6 +500,11 @@ static int save_user_regs(struct pt_regs *regs, struct mcontext __user *frame,
(unsigned long) &frame->tramp[2]);
}
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ if (__put_user(get_paca()->paca_amr, &frame->mc_gregs[PT_AMR]))
+ return 1;
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
+
return 0;
}
diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c
index c83c115..86a4262 100644
--- a/arch/powerpc/kernel/signal_64.c
+++ b/arch/powerpc/kernel/signal_64.c
@@ -174,6 +174,10 @@ static long setup_sigcontext(struct sigcontext __user *sc,
if (set != NULL)
err |= __put_user(set->sig[0], &sc->oldmask);
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ err |= __put_user(get_paca()->paca_amr, &sc->gp_regs[PT_AMR]);
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
+
return err;
}
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index d4e545d..cc0a8c4 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -20,6 +20,7 @@
#include <linux/sched/debug.h>
#include <linux/kernel.h>
#include <linux/mm.h>
+#include <linux/pkeys.h>
#include <linux/stddef.h>
#include <linux/unistd.h>
#include <linux/ptrace.h>
@@ -247,6 +248,14 @@ void user_single_step_siginfo(struct task_struct *tsk,
info->si_addr = (void __user *)regs->nip;
}
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+static void fill_sig_info_pkey(int si_code, siginfo_t *info, unsigned long addr)
+{
+ WARN_ON(si_code != SEGV_PKUERR);
+ info->si_pkey = get_paca()->paca_pkey;
+}
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
+
void _exception(int signr, struct pt_regs *regs, int code, unsigned long addr)
{
siginfo_t info;
@@ -274,6 +283,11 @@ void _exception(int signr, struct pt_regs *regs, int code, unsigned long addr)
info.si_signo = signr;
info.si_code = code;
info.si_addr = (void __user *) addr;
+
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ fill_sig_info_pkey(code, &info, addr);
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
+
force_sig_info(signr, &info, current);
}
--
1.7.1
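For illustration, a minimal userspace handler consuming these two fields
might look like the sketch below. This is only a sketch: it assumes the
installed headers expose si_pkey, SEGV_PKUERR and the ppc64 gp_regs
layout, and error handling is omitted.

#include <signal.h>
#include <stdio.h>
#include <ucontext.h>

#ifndef PT_AMR
#define PT_AMR 45			/* from the ptrace.h change above */
#endif

static void segv_handler(int sig, siginfo_t *info, void *uctx)
{
	ucontext_t *uc = uctx;

	if (info->si_code == SEGV_PKUERR) {
		/* the pkey whose protection was violated */
		fprintf(stderr, "pkey fault: key %d at address %p\n",
			info->si_pkey, info->si_addr);
		/* AMR value at the time of the exception */
		fprintf(stderr, "AMR: 0x%llx\n", (unsigned long long)
			uc->uc_mcontext.gp_regs[PT_AMR]);
	}
}

int main(void)
{
	struct sigaction sa = {
		.sa_sigaction	= segv_handler,
		.sa_flags	= SA_SIGINFO,
	};

	sigaction(SIGSEGV, &sa, NULL);
	/* ... trigger a key-protected access here ... */
	return 0;
}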
Make sure that the kernel does not access user pages without
checking their key-protection.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 14 ++++++++++++++
1 files changed, 14 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index aad205c..d590f30 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -476,6 +476,20 @@ static inline void write_uamor(u64 value)
extern bool arch_pte_access_permitted(u64 pte, bool write, bool execute);
+#define pte_access_permitted(pte, write) \
+ (pte_present(pte) && \
+ ((!(write) || pte_write(pte)) && \
+ arch_pte_access_permitted(pte_val(pte), !!write, 0)))
+
+/*
+ * We store key in pmd for huge tlb pages. So need
+ * to check for key protection.
+ */
+#define pmd_access_permitted(pmd, write) \
+ (pmd_present(pmd) && \
+ ((!(write) || pmd_write(pmd)) && \
+ arch_pte_access_permitted(pmd_val(pmd), !!write, 0)))
+
#else /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
static inline u64 read_amr(void)
--
1.7.1
Helper function that checks whether read/write/execute access
is allowed on the pte.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 2 +
arch/powerpc/include/asm/pkeys.h | 9 +++++++
arch/powerpc/mm/pkeys.c | 31 ++++++++++++++++++++++++++
3 files changed, 42 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index d9c87c4..aad205c 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -474,6 +474,8 @@ static inline void write_uamor(u64 value)
mtspr(SPRN_UAMOR, value);
}
+extern bool arch_pte_access_permitted(u64 pte, bool write, bool execute);
+
#else /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
static inline u64 read_amr(void)
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 6477b87..01f2bfc 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -31,6 +31,15 @@ static inline u64 pte_to_hpte_pkey_bits(u64 pteflags)
((pteflags & H_PAGE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0x0UL));
}
+static inline u16 pte_to_pkey_bits(u64 pteflags)
+{
+ return (((pteflags & H_PAGE_PKEY_BIT0) ? 0x10 : 0x0UL) |
+ ((pteflags & H_PAGE_PKEY_BIT1) ? 0x8 : 0x0UL) |
+ ((pteflags & H_PAGE_PKEY_BIT2) ? 0x4 : 0x0UL) |
+ ((pteflags & H_PAGE_PKEY_BIT3) ? 0x2 : 0x0UL) |
+ ((pteflags & H_PAGE_PKEY_BIT4) ? 0x1 : 0x0UL));
+}
+
static inline int vma_pkey(struct vm_area_struct *vma)
{
return (vma->vm_flags & ARCH_VM_PKEY_FLAGS) >> VM_PKEY_SHIFT;
diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
index c60a045..044a17d 100644
--- a/arch/powerpc/mm/pkeys.c
+++ b/arch/powerpc/mm/pkeys.c
@@ -170,3 +170,34 @@ int __arch_override_mprotect_pkey(struct vm_area_struct *vma, int prot,
*/
return vma_pkey(vma);
}
+
+static bool pkey_access_permitted(int pkey, bool write, bool execute)
+{
+ int pkey_shift;
+ u64 amr;
+
+ if (!pkey)
+ return true;
+
+ pkey_shift = pkeyshift(pkey);
+ if (!(read_uamor() & (0x3UL << pkey_shift)))
+ return true;
+
+ if (execute && !(read_iamr() & (IAMR_EX_BIT << pkey_shift)))
+ return true;
+
+ if (!write) {
+ amr = read_amr();
+ if (!(amr & (AMR_AD_BIT << pkey_shift)))
+ return true;
+ }
+
+ amr = read_amr(); /* delay reading amr until absolutely needed */
+ return (write && !(amr & (AMR_WD_BIT << pkey_shift)));
+}
+
+bool arch_pte_access_permitted(u64 pte, bool write, bool execute)
+{
+ return pkey_access_permitted(pte_to_pkey_bits(pte),
+ write, execute);
+}
--
1.7.1
Arch-independent code expects the architecture to map
a pkey into the vma's protection bit settings.
This patch provides that ability.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/mman.h | 8 +++++++-
arch/powerpc/include/asm/pkeys.h | 14 ++++++++++++--
2 files changed, 19 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h
index 30922f6..067eec2 100644
--- a/arch/powerpc/include/asm/mman.h
+++ b/arch/powerpc/include/asm/mman.h
@@ -13,6 +13,7 @@
#include <asm/cputable.h>
#include <linux/mm.h>
+#include <linux/pkeys.h>
#include <asm/cpu_has_feature.h>
/*
@@ -22,7 +23,12 @@
static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
unsigned long pkey)
{
- return (prot & PROT_SAO) ? VM_SAO : 0;
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ return (((prot & PROT_SAO) ? VM_SAO : 0) |
+ pkey_to_vmflag_bits(pkey));
+#else
+ return ((prot & PROT_SAO) ? VM_SAO : 0);
+#endif
}
#define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 4b01c37..f148e84 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -1,13 +1,23 @@
#ifndef _ASM_PPC64_PKEYS_H
#define _ASM_PPC64_PKEYS_H
+#define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
+ VM_PKEY_BIT3 | VM_PKEY_BIT4)
+
+static inline u64 pkey_to_vmflag_bits(u16 pkey)
+{
+ return (((pkey & 0x1UL) ? VM_PKEY_BIT0 : 0x0UL) |
+ ((pkey & 0x2UL) ? VM_PKEY_BIT1 : 0x0UL) |
+ ((pkey & 0x4UL) ? VM_PKEY_BIT2 : 0x0UL) |
+ ((pkey & 0x8UL) ? VM_PKEY_BIT3 : 0x0UL) |
+ ((pkey & 0x10UL) ? VM_PKEY_BIT4 : 0x0UL));
+}
+
#define arch_max_pkey() 32
#define AMR_AD_BIT 0x1UL
#define AMR_WD_BIT 0x2UL
#define IAMR_EX_BIT 0x1UL
#define AMR_BITS_PER_PKEY 2
-#define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
- VM_PKEY_BIT3 | VM_PKEY_BIT4)
/*
* Bits are in BE format.
* NOTE: key 31, 1, 0 are not used.
--
1.7.1
This patch provides the ability for a process to
associate a pkey with an address range.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/systbl.h | 1 +
arch/powerpc/include/asm/unistd.h | 4 +---
arch/powerpc/include/uapi/asm/unistd.h | 1 +
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/include/asm/systbl.h b/arch/powerpc/include/asm/systbl.h
index 22dd776..b33b551 100644
--- a/arch/powerpc/include/asm/systbl.h
+++ b/arch/powerpc/include/asm/systbl.h
@@ -390,3 +390,4 @@
SYSCALL(statx)
SYSCALL(pkey_alloc)
SYSCALL(pkey_free)
+SYSCALL(pkey_mprotect)
diff --git a/arch/powerpc/include/asm/unistd.h b/arch/powerpc/include/asm/unistd.h
index e0273bc..daf1ba9 100644
--- a/arch/powerpc/include/asm/unistd.h
+++ b/arch/powerpc/include/asm/unistd.h
@@ -12,12 +12,10 @@
#include <uapi/asm/unistd.h>
-#define NR_syscalls 386
+#define NR_syscalls 387
#define __NR__exit __NR_exit
-#define __IGNORE_pkey_mprotect
-
#ifndef __ASSEMBLY__
#include <linux/types.h>
diff --git a/arch/powerpc/include/uapi/asm/unistd.h b/arch/powerpc/include/uapi/asm/unistd.h
index 7993a07..71ae45e 100644
--- a/arch/powerpc/include/uapi/asm/unistd.h
+++ b/arch/powerpc/include/uapi/asm/unistd.h
@@ -396,5 +396,6 @@
#define __NR_statx 383
#define __NR_pkey_alloc 384
#define __NR_pkey_free 385
+#define __NR_pkey_mprotect 386
#endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */
--
1.7.1
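To see the new syscall in action, a hedged userspace sketch (syscall
numbers taken from the hunk above; glibc wrappers may not exist yet, so
raw syscall(2) is used and error handling is omitted):

#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_pkey_alloc
#define __NR_pkey_alloc		384	/* powerpc numbers, per this series */
#define __NR_pkey_free		385
#endif
#ifndef __NR_pkey_mprotect
#define __NR_pkey_mprotect	386
#endif

int main(void)
{
	long psize = sysconf(_SC_PAGESIZE);
	void *buf = mmap(NULL, psize, PROT_READ | PROT_WRITE,
			 MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	int pkey = syscall(__NR_pkey_alloc, 0, 0);

	/* associate the key with the page, keeping r/w protections */
	syscall(__NR_pkey_mprotect, buf, psize,
		PROT_READ | PROT_WRITE, pkey);

	syscall(__NR_pkey_free, pkey);
	return 0;
}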
Map the pkey bits in the pte from the key protection
bits of the vma.
The pte bits used for the pkey are 3, 4, 5, 6 and 57. The first
four bits are the same four bits that were freed up initially
in this patch series. Remember? :-) Without those four bits
this patch wouldn't be possible.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 20 +++++++++++++++++++-
arch/powerpc/include/asm/mman.h | 8 ++++++++
arch/powerpc/include/asm/pkeys.h | 9 +++++++++
3 files changed, 36 insertions(+), 1 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 435d6a7..d9c87c4 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -37,6 +37,7 @@
#define _RPAGE_RSV2 0x0800000000000000UL
#define _RPAGE_RSV3 0x0400000000000000UL
#define _RPAGE_RSV4 0x0200000000000000UL
+#define _RPAGE_RSV5 0x00040UL
#define _PAGE_PTE 0x4000000000000000UL /* distinguishes PTEs from pointers */
#define _PAGE_PRESENT 0x8000000000000000UL /* pte contains a translation */
@@ -56,6 +57,20 @@
/* Max physical address bit as per radix table */
#define _RPAGE_PA_MAX 57
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+#define H_PAGE_PKEY_BIT0 _RPAGE_RSV1
+#define H_PAGE_PKEY_BIT1 _RPAGE_RSV2
+#define H_PAGE_PKEY_BIT2 _RPAGE_RSV3
+#define H_PAGE_PKEY_BIT3 _RPAGE_RSV4
+#define H_PAGE_PKEY_BIT4 _RPAGE_RSV5
+#else /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
+#define H_PAGE_PKEY_BIT0 0
+#define H_PAGE_PKEY_BIT1 0
+#define H_PAGE_PKEY_BIT2 0
+#define H_PAGE_PKEY_BIT3 0
+#define H_PAGE_PKEY_BIT4 0
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
+
/*
* Max physical address bit we will use for now.
*
@@ -116,13 +131,16 @@
#define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
_PAGE_ACCESSED | _PAGE_SPECIAL | _PAGE_PTE | \
_PAGE_SOFT_DIRTY)
+
+#define H_PAGE_PKEY (H_PAGE_PKEY_BIT0 | H_PAGE_PKEY_BIT1 | H_PAGE_PKEY_BIT2 | \
+ H_PAGE_PKEY_BIT3 | H_PAGE_PKEY_BIT4)
/*
* Mask of bits returned by pte_pgprot()
*/
#define PAGE_PROT_BITS (_PAGE_SAO | _PAGE_NON_IDEMPOTENT | _PAGE_TOLERANT | \
H_PAGE_4K_PFN | _PAGE_PRIVILEGED | _PAGE_ACCESSED | \
_PAGE_READ | _PAGE_WRITE | _PAGE_DIRTY | _PAGE_EXEC | \
- _PAGE_SOFT_DIRTY)
+ _PAGE_SOFT_DIRTY | H_PAGE_PKEY)
/*
* We define 2 sets of base prot bits, one for basic pages (ie,
* cacheable kernel and user pages) and one for non cacheable
diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h
index 067eec2..3f7220f 100644
--- a/arch/powerpc/include/asm/mman.h
+++ b/arch/powerpc/include/asm/mman.h
@@ -32,12 +32,20 @@ static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
}
#define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
+
static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
{
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ return (vm_flags & VM_SAO) ?
+ __pgprot(_PAGE_SAO | vmflag_to_page_pkey_bits(vm_flags)) :
+ __pgprot(0 | vmflag_to_page_pkey_bits(vm_flags));
+#else
return (vm_flags & VM_SAO) ? __pgprot(_PAGE_SAO) : __pgprot(0);
+#endif
}
#define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags)
+
static inline bool arch_validate_prot(unsigned long prot)
{
if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_SAO))
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 20846c2..c681de9 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -13,6 +13,15 @@ static inline u64 pkey_to_vmflag_bits(u16 pkey)
((pkey & 0x10UL) ? VM_PKEY_BIT4 : 0x0UL));
}
+static inline u64 vmflag_to_page_pkey_bits(u64 vm_flags)
+{
+ return (((vm_flags & VM_PKEY_BIT0) ? H_PAGE_PKEY_BIT4 : 0x0UL) |
+ ((vm_flags & VM_PKEY_BIT1) ? H_PAGE_PKEY_BIT3 : 0x0UL) |
+ ((vm_flags & VM_PKEY_BIT2) ? H_PAGE_PKEY_BIT2 : 0x0UL) |
+ ((vm_flags & VM_PKEY_BIT3) ? H_PAGE_PKEY_BIT1 : 0x0UL) |
+ ((vm_flags & VM_PKEY_BIT4) ? H_PAGE_PKEY_BIT0 : 0x0UL));
+}
+
static inline int vma_pkey(struct vm_area_struct *vma)
{
return (vma->vm_flags & ARCH_VM_PKEY_FLAGS) >> VM_PKEY_SHIFT;
--
1.7.1
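Note that the mapping above reverses the bit order: VM_PKEY_BIT0, the
least-significant key bit, lands in H_PAGE_PKEY_BIT4, because the PTE
pkey bits are numbered in BE convention. A standalone sketch of this
reversal, with made-up bit positions standing in for the real VM_* and
H_PAGE_* values:

#include <assert.h>
#include <stdint.h>

/* illustrative stand-ins, not the real kernel bit positions */
#define VM_BIT(n)	(1ULL << (32 + (n)))	/* VM_PKEY_BIT0..4 */
#define PTE_BIT(n)	(1ULL << (60 - (n)))	/* H_PAGE_PKEY_BIT0..4 */

static uint64_t vmflag_to_pte_pkey_bits(uint64_t vm_flags)
{
	uint64_t pte_bits = 0;
	int i;

	for (i = 0; i < 5; i++)
		if (vm_flags & VM_BIT(i))
			pte_bits |= PTE_BIT(4 - i);	/* BE reversal */
	return pte_bits;
}

int main(void)
{
	/* pkey 1: only VM_PKEY_BIT0 set -> only H_PAGE_PKEY_BIT4 set */
	assert(vmflag_to_pte_pkey_bits(VM_BIT(0)) == PTE_BIT(4));
	return 0;
}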
This patch provides the detailed implementation for
a user to allocate a key and enable it in the hardware.
It provides the plumbing, but it cannot be used until
the system call is implemented. The next patch
will do so.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/pkeys.h | 8 ++++-
arch/powerpc/mm/Makefile | 1 +
arch/powerpc/mm/pkeys.c | 66 ++++++++++++++++++++++++++++++++++++++
3 files changed, 74 insertions(+), 1 deletions(-)
create mode 100644 arch/powerpc/mm/pkeys.c
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 9345767..1495342 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -2,6 +2,10 @@
#define _ASM_PPC64_PKEYS_H
#define arch_max_pkey() 32
+#define AMR_AD_BIT 0x1UL
+#define AMR_WD_BIT 0x2UL
+#define IAMR_EX_BIT 0x1UL
+#define AMR_BITS_PER_PKEY 2
#define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
VM_PKEY_BIT3 | VM_PKEY_BIT4)
/*
@@ -93,10 +97,12 @@ static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma,
return 0;
}
+extern int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
+ unsigned long init_val);
static inline int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
unsigned long init_val)
{
- return 0;
+ return __arch_set_user_pkey_access(tsk, pkey, init_val);
}
static inline void pkey_mm_init(struct mm_struct *mm)
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index 7414034..8cc2ff1 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -45,3 +45,4 @@ obj-$(CONFIG_PPC_COPRO_BASE) += copro_fault.o
obj-$(CONFIG_SPAPR_TCE_IOMMU) += mmu_context_iommu.o
obj-$(CONFIG_PPC_PTDUMP) += dump_linuxpagetables.o
obj-$(CONFIG_PPC_HTDUMP) += dump_hashpagetable.o
+obj-$(CONFIG_PPC64_MEMORY_PROTECTION_KEYS) += pkeys.o
diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
new file mode 100644
index 0000000..d3ba167
--- /dev/null
+++ b/arch/powerpc/mm/pkeys.c
@@ -0,0 +1,66 @@
+/*
+ * PowerPC Memory Protection Keys management
+ * Copyright (c) 2015, Intel Corporation.
+ * Copyright (c) 2017, IBM Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ */
+#include <linux/pkeys.h> /* PKEY_* */
+#include <uapi/asm-generic/mman-common.h>
+
+/*
+ * set the access right in AMR IAMR and UAMOR register
+ * for @pkey to that specified in @init_val.
+ */
+int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
+ unsigned long init_val)
+{
+ u64 old_amr, old_uamor, old_iamr;
+ int pkey_shift = (arch_max_pkey()-pkey-1) * AMR_BITS_PER_PKEY;
+ u64 new_amr_bits = 0x0ul;
+ u64 new_iamr_bits = 0x0ul;
+ u64 new_uamor_bits = 0x3ul;
+
+ /* Set the bits we need in AMR: */
+ if (init_val & PKEY_DISABLE_ACCESS)
+ new_amr_bits |= AMR_AD_BIT | AMR_WD_BIT;
+ if (init_val & PKEY_DISABLE_WRITE)
+ new_amr_bits |= AMR_WD_BIT;
+
+ /*
+ * Execute is enabled by default.
+ * To disable execute, PKEY_DISABLE_EXECUTE
+ * needs to be specified.
+ */
+ if ((init_val & PKEY_DISABLE_EXECUTE))
+ new_iamr_bits |= IAMR_EX_BIT;
+
+ /* Shift the bits in to the correct place in AMR for pkey: */
+ new_amr_bits <<= pkey_shift;
+ new_iamr_bits <<= pkey_shift;
+ new_uamor_bits <<= pkey_shift;
+
+ /* Get old AMR and mask off any old bits in place: */
+ old_amr = read_amr();
+ old_amr &= ~((u64)(AMR_AD_BIT|AMR_WD_BIT) << pkey_shift);
+
+ old_iamr = read_iamr();
+ old_iamr &= ~(0x3ul << pkey_shift);
+
+ old_uamor = read_uamor();
+ old_uamor &= ~(0x3ul << pkey_shift);
+
+ /* Write old part along with new part: */
+ write_amr(old_amr | new_amr_bits);
+ write_iamr(old_iamr | new_iamr_bits);
+ write_uamor(old_uamor | new_uamor_bits);
+
+ return 0;
+}
--
1.7.1
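Since the AMR packs two bits per key starting from the most-significant
end, the shift arithmetic is easy to sanity-check in isolation. A small
standalone sketch, mirroring (not reusing) the constants above:

#include <stdio.h>
#include <stdint.h>

#define ARCH_MAX_PKEY		32
#define AMR_BITS_PER_PKEY	2
#define AMR_AD_BIT		0x1UL
#define AMR_WD_BIT		0x2UL

int main(void)
{
	int pkey;

	for (pkey = 0; pkey < 3; pkey++) {
		int shift = (ARCH_MAX_PKEY - pkey - 1) * AMR_BITS_PER_PKEY;
		uint64_t deny_all = (AMR_AD_BIT | AMR_WD_BIT) << shift;

		/* pkey 0 occupies the two most-significant bits (shift 62) */
		printf("pkey %d: shift %d, deny-all mask 0x%016llx\n",
		       pkey, shift, (unsigned long long)deny_all);
	}
	return 0;
}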
Store and restore the AMR, IAMR and UAMOR register state of the task
before scheduling out and after scheduling in, respectively.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/processor.h | 5 +++++
arch/powerpc/kernel/process.c | 18 ++++++++++++++++++
2 files changed, 23 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index a2123f2..1f714df 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -310,6 +310,11 @@ struct thread_struct {
struct thread_vr_state ckvr_state; /* Checkpointed VR state */
unsigned long ckvrsave; /* Checkpointed VRSAVE */
#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ unsigned long amr;
+ unsigned long iamr;
+ unsigned long uamor;
+#endif
#ifdef CONFIG_KVM_BOOK3S_32_HANDLER
void* kvm_shadow_vcpu; /* KVM internal data */
#endif /* CONFIG_KVM_BOOK3S_32_HANDLER */
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index baae104..37d001a 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1096,6 +1096,11 @@ static inline void save_sprs(struct thread_struct *t)
t->tar = mfspr(SPRN_TAR);
}
#endif
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ t->amr = mfspr(SPRN_AMR);
+ t->iamr = mfspr(SPRN_IAMR);
+ t->uamor = mfspr(SPRN_UAMOR);
+#endif
}
static inline void restore_sprs(struct thread_struct *old_thread,
@@ -1131,6 +1136,14 @@ static inline void restore_sprs(struct thread_struct *old_thread,
mtspr(SPRN_TAR, new_thread->tar);
}
#endif
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ if (old_thread->amr != new_thread->amr)
+ mtspr(SPRN_AMR, new_thread->amr);
+ if (old_thread->iamr != new_thread->iamr)
+ mtspr(SPRN_IAMR, new_thread->iamr);
+ if (old_thread->uamor != new_thread->uamor)
+ mtspr(SPRN_UAMOR, new_thread->uamor);
+#endif
}
struct task_struct *__switch_to(struct task_struct *prev,
@@ -1686,6 +1699,11 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
current->thread.tm_texasr = 0;
current->thread.tm_tfiar = 0;
#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ current->thread.amr = 0x0ul;
+ current->thread.iamr = 0x0ul;
+ current->thread.uamor = 0x0ul;
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
}
EXPORT_SYMBOL(start_thread);
--
1.7.1
Initial plumbing to manage all the keys supported by the
hardware.
A total of 32 keys are supported on powerpc. However, pkeys 0, 1
and 31 are reserved, so effectively we have 29 pkeys.
This patch keeps track of reserved keys, allocated keys
and keys that are currently free.
It also adds the skeletal functions and macros that the
architecture-independent code expects to be available.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/Kconfig | 16 +++++
arch/powerpc/include/asm/book3s/64/mmu.h | 9 +++
arch/powerpc/include/asm/pkeys.h | 106 ++++++++++++++++++++++++++++++
arch/powerpc/mm/mmu_context_book3s64.c | 5 ++
4 files changed, 136 insertions(+), 0 deletions(-)
create mode 100644 arch/powerpc/include/asm/pkeys.h
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index f7c8f99..a2480b6 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -871,6 +871,22 @@ config SECCOMP
If unsure, say Y. Only embedded should say N here.
+config PPC64_MEMORY_PROTECTION_KEYS
+ prompt "PowerPC Memory Protection Keys"
+ def_bool y
+ # Note: only available in 64-bit mode
+ depends on PPC64 && PPC_64K_PAGES
+ select ARCH_USES_HIGH_VMA_FLAGS
+ select ARCH_HAS_PKEYS
+ ---help---
+ Memory Protection Keys provides a mechanism for enforcing
+ page-based protections, but without requiring modification of the
+ page tables when an application changes protection domains.
+
+ For details, see Documentation/powerpc/protection-keys.txt
+
+ If unsure, say y.
+
endmenu
config ISA_DMA_API
diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
index 77529a3..104ad72 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -108,6 +108,15 @@ struct patb_entry {
#ifdef CONFIG_SPAPR_TCE_IOMMU
struct list_head iommu_group_mem_list;
#endif
+
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ /*
+ * Each bit represents one protection key.
+ * bit set -> key allocated
+ * bit unset -> key available for allocation
+ */
+ u32 pkey_allocation_map;
+#endif
} mm_context_t;
/*
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
new file mode 100644
index 0000000..9345767
--- /dev/null
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -0,0 +1,106 @@
+#ifndef _ASM_PPC64_PKEYS_H
+#define _ASM_PPC64_PKEYS_H
+
+#define arch_max_pkey() 32
+#define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
+ VM_PKEY_BIT3 | VM_PKEY_BIT4)
+/*
+ * Bits are in BE format.
+ * NOTE: key 31, 1, 0 are not used.
+ * key 0 is used by default. It gives read/write/execute permission.
+ * key 31 is reserved by the hypervisor.
+ * key 1 is recommended not to be used.
+ * PowerISA(3.0) page 1015, programming note.
+ */
+#define PKEY_INITIAL_ALLOCATION 0xc0000001
+
+#define pkeybit_mask(pkey) (0x1 << (arch_max_pkey() - pkey - 1))
+
+#define mm_pkey_allocation_map(mm) (mm->context.pkey_allocation_map)
+
+#define mm_set_pkey_allocated(mm, pkey) { \
+ mm_pkey_allocation_map(mm) |= pkeybit_mask(pkey); \
+}
+
+#define mm_set_pkey_free(mm, pkey) { \
+ mm_pkey_allocation_map(mm) &= ~pkeybit_mask(pkey); \
+}
+
+#define mm_set_pkey_is_allocated(mm, pkey) \
+ (mm_pkey_allocation_map(mm) & pkeybit_mask(pkey))
+
+#define mm_set_pkey_is_reserved(mm, pkey) (PKEY_INITIAL_ALLOCATION & \
+ pkeybit_mask(pkey))
+
+static inline bool mm_pkey_is_allocated(struct mm_struct *mm, int pkey)
+{
+ /* a reserved key is never considered as 'explicitly allocated' */
+ return (!mm_set_pkey_is_reserved(mm, pkey) &&
+ mm_set_pkey_is_allocated(mm, pkey));
+}
+
+/*
+ * Returns a positive, 5-bit key on success, or -1 on failure.
+ */
+static inline int mm_pkey_alloc(struct mm_struct *mm)
+{
+ /*
+ * Note: this is the one and only place we make sure
+ * that the pkey is valid as far as the hardware is
+ * concerned. The rest of the kernel trusts that
+ * only good, valid pkeys come out of here.
+ */
+ u32 all_pkeys_mask = (u32)(~(0x0));
+ int ret;
+
+ /*
+ * Are we out of pkeys? We must handle this specially
+ * because ffz() behavior is undefined if there are no
+ * zeros.
+ */
+ if (mm_pkey_allocation_map(mm) == all_pkeys_mask)
+ return -1;
+
+ ret = arch_max_pkey() -
+ ffz((u32)mm_pkey_allocation_map(mm))
+ - 1;
+ mm_set_pkey_allocated(mm, ret);
+ return ret;
+}
+
+static inline int mm_pkey_free(struct mm_struct *mm, int pkey)
+{
+ if (!mm_pkey_is_allocated(mm, pkey))
+ return -EINVAL;
+
+ mm_set_pkey_free(mm, pkey);
+
+ return 0;
+}
+
+/*
+ * Try to dedicate one of the protection keys to be used as an
+ * execute-only protection key.
+ */
+static inline int execute_only_pkey(struct mm_struct *mm)
+{
+ return 0;
+}
+
+static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma,
+ int prot, int pkey)
+{
+ return 0;
+}
+
+static inline int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
+ unsigned long init_val)
+{
+ return 0;
+}
+
+static inline void pkey_mm_init(struct mm_struct *mm)
+{
+ mm_pkey_allocation_map(mm) = PKEY_INITIAL_ALLOCATION;
+}
+#endif /*_ASM_PPC64_PKEYS_H */
diff --git a/arch/powerpc/mm/mmu_context_book3s64.c b/arch/powerpc/mm/mmu_context_book3s64.c
index c6dca2a..2da9931 100644
--- a/arch/powerpc/mm/mmu_context_book3s64.c
+++ b/arch/powerpc/mm/mmu_context_book3s64.c
@@ -16,6 +16,7 @@
#include <linux/string.h>
#include <linux/types.h>
#include <linux/mm.h>
+#include <linux/pkeys.h>
#include <linux/spinlock.h>
#include <linux/idr.h>
#include <linux/export.h>
@@ -120,6 +121,10 @@ static int hash__init_new_context(struct mm_struct *mm)
subpage_prot_init_new_context(mm);
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ pkey_mm_init(mm);
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
+
return index;
}
--
1.7.1
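The BE-numbered allocation bitmap can be exercised on its own. A sketch
that mirrors (but does not reuse) the macros above, walking the first
few allocations under the 0xc0000001 reserved mask:

#include <stdio.h>
#include <stdint.h>

#define ARCH_MAX_PKEY		32
#define PKEY_INITIAL_ALLOCATION	0xc0000001u	/* pkeys 0, 1 and 31 */

static int ffz32(uint32_t x)	/* index of the first zero bit */
{
	return __builtin_ctz(~x);
}

int main(void)
{
	uint32_t map = PKEY_INITIAL_ALLOCATION;
	int i;

	for (i = 0; i < 3; i++) {
		int pkey = ARCH_MAX_PKEY - ffz32(map) - 1;

		map |= 1u << (ARCH_MAX_PKEY - pkey - 1);
		printf("allocation %d -> pkey %d\n", i, pkey);
	}
	/* prints pkeys 30, 29, 28: allocation walks down from the
	 * highest non-reserved key */
	return 0;
}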
This patch provides the implementation of the execute-only pkey.
The architecture-independent code expects the ability to create
and manage a special key which has execute-only permission.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/book3s/64/mmu.h | 1 +
arch/powerpc/include/asm/pkeys.h | 6 +++-
arch/powerpc/mm/pkeys.c | 59 ++++++++++++++++++++++++++++++
3 files changed, 65 insertions(+), 1 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
index 104ad72..0c0a2a8 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -116,6 +116,7 @@ struct patb_entry {
* bit unset -> key available for allocation
*/
u32 pkey_allocation_map;
+ s16 execute_only_pkey; /* key holding execute-only protection */
#endif
} mm_context_t;
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 1495342..4b01c37 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -86,11 +86,13 @@ static inline int mm_pkey_free(struct mm_struct *mm, int pkey)
* Try to dedicate one of the protection keys to be used as an
* execute-only protection key.
*/
+extern int __execute_only_pkey(struct mm_struct *mm);
static inline int execute_only_pkey(struct mm_struct *mm)
{
- return 0;
+ return __execute_only_pkey(mm);
}
+
static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma,
int prot, int pkey)
{
@@ -108,5 +110,7 @@ static inline int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
static inline void pkey_mm_init(struct mm_struct *mm)
{
mm_pkey_allocation_map(mm) = PKEY_INITIAL_ALLOCATION;
+ /* -1 means unallocated or invalid */
+ mm->context.execute_only_pkey = -1;
}
#endif /*_ASM_PPC64_PKEYS_H */
diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
index d3ba167..6c90317 100644
--- a/arch/powerpc/mm/pkeys.c
+++ b/arch/powerpc/mm/pkeys.c
@@ -64,3 +64,62 @@ int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
return 0;
}
+
+#define pkeyshift(pkey) ((arch_max_pkey()-pkey-1) * AMR_BITS_PER_PKEY)
+
+static inline bool pkey_allows_readwrite(int pkey)
+{
+ int pkey_shift = pkeyshift(pkey);
+
+ if (!(read_uamor() & (0x3UL << pkey_shift)))
+ return true;
+
+ return !(read_amr() & ((AMR_AD_BIT|AMR_WD_BIT) << pkey_shift));
+}
+
+int __execute_only_pkey(struct mm_struct *mm)
+{
+ bool need_to_set_mm_pkey = false;
+ int execute_only_pkey = mm->context.execute_only_pkey;
+ int ret;
+
+ /* Do we need to assign a pkey for mm's execute-only maps? */
+ if (execute_only_pkey == -1) {
+ /* Go allocate one to use, which might fail */
+ execute_only_pkey = mm_pkey_alloc(mm);
+ if (execute_only_pkey < 0)
+ return -1;
+ need_to_set_mm_pkey = true;
+ }
+
+ /*
+ * We do not want to go through the relatively costly
+ * dance to set AMR if we do not need to. Check it
+ * first and assume that if the execute-only pkey is
+ * readwrite-disabled then we do not have to set it
+ * ourselves.
+ */
+ if (!need_to_set_mm_pkey &&
+ !pkey_allows_readwrite(execute_only_pkey))
+ return execute_only_pkey;
+
+ /*
+ * Set up AMR so that it denies access for everything
+ * other than execution.
+ */
+ ret = __arch_set_user_pkey_access(current, execute_only_pkey,
+ (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE));
+ /*
+ * If the AMR-set operation failed somehow, just return
+ * 0 and effectively disable execute-only support.
+ */
+ if (ret) {
+ mm_set_pkey_free(mm, execute_only_pkey);
+ return -1;
+ }
+
+ /* We got one, store it and use it from here on out */
+ if (need_to_set_mm_pkey)
+ mm->context.execute_only_pkey = execute_only_pkey;
+ return execute_only_pkey;
+}
--
1.7.1
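From userspace the execute-only key is reached through plain
mprotect(); a hedged sketch of the expected behaviour, assuming the
generic mprotect path backs PROT_EXEC-only mappings with this key:

#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long psize = sysconf(_SC_PAGESIZE);
	char *p = mmap(NULL, psize, PROT_READ | PROT_WRITE | PROT_EXEC,
		       MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

	/* ... copy some position-independent code into p here ... */

	/* generic code may back a PROT_EXEC-only mapping with the
	 * execute-only pkey allocated by __execute_only_pkey() */
	mprotect(p, psize, PROT_EXEC);

	return p[0];	/* a data read is now expected to SIGSEGV */
}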
Currently there are only 4 bits in the vma flags to support 16 keys
on x86. powerpc supports 32 keys, which needs 5 bits. This patch
introduces an additional bit in the vma flags.
Signed-off-by: Ram Pai <[email protected]>
---
fs/proc/task_mmu.c | 6 +++++-
include/linux/mm.h | 18 +++++++++++++-----
2 files changed, 18 insertions(+), 6 deletions(-)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index f0c8b33..2ddc298 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -666,12 +666,16 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
[ilog2(VM_MERGEABLE)] = "mg",
[ilog2(VM_UFFD_MISSING)]= "um",
[ilog2(VM_UFFD_WP)] = "uw",
-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+#ifdef CONFIG_ARCH_HAS_PKEYS
/* These come out via ProtectionKey: */
[ilog2(VM_PKEY_BIT0)] = "",
[ilog2(VM_PKEY_BIT1)] = "",
[ilog2(VM_PKEY_BIT2)] = "",
[ilog2(VM_PKEY_BIT3)] = "",
+#endif /* CONFIG_ARCH_HAS_PKEYS */
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ /* Additional bit in ProtectionKey: */
+ [ilog2(VM_PKEY_BIT4)] = "",
#endif
};
size_t i;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7cb17c6..3d35bcc 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -208,21 +208,29 @@ extern int overcommit_kbytes_handler(struct ctl_table *, int, void __user *,
#define VM_HIGH_ARCH_BIT_1 33 /* bit only usable on 64-bit architectures */
#define VM_HIGH_ARCH_BIT_2 34 /* bit only usable on 64-bit architectures */
#define VM_HIGH_ARCH_BIT_3 35 /* bit only usable on 64-bit architectures */
+#define VM_HIGH_ARCH_BIT_4 36 /* bit only usable on 64-bit arch */
#define VM_HIGH_ARCH_0 BIT(VM_HIGH_ARCH_BIT_0)
#define VM_HIGH_ARCH_1 BIT(VM_HIGH_ARCH_BIT_1)
#define VM_HIGH_ARCH_2 BIT(VM_HIGH_ARCH_BIT_2)
#define VM_HIGH_ARCH_3 BIT(VM_HIGH_ARCH_BIT_3)
+#define VM_HIGH_ARCH_4 BIT(VM_HIGH_ARCH_BIT_4)
#endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
-#if defined(CONFIG_X86)
-# define VM_PAT VM_ARCH_1 /* PAT reserves whole VMA at once (x86) */
-#if defined (CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS)
+#ifdef CONFIG_ARCH_HAS_PKEYS
# define VM_PKEY_SHIFT VM_HIGH_ARCH_BIT_0
-# define VM_PKEY_BIT0 VM_HIGH_ARCH_0 /* A protection key is a 4-bit value */
+# define VM_PKEY_BIT0 VM_HIGH_ARCH_0
# define VM_PKEY_BIT1 VM_HIGH_ARCH_1
# define VM_PKEY_BIT2 VM_HIGH_ARCH_2
# define VM_PKEY_BIT3 VM_HIGH_ARCH_3
-#endif
+#endif /* CONFIG_ARCH_HAS_PKEYS */
+
+#if defined(CONFIG_PPC64_MEMORY_PROTECTION_KEYS)
+# define VM_PKEY_BIT4 VM_HIGH_ARCH_4 /* additional key bit used on ppc64 */
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
+
+
+#if defined(CONFIG_X86)
+# define VM_PAT VM_ARCH_1 /* PAT reserves whole VMA at once (x86) */
#elif defined(CONFIG_PPC)
# define VM_SAO VM_ARCH_1 /* Strong Access Ordering (powerpc) */
#elif defined(CONFIG_PARISC)
--
1.7.1
Finally this patch provides the ability for a process to
allocate and free a protection key.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/systbl.h | 2 ++
arch/powerpc/include/asm/unistd.h | 4 +---
arch/powerpc/include/uapi/asm/unistd.h | 2 ++
3 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/include/asm/systbl.h b/arch/powerpc/include/asm/systbl.h
index 1c94708..22dd776 100644
--- a/arch/powerpc/include/asm/systbl.h
+++ b/arch/powerpc/include/asm/systbl.h
@@ -388,3 +388,5 @@
COMPAT_SYS_SPU(pwritev2)
SYSCALL(kexec_file_load)
SYSCALL(statx)
+SYSCALL(pkey_alloc)
+SYSCALL(pkey_free)
diff --git a/arch/powerpc/include/asm/unistd.h b/arch/powerpc/include/asm/unistd.h
index 9ba11db..e0273bc 100644
--- a/arch/powerpc/include/asm/unistd.h
+++ b/arch/powerpc/include/asm/unistd.h
@@ -12,13 +12,11 @@
#include <uapi/asm/unistd.h>
-#define NR_syscalls 384
+#define NR_syscalls 386
#define __NR__exit __NR_exit
#define __IGNORE_pkey_mprotect
-#define __IGNORE_pkey_alloc
-#define __IGNORE_pkey_free
#ifndef __ASSEMBLY__
diff --git a/arch/powerpc/include/uapi/asm/unistd.h b/arch/powerpc/include/uapi/asm/unistd.h
index b85f142..7993a07 100644
--- a/arch/powerpc/include/uapi/asm/unistd.h
+++ b/arch/powerpc/include/uapi/asm/unistd.h
@@ -394,5 +394,7 @@
#define __NR_pwritev2 381
#define __NR_kexec_file_load 382
#define __NR_statx 383
+#define __NR_pkey_alloc 384
+#define __NR_pkey_free 385
#endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */
--
1.7.1
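A minimal allocation/free round trip through the new entry points might
look as follows (raw syscall(2), numbers from the hunk above;
PKEY_DISABLE_WRITE is the generic uapi flag):

#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_pkey_alloc
#define __NR_pkey_alloc	384	/* powerpc numbers, per this patch */
#define __NR_pkey_free	385
#endif
#define PKEY_DISABLE_WRITE	0x2

int main(void)
{
	int pkey = syscall(__NR_pkey_alloc, 0, PKEY_DISABLE_WRITE);

	if (pkey < 0) {
		perror("pkey_alloc");
		return 1;
	}
	printf("allocated pkey %d\n", pkey);
	syscall(__NR_pkey_free, pkey);
	return 0;
}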
Replace redundant code with the helper functions
pte_get_hash_gslot() and pte_set_hash_slot().
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/mm/hash64_4k.c | 14 ++++++--------
1 files changed, 6 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/mm/hash64_4k.c b/arch/powerpc/mm/hash64_4k.c
index 6fa450c..a1eebc1 100644
--- a/arch/powerpc/mm/hash64_4k.c
+++ b/arch/powerpc/mm/hash64_4k.c
@@ -20,6 +20,7 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
pte_t *ptep, unsigned long trap, unsigned long flags,
int ssize, int subpg_prot)
{
+ real_pte_t rpte;
unsigned long hpte_group;
unsigned long rflags, pa;
unsigned long old_pte, new_pte;
@@ -54,6 +55,7 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
* need to add in 0x1 if it's a read-only user page
*/
rflags = htab_convert_pte_flags(new_pte);
+ rpte = __real_pte(__pte(old_pte), ptep);
if (cpu_has_feature(CPU_FTR_NOEXECUTE) &&
!cpu_has_feature(CPU_FTR_COHERENT_ICACHE))
@@ -64,13 +66,10 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
/*
* There MIGHT be an HPTE for this pte
*/
- hash = hpt_hash(vpn, shift, ssize);
- if (old_pte & H_PAGE_F_SECOND)
- hash = ~hash;
- slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
- slot += (old_pte & H_PAGE_F_GIX) >> H_PAGE_F_GIX_SHIFT;
+ unsigned long gslot = pte_get_hash_gslot(vpn, shift,
+ ssize, rpte, 0);
- if (mmu_hash_ops.hpte_updatepp(slot, rflags, vpn, MMU_PAGE_4K,
+ if (mmu_hash_ops.hpte_updatepp(gslot, rflags, vpn, MMU_PAGE_4K,
MMU_PAGE_4K, ssize, flags) == -1)
old_pte &= ~_PAGE_HPTEFLAGS;
}
@@ -118,8 +117,7 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
return -1;
}
new_pte = (new_pte & ~_PAGE_HPTEFLAGS) | H_PAGE_HASHPTE;
- new_pte |= (slot << H_PAGE_F_GIX_SHIFT) &
- (H_PAGE_F_SECOND | H_PAGE_F_GIX);
+ new_pte |= pte_set_hash_slot(ptep, rpte, 0, slot);
}
*ptep = __pte(new_pte & ~H_PAGE_BUSY);
return 0;
--
1.7.1
x86 does not support disabling execute permissions on a pkey.
Signed-off-by: Ram Pai <[email protected]>
---
arch/x86/kernel/fpu/xstate.c | 3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index c24ac1e..d582631 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -900,6 +900,9 @@ int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
if (!boot_cpu_has(X86_FEATURE_OSPKE))
return -EINVAL;
+ if (init_val & PKEY_DISABLE_EXECUTE)
+ return -EINVAL;
+
/* Set the bits we need in PKRU: */
if (init_val & PKEY_DISABLE_ACCESS)
new_pkru_bits |= PKRU_AD_BIT;
--
1.7.1
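The new check is easy to probe from userspace; a sketch that expects
EINVAL on x86 (the value of PKEY_DISABLE_EXECUTE is an assumption here,
taken from this series' uapi addition):

#include <errno.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_pkey_alloc
#define __NR_pkey_alloc	330	/* x86_64 */
#endif
#ifndef PKEY_DISABLE_EXECUTE
#define PKEY_DISABLE_EXECUTE	0x4	/* assumed value from this series */
#endif

int main(void)
{
	int pkey = syscall(__NR_pkey_alloc, 0, PKEY_DISABLE_EXECUTE);

	if (pkey < 0 && errno == EINVAL)
		printf("PKEY_DISABLE_EXECUTE rejected, as expected on x86\n");
	return 0;
}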
Implement helper functions to read and write the key-related
registers: AMR, IAMR and UAMOR.
The AMR register tracks the read/write permission of a key.
The IAMR register tracks the execute permission of a key.
The UAMOR register enables and disables a key.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 60 ++++++++++++++++++++++++++
1 files changed, 60 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 85bc987..435d6a7 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -428,6 +428,66 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
pte_update(mm, addr, ptep, 0, _PAGE_PRIVILEGED, 1);
}
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+
+#include <asm/reg.h>
+static inline u64 read_amr(void)
+{
+ return mfspr(SPRN_AMR);
+}
+static inline void write_amr(u64 value)
+{
+ mtspr(SPRN_AMR, value);
+}
+static inline u64 read_iamr(void)
+{
+ return mfspr(SPRN_IAMR);
+}
+static inline void write_iamr(u64 value)
+{
+ mtspr(SPRN_IAMR, value);
+}
+static inline u64 read_uamor(void)
+{
+ return mfspr(SPRN_UAMOR);
+}
+static inline void write_uamor(u64 value)
+{
+ mtspr(SPRN_UAMOR, value);
+}
+
+#else /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
+
+static inline u64 read_amr(void)
+{
+ WARN(1, "%s called with MEMORY PROTECTION KEYS disabled\n", __func__);
+ return -1;
+}
+static inline void write_amr(u64 value)
+{
+ WARN(1, "%s called with MEMORY PROTECTION KEYS disabled\n", __func__);
+}
+static inline u64 read_uamor(void)
+{
+ WARN(1, "%s called with MEMORY PROTECTION KEYS disabled\n", __func__);
+ return -1;
+}
+static inline void write_uamor(u64 value)
+{
+ WARN(1, "%s called with MEMORY PROTECTION KEYS disabled\n", __func__);
+}
+static inline u64 read_iamr(void)
+{
+ WARN(1, "%s called with MEMORY PROTECTION KEYS disabled\n", __func__);
+ return -1;
+}
+static inline void write_iamr(u64 value)
+{
+ WARN(1, "%s called with MEMORY PROTECTION KEYS disabled\n", __func__);
+}
+
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
+
#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
--
1.7.1
Replace redundant code in __hash_page_64K() with the helper
functions pte_get_hash_gslot() and pte_set_hash_slot().
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/mm/hash64_64k.c | 24 ++++--------------------
1 files changed, 4 insertions(+), 20 deletions(-)
diff --git a/arch/powerpc/mm/hash64_64k.c b/arch/powerpc/mm/hash64_64k.c
index 0012618..645f621 100644
--- a/arch/powerpc/mm/hash64_64k.c
+++ b/arch/powerpc/mm/hash64_64k.c
@@ -244,7 +244,6 @@ int __hash_page_64K(unsigned long ea, unsigned long access,
unsigned long flags, int ssize)
{
real_pte_t rpte;
- unsigned long *hidxp;
unsigned long hpte_group;
unsigned long rflags, pa;
unsigned long old_pte, new_pte;
@@ -289,18 +288,12 @@ int __hash_page_64K(unsigned long ea, unsigned long access,
vpn = hpt_vpn(ea, vsid, ssize);
if (unlikely(old_pte & H_PAGE_HASHPTE)) {
- unsigned long hash, slot, hidx;
-
- hash = hpt_hash(vpn, shift, ssize);
- hidx = __rpte_to_hidx(rpte, 0);
- if (hidx & _PTEIDX_SECONDARY)
- hash = ~hash;
- slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
- slot += hidx & _PTEIDX_GROUP_IX;
+ unsigned long gslot;
/*
* There MIGHT be an HPTE for this pte
*/
- if (mmu_hash_ops.hpte_updatepp(slot, rflags, vpn, MMU_PAGE_64K,
+ gslot = pte_get_hash_gslot(vpn, shift, ssize, rpte, 0);
+ if (mmu_hash_ops.hpte_updatepp(gslot, rflags, vpn, MMU_PAGE_64K,
MMU_PAGE_64K, ssize,
flags) == -1)
old_pte &= ~_PAGE_HPTEFLAGS;
@@ -350,17 +343,8 @@ int __hash_page_64K(unsigned long ea, unsigned long access,
return -1;
}
- /*
- * Insert slot number & secondary bit in PTE second half.
- */
- hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
- rpte.hidx &= ~(0xfUL);
- *hidxp = rpte.hidx | (slot & 0xfUL);
- /*
- * check __real_pte for details on matching smp_rmb()
- */
- smp_wmb();
new_pte = (new_pte & ~_PAGE_HPTEFLAGS) | H_PAGE_HASHPTE;
+ new_pte |= pte_set_hash_slot(ptep, rpte, 0, slot);
}
*ptep = __pte(new_pte & ~H_PAGE_BUSY);
return 0;
--
1.7.1
Replace redundant code in __hash_page_4K() with the helper
functions pte_get_hash_gslot() and pte_set_hash_slot().
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/mm/hash64_64k.c | 34 +++++++++-------------------------
1 files changed, 9 insertions(+), 25 deletions(-)
diff --git a/arch/powerpc/mm/hash64_64k.c b/arch/powerpc/mm/hash64_64k.c
index 645f621..c658cb5 100644
--- a/arch/powerpc/mm/hash64_64k.c
+++ b/arch/powerpc/mm/hash64_64k.c
@@ -39,9 +39,8 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
{
real_pte_t rpte;
unsigned long hpte_group;
- unsigned long *hidxp;
unsigned int subpg_index;
- unsigned long rflags, pa, hidx;
+ unsigned long rflags, pa;
unsigned long old_pte, new_pte, subpg_pte;
unsigned long vpn, hash, slot, gslot;
unsigned long shift = mmu_psize_defs[MMU_PAGE_4K].shift;
@@ -114,18 +113,13 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
if (__rpte_sub_valid(rpte, subpg_index)) {
int ret;
- hash = hpt_hash(vpn, shift, ssize);
- hidx = __rpte_to_hidx(rpte, subpg_index);
- if (hidx & _PTEIDX_SECONDARY)
- hash = ~hash;
- slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
- slot += hidx & _PTEIDX_GROUP_IX;
-
- ret = mmu_hash_ops.hpte_updatepp(slot, rflags, vpn,
+ gslot = pte_get_hash_gslot(vpn, shift, ssize, rpte,
+ subpg_index);
+ ret = mmu_hash_ops.hpte_updatepp(gslot, rflags, vpn,
MMU_PAGE_4K, MMU_PAGE_4K,
ssize, flags);
/*
- *if we failed because typically the HPTE wasn't really here
+ * if we failed because typically the HPTE wasn't really here
* we try an insertion.
*/
if (ret == -1)
@@ -221,20 +215,10 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
MMU_PAGE_4K, MMU_PAGE_4K, old_pte);
return -1;
}
- /*
- * Insert slot number & secondary bit in PTE second half,
- * clear H_PAGE_BUSY and set appropriate HPTE slot bit
- * Since we have H_PAGE_BUSY set on ptep, we can be sure
- * nobody is undating hidx.
- */
- hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
- rpte.hidx &= ~(0xfUL << (subpg_index << 2));
- *hidxp = rpte.hidx | (slot << (subpg_index << 2));
- /*
- * check __real_pte for details on matching smp_rmb()
- */
- smp_wmb();
- new_pte |= H_PAGE_HASHPTE;
+
+ new_pte |= pte_set_hash_slot(ptep, rpte, subpg_index, slot);
+ new_pte |= H_PAGE_HASHPTE;
+
*ptep = __pte(new_pte & ~H_PAGE_BUSY);
return 0;
}
--
1.7.1
Replace redundant code in flush_hash_page() with the helper function
pte_get_hash_gslot().
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/mm/hash_utils_64.c | 13 ++++---------
1 files changed, 4 insertions(+), 9 deletions(-)
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index d3604da..d863696 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -1615,23 +1615,18 @@ unsigned long pte_get_hash_gslot(unsigned long vpn, unsigned long shift,
void flush_hash_page(unsigned long vpn, real_pte_t pte, int psize, int ssize,
unsigned long flags)
{
- unsigned long hash, index, shift, hidx, slot;
+ unsigned long index, shift, gslot;
int local = flags & HPTE_LOCAL_UPDATE;
DBG_LOW("flush_hash_page(vpn=%016lx)\n", vpn);
pte_iterate_hashed_subpages(pte, psize, vpn, index, shift) {
- hash = hpt_hash(vpn, shift, ssize);
- hidx = __rpte_to_hidx(pte, index);
- if (hidx & _PTEIDX_SECONDARY)
- hash = ~hash;
- slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
- slot += hidx & _PTEIDX_GROUP_IX;
- DBG_LOW(" sub %ld: hash=%lx, hidx=%lx\n", index, slot, hidx);
+ gslot = pte_get_hash_gslot(vpn, shift, ssize, pte, index);
+ DBG_LOW(" sub %ld: gslot=%lx\n", index, gslot);
/*
* We use same base page size and actual psize, because we don't
* use these functions for hugepage
*/
- mmu_hash_ops.hpte_invalidate(slot, vpn, psize, psize,
+ mmu_hash_ops.hpte_invalidate(gslot, vpn, psize, psize,
ssize, local);
} pte_iterate_hashed_end();
--
1.7.1
The H_PAGE_F_SECOND and H_PAGE_F_GIX bits are no longer in the 64K main-PTE.
Capture these changes in the dump pte report.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/mm/dump_linuxpagetables.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/arch/powerpc/mm/dump_linuxpagetables.c b/arch/powerpc/mm/dump_linuxpagetables.c
index 44fe483..5627edd 100644
--- a/arch/powerpc/mm/dump_linuxpagetables.c
+++ b/arch/powerpc/mm/dump_linuxpagetables.c
@@ -213,7 +213,7 @@ struct flag_info {
.val = H_PAGE_4K_PFN,
.set = "4K_pfn",
}, {
-#endif
+#else /* CONFIG_PPC_64K_PAGES */
.mask = H_PAGE_F_GIX,
.val = H_PAGE_F_GIX,
.set = "f_gix",
@@ -224,6 +224,7 @@ struct flag_info {
.val = H_PAGE_F_SECOND,
.set = "f_second",
}, {
+#endif /* CONFIG_PPC_64K_PAGES */
#endif
.mask = _PAGE_SPECIAL,
.val = _PAGE_SPECIAL,
--
1.7.1
Introduce pte_set_hash_slot(). It sets the (H_PAGE_F_SECOND|H_PAGE_F_GIX)
bits at the appropriate location in the PTE of a 4K PTE. For a
64K PTE, it sets the bits in the second part of the PTE. Though
the implementation for the former needs only the slot parameter, it
takes some additional parameters to keep the prototype consistent.
This function will come in handy as we work towards re-arranging the
bits in the later patches.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/book3s/64/hash-4k.h | 15 +++++++++++++++
arch/powerpc/include/asm/book3s/64/hash-64k.h | 25 +++++++++++++++++++++++++
2 files changed, 40 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
index 1e60099..d17ed52 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
@@ -53,6 +53,21 @@ static inline int hash__hugepd_ok(hugepd_t hpd)
}
#endif
+/*
+ * 4k pte format is different from 64k pte format. Saving the
+ * hash_slot is just a matter of returning the pte bits that need to
+ * be modified. On a 64k pte, things are a little more involved and
+ * hence more parameters are needed to accomplish the same.
+ * However we want to abstract this out from the caller by keeping
+ * the prototype consistent across the two formats.
+ */
+static inline unsigned long pte_set_hash_slot(pte_t *ptep, real_pte_t rpte,
+ unsigned int subpg_index, unsigned long slot)
+{
+ return (slot << H_PAGE_F_GIX_SHIFT) &
+ (H_PAGE_F_SECOND | H_PAGE_F_GIX);
+}
+
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
static inline char *get_hpte_slot_array(pmd_t *pmdp)
diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h
index c281f18..89ef5a9 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
@@ -67,6 +67,31 @@ static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index)
return ((rpte.hidx >> (index<<2)) & 0xfUL);
}
+/*
+ * Commit the hash slot and return the pte bits that need to be modified.
+ * The caller is expected to modify the pte bits accordingly and
+ * commit the pte to memory.
+ */
+static inline unsigned long pte_set_hash_slot(pte_t *ptep, real_pte_t rpte,
+ unsigned int subpg_index, unsigned long slot)
+{
+ unsigned long *hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
+
+ rpte.hidx &= ~(0xfUL << (subpg_index << 2));
+ *hidxp = rpte.hidx | (slot << (subpg_index << 2));
+ /*
+ * Commit the hidx bits to memory before returning.
+ * Anyone reading pte must ensure hidx bits are
+ * read only after reading the pte by using the
+ * read-side barrier smp_rmb(). __real_pte() can
+ * help ensure that.
+ */
+ smp_wmb();
+
+ /* no pte bits to be modified, return 0x0UL */
+ return 0x0UL;
+}
+
#define __rpte_to_pte(r) ((r).pte)
extern bool __rpte_sub_valid(real_pte_t rpte, unsigned long index);
/*
--
1.7.1
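For reference, a condensed sketch of how the two helpers are meant to
pair up in a hash-fault path, pieced together from the hunks above.
This is not code from the series; update_pp() is a made-up stand-in
for the mmu_hash_ops.hpte_updatepp() call.

static int hash_slot_sketch(pte_t *ptep, real_pte_t rpte,
			    unsigned long vpn, unsigned long shift,
			    int ssize, unsigned int subpg_index,
			    unsigned long new_pte, unsigned long slot)
{
	if (__rpte_sub_valid(rpte, subpg_index)) {
		/* HPTE already cached: locate its global slot and
		 * update the permissions in place. */
		unsigned long gslot = pte_get_hash_gslot(vpn, shift, ssize,
							 rpte, subpg_index);
		update_pp(gslot);
	} else {
		/* Fresh insertion: record the slot. On a 64K PTE this
		 * writes the hidx quadruplet into the second half of
		 * the PTE and returns 0; on a 4K PTE it returns the
		 * H_PAGE_F_SECOND|H_PAGE_F_GIX bits to fold in. */
		new_pte |= pte_set_hash_slot(ptep, rpte, subpg_index, slot);
		new_pte |= H_PAGE_HASHPTE;
	}
	*ptep = __pte(new_pte & ~H_PAGE_BUSY);
	return 0;
}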
Rearrange 64K PTE bits to free up bits 3, 4, 5 and 6,
in the 4K backed HPTE pages. These bits continue to be used
for 64K backed HPTE pages in this patch, but will be freed
up in the next patch. The bit numbers are big-endian as
defined in ISA 3.0.
The patch makes the following change to the format of a
64K PTE backed by a 4k hpte.
H_PAGE_BUSY moves from bit 3 to bit 9 (B bit in the figure
below)
V0 which occupied bit 4 is not used anymore.
V1 which occupied bit 5 is not used anymore.
V2 which occupied bit 6 is not used anymore.
V3 which occupied bit 7 is not used anymore.
Before the patch, the 4k backed 64k PTE format was as follows
0 1 2 3 4 5 6 7 8 9 10...........................63
: : : : : : : : : : : :
v v v v v v v v v v v v
,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
|x|x|x|B|V0|V1|V2|V3|x|x|x|x|x|................|.|.|.|.| <- primary pte
'_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
|S|G|I|X|S |G |I |X |S|G|I|X|..................|S|G|I|X| <- secondary pte
'_'_'_'_'__'__'__'__'_'_'_'_'__________________'_'_'_'_'
After the patch, the 4k backed 64k PTE format is as follows
0 1 2 3 4 5 6 7 8 9 10...........................63
: : : : : : : : : : : :
v v v v v v v v v v v v
,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
|x|x|x| | | | | |x|B|x|x|x|................|.|.|.|.| <- primary pte
'_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
|S|G|I|X|S |G |I |X |S|G|I|X|..................|S|G|I|X| <- secondary pte
'_'_'_'_'__'__'__'__'_'_'_'_'__________________'_'_'_'_'
The four bits S,G,I,X (one quadruplet per 4k HPTE) that
cache the hash-bucket slot value are initialized to
1,1,1,1, indicating an invalid slot. If a HPTE gets
cached in a 1111 slot (i.e. the 7th slot of the secondary
hash bucket), it is released immediately. In other words,
even though 1111 is a valid slot value in the hash
bucket, we consider it invalid and release the slot and
the HPTE. This gives us the opportunity to determine
the validity of the S,G,I,X bits based on their contents and
not on any of the bits V0,V1,V2 or V3 in the primary PTE.
When we release a HPTE cached in the 1111 slot,
we also release a legitimate slot in the primary
hash bucket and unmap its corresponding HPTE. This
is to ensure that we do get a HPTE cached in a slot
of the primary hash bucket the next time we retry.
Though treating the 1111 slot as invalid reduces the
number of available slots in the hash bucket and may
have an effect on performance, the probability of
hitting a 1111 slot is extremely low.
Compared to the current scheme, the above described
scheme reduces the number of false hash table updates
significantly and has the added advantage of
releasing four valuable PTE bits for other purposes.
NOTE: even though bits 3, 4, 5, 6, 7 are not used when
the 64K PTE is backed by a 4k HPTE, they continue to be
used if the PTE gets backed by a 64k HPTE. The next
patch will decouple that as well, and truly release the
bits.
This idea was jointly developed by Paul Mackerras,
Aneesh, Michael Ellerman and myself.
The 4K PTE format remains unchanged currently.
The patch makes the following code changes:
a) PTE flags are split between the 64k and 4k header files.
b) __hash_page_4K() is reimplemented to reflect the
above logic.
Signed-off-by: Ram Pai <[email protected]>
---
arch/powerpc/include/asm/book3s/64/hash-4k.h | 2 +
arch/powerpc/include/asm/book3s/64/hash-64k.h | 8 +--
arch/powerpc/include/asm/book3s/64/hash.h | 1 -
arch/powerpc/mm/hash64_64k.c | 78 ++++++++++++++++---------
arch/powerpc/mm/hash_utils_64.c | 4 +-
5 files changed, 57 insertions(+), 36 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
index b4b5e6b..a306c0a 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
@@ -16,6 +16,8 @@
#define H_PUD_TABLE_SIZE (sizeof(pud_t) << H_PUD_INDEX_SIZE)
#define H_PGD_TABLE_SIZE (sizeof(pgd_t) << H_PGD_INDEX_SIZE)
+#define H_PAGE_BUSY _RPAGE_RSV1 /* software: PTE & hash are busy */
+
/* PTE flags to conserve for HPTE identification */
#define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_HASHPTE | \
H_PAGE_F_SECOND | H_PAGE_F_GIX)
diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h
index 9732837..62e580c 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
@@ -12,18 +12,14 @@
*/
#define H_PAGE_COMBO _RPAGE_RPN0 /* this is a combo 4k page */
#define H_PAGE_4K_PFN _RPAGE_RPN1 /* PFN is for a single 4k page */
+#define H_PAGE_BUSY _RPAGE_RPN42 /* software: PTE & hash are busy */
+
/*
* We need to differentiate between explicit huge page and THP huge
* page, since THP huge page also need to track real subpage details
*/
#define H_PAGE_THP_HUGE H_PAGE_4K_PFN
-/*
- * Used to track subpage group valid if H_PAGE_COMBO is set
- * This overloads H_PAGE_F_GIX and H_PAGE_F_SECOND
- */
-#define H_PAGE_COMBO_VALID (H_PAGE_F_GIX | H_PAGE_F_SECOND)
-
/* PTE flags to conserve for HPTE identification */
#define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_F_SECOND | \
H_PAGE_F_GIX | H_PAGE_HASHPTE | H_PAGE_COMBO)
diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index 4e957b0..2d72964 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -9,7 +9,6 @@
*/
#define H_PTE_NONE_MASK _PAGE_HPTEFLAGS
#define H_PAGE_F_GIX_SHIFT 56
-#define H_PAGE_BUSY _RPAGE_RSV1 /* software: PTE & hash are busy */
#define H_PAGE_F_SECOND _RPAGE_RSV2 /* HPTE is in 2ndary HPTEG */
#define H_PAGE_F_GIX (_RPAGE_RSV3 | _RPAGE_RSV4 | _RPAGE_RPN44)
#define H_PAGE_HASHPTE _RPAGE_RPN43 /* PTE has associated HPTE */
diff --git a/arch/powerpc/mm/hash64_64k.c b/arch/powerpc/mm/hash64_64k.c
index 1a68cb1..e573bd3 100644
--- a/arch/powerpc/mm/hash64_64k.c
+++ b/arch/powerpc/mm/hash64_64k.c
@@ -15,34 +15,22 @@
#include <linux/mm.h>
#include <asm/machdep.h>
#include <asm/mmu.h>
+
/*
- * index from 0 - 15
+ * Return true if the entry has a slot value which
+ * the software considers invalid.
*/
-bool __rpte_sub_valid(real_pte_t rpte, unsigned long index)
+static inline bool hpte_soft_invalid(unsigned long slot)
{
- unsigned long g_idx;
- unsigned long ptev = pte_val(rpte.pte);
-
- g_idx = (ptev & H_PAGE_COMBO_VALID) >> H_PAGE_F_GIX_SHIFT;
- index = index >> 2;
- if (g_idx & (0x1 << index))
- return true;
- else
- return false;
+ return ((slot & 0xfUL) == 0xfUL);
}
+
/*
* index from 0 - 15
*/
-static unsigned long mark_subptegroup_valid(unsigned long ptev, unsigned long index)
+bool __rpte_sub_valid(real_pte_t rpte, unsigned long index)
{
- unsigned long g_idx;
-
- if (!(ptev & H_PAGE_COMBO))
- return ptev;
- index = index >> 2;
- g_idx = 0x1 << index;
-
- return ptev | (g_idx << H_PAGE_F_GIX_SHIFT);
+ return !(hpte_soft_invalid(rpte.hidx >> (index << 2)));
}
int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
@@ -50,12 +38,12 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
int ssize, int subpg_prot)
{
real_pte_t rpte;
- unsigned long *hidxp;
unsigned long hpte_group;
+ unsigned long *hidxp;
unsigned int subpg_index;
unsigned long rflags, pa, hidx;
unsigned long old_pte, new_pte, subpg_pte;
- unsigned long vpn, hash, slot;
+ unsigned long vpn, hash, slot, gslot;
unsigned long shift = mmu_psize_defs[MMU_PAGE_4K].shift;
/*
@@ -116,8 +104,8 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
* On hash insert failure we use old pte value and we don't
* want slot information there if we have a insert failure.
*/
- old_pte &= ~(H_PAGE_HASHPTE | H_PAGE_F_GIX | H_PAGE_F_SECOND);
- new_pte &= ~(H_PAGE_HASHPTE | H_PAGE_F_GIX | H_PAGE_F_SECOND);
+ old_pte &= ~(H_PAGE_HASHPTE);
+ new_pte &= ~(H_PAGE_HASHPTE);
goto htab_insert_hpte;
}
/*
@@ -148,6 +136,15 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
}
htab_insert_hpte:
+
+ /*
+ * Initialize all the hidx entries to an invalid value
+ * the first time the PTE is about to allocate
+ * a 4K hpte.
+ */
+ if (!(old_pte & H_PAGE_COMBO))
+ rpte.hidx = ~0x0UL;
+
/*
* handle H_PAGE_4K_PFN case
*/
@@ -172,15 +169,41 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
* Primary is full, try the secondary
*/
if (unlikely(slot == -1)) {
+ bool soft_invalid;
+
hpte_group = ((~hash & htab_hash_mask) * HPTES_PER_GROUP) & ~0x7UL;
slot = mmu_hash_ops.hpte_insert(hpte_group, vpn, pa,
rflags, HPTE_V_SECONDARY,
MMU_PAGE_4K, MMU_PAGE_4K,
ssize);
- if (slot == -1) {
- if (mftb() & 0x1)
+
+ soft_invalid = hpte_soft_invalid(slot);
+ if (unlikely(soft_invalid)) {
+ /*
+ * We got a valid slot from a hardware point of view,
+ * but we cannot use it: the software reserves this
+ * special value, as defined by hpte_soft_invalid(),
+ * to track invalid slots.
+ * So invalidate it.
+ */
+ gslot = slot & _PTEIDX_GROUP_IX;
+ mmu_hash_ops.hpte_invalidate(hpte_group+gslot, vpn,
+ MMU_PAGE_4K, MMU_PAGE_4K,
+ ssize, 0);
+ }
+
+ if (unlikely(slot == -1 || soft_invalid)) {
+ /*
+ * For a soft-invalid slot, let's ensure that we
+ * release a slot from the primary, with the
+ * hope that we will acquire that slot next
+ * time we try. This will ensure that we do not
+ * get the same soft-invalid slot.
+ */
+ if (soft_invalid || (mftb() & 0x1))
hpte_group = ((hash & htab_hash_mask) *
HPTES_PER_GROUP) & ~0x7UL;
+
mmu_hash_ops.hpte_remove(hpte_group);
/*
* FIXME!! Should be try the group from which we removed ?
@@ -207,12 +230,11 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
rpte.hidx &= ~(0xfUL << (subpg_index << 2));
*hidxp = rpte.hidx | (slot << (subpg_index << 2));
- new_pte = mark_subptegroup_valid(new_pte, subpg_index);
- new_pte |= H_PAGE_HASHPTE;
/*
* check __real_pte for details on matching smp_rmb()
*/
smp_wmb();
+ new_pte |= H_PAGE_HASHPTE;
*ptep = __pte(new_pte & ~H_PAGE_BUSY);
return 0;
}
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index f2095ce..1b494d0 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -975,8 +975,9 @@ void __init hash__early_init_devtree(void)
void __init hash__early_init_mmu(void)
{
+#ifndef CONFIG_PPC_64K_PAGES
/*
- * We have code in __hash_page_64K() and elsewhere, which assumes it can
+ * We have code in __hash_page_4K() and elsewhere, which assumes it can
* do the following:
* new_pte |= (slot << H_PAGE_F_GIX_SHIFT) & (H_PAGE_F_SECOND | H_PAGE_F_GIX);
*
@@ -987,6 +988,7 @@ void __init hash__early_init_mmu(void)
* with a BUILD_BUG_ON().
*/
BUILD_BUG_ON(H_PAGE_F_SECOND != (1ul << (H_PAGE_F_GIX_SHIFT + 3)));
+#endif /* CONFIG_PPC_64K_PAGES */
htab_init_page_sizes();
--
1.7.1
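To make the 1111-slot convention above concrete, here is a small
standalone model of the hidx quadruplet logic; it mirrors rpte.hidx
and hpte_soft_invalid() from the patch, but is plain userspace C,
not kernel code.

#include <stdio.h>
#include <stdint.h>

/* Sixteen 4-bit slot values packed into one 64-bit word, with 0xf
 * reserved to mean "no valid slot cached". */
static int soft_invalid(uint64_t slot)
{
	return (slot & 0xf) == 0xf;
}

static uint64_t set_hidx(uint64_t hidx, unsigned int index, uint64_t slot)
{
	hidx &= ~(0xfULL << (index << 2));	/* clear the old quadruplet */
	return hidx | ((slot & 0xf) << (index << 2));
}

static uint64_t get_hidx(uint64_t hidx, unsigned int index)
{
	return (hidx >> (index << 2)) & 0xf;
}

int main(void)
{
	uint64_t hidx = ~0x0ULL;	/* all sixteen entries start invalid */

	hidx = set_hidx(hidx, 3, 0x5);	/* cache slot 5 for subpage 3 */
	printf("subpage 3: slot %llx, valid=%d\n",
	       (unsigned long long)get_hidx(hidx, 3),
	       !soft_invalid(get_hidx(hidx, 3)));
	printf("subpage 7: valid=%d\n", !soft_invalid(get_hidx(hidx, 7)));
	return 0;
}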
On Wed, 2017-07-05 at 14:21 -0700, Ram Pai wrote:
> Rearrange 64K PTE bits to free up bits 3, 4, 5 and 6,
> in the 4K backed HPTE pages.These bits continue to be used
> for 64K backed HPTE pages in this patch, but will be freed
> up in the next patch. The bit numbers are big-endian as
> defined in the ISA3.0
>
> The patch does the following change to the 4k htpe backed
> 64K PTE's format.
>
The diagrams make the patch much easier to understand, thanks!
<snip>
> NOTE:even though bits 3, 4, 5, 6, 7 are not used when
> the 64K PTE is backed by 4k HPTE, they continue to be
> used if the PTE gets backed by 64k HPTE. The next
> patch will decouple that aswell, and truely release the
> bits.
>
<snip>
Balbir Singh.
On 07/06/2017 02:52 AM, Ram Pai wrote:
> Add documentation updates that capture PowerPC specific changes.
>
> Signed-off-by: Ram Pai <[email protected]>
> ---
> Documentation/vm/protection-keys.txt | 85 ++++++++++++++++++++++++++--------
> 1 files changed, 65 insertions(+), 20 deletions(-)
>
> diff --git a/Documentation/vm/protection-keys.txt b/Documentation/vm/protection-keys.txt
> index b643045..d50b6ab 100644
> --- a/Documentation/vm/protection-keys.txt
> +++ b/Documentation/vm/protection-keys.txt
> @@ -1,21 +1,46 @@
> -Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature
> -which will be found on future Intel CPUs.
> +Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature found in
> +new generation of intel CPUs and on PowerPC 7 and higher CPUs.
>
> Memory Protection Keys provides a mechanism for enforcing page-based
> -protections, but without requiring modification of the page tables
> -when an application changes protection domains. It works by
> -dedicating 4 previously ignored bits in each page table entry to a
> -"protection key", giving 16 possible keys.
> -
> -There is also a new user-accessible register (PKRU) with two separate
> -bits (Access Disable and Write Disable) for each key. Being a CPU
> -register, PKRU is inherently thread-local, potentially giving each
> -thread a different set of protections from every other thread.
> -
> -There are two new instructions (RDPKRU/WRPKRU) for reading and writing
> -to the new register. The feature is only available in 64-bit mode,
> -even though there is theoretically space in the PAE PTEs. These
> -permissions are enforced on data access only and have no effect on
> +protections, but without requiring modification of the page tables when an
> +application changes protection domains.
> +
> +
> +On Intel:
> +
> + It works by dedicating 4 previously ignored bits in each page table
> + entry to a "protection key", giving 16 possible keys.
> +
> + There is also a new user-accessible register (PKRU) with two separate
> + bits (Access Disable and Write Disable) for each key. Being a CPU
> + register, PKRU is inherently thread-local, potentially giving each
> + thread a different set of protections from every other thread.
> +
> + There are two new instructions (RDPKRU/WRPKRU) for reading and writing
> + to the new register. The feature is only available in 64-bit mode,
> + even though there is theoretically space in the PAE PTEs. These
> + permissions are enforced on data access only and have no effect on
> + instruction fetches.
> +
> +
> +On PowerPC:
> +
> + It works by dedicating 5 page table entry bits to a "protection key",
> + giving 32 possible keys.
> +
> + There is a user-accessible register (AMR) with two separate bits;
> + Access Disable and Write Disable, for each key. Being a CPU
> + register, AMR is inherently thread-local, potentially giving each
> + thread a different set of protections from every other thread. NOTE:
> + Disabling read permission does not disable write and vice-versa.
We can only enable/disable entire access, or write. Then how
can read permission be changed with protection keys directly?
On 07/06/2017 02:52 AM, Ram Pai wrote:
> Display the pkey number associated with the vma in smaps of a task.
> The key will be seen as below:
>
> ProtectionKey: 0
>
> Signed-off-by: Ram Pai <[email protected]>
> ---
> arch/powerpc/kernel/setup_64.c | 8 ++++++++
> 1 files changed, 8 insertions(+), 0 deletions(-)
>
> diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
> index f35ff9d..ebc82b3 100644
> --- a/arch/powerpc/kernel/setup_64.c
> +++ b/arch/powerpc/kernel/setup_64.c
> @@ -37,6 +37,7 @@
> #include <linux/memblock.h>
> #include <linux/memory.h>
> #include <linux/nmi.h>
> +#include <linux/pkeys.h>
>
> #include <asm/io.h>
> #include <asm/kdump.h>
> @@ -745,3 +746,10 @@ static int __init disable_hardlockup_detector(void)
> }
> early_initcall(disable_hardlockup_detector);
> #endif
> +
> +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
Why not for X86 protection keys ?
On 07/06/2017 02:52 AM, Ram Pai wrote:
> The value of the AMR register at the time of exception
> is made available in gp_regs[PT_AMR] of the siginfo.
>
> The value of the pkey, whose protection got violated,
> is made available in si_pkey field of the siginfo structure.
>
> Signed-off-by: Ram Pai <[email protected]>
> ---
> arch/powerpc/include/uapi/asm/ptrace.h | 3 ++-
> arch/powerpc/kernel/signal_32.c | 5 +++++
> arch/powerpc/kernel/signal_64.c | 4 ++++
> arch/powerpc/kernel/traps.c | 14 ++++++++++++++
> 4 files changed, 25 insertions(+), 1 deletions(-)
>
> diff --git a/arch/powerpc/include/uapi/asm/ptrace.h b/arch/powerpc/include/uapi/asm/ptrace.h
> index 8036b38..7ec2428 100644
> --- a/arch/powerpc/include/uapi/asm/ptrace.h
> +++ b/arch/powerpc/include/uapi/asm/ptrace.h
> @@ -108,8 +108,9 @@ struct pt_regs {
> #define PT_DAR 41
> #define PT_DSISR 42
> #define PT_RESULT 43
> -#define PT_DSCR 44
> #define PT_REGS_COUNT 44
> +#define PT_DSCR 44
> +#define PT_AMR 45
Why was PT_DSCR moved down? This change is redundant here.
On 07/06/2017 02:52 AM, Ram Pai wrote:
> Capture the protection key that got violated in paca.
> This value will be used to inform the signal
> handler.
>
> Signed-off-by: Ram Pai <[email protected]>
> ---
> arch/powerpc/include/asm/paca.h | 1 +
> arch/powerpc/kernel/asm-offsets.c | 1 +
> arch/powerpc/mm/fault.c | 3 +++
> 3 files changed, 5 insertions(+), 0 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
> index c8bd1fc..0c06188 100644
> --- a/arch/powerpc/include/asm/paca.h
> +++ b/arch/powerpc/include/asm/paca.h
> @@ -94,6 +94,7 @@ struct paca_struct {
> u64 dscr_default; /* per-CPU default DSCR */
> #ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> u64 paca_amr; /* value of amr at exception */
> + u16 paca_pkey; /* exception causing pkey */
> #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
>
> #ifdef CONFIG_PPC_STD_MMU_64
> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
> index 17f5d8a..7dff862 100644
> --- a/arch/powerpc/kernel/asm-offsets.c
> +++ b/arch/powerpc/kernel/asm-offsets.c
> @@ -244,6 +244,7 @@ int main(void)
>
> #ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> OFFSET(PACA_AMR, paca_struct, paca_amr);
> + OFFSET(PACA_PKEY, paca_struct, paca_pkey);
> #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
>
> OFFSET(ACCOUNT_STARTTIME, paca_struct, accounting.starttime);
> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> index a6710f5..c8674a7 100644
> --- a/arch/powerpc/mm/fault.c
> +++ b/arch/powerpc/mm/fault.c
> @@ -265,6 +265,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
> if (error_code & DSISR_KEYFAULT) {
> code = SEGV_PKUERR;
> get_paca()->paca_amr = read_amr();
> + get_paca()->paca_pkey = get_pte_pkey(current->mm, address);
> goto bad_area_nosemaphore;
> }
> #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
> @@ -290,6 +291,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
>
> perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
>
> +
Stray empty line addition here.
> /*
> * We want to do this outside mmap_sem, because reading code around nip
> * can result in fault, which will cause a deadlock when called with
> @@ -453,6 +455,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
> if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
> is_exec, 0)) {
> get_paca()->paca_amr = read_amr();
> + get_paca()->paca_pkey = vma_pkey(vma);
Why not get_pte_pkey() here as well? IIUC both these functions would
give us the same pkey; then why the difference when we process a
page fault for a real protection key violation in HW, compared to
cross-checking the VMA protection key in SW for regular page faults?
On 07/06/2017 02:52 AM, Ram Pai wrote:
> get_pte_pkey() helper returns the pkey associated with
> a address corresponding to a given mm_struct.
>
> Signed-off-by: Ram Pai <[email protected]>
> ---
> arch/powerpc/include/asm/book3s/64/mmu-hash.h | 5 ++++
> arch/powerpc/mm/hash_utils_64.c | 28 +++++++++++++++++++++++++
> 2 files changed, 33 insertions(+), 0 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
> index f7a6ed3..369f9ff 100644
> --- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
> +++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
> @@ -450,6 +450,11 @@ extern int hash_page(unsigned long ea, unsigned long access, unsigned long trap,
> int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
> pte_t *ptep, unsigned long trap, unsigned long flags,
> int ssize, unsigned int shift, unsigned int mmu_psize);
> +
> +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> +u16 get_pte_pkey(struct mm_struct *mm, unsigned long address);
> +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
> +
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> extern int __hash_page_thp(unsigned long ea, unsigned long access,
> unsigned long vsid, pmd_t *pmdp, unsigned long trap,
> diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
> index 1e74529..591990c 100644
> --- a/arch/powerpc/mm/hash_utils_64.c
> +++ b/arch/powerpc/mm/hash_utils_64.c
> @@ -1573,6 +1573,34 @@ void hash_preload(struct mm_struct *mm, unsigned long ea,
> local_irq_restore(flags);
> }
>
> +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> +/*
> + * return the protection key associated with the given address
> + * and the mm_struct.
> + */
> +u16 get_pte_pkey(struct mm_struct *mm, unsigned long address)
> +{
> + pte_t *ptep;
> + u16 pkey = 0;
> + unsigned long flags;
> +
> + if (REGION_ID(address) == VMALLOC_REGION_ID)
> + mm = &init_mm;
IIUC, protection keys are only applicable to user space. This
function is used to populate the siginfo structure. Then how
can we ever request this for an address in the VMALLOC region?
> +
> + if (!mm || !mm->pgd)
> + return 0;
Is this really required at this stage ?
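Since the hunk above is truncated, here is a sketch of what the rest
of such a pte-walk helper plausibly looks like. pte_to_pkey_bits() is
a hypothetical stand-in; this is not the actual patch body.

u16 get_pte_pkey(struct mm_struct *mm, unsigned long address)
{
	pte_t *ptep;
	u16 pkey = 0;
	unsigned long flags;

	if (!mm || !mm->pgd)
		return 0;

	local_irq_save(flags);
	ptep = find_linux_pte_or_hugepte(mm->pgd, address, NULL, NULL);
	if (ptep)
		/* pte_to_pkey_bits() is hypothetical: extract the
		 * pkey bits from the pte value. */
		pkey = pte_to_pkey_bits(pte_val(READ_ONCE(*ptep)));
	local_irq_restore(flags);

	return pkey;
}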
On 07/06/2017 02:51 AM, Ram Pai wrote:
> Memory protection keys enable applications to protect its
> address space from inadvertent access or corruption from
> itself.
>
> The overall idea:
>
> A process allocates a key and associates it with
> an address range within its address space.
> The process then can dynamically set read/write
> permissions on the key without involving the
> kernel. Any code that violates the permissions
> of the address space; as defined by its associated
> key, will receive a segmentation fault.
>
> This patch series enables the feature on PPC64 HPTE
> platform.
>
> ISA3.0 section 5.7.13 describes the detailed specifications.
>
>
> Testing:
> This patch series has passed all the protection key
> tests available in the selftests directory.
> The tests are updated to work on both x86 and powerpc.
>
> version v5:
> (1) reverted back to the old design -- store the
> key in the pte, instead of bypassing it.
> The v4 design slowed down the hash page path.
> (2) detects key violation when kernel is told to
> access user pages.
> (3) further refined the patches into smaller consumable
> units
> (4) page faults handlers captures the faulting key
> from the pte instead of the vma. This closes a
> race between where the key update in the vma and
> a key fault caused cause by the key programmed
> in the pte.
> (5) a key created with access-denied should
> also set it up to deny write. Fixed it.
> (6) protection-key number is displayed in smaps
> the x86 way.
Hello Ram,
This patch series has now grown a lot. Do you have this
hosted somewhere for us to pull and test it out? BTW
do you have data points to show the difference in
performance between this version and the last one where
we skipped the bits from PTE and directly programmed the
HPTE entries by looking at the VMA bits?
- Anshuman
On Mon, Jul 10, 2017 at 08:40:19AM +0530, Anshuman Khandual wrote:
> On 07/06/2017 02:52 AM, Ram Pai wrote:
> > Capture the protection key that got violated in paca.
> > This value will be used to inform the signal
> > handler.
> >
> > Signed-off-by: Ram Pai <[email protected]>
> > ---
> > arch/powerpc/include/asm/paca.h | 1 +
> > arch/powerpc/kernel/asm-offsets.c | 1 +
> > arch/powerpc/mm/fault.c | 3 +++
> > 3 files changed, 5 insertions(+), 0 deletions(-)
> >
> > diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
> > index c8bd1fc..0c06188 100644
> > --- a/arch/powerpc/include/asm/paca.h
> > +++ b/arch/powerpc/include/asm/paca.h
> > @@ -94,6 +94,7 @@ struct paca_struct {
> > u64 dscr_default; /* per-CPU default DSCR */
> > #ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> > u64 paca_amr; /* value of amr at exception */
> > + u16 paca_pkey; /* exception causing pkey */
> > #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
> >
> > #ifdef CONFIG_PPC_STD_MMU_64
> > diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
> > index 17f5d8a..7dff862 100644
> > --- a/arch/powerpc/kernel/asm-offsets.c
> > +++ b/arch/powerpc/kernel/asm-offsets.c
> > @@ -244,6 +244,7 @@ int main(void)
> >
> > #ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> > OFFSET(PACA_AMR, paca_struct, paca_amr);
> > + OFFSET(PACA_PKEY, paca_struct, paca_pkey);
> > #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
> >
> > OFFSET(ACCOUNT_STARTTIME, paca_struct, accounting.starttime);
> > diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> > index a6710f5..c8674a7 100644
> > --- a/arch/powerpc/mm/fault.c
> > +++ b/arch/powerpc/mm/fault.c
> > @@ -265,6 +265,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
> > if (error_code & DSISR_KEYFAULT) {
> > code = SEGV_PKUERR;
> > get_paca()->paca_amr = read_amr();
> > + get_paca()->paca_pkey = get_pte_pkey(current->mm, address);
> > goto bad_area_nosemaphore;
> > }
> > #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
> > @@ -290,6 +291,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
> >
> > perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
> >
> > +
>
> Stray empty line addition here.
>
> > /*
> > * We want to do this outside mmap_sem, because reading code around nip
> > * can result in fault, which will cause a deadlock when called with
> > @@ -453,6 +455,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
> > if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
> > is_exec, 0)) {
> > get_paca()->paca_amr = read_amr();
> > + get_paca()->paca_pkey = vma_pkey(vma);
>
> Why not get_pte_pkey() here as well? IIUC both these functions would
> give us the same pkey; then why the difference when we process a
> page fault for a real protection key violation in HW, compared to
> cross-checking the VMA protection key in SW for regular page faults?
Unfortunately, if we have reached here, it means the pgd->pud->pmd->pte
chain has not yet been fully populated for the task. Hence we
cannot walk the tree to find the pte, and thus the key; we have to
depend on vma_pkey() to get the key from the vma.
RP
--
Ram Pai
On Mon, Jul 10, 2017 at 08:41:30AM +0530, Anshuman Khandual wrote:
> On 07/06/2017 02:52 AM, Ram Pai wrote:
> > get_pte_pkey() helper returns the pkey associated with
> > a address corresponding to a given mm_struct.
> >
> > Signed-off-by: Ram Pai <[email protected]>
> > ---
> > arch/powerpc/include/asm/book3s/64/mmu-hash.h | 5 ++++
> > arch/powerpc/mm/hash_utils_64.c | 28 +++++++++++++++++++++++++
> > 2 files changed, 33 insertions(+), 0 deletions(-)
> >
> > diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
> > index f7a6ed3..369f9ff 100644
> > --- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
> > +++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
> > @@ -450,6 +450,11 @@ extern int hash_page(unsigned long ea, unsigned long access, unsigned long trap,
> > int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
> > pte_t *ptep, unsigned long trap, unsigned long flags,
> > int ssize, unsigned int shift, unsigned int mmu_psize);
> > +
> > +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> > +u16 get_pte_pkey(struct mm_struct *mm, unsigned long address);
> > +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
> > +
> > #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > extern int __hash_page_thp(unsigned long ea, unsigned long access,
> > unsigned long vsid, pmd_t *pmdp, unsigned long trap,
> > diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
> > index 1e74529..591990c 100644
> > --- a/arch/powerpc/mm/hash_utils_64.c
> > +++ b/arch/powerpc/mm/hash_utils_64.c
> > @@ -1573,6 +1573,34 @@ void hash_preload(struct mm_struct *mm, unsigned long ea,
> > local_irq_restore(flags);
> > }
> >
> > +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> > +/*
> > + * return the protection key associated with the given address
> > + * and the mm_struct.
> > + */
> > +u16 get_pte_pkey(struct mm_struct *mm, unsigned long address)
> > +{
> > + pte_t *ptep;
> > + u16 pkey = 0;
> > + unsigned long flags;
> > +
> > + if (REGION_ID(address) == VMALLOC_REGION_ID)
> > + mm = &init_mm;
>
> IIUC, protection keys are only applicable to user space. This
> function is used to populate the siginfo structure. Then how
> can we ever request this for an address in the VMALLOC region?
Makes sense. This check is not needed.
>
> > +
> > + if (!mm || !mm->pgd)
> > + return 0;
>
> Is this really required at this stage ?
It's a sanity check to guard against bad inputs. Do you see a problem?
RP
--
Ram Pai
On Mon, Jul 10, 2017 at 08:37:04AM +0530, Anshuman Khandual wrote:
> On 07/06/2017 02:52 AM, Ram Pai wrote:
> > Add documentation updates that capture PowerPC specific changes.
> >
> > Signed-off-by: Ram Pai <[email protected]>
> > ---
> > Documentation/vm/protection-keys.txt | 85 ++++++++++++++++++++++++++--------
> > 1 files changed, 65 insertions(+), 20 deletions(-)
> >
> > diff --git a/Documentation/vm/protection-keys.txt b/Documentation/vm/protection-keys.txt
> > index b643045..d50b6ab 100644
> > --- a/Documentation/vm/protection-keys.txt
> > +++ b/Documentation/vm/protection-keys.txt
> > @@ -1,21 +1,46 @@
> > -Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature
> > -which will be found on future Intel CPUs.
> > +Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature found in
> > +new generation of intel CPUs and on PowerPC 7 and higher CPUs.
> >
> > Memory Protection Keys provides a mechanism for enforcing page-based
> > -protections, but without requiring modification of the page tables
> > -when an application changes protection domains. It works by
> > -dedicating 4 previously ignored bits in each page table entry to a
> > -"protection key", giving 16 possible keys.
> > -
> > -There is also a new user-accessible register (PKRU) with two separate
> > -bits (Access Disable and Write Disable) for each key. Being a CPU
> > -register, PKRU is inherently thread-local, potentially giving each
> > -thread a different set of protections from every other thread.
> > -
> > -There are two new instructions (RDPKRU/WRPKRU) for reading and writing
> > -to the new register. The feature is only available in 64-bit mode,
> > -even though there is theoretically space in the PAE PTEs. These
> > -permissions are enforced on data access only and have no effect on
> > +protections, but without requiring modification of the page tables when an
> > +application changes protection domains.
> > +
> > +
> > +On Intel:
> > +
> > + It works by dedicating 4 previously ignored bits in each page table
> > + entry to a "protection key", giving 16 possible keys.
> > +
> > + There is also a new user-accessible register (PKRU) with two separate
> > + bits (Access Disable and Write Disable) for each key. Being a CPU
> > + register, PKRU is inherently thread-local, potentially giving each
> > + thread a different set of protections from every other thread.
> > +
> > + There are two new instructions (RDPKRU/WRPKRU) for reading and writing
> > + to the new register. The feature is only available in 64-bit mode,
> > + even though there is theoretically space in the PAE PTEs. These
> > + permissions are enforced on data access only and have no effect on
> > + instruction fetches.
> > +
> > +
> > +On PowerPC:
> > +
> > + It works by dedicating 5 page table entry bits to a "protection key",
> > + giving 32 possible keys.
> > +
> > + There is a user-accessible register (AMR) with two separate bits;
> > + Access Disable and Write Disable, for each key. Being a CPU
> > + register, AMR is inherently thread-local, potentially giving each
> > + thread a different set of protections from every other thread. NOTE:
> > + Disabling read permission does not disable write and vice-versa.
>
> We can only enable/disable entire access, or write. Then how
> can read permission be changed with protection keys directly?
Good catch. On powerpc there is a disable-read and a disable-write;
the two can be combined to disable access. Will fix the error. Read it
as 'Read Disable'. Thanks.
RP
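To make the semantics above concrete, a tiny standalone model of the
two-bits-per-key encoding: one read-disable and one write-disable bit
per key, with both set meaning access-disable. The bit placement here
is illustrative only, not the architected AMR layout.

#include <stdio.h>
#include <stdint.h>

#define KEY_BITS(amr, key)	(((amr) >> (62 - 2 * (key))) & 0x3)

static uint64_t amr_disable(uint64_t amr, int key, int rd, int wr)
{
	uint64_t bits = ((uint64_t)rd << 1) | (uint64_t)wr;

	return amr | (bits << (62 - 2 * key));	/* key 0 in the top bits */
}

int main(void)
{
	uint64_t amr = 0;

	amr = amr_disable(amr, 5, 1, 0);	/* key 5: reads blocked */
	amr = amr_disable(amr, 6, 1, 1);	/* key 6: access blocked */
	printf("key 5 bits: %llx\n", (unsigned long long)KEY_BITS(amr, 5));
	printf("key 6 bits: %llx\n", (unsigned long long)KEY_BITS(amr, 6));
	return 0;
}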
On Mon, Jul 10, 2017 at 08:37:28AM +0530, Anshuman Khandual wrote:
> On 07/06/2017 02:52 AM, Ram Pai wrote:
> > Display the pkey number associated with the vma in smaps of a task.
> > The key will be seen as below:
> >
> > ProtectionKey: 0
> >
> > Signed-off-by: Ram Pai <[email protected]>
> > ---
> > arch/powerpc/kernel/setup_64.c | 8 ++++++++
> > 1 files changed, 8 insertions(+), 0 deletions(-)
> >
> > diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
> > index f35ff9d..ebc82b3 100644
> > --- a/arch/powerpc/kernel/setup_64.c
> > +++ b/arch/powerpc/kernel/setup_64.c
> > @@ -37,6 +37,7 @@
> > #include <linux/memblock.h>
> > #include <linux/memory.h>
> > #include <linux/nmi.h>
> > +#include <linux/pkeys.h>
> >
> > #include <asm/io.h>
> > #include <asm/kdump.h>
> > @@ -745,3 +746,10 @@ static int __init disable_hardlockup_detector(void)
> > }
> > early_initcall(disable_hardlockup_detector);
> > #endif
> > +
> > +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
>
> Why not for X86 protection keys ?
Hmm.. I don't understand the comment.
--
Ram Pai
On Mon, Jul 10, 2017 at 11:13:23AM +0530, Anshuman Khandual wrote:
> On 07/06/2017 02:51 AM, Ram Pai wrote:
> > Memory protection keys enable applications to protect its
> > address space from inadvertent access or corruption from
> > itself.
> >
> > The overall idea:
> >
> > A process allocates a key and associates it with
> > an address range within its address space.
> > The process then can dynamically set read/write
> > permissions on the key without involving the
> > kernel. Any code that violates the permissions
> > of the address space; as defined by its associated
> > key, will receive a segmentation fault.
> >
> > This patch series enables the feature on PPC64 HPTE
> > platform.
> >
> > ISA3.0 section 5.7.13 describes the detailed specifications.
> >
> >
> > Testing:
> > This patch series has passed all the protection key
> > tests available in the selftests directory.
> > The tests are updated to work on both x86 and powerpc.
> >
> > version v5:
> > (1) reverted back to the old design -- store the
> > key in the pte, instead of bypassing it.
> > The v4 design slowed down the hash page path.
> > (2) detects key violation when kernel is told to
> > access user pages.
> > (3) further refined the patches into smaller consumable
> > units
> > (4) page faults handlers captures the faulting key
> > from the pte instead of the vma. This closes a
> > race between where the key update in the vma and
> > a key fault caused cause by the key programmed
> > in the pte.
> > (5) a key created with access-denied should
> > also set it up to deny write. Fixed it.
> > (6) protection-key number is displayed in smaps
> > the x86 way.
>
> Hello Ram,
>
> This patch series has now grown a lot. Do you have this
> hosted somewhere for us to pull and test it out? BTW
https://github.com/rampai/memorykeys.git
branch memkey.v5.3
> do you have data points to show the difference in
> performance between this version and the last one where
> we skipped the bits from PTE and directly programmed the
> HPTE entries by looking at the VMA bits?
No, I don't. I am hoping you can help me out with this.
RP
On Sun, Jul 09, 2017 at 11:05:44PM -0700, Ram Pai wrote:
> On Mon, Jul 10, 2017 at 11:13:23AM +0530, Anshuman Khandual wrote:
> > On 07/06/2017 02:51 AM, Ram Pai wrote:
.....
>
> > do you have data points to show the difference in
> > performance between this version and the last one where
> > we skipped the bits from PTE and directly programmed the
> > HPTE entries by looking at the VMA bits?
>
> No, I don't. I am hoping you can help me out with this.
Anshuman,
The last version, where we skipped the PTE bits, is guaranteed
to be bad/horrible. For one, it has a bug, since it accesses
the vma without a lock. And even if we did take a lock, it
would slow down the page-hash path unacceptably. So there is
no point measuring the performance of that design.
I think the number we want to measure is the performance
with the current design compared to the performance
without the memkey feature. We want to find out whether there is
any degradation from adding this feature.
RP
On Wed, 5 Jul 2017 14:21:39 -0700
Ram Pai <[email protected]> wrote:
> Rearrange 64K PTE bits to free up bits 3, 4, 5 and 6
> in the 64K backed HPTE pages. This along with the earlier
> patch will entirely free up the four bits from 64K PTE.
> The bit numbers are big-endian as defined in the ISA3.0
>
> This patch does the following change to 64K PTE backed
> by 64K HPTE.
>
> H_PAGE_F_SECOND (S) which occupied bit 4 moves to the
> second part of the pte to bit 60.
> H_PAGE_F_GIX (G,I,X), which occupied bits 5, 6 and 7, also
> moves to the second part of the pte, to bits 61,
> 62 and 63 respectively.
>
> since bit 7 is now freed up, we move H_PAGE_BUSY (B) from
> bit 9 to bit 7.
>
> The second part of the PTE will hold
> (H_PAGE_F_SECOND|H_PAGE_F_GIX) at bit 60,61,62,63.
>
> Before the patch, the 64K HPTE backed 64k PTE format was
> as follows
>
> 0 1 2 3 4 5 6 7 8 9 10...........................63
> : : : : : : : : : : : :
> v v v v v v v v v v v v
>
> ,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
> |x|x|x| |S |G |I |X |x|B|x|x|x|................|.|.|.|.| <- primary pte
> '_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
> | | | | | | | | | | | | |..................| | | | | <- secondary pte
> '_'_'_'_'__'__'__'__'_'_'_'_'__________________'_'_'_'_'
>
It's not entirely clear what the secondary pte contains
today and how many of the bits are free today?
Balbir Singh.
On 07/10/2017 11:25 AM, Ram Pai wrote:
> On Mon, Jul 10, 2017 at 08:41:30AM +0530, Anshuman Khandual wrote:
>> On 07/06/2017 02:52 AM, Ram Pai wrote:
>>> get_pte_pkey() helper returns the pkey associated with
>>> a address corresponding to a given mm_struct.
>>>
>>> Signed-off-by: Ram Pai <[email protected]>
>>> ---
>>> arch/powerpc/include/asm/book3s/64/mmu-hash.h | 5 ++++
>>> arch/powerpc/mm/hash_utils_64.c | 28 +++++++++++++++++++++++++
>>> 2 files changed, 33 insertions(+), 0 deletions(-)
>>>
>>> diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
>>> index f7a6ed3..369f9ff 100644
>>> --- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
>>> +++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
>>> @@ -450,6 +450,11 @@ extern int hash_page(unsigned long ea, unsigned long access, unsigned long trap,
>>> int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
>>> pte_t *ptep, unsigned long trap, unsigned long flags,
>>> int ssize, unsigned int shift, unsigned int mmu_psize);
>>> +
>>> +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
>>> +u16 get_pte_pkey(struct mm_struct *mm, unsigned long address);
>>> +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
>>> +
>>> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>> extern int __hash_page_thp(unsigned long ea, unsigned long access,
>>> unsigned long vsid, pmd_t *pmdp, unsigned long trap,
>>> diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
>>> index 1e74529..591990c 100644
>>> --- a/arch/powerpc/mm/hash_utils_64.c
>>> +++ b/arch/powerpc/mm/hash_utils_64.c
>>> @@ -1573,6 +1573,34 @@ void hash_preload(struct mm_struct *mm, unsigned long ea,
>>> local_irq_restore(flags);
>>> }
>>>
>>> +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
>>> +/*
>>> + * return the protection key associated with the given address
>>> + * and the mm_struct.
>>> + */
>>> +u16 get_pte_pkey(struct mm_struct *mm, unsigned long address)
>>> +{
>>> + pte_t *ptep;
>>> + u16 pkey = 0;
>>> + unsigned long flags;
>>> +
>>> + if (REGION_ID(address) == VMALLOC_REGION_ID)
>>> + mm = &init_mm;
>>
>> IIUC, protection keys are only applicable for user space. This
>> function is getting used to populate siginfo structure. Then how
>> can we ever request this for any address in VMALLOC region.
>
> make sense. this check is not needed.
>
>>
>>> +
>>> + if (!mm || !mm->pgd)
>>> + return 0;
>>
>> Is this really required at this stage ?
>
> its a sanity check to gaurd against bad inputs. See a problem?
I mean its okay, thought it to be unnecessary. Your call.
On Wed 05-07-17 14:21:37, Ram Pai wrote:
> Memory protection keys enable applications to protect its
> address space from inadvertent access or corruption from
> itself.
>
> The overall idea:
>
> A process allocates a key and associates it with
> an address range within its address space.
> The process then can dynamically set read/write
> permissions on the key without involving the
> kernel. Any code that violates the permissions
> of the address space; as defined by its associated
> key, will receive a segmentation fault.
>
> This patch series enables the feature on PPC64 HPTE
> platform.
>
> ISA3.0 section 5.7.13 describes the detailed specifications.
Could you describe the high-level design of this feature in the cover
letter? I have tried to get some idea from the patchset, but it was
really far from trivial. The patches are not very well split up (many
helpers are added without their users, etc.).
>
> Testing:
> This patch series has passed all the protection key
> tests available in the selftests directory.
> The tests are updated to work on both x86 and powerpc.
>
> version v5:
> (1) reverted back to the old design -- store the
> key in the pte, instead of bypassing it.
> The v4 design slowed down the hash page path.
This surprised me a lot but I couldn't find the respective code. Why do
you need to store anything in the pte? My understanding of PKEYs is that
the setup and teardown should be very cheap and so no page tables have
to be updated. Or do I just misunderstand what you wrote here?
--
Michal Hocko
SUSE Labs
On Tue, Jul 11, 2017 at 03:59:59PM +1000, Balbir Singh wrote:
> On Wed, 5 Jul 2017 14:21:39 -0700
> Ram Pai <[email protected]> wrote:
>
> > Rearrange 64K PTE bits to free up bits 3, 4, 5 and 6
> > in the 64K backed HPTE pages. This along with the earlier
> > patch will entirely free up the four bits from 64K PTE.
> > The bit numbers are big-endian as defined in the ISA3.0
> >
> > This patch does the following change to 64K PTE backed
> > by 64K HPTE.
> >
> > H_PAGE_F_SECOND (S) which occupied bit 4 moves to the
> > second part of the pte to bit 60.
> > H_PAGE_F_GIX (G,I,X), which occupied bits 5, 6 and 7, also
> > moves to the second part of the pte, to bits 61,
> > 62 and 63 respectively.
> >
> > since bit 7 is now freed up, we move H_PAGE_BUSY (B) from
> > bit 9 to bit 7.
> >
> > The second part of the PTE will hold
> > (H_PAGE_F_SECOND|H_PAGE_F_GIX) at bit 60,61,62,63.
> >
> > Before the patch, the 64K HPTE backed 64k PTE format was
> > as follows
> >
> > 0 1 2 3 4 5 6 7 8 9 10...........................63
> > : : : : : : : : : : : :
> > v v v v v v v v v v v v
> >
> > ,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
> > |x|x|x| |S |G |I |X |x|B|x|x|x|................|.|.|.|.| <- primary pte
> > '_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
> > | | | | | | | | | | | | |..................| | | | | <- secondary pte
> > '_'_'_'_'__'__'__'__'_'_'_'_'__________________'_'_'_'_'
> >
>
> It's not entirely clear what the secondary pte contains
> today and how many of the bits are free today?
Today the secondary pte is not used for anything when the pte
is backed by a 64k hpte. It gets used the moment the pte gets
backed by 4k hptes; until then the bits are available, and this
patch makes use of that knowledge.
Will add some words to the patch description to this effect.
Thanks,
RP
>
> Balbir Singh.
--
Ram Pai
On 07/05/2017 02:22 PM, Ram Pai wrote:
> Abstracted out the arch specific code into the header file, and
> added powerpc specific changes.
>
> a) added 4k-backed hpte, memory allocator, powerpc specific.
> b) added three test case where the key is associated after the page is
> accessed/allocated/mapped.
> c) cleaned up the code to make checkpatch.pl happy
There's a *lot* of churn here. If it breaks, I'm going to have a heck
of a time figuring out which hunk broke. Is there any way to break this
up into a series of things that we have a chance at bisecting?
On 07/05/2017 02:21 PM, Ram Pai wrote:
> Currently there are only 4 bits in the vma flags to support 16 keys
> on x86. powerpc supports 32 keys, which needs 5 bits. This patch
> introduces an additional bit in the vma flags.
>
> Signed-off-by: Ram Pai <[email protected]>
> ---
> fs/proc/task_mmu.c | 6 +++++-
> include/linux/mm.h | 18 +++++++++++++-----
> 2 files changed, 18 insertions(+), 6 deletions(-)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index f0c8b33..2ddc298 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -666,12 +666,16 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
> [ilog2(VM_MERGEABLE)] = "mg",
> [ilog2(VM_UFFD_MISSING)]= "um",
> [ilog2(VM_UFFD_WP)] = "uw",
> -#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
> +#ifdef CONFIG_ARCH_HAS_PKEYS
> /* These come out via ProtectionKey: */
> [ilog2(VM_PKEY_BIT0)] = "",
> [ilog2(VM_PKEY_BIT1)] = "",
> [ilog2(VM_PKEY_BIT2)] = "",
> [ilog2(VM_PKEY_BIT3)] = "",
> +#endif /* CONFIG_ARCH_HAS_PKEYS */
> +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> + /* Additional bit in ProtectionKey: */
> + [ilog2(VM_PKEY_BIT4)] = "",
> #endif
I'd probably just leave the #ifdef out and eat the byte or whatever of
storage that this costs us on x86.
> };
> size_t i;
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 7cb17c6..3d35bcc 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -208,21 +208,29 @@ extern int overcommit_kbytes_handler(struct ctl_table *, int, void __user *,
> #define VM_HIGH_ARCH_BIT_1 33 /* bit only usable on 64-bit architectures */
> #define VM_HIGH_ARCH_BIT_2 34 /* bit only usable on 64-bit architectures */
> #define VM_HIGH_ARCH_BIT_3 35 /* bit only usable on 64-bit architectures */
> +#define VM_HIGH_ARCH_BIT_4 36 /* bit only usable on 64-bit arch */
Please just copy the above lines.
> #define VM_HIGH_ARCH_0 BIT(VM_HIGH_ARCH_BIT_0)
> #define VM_HIGH_ARCH_1 BIT(VM_HIGH_ARCH_BIT_1)
> #define VM_HIGH_ARCH_2 BIT(VM_HIGH_ARCH_BIT_2)
> #define VM_HIGH_ARCH_3 BIT(VM_HIGH_ARCH_BIT_3)
> +#define VM_HIGH_ARCH_4 BIT(VM_HIGH_ARCH_BIT_4)
> #endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
>
> -#if defined(CONFIG_X86)
> -# define VM_PAT VM_ARCH_1 /* PAT reserves whole VMA at once (x86) */
> -#if defined (CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS)
> +#ifdef CONFIG_ARCH_HAS_PKEYS
> # define VM_PKEY_SHIFT VM_HIGH_ARCH_BIT_0
> -# define VM_PKEY_BIT0 VM_HIGH_ARCH_0 /* A protection key is a 4-bit value */
> +# define VM_PKEY_BIT0 VM_HIGH_ARCH_0
> # define VM_PKEY_BIT1 VM_HIGH_ARCH_1
> # define VM_PKEY_BIT2 VM_HIGH_ARCH_2
> # define VM_PKEY_BIT3 VM_HIGH_ARCH_3
> -#endif
> +#endif /* CONFIG_ARCH_HAS_PKEYS */
We have the space here, so can we just say that it's 4-bits on x86 and 5
on ppc?
> +#if defined(CONFIG_PPC64_MEMORY_PROTECTION_KEYS)
> +# define VM_PKEY_BIT4 VM_HIGH_ARCH_4 /* additional key bit used on ppc64 */
> +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
Why bother #ifdef'ing a #define?
> +#if defined(CONFIG_X86)
> +# define VM_PAT VM_ARCH_1 /* PAT reserves whole VMA at once (x86) */
> #elif defined(CONFIG_PPC)
> # define VM_SAO VM_ARCH_1 /* Strong Access Ordering (powerpc) */
> #elif defined(CONFIG_PARISC)
>
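For illustration, the shape being suggested here, with the defines
made unconditional so the fifth bit is simply unused on x86. This is
a hypothetical sketch, not a hunk from the series.

#ifdef CONFIG_ARCH_HAS_PKEYS
# define VM_PKEY_SHIFT	VM_HIGH_ARCH_BIT_0
# define VM_PKEY_BIT0	VM_HIGH_ARCH_0	/* 4 bits on x86 ... */
# define VM_PKEY_BIT1	VM_HIGH_ARCH_1
# define VM_PKEY_BIT2	VM_HIGH_ARCH_2
# define VM_PKEY_BIT3	VM_HIGH_ARCH_3
# define VM_PKEY_BIT4	VM_HIGH_ARCH_4	/* ... 5 on ppc64; unused on x86 */
#endif /* CONFIG_ARCH_HAS_PKEYS */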
On 07/05/2017 02:21 PM, Ram Pai wrote:
> Currently sys_pkey_create() provides the ability to disable read
> and write permission on the key, at creation. powerpc has the
> hardware support to disable execute on a pkey as well.This patch
> enhances the interface to let disable execute at key creation
> time. x86 does not allow this. Hence the next patch will add
> ability in x86 to return error if PKEY_DISABLE_EXECUTE is
> specified.
>
> Signed-off-by: Ram Pai <[email protected]>
> ---
> include/uapi/asm-generic/mman-common.h | 4 +++-
> 1 files changed, 3 insertions(+), 1 deletions(-)
>
> diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h
> index 8c27db0..bf4fa07 100644
> --- a/include/uapi/asm-generic/mman-common.h
> +++ b/include/uapi/asm-generic/mman-common.h
> @@ -74,7 +74,9 @@
>
> #define PKEY_DISABLE_ACCESS 0x1
> #define PKEY_DISABLE_WRITE 0x2
> +#define PKEY_DISABLE_EXECUTE 0x4
> #define PKEY_ACCESS_MASK (PKEY_DISABLE_ACCESS |\
> - PKEY_DISABLE_WRITE)
> + PKEY_DISABLE_WRITE |\
> + PKEY_DISABLE_EXECUTE)
If you do this, it breaks bisection. Can you please just do this in one
patch?
On 07/05/2017 02:21 PM, Ram Pai wrote:
> x86 does not support disabling execute permissions on a pkey.
>
> Signed-off-by: Ram Pai <[email protected]>
> ---
> arch/x86/kernel/fpu/xstate.c | 3 +++
> 1 files changed, 3 insertions(+), 0 deletions(-)
>
> diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
> index c24ac1e..d582631 100644
> --- a/arch/x86/kernel/fpu/xstate.c
> +++ b/arch/x86/kernel/fpu/xstate.c
> @@ -900,6 +900,9 @@ int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
> if (!boot_cpu_has(X86_FEATURE_OSPKE))
> return -EINVAL;
>
> + if (init_val & PKEY_DISABLE_EXECUTE)
> + return -EINVAL;
I'd really rather that we define a supported mask instead of having each
architecture go through and list which ones it supports.
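A sketch of the "supported mask" shape being suggested; the
ARCH_PKEY_DISABLE_MASK name is made up for illustration, not an
existing kernel symbol.

/* Each arch advertises the PKEY_DISABLE_* bits it supports, and a
 * single generic check rejects the rest. */
#ifdef CONFIG_PPC
# define ARCH_PKEY_DISABLE_MASK	(PKEY_DISABLE_ACCESS | \
				 PKEY_DISABLE_WRITE  | \
				 PKEY_DISABLE_EXECUTE)
#else	/* x86: PKRU has no execute-disable bit */
# define ARCH_PKEY_DISABLE_MASK	(PKEY_DISABLE_ACCESS | \
				 PKEY_DISABLE_WRITE)
#endif

static inline int pkey_check_init_val(unsigned long init_val)
{
	return (init_val & ~ARCH_PKEY_DISABLE_MASK) ? -EINVAL : 0;
}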
On 07/05/2017 02:22 PM, Ram Pai wrote:
> +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> +void arch_show_smap(struct seq_file *m, struct vm_area_struct *vma)
> +{
> + seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
> +}
> +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
This seems like kinda silly unnecessary duplication. Could we just put
this in the fs/proc/ code and #ifdef it on ARCH_HAS_PKEYS?
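Something like the following in fs/proc/task_mmu.c would cover both
architectures; a hypothetical sketch of that suggestion.

#ifdef CONFIG_ARCH_HAS_PKEYS
/* Generic: keyed off ARCH_HAS_PKEYS instead of per-arch copies. */
static void show_smap_pkey(struct seq_file *m, struct vm_area_struct *vma)
{
	seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
}
#endif /* CONFIG_ARCH_HAS_PKEYS */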
On 07/05/2017 02:22 PM, Ram Pai wrote:
> Add documentation updates that capture PowerPC specific changes.
>
> Signed-off-by: Ram Pai <[email protected]>
> ---
> Documentation/vm/protection-keys.txt | 85 ++++++++++++++++++++++++++--------
> 1 files changed, 65 insertions(+), 20 deletions(-)
>
> diff --git a/Documentation/vm/protection-keys.txt b/Documentation/vm/protection-keys.txt
> index b643045..d50b6ab 100644
> --- a/Documentation/vm/protection-keys.txt
> +++ b/Documentation/vm/protection-keys.txt
> @@ -1,21 +1,46 @@
> -Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature
> -which will be found on future Intel CPUs.
> +Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature found in
> +new generation of intel CPUs and on PowerPC 7 and higher CPUs.
Please try not to change the wording here. I really did mean to
literally put "future Intel CPUs." Also, you broke my nice wrapping. :)
I'm also thinking that this needs to be more generic. The ppc _CPU_
feature is *NOT* for userspace-only, right?
> Memory Protection Keys provides a mechanism for enforcing page-based
> -protections, but without requiring modification of the page tables
> -when an application changes protection domains. It works by
> -dedicating 4 previously ignored bits in each page table entry to a
> -"protection key", giving 16 possible keys.
> -
> -There is also a new user-accessible register (PKRU) with two separate
> -bits (Access Disable and Write Disable) for each key. Being a CPU
> -register, PKRU is inherently thread-local, potentially giving each
> -thread a different set of protections from every other thread.
> -
> -There are two new instructions (RDPKRU/WRPKRU) for reading and writing
> -to the new register. The feature is only available in 64-bit mode,
> -even though there is theoretically space in the PAE PTEs. These
> -permissions are enforced on data access only and have no effect on
> +protections, but without requiring modification of the page tables when an
> +application changes protection domains.
> +
> +
> +On Intel:
> +
> + It works by dedicating 4 previously ignored bits in each page table
> + entry to a "protection key", giving 16 possible keys.
> +
> + There is also a new user-accessible register (PKRU) with two separate
> + bits (Access Disable and Write Disable) for each key. Being a CPU
> + register, PKRU is inherently thread-local, potentially giving each
> + thread a different set of protections from every other thread.
> +
> + There are two new instructions (RDPKRU/WRPKRU) for reading and writing
> + to the new register. The feature is only available in 64-bit mode,
> + even though there is theoretically space in the PAE PTEs. These
> + permissions are enforced on data access only and have no effect on
> + instruction fetches.
> +
> +
> +On PowerPC:
> +
> + It works by dedicating 5 page table entry bits to a "protection key",
> + giving 32 possible keys.
> +
> + There is a user-accessible register (AMR) with two separate bits;
> + Access Disable and Write Disable, for each key. Being a CPU
> + register, AMR is inherently thread-local, potentially giving each
> + thread a different set of protections from every other thread. NOTE:
> + Disabling read permission does not disable write and vice-versa.
> +
> + The feature is available on 64-bit HPTE mode only.
> + 'mtspr 0xd, mem' writes into the AMR register
> + 'mfspr mem, 0xd' reads the AMR register.
The whole "being a CPU register" bits seem pretty common. Should it be
in the leading paragraph that is shared?
> +Permissions are enforced on data access only and have no effect on
> instruction fetches.
Shouldn't we mention the ppc support for execute-disable here too?
Also, *does* this apply to ppc? You have it both in this common area
and in the x86 portion.
> =========================== Syscalls ===========================
> @@ -28,9 +53,9 @@ There are 3 system calls which directly interact with pkeys:
> unsigned long prot, int pkey);
>
> Before a pkey can be used, it must first be allocated with
> -pkey_alloc(). An application calls the WRPKRU instruction
> +pkey_alloc(). An application calls the WRPKRU/AMR instruction
> directly in order to change access permissions to memory covered
> -with a key. In this example WRPKRU is wrapped by a C function
> +with a key. In this example WRPKRU/AMR is wrapped by a C function
> called pkey_set().
>
> int real_prot = PROT_READ|PROT_WRITE;
> @@ -52,11 +77,11 @@ is no longer in use:
> munmap(ptr, PAGE_SIZE);
> pkey_free(pkey);
>
> -(Note: pkey_set() is a wrapper for the RDPKRU and WRPKRU instructions.
> +(Note: pkey_set() is a wrapper for the RDPKRU,WRPKRU or AMR instructions.
> An example implementation can be found in
> tools/testing/selftests/x86/protection_keys.c)
>
> -=========================== Behavior ===========================
> +=========================== Behavior =================================
>
> The kernel attempts to make protection keys consistent with the
> behavior of a plain mprotect(). For instance if you do this:
> @@ -83,3 +108,23 @@ with a read():
> The kernel will send a SIGSEGV in both cases, but si_code will be set
> to SEGV_PKERR when violating protection keys versus SEGV_ACCERR when
> the plain mprotect() permissions are violated.
> +
> +
> +====================================================================
> + Semantic differences
> +
> +The following semantic differences exist between x86 and power.
> +
> +a) powerpc allows creation of a key with execute-disabled. The following
> + is allowed on powerpc.
> + pkey = pkey_alloc(0, PKEY_DISABLE_WRITE | PKEY_DISABLE_ACCESS |
> + PKEY_DISABLE_EXECUTE);
> + x86 disallows PKEY_DISABLE_EXECUTE during key creation.
It isn't that x86 disallows *creation* of the key. It doesn't
support setting PKEY_DISABLE_EXECUTE, period, which implies that you
can't set it at pkey_alloc(). That's a pretty important distinction, IMNHO.
> +b) changing the permission bits of a key from a signal handler does not
> + persist on x86. The PKRU specific fpregs entry needs to be modified
> + for it to persist. On powerpc the permission bits of the key can be
> + modified by programming the AMR register from the signal handler.
> + The changes persists across signal boundaries.
^"changes persist", not "persists".
On Tue, Jul 11, 2017 at 04:52:46PM +0200, Michal Hocko wrote:
> On Wed 05-07-17 14:21:37, Ram Pai wrote:
> > Memory protection keys enable applications to protect its
> > address space from inadvertent access or corruption from
> > itself.
> >
> > The overall idea:
> >
> > A process allocates a key and associates it with
> > an address range within its address space.
> > The process then can dynamically set read/write
> > permissions on the key without involving the
> > kernel. Any code that violates the permissions
> > of the address space; as defined by its associated
> > key, will receive a segmentation fault.
> >
> > This patch series enables the feature on PPC64 HPTE
> > platform.
> >
> > ISA3.0 section 5.7.13 describes the detailed specifications.
>
> Could you describe the highlevel design of this feature in the cover
> letter.
Yes, it can be hard to understand without the big picture. I will
provide the high-level design and the rationale behind the patch split
towards the end. I will also include it in the cover letter for my next
revision of the patchset.
> I have tried to get some idea from the patchset but it was
> really far from trivial. Patches are not very well split up (many
> helpers are added without their users etc..).
I see your point. Earlier, I had the patches split in such a way that
the users of the helpers were in the same patch as the helper itself.
But comments from others led to the current split.
>
> >
> > Testing:
> > This patch series has passed all the protection key
> > tests available in the selftests directory.
> > The tests are updated to work on both x86 and powerpc.
> >
> > version v5:
> > (1) reverted back to the old design -- store the
> > key in the pte, instead of bypassing it.
> > The v4 design slowed down the hash page path.
>
> This surprised me a lot but I couldn't find the respective code. Why do
> you need to store anything in the pte? My understanding of PKEYs is that
> the setup and teardown should be very cheap and so no page tables have
> to updated. Or do I just misunderstand what you wrote here?
Ideally the MMU looks at the PTE for keys in order to enforce
protection. This is the case with x86 and is the case with the power9
Radix page table. Hence the keys have to be programmed into the PTE.
However with HPT on power, these keys do not necessarily have to be
programmed into the PTE. We could bypass the Linux Page Table Entry
(PTE) and instead just program them into the Hash Page Table (HPTE),
since the MMU does not refer to the PTE but to the HPTE. The last
version of the patchset attempted to do exactly that. It worked as
follows:
a) when an address range is requested to be associated with a key by
   the application through the pkey_mprotect() system call, the kernel
   stores that key in the vmas corresponding to that address range.
b) whenever there is a hash page fault for that address, the fault
   handler reads the key from the VMA and programs the key into the
   HPTE. __hash_page() is the function that does that.
c) once the hpte is programmed, the MMU can sense key violations and
   generate key-faults.
The problem is with step (b). This step is on a very critical path and
is performance sensitive. We don't want to add any delays. However, if
we want to access the key from the vma, we will have to hold the vma
semaphore, and that is a big NO-NO. As a result, this design had to be
dropped.
I reverted to the old design, i.e. the design in the v4 version. In
this version we do the following:
a) when an address range is requested to be associated with a key by
   the application through the pkey_mprotect() system call, the kernel
   stores that key in the vmas corresponding to that address range.
   The kernel also programs the key into the Linux PTEs corresponding
   to all the pages associated with the address range.
b) whenever there is a hash page fault for that address, the fault
   handler reads the key from the Linux PTE and programs the key into
   the HPTE.
c) once the HPTE is programmed, the MMU can sense key violations and
   generate key-faults.
Since step (b) in this case has easy access to the Linux PTE, and hence
to the key, it is fast to access it and program the HPTE. Thus we avoid
taking any performance hit on this critical path.
Hope this explains the rationale.
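For reference, the userspace side of step (a) is just the existing
pkey API. A minimal sketch (assuming the SYS_pkey_* syscall numbers
and PKEY_DISABLE_WRITE are visible through your headers):

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef PKEY_DISABLE_WRITE
    #define PKEY_DISABLE_WRITE 0x2	/* uapi value */
    #endif

    int main(void)
    {
            size_t len = 1UL << 16;
            void *ptr = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

            /* allocate a key whose initial rights deny writes */
            int pkey = syscall(SYS_pkey_alloc, 0UL,
                               (unsigned long)PKEY_DISABLE_WRITE);

            /* step (a): the kernel records the key in the vma and,
             * in this v5 design, in the Linux PTEs as well */
            syscall(SYS_pkey_mprotect, ptr, len,
                    PROT_READ | PROT_WRITE, (unsigned long)pkey);

            /* a write through ptr now takes a key fault until the
             * AMR/PKRU permission for this key is relaxed */

            syscall(SYS_pkey_free, pkey);
            return 0;
    }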
As promised, here is the high-level design:
(1) When an application associates a key with an address range,
    program the key into the Linux PTE.
(2) Program the key into the HPTE when an HPTE is allocated to back
    the Linux PTE.
(3) And finally, when the MMU detects a key violation due to an
    invalid user access, invoke the registered signal handler and
    provide it with the key number that got violated and the state of
    the key register (AMR) at the time of the fault.
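As an aside, the userspace view of step (3) would be roughly the
following sketch; it assumes the si_pkey siginfo field that the x86
pkey support introduced, and a libc recent enough to expose it:

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void handler(int sig, siginfo_t *info, void *ctx)
    {
            /* SEGV_PKERR distinguishes key faults from plain access
             * faults; si_pkey names the violated key */
            if (info->si_code == SEGV_PKERR)
                    fprintf(stderr, "key fault: pkey=%d addr=%p\n",
                            info->si_pkey, info->si_addr);
            _exit(1);
    }

    int main(void)
    {
            struct sigaction sa;

            memset(&sa, 0, sizeof(sa));
            sa.sa_sigaction = handler;
            sa.sa_flags = SA_SIGINFO;
            sigaction(SIGSEGV, &sa, NULL);
            /* ... touch memory protected by a denied key ... */
            return 0;
    }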
In order to accomplish (1) we need to free up 5 bits in the Linux PTE
to store the key. This is accomplished by the patches
powerpc: Free up four 64K PTE bits in 4K backed HPTE pages
powerpc: Free up four 64K PTE bits in 64K backed HPTE pages
The above two patches modify the way the HPTE slots are stored in the
PTE under the various configurations. The details are abstracted out
into two helper functions introduced by the following two patches.
powerpc: introduce pte_set_hash_slot() helper
powerpc: introduce pte_get_hash_gslot() helper
Now we go and modify all the code that can benefit from the above
abstraction. The following five patches handle that.
powerpc: use helper functions in __hash_page_64K() for 64K PTE
powerpc: use helper functions in __hash_page_huge() for 64K PTE
powerpc: use helper functions in __hash_page_4K() for 64K PTE
powerpc: use helper functions in __hash_page_4K() for 4K PTE
powerpc: use helper functions in flush_hash_page()
Since we have modified the PTE format, it has to be correctly reflected
in the dump report provided through debugfs. The following patch does
that.
powerpc: capture the PTE format changes in the dump pte report
Till now we have done nothing much other than prepare ourselves to
accommodate memory key bits in the PTE. The next set of patches does
the actual work.
The VMA stores the key value. The x86 implementation needed just
4 bits in the VMA flags, since it supports only 16 keys. But PowerPC
supports 32 keys, so we need one more bit. The following patch does
that.
mm: introduce an additional vma bit for powerpc pkey
Also, x86 does not allow one to create a key with execute-denied
permission. PowerPC can handle that. So we add the ability to support
such a feature if the arch can handle it. The following two patches
help towards that.
mm: ability to disable execute permission on a key at creation
x86: disallow pkey creation with PKEY_DISABLE_EXECUTE
We then introduce the ability to house-keep the protection keys. There
are 32 keys. We need to track which keys are available, which keys are
allocated, and which keys are reserved. All of that is handled in the
following patch.
powerpc: initial plumbing for key management
Before we introduce the pkey_alloc() and pkey_free() system calls, we
need to implement infrastructure that can allocate and free the keys,
and can program the hardware registers correspondingly.
So the following patches enable that.
powerpc: helper function to read,write AMR,IAMR,UAMOR registers
powerpc: implementation for arch_set_user_pkey_access()
powerpc: sys_pkey_alloc() and sys_pkey_free() system calls
The key state has to be saved and restored across context switches,
since each task has its own key state. The next patch helps towards
that.
powerpc: store and restore the pkey state across context switches
The x86 implementation introduced the concept of an execute-only key,
where a key can be set aside with execute-only permission and the
kernel can use that key for address ranges that are execute-only. We
facilitate that requirement through the next patch.
powerpc: introduce execute-only pkey
At this point we are ready to support the pkey_mprotect() system call.
The following four patches accomplish that. Together, these patches
handle programming the key into the pte bits. All the hard work done to
release some pte bits in the initial patches finally bears fruit here.
powerpc: ability to associate pkey to a vma
powerpc: implementation for arch_override_mprotect_pkey()
powerpc: map vma key-protection bits to pte key bits.
powerpc: sys_pkey_mprotect() system call
Given that the PTE holds the key bits, we can copy them into the HPTE,
because that is where they must eventually land for any key-faults to
trigger. The following patch accomplishes that.
powerpc: Program HPTE key protection bits
Side-stepping a bit: we also need the ability for the kernel to
validate key violations when accessing user pages, in things like
copy_*_user(). The following patches help towards that.
powerpc: check key protection for user page access
powerpc: helper to validate key-access permissions of a pte
OK, back to the main theme. The key is programmed into the HPTE, and
the MMU is able to detect key violations and generate key faults. But
the kernel has to be cognizant of the key faults or else it will drop
them. The next few patches help towards that.
powerpc: Handle exceptions caused by pkey violation
powerpc: implementation for arch_vma_access_permitted()
powerpc: Macro the mask used for checking DSI exception
Everything is in place now; only the final piece of informing user
space on key violations is missing. The next set of patches
accomplishes that.
powerpc: capture AMR register content on pkey violation
powerpc: introduce get_pte_pkey() helper
powerpc: capture the violated protection key on fault
powerpc: Deliver SEGV signal on pkey violation
One missing piece: we need the ability to tell which key is associated
with each VMA by looking at smaps. The following patch helps towards
it.
procfs: display the protection-key number associated with a vma
Well, everything is accomplished... but how do we know that it is all
in place and works as expected? The next set of patches modifies the
selftests, by first moving them into an arch-independent directory,
then abstracting out the arch-dependent pieces, and finally adding
some additional tests to make them even more robust.
selftest: Move protection key selftest to arch neutral directory
selftest: PowerPC specific test updates to memory protection keys
Nothing is complete without documentation, and that is what the final
two patches accomplish. Again, they move the documentation into an
arch-independent directory and explain the differences between x86
and powerpc.
Documentation: Move protection key documentation to arch neutral directory
Documentation: PowerPC specific updates to memory protection keys
Hope the above explanation helps.
NOTE: key support for power9 Radix is not implemented yet, but the
above design will make it easy to support it as and when the hardware
is ready to handle it.
Thanks for your valuable comments.
RP
> --
> Michal Hocko
> SUSE Labs
--
Ram Pai
On Tue, 2017-07-11 at 11:11 -0700, Dave Hansen wrote:
> On 07/05/2017 02:21 PM, Ram Pai wrote:
> > Currently sys_pkey_create() provides the ability to disable read
> > and write permission on the key, at creation. powerpc has the
> > hardware support to disable execute on a pkey as well.This patch
> > enhances the interface to let disable execute at key creation
> > time. x86 does not allow this. Hence the next patch will add
> > ability in x86 to return error if PKEY_DISABLE_EXECUTE is
> > specified.
That leads to the question... How do you tell userspace.
(apologies if I missed that in an existing patch in the series)
How do we inform userspace of the key capabilities ? There are at least
two things userspace may want to know already:
- What protection bits are supported for a key
- How many keys exist
- Which keys are available for use by userspace. On PowerPC, the
kernel can reserve some keys for itself, so can the hypervisor. In
fact, they do.
Cheers,
Ben.
On Wed, Jul 12, 2017 at 07:29:37AM +1000, Benjamin Herrenschmidt wrote:
> On Tue, 2017-07-11 at 11:11 -0700, Dave Hansen wrote:
> > On 07/05/2017 02:21 PM, Ram Pai wrote:
> > > Currently sys_pkey_create() provides the ability to disable read
> > > and write permission on the key, at creation. powerpc has the
> > > hardware support to disable execute on a pkey as well.This patch
> > > enhances the interface to let disable execute at key creation
> > > time. x86 does not allow this. Hence the next patch will add
> > > ability in x86 to return error if PKEY_DISABLE_EXECUTE is
> > > specified.
>
> That leads to the question... How do you tell userspace.
>
> (apologies if I missed that in an existing patch in the series)
>
> How do we inform userspace of the key capabilities ? There are at least
> two things userspace may want to know already:
>
> - What protection bits are supported for a key
Userspace is the one which allocates the keys and enables/disables the
protection bits on a key; the kernel is just a facilitator. Now if
userspace wants to know the current permissions on a given key, it can
just read the AMR/PKRU register on powerpc/intel respectively.
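Reading those registers from userspace is cheap; a sketch (the rdpkru
byte encoding and AMR being SPR 13 are the assumptions here):

    #include <stdint.h>

    #if defined(__x86_64__)
    static inline uint32_t read_pkey_reg(void)
    {
            uint32_t eax, edx, ecx = 0;

            /* rdpkru: ecx must be 0; pkru comes back in eax */
            asm volatile(".byte 0x0f,0x01,0xee"
                         : "=a" (eax), "=d" (edx) : "c" (ecx));
            return eax;
    }
    #elif defined(__powerpc64__)
    static inline uint64_t read_pkey_reg(void)
    {
            uint64_t amr;

            /* AMR is user-readable at SPR 13 */
            asm volatile("mfspr %0, 13" : "=r" (amr));
            return amr;
    }
    #endif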
>
> - How many keys exist
There is no standard way of finding this other than trying to allocate
as many as you can until you fail. A procfs or sysfs file could be
added to expose this information.
>
> - Which keys are available for use by userspace. On PowerPC, the
> kernel can reserve some keys for itself, so can the hypervisor. In
> fact, they do.
This information can be exposed through /proc or /sysfs.
I am sure there will be more demands and requirements as applications
start leveraging this feature.
RP
>
> Cheers,
> Ben.
--
Ram Pai
On 07/11/2017 02:51 PM, Ram Pai wrote:
> On Wed, Jul 12, 2017 at 07:29:37AM +1000, Benjamin Herrenschmidt wrote:
>> On Tue, 2017-07-11 at 11:11 -0700, Dave Hansen wrote:
>>> On 07/05/2017 02:21 PM, Ram Pai wrote:
>>>> Currently sys_pkey_create() provides the ability to disable read
>>>> and write permission on the key, at creation. powerpc has the
>>>> hardware support to disable execute on a pkey as well.This patch
>>>> enhances the interface to let disable execute at key creation
>>>> time. x86 does not allow this. Hence the next patch will add
>>>> ability in x86 to return error if PKEY_DISABLE_EXECUTE is
>>>> specified.
>>
>> That leads to the question... How do you tell userspace.
>>
>> (apologies if I missed that in an existing patch in the series)
>>
>> How do we inform userspace of the key capabilities ? There are at least
>> two things userspace may want to know already:
>>
>> - What protection bits are supported for a key
>
> the userspace is the one which allocates the keys and enables/disables the
> protection bits on the key. the kernel is just a facilitator. Now if the
> use space wants to know the current permissions on a given key, it can
> just read the AMR/PKRU register on powerpc/intel respectively.
Let's say I want to execute-disable a region. Can I use protection
keys? Do I do
pkey_mprotect(... PKEY_DISABLE_EXECUTE);
and assume that the -EINVAL is because PKEY_DISABLE_EXECUTE is
unsupported, or do I do:
#ifdef __ppc__
pkey = pkey_alloc();
pkey_mprotect(... PKEY_DISABLE_EXECUTE);
#else
mprotect();
#endif
>> - How many keys exist
>
> There is no standard way of finding this other than trying to allocate
> as many till you fail. A procfs or sysfs file can be added to expose
> this information.
It's also dynamic. On x86, you lose a key if you've used the
execute-only support. We also reserve the right to steal more in the
future if we want.
>> - Which keys are available for use by userspace. On PowerPC, the
>> kernel can reserve some keys for itself, so can the hypervisor. In
>> fact, they do.
>
> this information can be exposed through /proc or /sysfs
>
> I am sure there will be more demands and requirements as applications
> start leveraging these feature.
For 5 bits, I think just having someone run pkey_alloc() in a loop is
fine. I don't think we really need to enumerate it in some other way.
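i.e. something like this sketch, using raw syscalls:

    #include <sys/syscall.h>
    #include <unistd.h>

    /* count usable keys by allocating until pkey_alloc() fails */
    static int count_pkeys(void)
    {
            int keys[64], n = 0, i;

            while (n < 64) {
                    int pkey = syscall(SYS_pkey_alloc, 0UL, 0UL);

                    if (pkey < 0)
                            break;
                    keys[n++] = pkey;
            }
            for (i = 0; i < n; i++)
                    syscall(SYS_pkey_free, keys[i]);
            return n;
    }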
On Tue, 2017-07-11 at 12:32 -0700, Ram Pai wrote:
> Ideally the MMU looks at the PTE for keys, in order to enforce
> protection. This is the case with x86 and is the case with power9 Radix
> page table. Hence the keys have to be programmed into the PTE.
POWER9 radix doesn't currently support keys.
Cheers,
Ben.
On Tue, 2017-07-11 at 14:51 -0700, Ram Pai wrote:
> On Wed, Jul 12, 2017 at 07:29:37AM +1000, Benjamin Herrenschmidt wrote:
> > On Tue, 2017-07-11 at 11:11 -0700, Dave Hansen wrote:
> > > On 07/05/2017 02:21 PM, Ram Pai wrote:
> > > > Currently sys_pkey_create() provides the ability to disable read
> > > > and write permission on the key, at creation. powerpc has the
> > > > hardware support to disable execute on a pkey as well.This patch
> > > > enhances the interface to let disable execute at key creation
> > > > time. x86 does not allow this. Hence the next patch will add
> > > > ability in x86 to return error if PKEY_DISABLE_EXECUTE is
> > > > specified.
> >
> > That leads to the question... How do you tell userspace.
> >
> > (apologies if I missed that in an existing patch in the series)
> >
> > How do we inform userspace of the key capabilities ? There are at least
> > two things userspace may want to know already:
> >
> > - What protection bits are supported for a key
>
> the userspace is the one which allocates the keys and enables/disables the
> protection bits on the key. the kernel is just a facilitator. Now if the
> use space wants to know the current permissions on a given key, it can
> just read the AMR/PKRU register on powerpc/intel respectively.
You misunderstand. How does userspace know, on a given system, whether
execute permission control is supported for keys?
>
> >
> > - How many keys exist
>
> There is no standard way of finding this other than trying to allocate
> as many till you fail. A procfs or sysfs file can be added to expose
> this information.
>
> >
> > - Which keys are available for use by userspace. On PowerPC, the
> > kernel can reserve some keys for itself, so can the hypervisor. In
> > fact, they do.
>
> this information can be exposed through /proc or /sysfs
>
> I am sure there will be more demands and requirements as applications
> start leveraging these feature.
>
> RP
> >
> > Cheers,
> > Ben.
>
>
On Tue, Jul 11, 2017 at 02:57:30PM -0700, Dave Hansen wrote:
> On 07/11/2017 02:51 PM, Ram Pai wrote:
> > On Wed, Jul 12, 2017 at 07:29:37AM +1000, Benjamin Herrenschmidt wrote:
> >> On Tue, 2017-07-11 at 11:11 -0700, Dave Hansen wrote:
> >>> On 07/05/2017 02:21 PM, Ram Pai wrote:
> >>>> Currently sys_pkey_create() provides the ability to disable read
> >>>> and write permission on the key, at creation. powerpc has the
> >>>> hardware support to disable execute on a pkey as well.This patch
> >>>> enhances the interface to let disable execute at key creation
> >>>> time. x86 does not allow this. Hence the next patch will add
> >>>> ability in x86 to return error if PKEY_DISABLE_EXECUTE is
> >>>> specified.
> >>
> >> That leads to the question... How do you tell userspace.
> >>
> >> (apologies if I missed that in an existing patch in the series)
> >>
> >> How do we inform userspace of the key capabilities ? There are at least
> >> two things userspace may want to know already:
> >>
> >> - What protection bits are supported for a key
> >
> > the userspace is the one which allocates the keys and enables/disables the
> > protection bits on the key. the kernel is just a facilitator. Now if the
> > use space wants to know the current permissions on a given key, it can
> > just read the AMR/PKRU register on powerpc/intel respectively.
>
> Let's say I want to execute-disable a region. Can I use protection
> keys? Do I do
>
> pkey_mprotect(... PKEY_DISABLE_EXECUTE);
>
> and assume that the -EINVAL is because PKEY_DISABLE_EXECUTE is
> unsupported, or do I do:
>
> #ifdef __ppc__
> pkey = pkey_aloc();
> pkey_mprotect(... PKEY_DISABLE_EXECUTE);
> #else
> mprotect();
> #endif
On ppc you could do either
    pkey = pkey_alloc(.., PKEY_DISABLE_EXECUTE);
    pkey_mprotect(..., pkey);
or you can just do it the x86 way:
    mprotect();
>
> >> - How many keys exist
> >
> > There is no standard way of finding this other than trying to allocate
> > as many till you fail. A procfs or sysfs file can be added to expose
> > this information.
>
> It's also dynamic. On x86, you lose a key if you've used the
> execute-only support. We also reserve the right to steal more in the
> future if we want.
The total number of keys supported on the architecture is a constant.
How many are reserved by the architecture is also probably known in
advance.
Now, how many keys the kernel reserves for itself is something the
kernel knows too, and hence it can expose that, though the information
may change dynamically as the kernel reserves and releases keys based
on its internal needs.
So I think we can expose this information through procfs/sysfs and let
the application decide how it wants to use the information.
>
> >> - Which keys are available for use by userspace. On PowerPC, the
> >> kernel can reserve some keys for itself, so can the hypervisor. In
> >> fact, they do.
> >
> > this information can be exposed through /proc or /sysfs
> >
> > I am sure there will be more demands and requirements as applications
> > start leveraging these feature.
>
> For 5 bits, I think just having someone run pkey_alloc() in a loop is
> fine. I don't think we really need to enumerate it in some other way.
--
Ram Pai
On Wed, Jul 12, 2017 at 08:08:56AM +1000, Benjamin Herrenschmidt wrote:
> On Tue, 2017-07-11 at 14:51 -0700, Ram Pai wrote:
> > On Wed, Jul 12, 2017 at 07:29:37AM +1000, Benjamin Herrenschmidt wrote:
> > > On Tue, 2017-07-11 at 11:11 -0700, Dave Hansen wrote:
> > > > On 07/05/2017 02:21 PM, Ram Pai wrote:
> > > > > Currently sys_pkey_create() provides the ability to disable read
> > > > > and write permission on the key, at creation. powerpc has the
> > > > > hardware support to disable execute on a pkey as well.This patch
> > > > > enhances the interface to let disable execute at key creation
> > > > > time. x86 does not allow this. Hence the next patch will add
> > > > > ability in x86 to return error if PKEY_DISABLE_EXECUTE is
> > > > > specified.
> > >
> > > That leads to the question... How do you tell userspace.
> > >
> > > (apologies if I missed that in an existing patch in the series)
> > >
> > > How do we inform userspace of the key capabilities ? There are at least
> > > two things userspace may want to know already:
> > >
> > > - What protection bits are supported for a key
> >
> > the userspace is the one which allocates the keys and enables/disables the
> > protection bits on the key. the kernel is just a facilitator. Now if the
> > use space wants to know the current permissions on a given key, it can
> > just read the AMR/PKRU register on powerpc/intel respectively.
>
> You misunderstand. How does userspace knows on a given system whether
> execute permission control is supported for keys ?
Ah.. sorry, I did not catch that part.
Yes, the current patch set does not make that information available.
The indirect way of finding this out is to try to allocate a key with
execute-disable permission and decide based on the pass/fail status.
We can expose that information through a procfs/sysfs interface.
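The pass/fail probe would look something like this (a sketch; the
PKEY_DISABLE_EXECUTE value is assumed from this series' uapi addition):

    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef PKEY_DISABLE_EXECUTE
    #define PKEY_DISABLE_EXECUTE 0x4
    #endif

    /* returns 1 if execute-disable keys are supported */
    static int pkey_execute_disable_supported(void)
    {
            int pkey = syscall(SYS_pkey_alloc, 0UL,
                               (unsigned long)PKEY_DISABLE_EXECUTE);

            if (pkey < 0)
                    return 0;	/* e.g. -EINVAL on x86 */
            syscall(SYS_pkey_free, pkey);
            return 1;
    }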
RP
On 07/11/2017 03:14 PM, Ram Pai wrote:
> Now how many does the kernel use to reserve for itself is something
> the kernel knows too and hence can expose it, though the information
> may change dynamically as the kernel reserves and releases the key
> based on its internal needs.
>
> So i think we can expose this informaton through procfs/sysfs and let
> the application decide how it wants to use the information.
Why bother? On x86, you'll be told either 14 or 15 depending on whether
you tried to create a mapping in the process without execute permission.
You can't use all 14 or 15 unless you actually call pkey_alloc() anyway
because the /proc check is inherently racy.
I'm just not sure I see the value in creating a new ABI for it.
On Tue, 11 Jul 2017 08:44:15 -0700
Ram Pai <[email protected]> wrote:
> On Tue, Jul 11, 2017 at 03:59:59PM +1000, Balbir Singh wrote:
> > On Wed, 5 Jul 2017 14:21:39 -0700
> > Ram Pai <[email protected]> wrote:
> >
> > > Rearrange 64K PTE bits to free up bits 3, 4, 5 and 6
> > > in the 64K backed HPTE pages. This along with the earlier
> > > patch will entirely free up the four bits from 64K PTE.
> > > The bit numbers are big-endian as defined in the ISA3.0
> > >
> > > This patch does the following change to 64K PTE backed
> > > by 64K HPTE.
> > >
> > > H_PAGE_F_SECOND (S) which occupied bit 4 moves to the
> > > second part of the pte to bit 60.
> > > H_PAGE_F_GIX (G,I,X) which occupied bit 5, 6 and 7 also
> > > moves to the second part of the pte to bit 61,
> > > 62, 63, 64 respectively
> > >
> > > since bit 7 is now freed up, we move H_PAGE_BUSY (B) from
> > > bit 9 to bit 7.
> > >
> > > The second part of the PTE will hold
> > > (H_PAGE_F_SECOND|H_PAGE_F_GIX) at bit 60,61,62,63.
> > >
> > > Before the patch, the 64K HPTE backed 64k PTE format was
> > > as follows
> > >
> > > 0 1 2 3 4 5 6 7 8 9 10...........................63
> > > : : : : : : : : : : : :
> > > v v v v v v v v v v v v
> > >
> > > ,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
> > > |x|x|x| |S |G |I |X |x|B|x|x|x|................|.|.|.|.| <- primary pte
> > > '_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
> > > | | | | | | | | | | | | |..................| | | | | <- secondary pte
> > > '_'_'_'_'__'__'__'__'_'_'_'_'__________________'_'_'_'_'
> > >
> >
> > It's not entirely clear what the secondary pte contains
> > today and how many of the bits are free today?
>
The secondary pte today is not used for anything for 64k-hpte backed
ptes. It gets used the moment the pte gets backed by 4k hptes. Till
then the bits are available, and this patch makes use of that
knowledge.
OK.. but does this mean subpage protection? Or do you mean page size
demotion? I presume it's the latter.
Balbir Singh.
On Wed, 5 Jul 2017 14:21:51 -0700
Ram Pai <[email protected]> wrote:
> Initial plumbing to manage all the keys supported by the
> hardware.
>
> Total 32 keys are supported on powerpc. However pkey 0,1
> and 31 are reserved. So effectively we have 29 pkeys.
>
> This patch keeps track of reserved keys, allocated keys
> and keys that are currently free.
It looks like this patch will only work in guest mode?
Is that an assumption we've made? What happens if I use
keys when running in hypervisor mode?
>
> Also it adds skeletal functions and macros, that the
> architecture-independent code expects to be available.
>
> Signed-off-by: Ram Pai <[email protected]>
> ---
> arch/powerpc/Kconfig | 16 +++++
> arch/powerpc/include/asm/book3s/64/mmu.h | 9 +++
> arch/powerpc/include/asm/pkeys.h | 106 ++++++++++++++++++++++++++++++
> arch/powerpc/mm/mmu_context_book3s64.c | 5 ++
> 4 files changed, 136 insertions(+), 0 deletions(-)
> create mode 100644 arch/powerpc/include/asm/pkeys.h
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index f7c8f99..a2480b6 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -871,6 +871,22 @@ config SECCOMP
>
> If unsure, say Y. Only embedded should say N here.
>
> +config PPC64_MEMORY_PROTECTION_KEYS
> + prompt "PowerPC Memory Protection Keys"
> + def_bool y
> + # Note: only available in 64-bit mode
> + depends on PPC64 && PPC_64K_PAGES
> + select ARCH_USES_HIGH_VMA_FLAGS
> + select ARCH_HAS_PKEYS
> + ---help---
> + Memory Protection Keys provides a mechanism for enforcing
> + page-based protections, but without requiring modification of the
> + page tables when an application changes protection domains.
> +
> + For details, see Documentation/powerpc/protection-keys.txt
> +
> + If unsure, say y.
> +
> endmenu
>
> config ISA_DMA_API
> diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
> index 77529a3..104ad72 100644
> --- a/arch/powerpc/include/asm/book3s/64/mmu.h
> +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
> @@ -108,6 +108,15 @@ struct patb_entry {
> #ifdef CONFIG_SPAPR_TCE_IOMMU
> struct list_head iommu_group_mem_list;
> #endif
> +
> +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> + /*
> + * Each bit represents one protection key.
> + * bit set -> key allocated
> + * bit unset -> key available for allocation
> + */
> + u32 pkey_allocation_map;
> +#endif
> } mm_context_t;
>
> /*
> diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
> new file mode 100644
> index 0000000..9345767
> --- /dev/null
> +++ b/arch/powerpc/include/asm/pkeys.h
> @@ -0,0 +1,106 @@
> +#ifndef _ASM_PPC64_PKEYS_H
> +#define _ASM_PPC64_PKEYS_H
> +
> +#define arch_max_pkey() 32
> +#define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
> + VM_PKEY_BIT3 | VM_PKEY_BIT4)
> +/*
> + * Bits are in BE format.
> + * NOTE: key 31, 1, 0 are not used.
> + * key 0 is used by default. It give read/write/execute permission.
> + * key 31 is reserved by the hypervisor.
> + * key 1 is recommended to be not used.
> + * PowerISA(3.0) page 1015, programming note.
> + */
> +#define PKEY_INITIAL_ALLOCAION 0xc0000001
Shouldn't this be exchanged via CAS for guests? Have you seen
ibm,processor-storage-keys?
> +
> +#define pkeybit_mask(pkey) (0x1 << (arch_max_pkey() - pkey - 1))
> +
> +#define mm_pkey_allocation_map(mm) (mm->context.pkey_allocation_map)
> +
> +#define mm_set_pkey_allocated(mm, pkey) { \
> + mm_pkey_allocation_map(mm) |= pkeybit_mask(pkey); \
> +}
> +
> +#define mm_set_pkey_free(mm, pkey) { \
> + mm_pkey_allocation_map(mm) &= ~pkeybit_mask(pkey); \
> +}
> +
> +#define mm_set_pkey_is_allocated(mm, pkey) \
> + (mm_pkey_allocation_map(mm) & pkeybit_mask(pkey))
> +
> +#define mm_set_pkey_is_reserved(mm, pkey) (PKEY_INITIAL_ALLOCAION & \
> + pkeybit_mask(pkey))
> +
> +static inline bool mm_pkey_is_allocated(struct mm_struct *mm, int pkey)
> +{
> + /* a reserved key is never considered as 'explicitly allocated' */
> + return (!mm_set_pkey_is_reserved(mm, pkey) &&
> + mm_set_pkey_is_allocated(mm, pkey));
> +}
> +
> +/*
> + * Returns a positive, 5-bit key on success, or -1 on failure.
> + */
> +static inline int mm_pkey_alloc(struct mm_struct *mm)
> +{
> + /*
> + * Note: this is the one and only place we make sure
> + * that the pkey is valid as far as the hardware is
> + * concerned. The rest of the kernel trusts that
> + * only good, valid pkeys come out of here.
> + */
> + u32 all_pkeys_mask = (u32)(~(0x0));
> + int ret;
> +
> + /*
> + * Are we out of pkeys? We must handle this specially
> + * because ffz() behavior is undefined if there are no
> + * zeros.
> + */
> + if (mm_pkey_allocation_map(mm) == all_pkeys_mask)
> + return -1;
> +
> + ret = arch_max_pkey() -
> + ffz((u32)mm_pkey_allocation_map(mm))
> + - 1;
> + mm_set_pkey_allocated(mm, ret);
> + return ret;
> +}
So the locking is provided by the caller for the function above?
> +
> +static inline int mm_pkey_free(struct mm_struct *mm, int pkey)
> +{
> + if (!mm_pkey_is_allocated(mm, pkey))
> + return -EINVAL;
> +
> + mm_set_pkey_free(mm, pkey);
> +
> + return 0;
> +}
> +
> +/*
> + * Try to dedicate one of the protection keys to be used as an
> + * execute-only protection key.
> + */
> +static inline int execute_only_pkey(struct mm_struct *mm)
> +{
> + return 0;
> +}
> +
> +static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma,
> + int prot, int pkey)
> +{
> + return 0;
> +}
> +
> +static inline int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
> + unsigned long init_val)
> +{
> + return 0;
> +}
> +
> +static inline void pkey_mm_init(struct mm_struct *mm)
> +{
> + mm_pkey_allocation_map(mm) = PKEY_INITIAL_ALLOCAION;
> +}
> +#endif /*_ASM_PPC64_PKEYS_H */
> diff --git a/arch/powerpc/mm/mmu_context_book3s64.c b/arch/powerpc/mm/mmu_context_book3s64.c
> index c6dca2a..2da9931 100644
> --- a/arch/powerpc/mm/mmu_context_book3s64.c
> +++ b/arch/powerpc/mm/mmu_context_book3s64.c
> @@ -16,6 +16,7 @@
> #include <linux/string.h>
> #include <linux/types.h>
> #include <linux/mm.h>
> +#include <linux/pkeys.h>
> #include <linux/spinlock.h>
> #include <linux/idr.h>
> #include <linux/export.h>
> @@ -120,6 +121,10 @@ static int hash__init_new_context(struct mm_struct *mm)
>
> subpage_prot_init_new_context(mm);
>
> +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> + pkey_mm_init(mm);
Can we have two variants of pkey_mm_init() and avoid #ifdefs around the code?
> +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
> +
> return index;
> }
>
Balbir Singh.
On Wed, 5 Jul 2017 14:21:52 -0700
Ram Pai <[email protected]> wrote:
> Implements helper functions to read and write the key related
> registers; AMR, IAMR, UAMOR.
>
> AMR register tracks the read,write permission of a key
> IAMR register tracks the execute permission of a key
> UAMOR register enables and disables a key
>
> Signed-off-by: Ram Pai <[email protected]>
> ---
> arch/powerpc/include/asm/book3s/64/pgtable.h | 60 ++++++++++++++++++++++++++
> 1 files changed, 60 insertions(+), 0 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index 85bc987..435d6a7 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -428,6 +428,66 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
> pte_update(mm, addr, ptep, 0, _PAGE_PRIVILEGED, 1);
> }
>
> +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> +
> +#include <asm/reg.h>
> +static inline u64 read_amr(void)
> +{
> + return mfspr(SPRN_AMR);
> +}
> +static inline void write_amr(u64 value)
> +{
> + mtspr(SPRN_AMR, value);
> +}
> +static inline u64 read_iamr(void)
> +{
> + return mfspr(SPRN_IAMR);
> +}
> +static inline void write_iamr(u64 value)
> +{
> + mtspr(SPRN_IAMR, value);
> +}
> +static inline u64 read_uamor(void)
> +{
> + return mfspr(SPRN_UAMOR);
> +}
> +static inline void write_uamor(u64 value)
> +{
> + mtspr(SPRN_UAMOR, value);
> +}
> +
> +#else /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
> +
> +static inline u64 read_amr(void)
> +{
> + WARN(1, "%s called with MEMORY PROTECTION KEYS disabled\n", __func__);
> + return -1;
> +}
Why do we need to have a version here if we are going to WARN()? Why
not let the compilation fail if called from outside of
CONFIG_PPC64_MEMORY_PROTECTION_KEYS? Is that the intention?
Balbir Singh
On Tue 11-07-17 12:32:57, Ram Pai wrote:
> On Tue, Jul 11, 2017 at 04:52:46PM +0200, Michal Hocko wrote:
> > On Wed 05-07-17 14:21:37, Ram Pai wrote:
> > > Memory protection keys enable applications to protect its
> > > address space from inadvertent access or corruption from
> > > itself.
> > >
> > > The overall idea:
> > >
> > > A process allocates a key and associates it with
> > > an address range within its address space.
> > > The process then can dynamically set read/write
> > > permissions on the key without involving the
> > > kernel. Any code that violates the permissions
> > > of the address space; as defined by its associated
> > > key, will receive a segmentation fault.
> > >
> > > This patch series enables the feature on PPC64 HPTE
> > > platform.
> > >
> > > ISA3.0 section 5.7.13 describes the detailed specifications.
> >
> > Could you describe the highlevel design of this feature in the cover
> > letter.
>
> Yes it can be hard to understand without the big picture. I will
> provide the high level design and the rationale behind the patch split
> towards the end. Also I will have it in the cover letter for my next
> revision of the patchset.
Thanks!
> > I have tried to get some idea from the patchset but it was
> > really far from trivial. Patches are not very well split up (many
> > helpers are added without their users etc..).
>
> I see your point. Earlier, I had the patches split such a way that the
> users of the helpers were in the same patch as that of the helper.
> But then comments from others lead to the current split.
It is not my call here, obviously. I cannot review arch specific parts
due to lack of familiarity but it is a general good practice to include
helpers along with their users to make the usage clear. Also, as much as
I like small patches because they are easier to review, having very many
of them can lead to a harder review in the end because you easily lose
a higher level overview.
> > > Testing:
> > > This patch series has passed all the protection key
> > > tests available in the selftests directory.
> > > The tests are updated to work on both x86 and powerpc.
> > >
> > > version v5:
> > > (1) reverted back to the old design -- store the
> > > key in the pte, instead of bypassing it.
> > > The v4 design slowed down the hash page path.
> >
> > This surprised me a lot but I couldn't find the respective code. Why do
> > you need to store anything in the pte? My understanding of PKEYs is that
> > the setup and teardown should be very cheap and so no page tables have
> > to updated. Or do I just misunderstand what you wrote here?
>
> Ideally the MMU looks at the PTE for keys, in order to enforce
> protection. This is the case with x86 and is the case with power9 Radix
> page table. Hence the keys have to be programmed into the PTE.
But x86 doesn't update ptes for PKEYs, that would be just too expensive.
You could use standard mprotect to do the same...
> However with HPT on power, these keys do not necessarily have to be
> programmed into the PTE. We could bypass the Linux Page Table Entry(PTE)
> and instead just program them into the Hash Page Table(HPTE), since
> the MMU does not refer the PTE but refers the HPTE. The last version
> of the page attempted to do that. It worked as follows:
>
> a) when a address range is requested to be associated with a key; by the
> application through key_mprotect() system call, the kernel
> stores that key in the vmas corresponding to that address
> range.
>
> b) Whenever there is a hash page fault for that address, the fault
> handler reads the key from the VMA and programs the key into the
> HPTE. __hash_page() is the function that does that.
What causes the fault here?
> c) Once the hpte is programmed, the MMU can sense key violations and
> generate key-faults.
>
> The problem is with step (b). This step is really a very critical
> path which is performance sensitive. We dont want to add any delays.
> However if we want to access the key from the vma, we will have to
> hold the vma semaphore, and that is a big NO-NO. As a result, this
> design had to be dropped.
>
>
>
> I reverted back to the old design i.e the design in v4 version. In this
> version we do the following:
>
> a) when a address range is requested to be associated with a key; by the
> application through key_mprotect() system call, the kernel
> stores that key in the vmas corresponding to that address
> range. Also the kernel programs the key into Linux PTE coresponding to all the
> pages associated with the address range.
OK, so how is this any different from the regular mprotect then?
> b) Whenever there is a hash page fault for that address, the fault
> handler reads the key from the Linux PTE and programs the key into
> the HPTE.
>
> c) Once the HPTE is programmed, the MMU can sense key violations and
> generate key-faults.
>
>
> Since step (b) in this case has easy access to the Linux PTE, and hence
> to the key, it is fast to access it and program the HPTE. Thus we avoid
> taking any performance hit on this critical path.
>
> Hope this explains the rationale,
>
>
> As promised here is the high level design:
I will read through that later
[...]
--
Michal Hocko
SUSE Labs
On Wed 12-07-17 09:23:37, Michal Hocko wrote:
> On Tue 11-07-17 12:32:57, Ram Pai wrote:
[...]
> > Ideally the MMU looks at the PTE for keys, in order to enforce
> > protection. This is the case with x86 and is the case with power9 Radix
> > page table. Hence the keys have to be programmed into the PTE.
>
> But x86 doesn't update ptes for PKEYs, that would be just too expensive.
> You could use standard mprotect to do the same...
OK, this seems to be a misunderstanding and confusion on my end.
do_mprotect_pkey does mprotect_fixup even for the pkey path which is
quite surprising to me. I guess my misunderstanding comes from
Documentation/x86/protection-keys.txt
"
Memory Protection Keys provides a mechanism for enforcing page-based
protections, but without requiring modification of the page tables
when an application changes protection domains. It works by
dedicating 4 previously ignored bits in each page table entry to a
"protection key", giving 16 possible keys.
"
So please disregard my previous comments about page tables and sorry
about the confusion.
--
Michal Hocko
SUSE Labs
On Tue, Jul 11, 2017 at 10:33:09AM -0700, Dave Hansen wrote:
> On 07/05/2017 02:22 PM, Ram Pai wrote:
> > Abstracted out the arch specific code into the header file, and
> > added powerpc specific changes.
> >
> > a) added 4k-backed hpte, memory allocator, powerpc specific.
> > b) added three test case where the key is associated after the page is
> > accessed/allocated/mapped.
> > c) cleaned up the code to make checkpatch.pl happy
>
> There's a *lot* of churn here. If it breaks, I'm going to have a heck
> of a time figuring out which hunk broke. Is there any way to break this
> up into a series of things that we have a chance at bisecting?
Just finished breaking down the changes into 20 gradual increments.
I have pushed them to my github tree at
https://github.com/rampai/memorykeys.git
The branch is memkey.v6-rc3.
See if it works for you. I am sure I would have broken something on
x86, since I don't have an x86 platform to test on.
Let me know. Thanks,
RP
On Tue, Jul 11, 2017 at 11:10:46AM -0700, Dave Hansen wrote:
> On 07/05/2017 02:21 PM, Ram Pai wrote:
> > Currently there are only 4bits in the vma flags to support 16 keys
> > on x86. powerpc supports 32 keys, which needs 5bits. This patch
> > introduces an addition bit in the vma flags.
> >
> > Signed-off-by: Ram Pai <[email protected]>
> > ---
> > fs/proc/task_mmu.c | 6 +++++-
> > include/linux/mm.h | 18 +++++++++++++-----
> > 2 files changed, 18 insertions(+), 6 deletions(-)
> >
> > diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> > index f0c8b33..2ddc298 100644
> > --- a/fs/proc/task_mmu.c
> > +++ b/fs/proc/task_mmu.c
> > @@ -666,12 +666,16 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
> > [ilog2(VM_MERGEABLE)] = "mg",
> > [ilog2(VM_UFFD_MISSING)]= "um",
> > [ilog2(VM_UFFD_WP)] = "uw",
> > -#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
> > +#ifdef CONFIG_ARCH_HAS_PKEYS
> > /* These come out via ProtectionKey: */
> > [ilog2(VM_PKEY_BIT0)] = "",
> > [ilog2(VM_PKEY_BIT1)] = "",
> > [ilog2(VM_PKEY_BIT2)] = "",
> > [ilog2(VM_PKEY_BIT3)] = "",
> > +#endif /* CONFIG_ARCH_HAS_PKEYS */
> > +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> > + /* Additional bit in ProtectionKey: */
> > + [ilog2(VM_PKEY_BIT4)] = "",
> > #endif
>
> I'd probably just leave the #ifdef out and eat the byte or whatever of
> storage that this costs us on x86.
fine with me.
>
> > };
> > size_t i;
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 7cb17c6..3d35bcc 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -208,21 +208,29 @@ extern int overcommit_kbytes_handler(struct ctl_table *, int, void __user *,
> > #define VM_HIGH_ARCH_BIT_1 33 /* bit only usable on 64-bit architectures */
> > #define VM_HIGH_ARCH_BIT_2 34 /* bit only usable on 64-bit architectures */
> > #define VM_HIGH_ARCH_BIT_3 35 /* bit only usable on 64-bit architectures */
> > +#define VM_HIGH_ARCH_BIT_4 36 /* bit only usable on 64-bit arch */
>
> Please just copy the above lines.
Just copying over makes checkpatch.pl unhappy. It exceeds 80 columns.
>
> > #define VM_HIGH_ARCH_0 BIT(VM_HIGH_ARCH_BIT_0)
> > #define VM_HIGH_ARCH_1 BIT(VM_HIGH_ARCH_BIT_1)
> > #define VM_HIGH_ARCH_2 BIT(VM_HIGH_ARCH_BIT_2)
> > #define VM_HIGH_ARCH_3 BIT(VM_HIGH_ARCH_BIT_3)
> > +#define VM_HIGH_ARCH_4 BIT(VM_HIGH_ARCH_BIT_4)
> > #endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
> >
> > -#if defined(CONFIG_X86)
> > -# define VM_PAT VM_ARCH_1 /* PAT reserves whole VMA at once (x86) */
> > -#if defined (CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS)
> > +#ifdef CONFIG_ARCH_HAS_PKEYS
> > # define VM_PKEY_SHIFT VM_HIGH_ARCH_BIT_0
> > -# define VM_PKEY_BIT0 VM_HIGH_ARCH_0 /* A protection key is a 4-bit value */
> > +# define VM_PKEY_BIT0 VM_HIGH_ARCH_0
> > # define VM_PKEY_BIT1 VM_HIGH_ARCH_1
> > # define VM_PKEY_BIT2 VM_HIGH_ARCH_2
> > # define VM_PKEY_BIT3 VM_HIGH_ARCH_3
> > -#endif
> > +#endif /* CONFIG_ARCH_HAS_PKEYS */
>
> We have the space here, so can we just say that it's 4-bits on x86 and 5
> on ppc?
sure.
>
> > +#if defined(CONFIG_PPC64_MEMORY_PROTECTION_KEYS)
> > +# define VM_PKEY_BIT4 VM_HIGH_ARCH_4 /* additional key bit used on ppc64 */
> > +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
>
> Why bother #ifdef'ing a #define?
ok.
RP
On Wed, 2017-07-12 at 15:23 -0700, Ram Pai wrote:
> Just copying over makes checkpatch.pl unhappy. It exceeds 80 columns.
Which is fine to ignore in a case like that where you remain consistent
with the existing code.
Ben.
On Wed, 2017-07-12 at 09:23 +0200, Michal Hocko wrote:
>
> >
> > Ideally the MMU looks at the PTE for keys, in order to enforce
> > protection. This is the case with x86 and is the case with power9 Radix
> > page table. Hence the keys have to be programmed into the PTE.
>
> But x86 doesn't update ptes for PKEYs, that would be just too expensive.
> You could use standard mprotect to do the same...
What do you mean? x86 ends up in mprotect_fixup -> change_protection(),
which will update the PTEs just the same as we do.
Changing the key for a page is a form of mprotect. Changing the access
permissions for keys is different; for us it's a special register
(AMR).
I don't understand why you think we are doing anything differently
than x86 here.
> > However with HPT on power, these keys do not necessarily have to be
> > programmed into the PTE. We could bypass the Linux Page Table Entry(PTE)
> > and instead just program them into the Hash Page Table(HPTE), since
> > the MMU does not refer the PTE but refers the HPTE. The last version
> > of the page attempted to do that. It worked as follows:
> >
> > a) when a address range is requested to be associated with a key; by the
> > application through key_mprotect() system call, the kernel
> > stores that key in the vmas corresponding to that address
> > range.
> >
> > b) Whenever there is a hash page fault for that address, the fault
> > handler reads the key from the VMA and programs the key into the
> > HPTE. __hash_page() is the function that does that.
>
> What causes the fault here?
The hardware. With the hash MMU, the HW walks a hash table which is
effectively a large in-memory TLB extension. When a page isn't found
there, a "hash fault" is generated allowing Linux to populate that
hash table with the content of the corresponding PTE.
> > c) Once the hpte is programmed, the MMU can sense key violations and
> > generate key-faults.
> >
> > The problem is with step (b). This step is really a very critical
> > path which is performance sensitive. We dont want to add any delays.
> > However if we want to access the key from the vma, we will have to
> > hold the vma semaphore, and that is a big NO-NO. As a result, this
> > design had to be dropped.
> >
> >
> >
> > I reverted back to the old design i.e the design in v4 version. In this
> > version we do the following:
> >
> > a) when a address range is requested to be associated with a key; by the
> > application through key_mprotect() system call, the kernel
> > stores that key in the vmas corresponding to that address
> > range. Also the kernel programs the key into Linux PTE coresponding to all the
> > pages associated with the address range.
>
> OK, so how is this any different from the regular mprotect then?
It takes the key argument. This is nothing new; this was done for x86
already, and we are just re-using the infrastructure. Look at
do_mprotect_pkey() in mm/mprotect.c today. It's all the same code;
pkey_mprotect() is just mprotect with an added key argument.
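In other words, modulo the key argument the two calls line up; with
pkey == -1, pkey_mprotect() assigns no key and degenerates to a plain
mprotect() (a sketch; assumes a libc that has the pkey_mprotect()
wrapper):

    #define _GNU_SOURCE
    #include <sys/mman.h>

    void make_readonly(void *addr, size_t len)
    {
            /* equivalent calls: pkey == -1 means "no key" */
            mprotect(addr, len, PROT_READ);
            pkey_mprotect(addr, len, PROT_READ, -1);
    }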
> > b) Whenever there is a hash page fault for that address, the fault
> > handler reads the key from the Linux PTE and programs the key into
> > the HPTE.
> >
> > c) Once the HPTE is programmed, the MMU can sense key violations and
> > generate key-faults.
> >
> >
> > Since step (b) in this case has easy access to the Linux PTE, and hence
> > to the key, it is fast to access it and program the HPTE. Thus we avoid
> > taking any performance hit on this critical path.
> >
> > Hope this explains the rationale,
> >
> >
> > As promised here is the high level design:
>
> I will read through that later
> [...]
On Thu 13-07-17 08:53:52, Benjamin Herrenschmidt wrote:
> On Wed, 2017-07-12 at 09:23 +0200, Michal Hocko wrote:
> >
> > >
> > > Ideally the MMU looks at the PTE for keys, in order to enforce
> > > protection. This is the case with x86 and is the case with power9 Radix
> > > page table. Hence the keys have to be programmed into the PTE.
> >
> > But x86 doesn't update ptes for PKEYs, that would be just too expensive.
> > You could use standard mprotect to do the same...
>
> What do you mean ? x86 ends up in mprotect_fixup -> change_protection()
> which will update the PTEs just the same as we do.
>
> Changing the key for a page is a form mprotect. Changing the access
> permissions for keys is different, for us it's a special register
> (AMR).
>
> I don't understand why you think we are doing any differently than x86
> here.
That was a misunderstanding on my side as explained in other reply.
> > > However with HPT on power, these keys do not necessarily have to be
> > > programmed into the PTE. We could bypass the Linux Page Table Entry(PTE)
> > > and instead just program them into the Hash Page Table(HPTE), since
> > > the MMU does not refer the PTE but refers the HPTE. The last version
> > > of the page attempted to do that. It worked as follows:
> > >
> > > a) when a address range is requested to be associated with a key; by the
> > > application through key_mprotect() system call, the kernel
> > > stores that key in the vmas corresponding to that address
> > > range.
> > >
> > > b) Whenever there is a hash page fault for that address, the fault
> > > handler reads the key from the VMA and programs the key into the
> > > HPTE. __hash_page() is the function that does that.
> >
> > What causes the fault here?
>
> The hardware. With the hash MMU, the HW walks a hash table which is
> effectively a large in-memory TLB extension. When a page isn't found
> there, a "hash fault" is generated allowing Linux to populate that
> hash table with the content of the corresponding PTE.
Thanks for the clarification
--
Michal Hocko
SUSE Labs
On Wed, Jul 12, 2017 at 01:10:51PM +1000, Balbir Singh wrote:
> On Tue, 11 Jul 2017 08:44:15 -0700
> Ram Pai <[email protected]> wrote:
>
> > On Tue, Jul 11, 2017 at 03:59:59PM +1000, Balbir Singh wrote:
> > > On Wed, 5 Jul 2017 14:21:39 -0700
> > > Ram Pai <[email protected]> wrote:
> > >
> > > > Rearrange 64K PTE bits to free up bits 3, 4, 5 and 6
> > > > in the 64K backed HPTE pages. This along with the earlier
> > > > patch will entirely free up the four bits from 64K PTE.
> > > > The bit numbers are big-endian as defined in the ISA3.0
> > > >
> > > > This patch does the following change to 64K PTE backed
> > > > by 64K HPTE.
> > > >
> > > > H_PAGE_F_SECOND (S) which occupied bit 4 moves to the
> > > > second part of the pte to bit 60.
> > > > H_PAGE_F_GIX (G,I,X) which occupied bit 5, 6 and 7 also
> > > > moves to the second part of the pte to bit 61,
> > > > 62, 63, 64 respectively
> > > >
> > > > since bit 7 is now freed up, we move H_PAGE_BUSY (B) from
> > > > bit 9 to bit 7.
> > > >
> > > > The second part of the PTE will hold
> > > > (H_PAGE_F_SECOND|H_PAGE_F_GIX) at bit 60,61,62,63.
> > > >
> > > > Before the patch, the 64K HPTE backed 64k PTE format was
> > > > as follows
> > > >
> > > > 0 1 2 3 4 5 6 7 8 9 10...........................63
> > > > : : : : : : : : : : : :
> > > > v v v v v v v v v v v v
> > > >
> > > > ,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
> > > > |x|x|x| |S |G |I |X |x|B|x|x|x|................|.|.|.|.| <- primary pte
> > > > '_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
> > > > | | | | | | | | | | | | |..................| | | | | <- secondary pte
> > > > '_'_'_'_'__'__'__'__'_'_'_'_'__________________'_'_'_'_'
> > > >
> > >
> > > It's not entirely clear what the secondary pte contains
> > > today and how many of the bits are free today?
> >
> > The secondary pte today is not used for anything for 64k-hpte
> > backed ptes. It gets used the moment the pte gets backed by
> > 4-k hptes. Till then the bits are available. And this patch
> > makes use of that knowledge.
>
> OK.. but does this mean subpage protection? Or do you mean
> page size demotion? I presume it's the latter.
Yes, the latter.
RP
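
(For readers counting bits: with the big-endian numbering used above, bit 0 is
the most-significant bit of the 64-bit PTE word, so bit b corresponds to the
mask 1UL << (63 - b). A sketch of the resulting masks, not the kernel's actual
definitions:)

	/* sketch only: BE bit b of a 64-bit word */
	#define PTE_BIT_BE(b)      (1UL << (63 - (b)))

	#define H_PAGE_BUSY        PTE_BIT_BE(7)	/* moved from bit 9 */

	/* S and G,I,X now live in the second part of the 64K PTE */
	#define H_PAGE_F_SECOND    PTE_BIT_BE(60)
	#define H_PAGE_F_GIX       (PTE_BIT_BE(61) | PTE_BIT_BE(62) | \
	                            PTE_BIT_BE(63))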
On Wed, Jul 12, 2017 at 01:28:25PM +1000, Balbir Singh wrote:
> On Wed, 5 Jul 2017 14:21:51 -0700
> Ram Pai <[email protected]> wrote:
>
> > Initial plumbing to manage all the keys supported by the
> > hardware.
> >
> > A total of 32 keys are supported on powerpc. However pkeys 0, 1
> > and 31 are reserved. So effectively we have 29 pkeys.
> >
> > This patch keeps track of reserved keys, allocated keys
> > and keys that are currently free.
>
> It looks like this patch will only work in guest mode?
> Is that an assumption we've made? What happens if I use
> keys when running in hypervisor mode?
It works in supervisor mode, as a guest as well as on a bare-metal
kernel. Whatever needs to be done in hypervisor mode
is already there in power-kvm.
>
> >
> > Also it adds skeletal functions and macros that the
> > architecture-independent code expects to be available.
> >
> > Signed-off-by: Ram Pai <[email protected]>
> > ---
> > arch/powerpc/Kconfig | 16 +++++
> > arch/powerpc/include/asm/book3s/64/mmu.h | 9 +++
> > arch/powerpc/include/asm/pkeys.h | 106 ++++++++++++++++++++++++++++++
> > arch/powerpc/mm/mmu_context_book3s64.c | 5 ++
> > 4 files changed, 136 insertions(+), 0 deletions(-)
> > create mode 100644 arch/powerpc/include/asm/pkeys.h
> >
> > diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> > index f7c8f99..a2480b6 100644
> > --- a/arch/powerpc/Kconfig
> > +++ b/arch/powerpc/Kconfig
> > @@ -871,6 +871,22 @@ config SECCOMP
> >
> > If unsure, say Y. Only embedded should say N here.
> >
> > +config PPC64_MEMORY_PROTECTION_KEYS
> > + prompt "PowerPC Memory Protection Keys"
> > + def_bool y
> > + # Note: only available in 64-bit mode
> > + depends on PPC64 && PPC_64K_PAGES
> > + select ARCH_USES_HIGH_VMA_FLAGS
> > + select ARCH_HAS_PKEYS
> > + ---help---
> > + Memory Protection Keys provides a mechanism for enforcing
> > + page-based protections, but without requiring modification of the
> > + page tables when an application changes protection domains.
> > +
> > + For details, see Documentation/powerpc/protection-keys.txt
> > +
> > + If unsure, say y.
> > +
> > endmenu
> >
> > config ISA_DMA_API
> > diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
> > index 77529a3..104ad72 100644
> > --- a/arch/powerpc/include/asm/book3s/64/mmu.h
> > +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
> > @@ -108,6 +108,15 @@ struct patb_entry {
> > #ifdef CONFIG_SPAPR_TCE_IOMMU
> > struct list_head iommu_group_mem_list;
> > #endif
> > +
> > +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> > + /*
> > + * Each bit represents one protection key.
> > + * bit set -> key allocated
> > + * bit unset -> key available for allocation
> > + */
> > + u32 pkey_allocation_map;
> > +#endif
> > } mm_context_t;
> >
> > /*
> > diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
> > new file mode 100644
> > index 0000000..9345767
> > --- /dev/null
> > +++ b/arch/powerpc/include/asm/pkeys.h
> > @@ -0,0 +1,106 @@
> > +#ifndef _ASM_PPC64_PKEYS_H
> > +#define _ASM_PPC64_PKEYS_H
> > +
> > +#define arch_max_pkey() 32
> > +#define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
> > + VM_PKEY_BIT3 | VM_PKEY_BIT4)
> > +/*
> > + * Bits are in BE format.
> > + * NOTE: key 31, 1, 0 are not used.
> > + * key 0 is used by default. It give read/write/execute permission.
> > + * key 31 is reserved by the hypervisor.
> > + * key 1 is recommended to be not used.
> > + * PowerISA(3.0) page 1015, programming note.
> > + */
> > +#define PKEY_INITIAL_ALLOCAION 0xc0000001
>
> Shouldn't this be exchanged via CAS for guests? Have you seen
> ibm,processor-storage-keys?
Yes. It was one of my TODOs to initialize this using the device-tree
interface. A brief look at that did not show the reserved keys
properly enumerated. But I may be wrong.
>
> > +
> > +#define pkeybit_mask(pkey) (0x1 << (arch_max_pkey() - pkey - 1))
> > +
> > +#define mm_pkey_allocation_map(mm) (mm->context.pkey_allocation_map)
> > +
> > +#define mm_set_pkey_allocated(mm, pkey) { \
> > + mm_pkey_allocation_map(mm) |= pkeybit_mask(pkey); \
> > +}
> > +
> > +#define mm_set_pkey_free(mm, pkey) { \
> > + mm_pkey_allocation_map(mm) &= ~pkeybit_mask(pkey); \
> > +}
> > +
> > +#define mm_set_pkey_is_allocated(mm, pkey) \
> > + (mm_pkey_allocation_map(mm) & pkeybit_mask(pkey))
> > +
> > +#define mm_set_pkey_is_reserved(mm, pkey) (PKEY_INITIAL_ALLOCAION & \
> > + pkeybit_mask(pkey))
> > +
> > +static inline bool mm_pkey_is_allocated(struct mm_struct *mm, int pkey)
> > +{
> > + /* a reserved key is never considered as 'explicitly allocated' */
> > + return (!mm_set_pkey_is_reserved(mm, pkey) &&
> > + mm_set_pkey_is_allocated(mm, pkey));
> > +}
> > +
> > +/*
> > + * Returns a positive, 5-bit key on success, or -1 on failure.
> > + */
> > +static inline int mm_pkey_alloc(struct mm_struct *mm)
> > +{
> > + /*
> > + * Note: this is the one and only place we make sure
> > + * that the pkey is valid as far as the hardware is
> > + * concerned. The rest of the kernel trusts that
> > + * only good, valid pkeys come out of here.
> > + */
> > + u32 all_pkeys_mask = (u32)(~(0x0));
> > + int ret;
> > +
> > + /*
> > + * Are we out of pkeys? We must handle this specially
> > + * because ffz() behavior is undefined if there are no
> > + * zeros.
> > + */
> > + if (mm_pkey_allocation_map(mm) == all_pkeys_mask)
> > + return -1;
> > +
> > + ret = arch_max_pkey() -
> > + ffz((u32)mm_pkey_allocation_map(mm))
> > + - 1;
> > + mm_set_pkey_allocated(mm, ret);
> > + return ret;
> > +}
>
> So the locking is provided by the caller for the function above?
yes.
>
> > +
> > +static inline int mm_pkey_free(struct mm_struct *mm, int pkey)
> > +{
> > + if (!mm_pkey_is_allocated(mm, pkey))
> > + return -EINVAL;
> > +
> > + mm_set_pkey_free(mm, pkey);
> > +
> > + return 0;
> > +}
> > +
> > +/*
> > + * Try to dedicate one of the protection keys to be used as an
> > + * execute-only protection key.
> > + */
> > +static inline int execute_only_pkey(struct mm_struct *mm)
> > +{
> > + return 0;
> > +}
> > +
> > +static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma,
> > + int prot, int pkey)
> > +{
> > + return 0;
> > +}
> > +
> > +static inline int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
> > + unsigned long init_val)
> > +{
> > + return 0;
> > +}
> > +
> > +static inline void pkey_mm_init(struct mm_struct *mm)
> > +{
> > + mm_pkey_allocation_map(mm) = PKEY_INITIAL_ALLOCAION;
> > +}
> > +#endif /*_ASM_PPC64_PKEYS_H */
> > diff --git a/arch/powerpc/mm/mmu_context_book3s64.c b/arch/powerpc/mm/mmu_context_book3s64.c
> > index c6dca2a..2da9931 100644
> > --- a/arch/powerpc/mm/mmu_context_book3s64.c
> > +++ b/arch/powerpc/mm/mmu_context_book3s64.c
> > @@ -16,6 +16,7 @@
> > #include <linux/string.h>
> > #include <linux/types.h>
> > #include <linux/mm.h>
> > +#include <linux/pkeys.h>
> > #include <linux/spinlock.h>
> > #include <linux/idr.h>
> > #include <linux/export.h>
> > @@ -120,6 +121,10 @@ static int hash__init_new_context(struct mm_struct *mm)
> >
> > subpage_prot_init_new_context(mm);
> >
> > +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> > + pkey_mm_init(mm);
>
> Can we have two variants of pkey_mm_init() and avoid #ifdefs around the code?
ok.
>
> > +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
> > +
> > return index;
> > }
> >
>
> Balbir Singh.
--
Ram Pai
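
(The allocation logic quoted above is compact enough to simulate in user
space. A standalone sketch of the same bitmap arithmetic, with
__builtin_ctz() standing in for the kernel's ffz(); names and output are
illustrative only:)

	#include <stdio.h>
	#include <stdint.h>

	#define MAX_PKEY      32
	#define RESERVED_MAP  0xc0000001u  /* keys 0, 1 and 31 in BE bit order */

	/* mirrors mm_pkey_alloc(): first zero bit from the LSB,
	 * converted to a big-endian key number */
	static int pkey_alloc_sim(uint32_t *map)
	{
		if (*map == ~0u)
			return -1;              /* all 32 keys taken */
		int bit = __builtin_ctz(~*map); /* kernel uses ffz() */
		*map |= 1u << bit;
		return MAX_PKEY - bit - 1;
	}

	int main(void)
	{
		uint32_t map = RESERVED_MAP;
		/* bit 0 is key 31 (reserved), so the first free bit is
		 * bit 1, i.e. key 30; allocation counts down from there */
		printf("%d %d\n", pkey_alloc_sim(&map), pkey_alloc_sim(&map));
		return 0;                       /* prints: 30 29 */
	}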
On Wed, Jul 12, 2017 at 03:26:01PM +1000, Balbir Singh wrote:
> On Wed, 5 Jul 2017 14:21:52 -0700
> Ram Pai <[email protected]> wrote:
>
> > Implements helper functions to read and write the key related
> > registers; AMR, IAMR, UAMOR.
> >
> > AMR register tracks the read/write permissions of a key
> > IAMR register tracks the execute permission of a key
> > UAMOR register enables and disables a key
> >
> > Signed-off-by: Ram Pai <[email protected]>
> > ---
> > arch/powerpc/include/asm/book3s/64/pgtable.h | 60 ++++++++++++++++++++++++++
> > 1 files changed, 60 insertions(+), 0 deletions(-)
> >
> > diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> > index 85bc987..435d6a7 100644
> > --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> > +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> > @@ -428,6 +428,66 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
> > pte_update(mm, addr, ptep, 0, _PAGE_PRIVILEGED, 1);
> > }
> >
> > +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> > +
> > +#include <asm/reg.h>
> > +static inline u64 read_amr(void)
> > +{
> > + return mfspr(SPRN_AMR);
> > +}
> > +static inline void write_amr(u64 value)
> > +{
> > + mtspr(SPRN_AMR, value);
> > +}
> > +static inline u64 read_iamr(void)
> > +{
> > + return mfspr(SPRN_IAMR);
> > +}
> > +static inline void write_iamr(u64 value)
> > +{
> > + mtspr(SPRN_IAMR, value);
> > +}
> > +static inline u64 read_uamor(void)
> > +{
> > + return mfspr(SPRN_UAMOR);
> > +}
> > +static inline void write_uamor(u64 value)
> > +{
> > + mtspr(SPRN_UAMOR, value);
> > +}
> > +
> > +#else /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
> > +
> > +static inline u64 read_amr(void)
> > +{
> > + WARN(1, "%s called with MEMORY PROTECTION KEYS disabled\n", __func__);
> > + return -1;
> > +}
>
> Why do we need to have a version here if we are going to WARN(), why not
> let the compilation fail if called from outside of CONFIG_PPC64_MEMORY_PROTECTION_KEYS?
> Is that the intention?
I did not want to stop someone, a kernel module for example, from calling
these interfaces from outside the pkey domain.
Either way can be argued to be correct, I suppose.
RP
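
(Since AMR is also accessible from problem state as SPR 0xd, a user-space
pkey_set() can be sketched directly on top of it. The two-bits-per-key
layout, key 0 leftmost, follows the AMR description in this series; the
helper names are invented, and this of course only compiles for powerpc64
targets:)

	#include <stdint.h>

	#define AMR_SPR 0xd	/* user-mode alias of the AMR */

	static inline uint64_t read_amr_user(void)
	{
		uint64_t v;
		asm volatile("mfspr %0, %1" : "=r"(v) : "i"(AMR_SPR));
		return v;
	}

	static inline void write_amr_user(uint64_t v)
	{
		asm volatile("mtspr %0, %1" : : "i"(AMR_SPR), "r"(v) : "memory");
	}

	/* rights is the 2-bit AD|WD pair; key 0 occupies the two
	 * most-significant bits, so key N sits at shift 62 - 2*N */
	static void pkey_set_sketch(int pkey, uint64_t rights)
	{
		int shift = 62 - 2 * pkey;
		uint64_t amr = read_amr_user();

		amr = (amr & ~(3ULL << shift)) | (rights << shift);
		write_amr_user(amr);
	}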
On Tue, Jul 11, 2017 at 11:13:56AM -0700, Dave Hansen wrote:
> On 07/05/2017 02:22 PM, Ram Pai wrote:
> > +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> > +void arch_show_smap(struct seq_file *m, struct vm_area_struct *vma)
> > +{
> > + seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
> > +}
> > +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
>
> This seems like kinda silly unnecessary duplication. Could we just put
> this in the fs/proc/ code and #ifdef it on ARCH_HAS_PKEYS?
Well x86 predicates it based on availability of X86_FEATURE_OSPKE.
powerpc doesn't need that check or any similar check. So trying to
generalize the code does not save much IMHO.
Maybe have a separate inline function that does
seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
and is called from x86's and powerpc's arch_show_smap()?
At least it will keep the string format captured in
one single place.
Thoughts?
RP
On Thu, Jul 13, 2017 at 5:55 PM, Ram Pai <[email protected]> wrote:
> On Wed, Jul 12, 2017 at 03:26:01PM +1000, Balbir Singh wrote:
>> On Wed, 5 Jul 2017 14:21:52 -0700
>> Ram Pai <[email protected]> wrote:
>>
>> > Implements helper functions to read and write the key related
>> > registers; AMR, IAMR, UAMOR.
>> >
>> > AMR register tracks the read/write permissions of a key
>> > IAMR register tracks the execute permission of a key
>> > UAMOR register enables and disables a key
>> >
>> > Signed-off-by: Ram Pai <[email protected]>
>> > ---
>> > arch/powerpc/include/asm/book3s/64/pgtable.h | 60 ++++++++++++++++++++++++++
>> > 1 files changed, 60 insertions(+), 0 deletions(-)
>> >
>> > diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> > index 85bc987..435d6a7 100644
>> > --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
>> > +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> > @@ -428,6 +428,66 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
>> > pte_update(mm, addr, ptep, 0, _PAGE_PRIVILEGED, 1);
>> > }
>> >
>> > +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
>> > +
>> > +#include <asm/reg.h>
>> > +static inline u64 read_amr(void)
>> > +{
>> > + return mfspr(SPRN_AMR);
>> > +}
>> > +static inline void write_amr(u64 value)
>> > +{
>> > + mtspr(SPRN_AMR, value);
>> > +}
>> > +static inline u64 read_iamr(void)
>> > +{
>> > + return mfspr(SPRN_IAMR);
>> > +}
>> > +static inline void write_iamr(u64 value)
>> > +{
>> > + mtspr(SPRN_IAMR, value);
>> > +}
>> > +static inline u64 read_uamor(void)
>> > +{
>> > + return mfspr(SPRN_UAMOR);
>> > +}
>> > +static inline void write_uamor(u64 value)
>> > +{
>> > + mtspr(SPRN_UAMOR, value);
>> > +}
>> > +
>> > +#else /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
>> > +
>> > +static inline u64 read_amr(void)
>> > +{
>> > + WARN(1, "%s called with MEMORY PROTECTION KEYS disabled\n", __func__);
>> > + return -1;
>> > +}
>>
>> Why do we need to have a version here if we are going to WARN(), why not
>> let the compilation fail if called from outside of CONFIG_PPC64_MEMORY_PROTECTION_KEYS?
>> Is that the intention?
>
> I did not want to stop someone, a kernel module for example, from calling
> these interfaces from outside the pkey domain.
>
> Either way can be argued to be correct, I suppose.
Nope, build failures are better than run-time failures, otherwise the
kernel will spill its guts with warning after warning here.
Balbir Singh.
On 07/13/2017 01:03 AM, Ram Pai wrote:
> On Tue, Jul 11, 2017 at 11:13:56AM -0700, Dave Hansen wrote:
>> On 07/05/2017 02:22 PM, Ram Pai wrote:
>>> +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
>>> +void arch_show_smap(struct seq_file *m, struct vm_area_struct *vma)
>>> +{
>>> + seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
>>> +}
>>> +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
>>
>> This seems like kinda silly unnecessary duplication. Could we just put
>> this in the fs/proc/ code and #ifdef it on ARCH_HAS_PKEYS?
>
> Well x86 predicates it based on availability of X86_FEATURE_OSPKE.
>
> powerpc doesn't need that check or any similar check. So trying to
> generalize the code does not save much IMHO.
I know all your hardware doesn't support it. :)
So, for instance, if you are running on a new POWER9 with radix page
tables, you will just always output "ProtectionKey: 0" in every VMA,
regardless?
> Maybe have a separate inline function that does
> seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
> and is called from x86's and powerpc's arch_show_smap()?
> At least it will keep the string format captured in
> one single place.
Now that we have two architectures, is there a strong reason we can't
just have an arch_pkeys_enabled(), and stick the seq_printf() back in
generic code?
On Thu, Jul 13, 2017 at 07:07:48AM -0700, Dave Hansen wrote:
> On 07/13/2017 01:03 AM, Ram Pai wrote:
> > On Tue, Jul 11, 2017 at 11:13:56AM -0700, Dave Hansen wrote:
> >> On 07/05/2017 02:22 PM, Ram Pai wrote:
> >>> +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> >>> +void arch_show_smap(struct seq_file *m, struct vm_area_struct *vma)
> >>> +{
> >>> + seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
> >>> +}
> >>> +#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
> >>
> >> This seems like kinda silly unnecessary duplication. Could we just put
> >> this in the fs/proc/ code and #ifdef it on ARCH_HAS_PKEYS?
> >
> > Well x86 predicates it based on availability of X86_FEATURE_OSPKE.
> >
> > powerpc doesn't need that check or any similar check. So trying to
> > generalize the code does not save much IMHO.
>
> I know all your hardware doesn't support it. :)
Wow! You bring up a good point which I had not considered yet. I need some
runtime checks for RPT (radix page tables).
But regardless, my above statement is still partially true. x86
predicates it based on availability of X86_FEATURE_OSPKE, and powerpc
should predicate it based on HPT. So we have our own
customized checks. Hence a unified function won't suffice.
>
> So, for instance, if you are running on a new POWER9 with radix page
> tables, you will just always output "ProtectionKey: 0" in every VMA,
> regardless?
>
> > Maybe have a separate inline function that does
> > seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
> > and is called from x86's and powerpc's arch_show_smap()?
> > At least it will keep the string format captured in
> > one single place.
>
> Now that we have two architectures, is there a strong reason we can't
> just have an arch_pkeys_enabled(), and stick the seq_printf() back in
> generic code?
Correct, that looks like the right approach. I was trying to avoid
touching arch-neutral code, but this approach will force me
to do so. Will do.
--
Ram Pai
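
(The direction agreed on here would look roughly like the following in
generic code; arch_pkeys_enabled() is the proposed hook, sketched here
before it exists anywhere:)

	/* sketch: one printf site in fs/proc, gated per-arch at runtime */
	static void show_smap_pkey(struct seq_file *m, struct vm_area_struct *vma)
	{
	#ifdef CONFIG_ARCH_HAS_PKEYS
		if (arch_pkeys_enabled())	/* x86: OSPKE; powerpc: HPT */
			seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
	#endif
	}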
On Tue, Jul 11, 2017 at 11:23:29AM -0700, Dave Hansen wrote:
> On 07/05/2017 02:22 PM, Ram Pai wrote:
> > Add documentation updates that capture PowerPC specific changes.
> >
> > Signed-off-by: Ram Pai <[email protected]>
> > ---
> > Documentation/vm/protection-keys.txt | 85 ++++++++++++++++++++++++++--------
> > 1 files changed, 65 insertions(+), 20 deletions(-)
> >
> > diff --git a/Documentation/vm/protection-keys.txt b/Documentation/vm/protection-keys.txt
> > index b643045..d50b6ab 100644
> > --- a/Documentation/vm/protection-keys.txt
> > +++ b/Documentation/vm/protection-keys.txt
> > @@ -1,21 +1,46 @@
> > -Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature
> > -which will be found on future Intel CPUs.
> > +Memory Protection Keys for Userspace (PKU aka PKEYs) is a CPU feature found in
> > +new generation of intel CPUs and on PowerPC 7 and higher CPUs.
>
> Please try not to change the wording here. I really did mean to
> literally put "future Intel CPUs." Also, you broke my nice wrapping. :)
>
> I'm also thinking that this needs to be more generic. The ppc _CPU_
> feature is *NOT* for userspace-only, right?
It can be used for protecting the kernel as well, with the help of the
hypervisor. But the current implementation is towards "Protection keys
for Userspace" only; not yet "Protection keys for Kernel". Hence I will
not talk about it yet :).
>
> > Memory Protection Keys provides a mechanism for enforcing page-based
> > -protections, but without requiring modification of the page tables
> > -when an application changes protection domains. It works by
> > -dedicating 4 previously ignored bits in each page table entry to a
> > -"protection key", giving 16 possible keys.
> > -
> > -There is also a new user-accessible register (PKRU) with two separate
> > -bits (Access Disable and Write Disable) for each key. Being a CPU
> > -register, PKRU is inherently thread-local, potentially giving each
> > -thread a different set of protections from every other thread.
> > -
> > -There are two new instructions (RDPKRU/WRPKRU) for reading and writing
> > -to the new register. The feature is only available in 64-bit mode,
> > -even though there is theoretically space in the PAE PTEs. These
> > -permissions are enforced on data access only and have no effect on
> > +protections, but without requiring modification of the page tables when an
> > +application changes protection domains.
> > +
> > +
> > +On Intel:
> > +
> > + It works by dedicating 4 previously ignored bits in each page table
> > + entry to a "protection key", giving 16 possible keys.
> > +
> > + There is also a new user-accessible register (PKRU) with two separate
> > + bits (Access Disable and Write Disable) for each key. Being a CPU
> > + register, PKRU is inherently thread-local, potentially giving each
> > + thread a different set of protections from every other thread.
> > +
> > + There are two new instructions (RDPKRU/WRPKRU) for reading and writing
> > + to the new register. The feature is only available in 64-bit mode,
> > + even though there is theoretically space in the PAE PTEs. These
> > + permissions are enforced on data access only and have no effect on
> > + instruction fetches.
> > +
> > +
> > +On PowerPC:
> > +
> > + It works by dedicating 5 page table entry bits to a "protection key",
> > + giving 32 possible keys.
> > +
> > + There is a user-accessible register (AMR) with two separate bits;
> > + Access Disable and Write Disable, for each key. Being a CPU
> > + register, AMR is inherently thread-local, potentially giving each
> > + thread a different set of protections from every other thread. NOTE:
> > + Disabling read permission does not disable write and vice-versa.
> > +
> > + The feature is available on 64-bit HPTE mode only.
> > + 'mfspr mem, 0xd' reads the AMR register
> > + 'mtspr 0xd, mem' writes into the AMR register.
>
> The whole "being a CPU register" bits seem pretty common. Should it be
> in the leading paragraph that is shared?
>
> > +Permissions are enforced on data access only and have no effect on
> > instruction fetches.
>
> Shouldn't we mention the ppc support for execute-disable here too?
Yes. I have reformatted the structure to capture all that information. It will
be in my v6 patch version.
>
> Also, *does* this apply to ppc? You have it both in this common area
> and in the x86 portion.
>
> > =========================== Syscalls ===========================
> > @@ -28,9 +53,9 @@ There are 3 system calls which directly interact with pkeys:
> > unsigned long prot, int pkey);
> >
> > Before a pkey can be used, it must first be allocated with
> > -pkey_alloc(). An application calls the WRPKRU instruction
> > +pkey_alloc(). An application calls the WRPKRU/AMR instruction
> > directly in order to change access permissions to memory covered
> > -with a key. In this example WRPKRU is wrapped by a C function
> > +with a key. In this example WRPKRU/AMR is wrapped by a C function
> > called pkey_set().
> >
> > int real_prot = PROT_READ|PROT_WRITE;
> > @@ -52,11 +77,11 @@ is no longer in use:
> > munmap(ptr, PAGE_SIZE);
> > pkey_free(pkey);
> >
> > -(Note: pkey_set() is a wrapper for the RDPKRU and WRPKRU instructions.
> > +(Note: pkey_set() is a wrapper for the RDPKRU,WRPKRU or AMR instructions.
> > An example implementation can be found in
> > tools/testing/selftests/x86/protection_keys.c)
> >
> > -=========================== Behavior ===========================
> > +=========================== Behavior =================================
> >
> > The kernel attempts to make protection keys consistent with the
> > behavior of a plain mprotect(). For instance if you do this:
> > @@ -83,3 +108,23 @@ with a read():
> > The kernel will send a SIGSEGV in both cases, but si_code will be set
> > to SEGV_PKERR when violating protection keys versus SEGV_ACCERR when
> > the plain mprotect() permissions are violated.
> > +
> > +
> > +====================================================================
> > + Semantic differences
> > +
> > +The following semantic differences exist between x86 and power.
> > +
> > +a) powerpc allows creation of a key with execute-disabled. The following
> > + is allowed on powerpc.
> > + pkey = pkey_alloc(0, PKEY_DISABLE_WRITE | PKEY_DISABLE_ACCESS |
> > + PKEY_DISABLE_EXECUTE);
> > + x86 disallows PKEY_DISABLE_EXECUTE during key creation.
>
> It isn't that powerpc supports *creation* of the key. It doesn't
> support setting PKEY_DISABLE_EXECUTE, period, which implies that you
> can't set it at pkey_alloc(). That's a pretty important distinction, IMNHO.
OK. Will the following wording capture the subtle distinction?
+a) powerpc *also* allows creation of a key with execute-disabled.
+ The following is allowed on powerpc.
+ pkey = pkey_alloc(0, PKEY_DISABLE_EXECUTE);
+
+b) ....
>
> > +b) changing the permission bits of a key from a signal handler does not
> > + persist on x86. The PKRU specific fpregs entry needs to be modified
> > + for it to persist. On powerpc the permission bits of the key can be
> > + modified by programming the AMR register from the signal handler.
> > + The changes persists across signal boundaries.
>
> ^"changes persist", not "persists".
done.
--
Ram Pai
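
(Exercised from user space, the powerpc-only allocation in (a) would look
roughly like this. PKEY_DISABLE_EXECUTE is the new flag from this series,
and the pkey_alloc()/pkey_free() wrappers come from recent glibc, so treat
this as a sketch:)

	#define _GNU_SOURCE
	#include <sys/mman.h>
	#include <stdio.h>

	int main(void)
	{
		/* powerpc accepts execute-disable at allocation time;
		 * x86 rejects PKEY_DISABLE_EXECUTE here with EINVAL */
		int pkey = pkey_alloc(0, PKEY_DISABLE_ACCESS |
					 PKEY_DISABLE_WRITE |
					 PKEY_DISABLE_EXECUTE);
		if (pkey < 0) {
			perror("pkey_alloc");
			return 1;
		}
		/* ... mmap() a page, bind it with pkey_mprotect() ... */
		pkey_free(pkey);
		return 0;
	}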
On Thu, Jul 13, 2017 at 12:45:00AM -0700, Ram Pai wrote:
> On Wed, Jul 12, 2017 at 01:28:25PM +1000, Balbir Singh wrote:
> > On Wed, 5 Jul 2017 14:21:51 -0700
> > Ram Pai <[email protected]> wrote:
> >
> > > Initial plumbing to manage all the keys supported by the
> > > hardware.
> > >
> > > A total of 32 keys are supported on powerpc. However pkeys 0, 1
> > > and 31 are reserved. So effectively we have 29 pkeys.
> > >
> > > This patch keeps track of reserved keys, allocated keys
> > > and keys that are currently free.
> >
> > It looks like this patch will only work in guest mode?
> > Is that an assumption we've made? What happens if I use
> > keys when running in hypervisor mode?
>
> It works in supervisor mode, as a guest as well as on a bare-metal
> kernel. Whatever needs to be done in hypervisor mode
> is already there in power-kvm.
I realize I did not answer your question accurately...
"What happens if I use keys when running in hypervisor mode?"
It's not clear what happens. As far as I can tell the MMU does
not check for key violations when in hypervisor mode. So effectively,
I think, keys are ineffective when in hypervisor mode.
RP
On Fri, Jul 14, 2017 at 6:37 AM, Ram Pai <[email protected]> wrote:
> On Thu, Jul 13, 2017 at 12:45:00AM -0700, Ram Pai wrote:
>> On Wed, Jul 12, 2017 at 01:28:25PM +1000, Balbir Singh wrote:
>> > On Wed, 5 Jul 2017 14:21:51 -0700
>> > Ram Pai <[email protected]> wrote:
>> >
>> > > Initial plumbing to manage all the keys supported by the
>> > > hardware.
>> > >
>> > > A total of 32 keys are supported on powerpc. However pkeys 0, 1
>> > > and 31 are reserved. So effectively we have 29 pkeys.
>> > >
>> > > This patch keeps track of reserved keys, allocated keys
>> > > and keys that are currently free.
>> >
>> > It looks like this patch will only work in guest mode?
>> > Is that an assumption we've made? What happens if I use
>> > keys when running in hypervisor mode?
>>
>> It works in supervisor mode, as a guest as well as on a bare-metal
>> kernel. Whatever needs to be done in hypervisor mode
>> is already there in power-kvm.
>
> I realize I did not answer your question accurately...
> "What happens if I use keys when running in hypervisor mode?"
>
> It's not clear what happens. As far as I can tell the MMU does
> not check for key violations when in hypervisor mode. So effectively,
> I think, keys are ineffective when in hypervisor mode.
Keys are honored in hypervisor mode. I was just
stating that we need a mechanism used by the hypervisor
to partition the key space between guests and the hypervisor.
Balbir Singh
On Thu, Jul 13, 2017 at 07:49:05PM +1000, Balbir Singh wrote:
> On Thu, Jul 13, 2017 at 5:55 PM, Ram Pai <[email protected]> wrote:
> > On Wed, Jul 12, 2017 at 03:26:01PM +1000, Balbir Singh wrote:
> >> On Wed, 5 Jul 2017 14:21:52 -0700
> >> Ram Pai <[email protected]> wrote:
> >>
> >> > Implements helper functions to read and write the key related
> >> > registers; AMR, IAMR, UAMOR.
> >> >
> >> > AMR register tracks the read/write permissions of a key
> >> > IAMR register tracks the execute permission of a key
> >> > UAMOR register enables and disables a key
> >> >
> >> > Signed-off-by: Ram Pai <[email protected]>
> >> > ---
> >> > arch/powerpc/include/asm/book3s/64/pgtable.h | 60 ++++++++++++++++++++++++++
> >> > 1 files changed, 60 insertions(+), 0 deletions(-)
> >> >
> >> > diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> >> > index 85bc987..435d6a7 100644
> >> > --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> >> > +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> >> > @@ -428,6 +428,66 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
> >> > pte_update(mm, addr, ptep, 0, _PAGE_PRIVILEGED, 1);
> >> > }
> >> >
> >> > +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> >> > +
> >> > +#include <asm/reg.h>
> >> > +static inline u64 read_amr(void)
> >> > +{
> >> > + return mfspr(SPRN_AMR);
> >> > +}
> >> > +static inline void write_amr(u64 value)
> >> > +{
> >> > + mtspr(SPRN_AMR, value);
> >> > +}
> >> > +static inline u64 read_iamr(void)
> >> > +{
> >> > + return mfspr(SPRN_IAMR);
> >> > +}
> >> > +static inline void write_iamr(u64 value)
> >> > +{
> >> > + mtspr(SPRN_IAMR, value);
> >> > +}
> >> > +static inline u64 read_uamor(void)
> >> > +{
> >> > + return mfspr(SPRN_UAMOR);
> >> > +}
> >> > +static inline void write_uamor(u64 value)
> >> > +{
> >> > + mtspr(SPRN_UAMOR, value);
> >> > +}
> >> > +
> >> > +#else /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
> >> > +
> >> > +static inline u64 read_amr(void)
> >> > +{
> >> > + WARN(1, "%s called with MEMORY PROTECTION KEYS disabled\n", __func__);
> >> > + return -1;
> >> > +}
> >>
> >> Why do we need to have a version here if we are going to WARN(), why not
> >> let the compilation fail if called from outside of CONFIG_PPC64_MEMORY_PROTECTION_KEYS?
> >> Is that the intention?
> >
> > I did not want to stop someone, a kernel module for example, from calling
> > these interfaces from outside the pkey domain.
> >
> > Either way can be argued to be correct, I suppose.
>
> Nope, build failures are better than run-time failures, otherwise the
> kernel will spill its guts with warning after warning here.
>
Well, these are helper functions that can be called by anyone in
any situation. I would rather have them defined unconditionally, under
no ifdefs. No spewing of warnings anymore; the registers will
be read or written as told. It just makes sense that way.
RP
--
Ram Pai
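
(The unconditional form Ram settles on amounts to dropping the #else branch
entirely, e.g.:)

	/* sketch: helpers defined with no CONFIG ifdef and no WARN();
	 * the SPRs are read or written exactly as asked */
	static inline u64 read_amr(void)
	{
		return mfspr(SPRN_AMR);
	}
	static inline void write_amr(u64 value)
	{
		mtspr(SPRN_AMR, value);
	}
	/* ...likewise for IAMR and UAMOR */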