2022-01-26 22:22:39

by Janis Schoetterl-Glausch

Subject: [RFC PATCH 0/2] uaccess: Add mechanism for key checked access to user memory

Something like this patch series is required as part of KVM supporting
storage keys on s390.
See https://lore.kernel.org/kvm/[email protected]/

On s390 each physical page is associated with 4 access control bits.
On access, these are compared with an access key, which is either
provided by the instruction or taken from the CPU state.
Based on that comparison, the access either succeeds or is prevented.

KVM on s390 needs to be able to emulate this behavior, for example during
instruction emulation, when it makes accesses on behalf of the guest.
In order to do that, we need variants of __copy_from/to_user that pass
along an access key to the architecture-specific implementation of
__copy_from/to_user. That is the only difference; the variants make the
same might_fault(), instrument_copy_to_user(), etc. calls as the normal
functions and need to be kept in sync with them.
If these __copy_from/to_user_key functions were maintained
in architecture-specific code, they would be prone to going out of sync
with their non-key counterparts as code changes.
So, instead, add these variants to include/linux/uaccess.h.

Considerations:
* The key argument is an unsigned long, in order to make the functions
less specific to s390, which would only need a u8.
This could also be generalized further, e.g. by having the type be
defined by the architecture, with the default being a struct without
any members.
Also the functions could be renamed ..._opaque, ..._arg, or similar.
* Which functions do we provide _key variants for? Just defining
__copy_from/to_user_key would make it rather specific to our use
case.
* Should the ...copy_from/to_user_key functions be callable from common
code? The patch defines the functions to be functionally identical
to the normal functions if the architecture does not define
raw_copy_from/to_user_key, so that this would be possible; however, it
is not required for our use case.

For the minimal functionality we require, see the diff below.

bloat-o-meter reported a 0.03% kernel size increase.

Comments are much appreciated.

Janis Schoetterl-Glausch (2):
uaccess: Add mechanism for key checked access to user memory
s390/uaccess: Provide raw_copy_from/to_user_key

arch/s390/include/asm/uaccess.h | 22 ++++++-
arch/s390/lib/uaccess.c | 48 ++++++++------
include/linux/uaccess.h | 107 ++++++++++++++++++++++++++++++++
lib/usercopy.c | 33 ++++++++++
4 files changed, 188 insertions(+), 22 deletions(-)


diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index ac0394087f7d..b3c58b7605d6 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -114,6 +114,20 @@ __copy_from_user(void *to, const void __user *from, unsigned long n)
return raw_copy_from_user(to, from, n);
}

+#ifdef raw_copy_from_user_key
+static __always_inline __must_check unsigned long
+__copy_from_user_key(void *to, const void __user *from, unsigned long n,
+ unsigned long key)
+{
+ might_fault();
+ if (should_fail_usercopy())
+ return n;
+ instrument_copy_from_user(to, from, n);
+ check_object_size(to, n, false);
+ return raw_copy_from_user_key(to, from, n, key);
+}
+#endif /* raw_copy_from_user_key */
+
/**
* __copy_to_user_inatomic: - Copy a block of data into user space, with less checking.
* @to: Destination address, in user space.
@@ -148,6 +162,20 @@ __copy_to_user(void __user *to, const void *from, unsigned long n)
return raw_copy_to_user(to, from, n);
}

+#ifdef raw_copy_to_user_key
+static __always_inline __must_check unsigned long
+__copy_to_user_key(void __user *to, const void *from, unsigned long n,
+ unsigned long key)
+{
+ might_fault();
+ if (should_fail_usercopy())
+ return n;
+ instrument_copy_to_user(to, from, n);
+ check_object_size(from, n, true);
+ return raw_copy_to_user_key(to, from, n, key);
+}
+#endif /* raw_copy_to_user_key */
+
#ifdef INLINE_COPY_FROM_USER
static inline __must_check unsigned long
_copy_from_user(void *to, const void __user *from, unsigned long n)

base-commit: 0280e3c58f92b2fe0e8fbbdf8d386449168de4a8
--
2.32.0


2022-01-26 22:24:43

by Janis Schoetterl-Glausch

Subject: [RFC PATCH 1/2] uaccess: Add mechanism for key checked access to user memory

KVM on s390 needs a mechanism to do accesses to guest memory
that honors storage key protection.

On s390 each physical page is associated with 4 access control bits.
On access, these are compared with an access key, which is either
provided by the instruction or taken from the CPU state.
Based on that comparison, the access either succeeds or is prevented.

KVM on s390 needs to be able to emulate this behavior, for example during
instruction emulation, when it makes accesses on behalf of the guest.
Introduce ...copy_{from,to}_user_key functions that KVM can use to achieve
this. They differ from their non-key counterparts by taking an
additional key argument and by delegating to raw_copy_{from,to}_user_key
instead of raw_copy_{from,to}_user. Otherwise they are the same.
If they were maintained in architecture-specific code, they would
be prone to going out of sync with their non-key counterparts.
To prevent this, add them to include/linux/uaccess.h.
In order to allow use of ...copy_{from,to}_user_key from common code,
the key argument is ignored on architectures that do not provide
raw_copy_{from,to}_user_key, and the functions become functionally
identical to ...copy_{from,to}_user.

Signed-off-by: Janis Schoetterl-Glausch <[email protected]>
---
include/linux/uaccess.h | 107 ++++++++++++++++++++++++++++++++++++++++
lib/usercopy.c | 33 +++++++++++++
2 files changed, 140 insertions(+)

diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index ac0394087f7d..cba64cd23193 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -93,6 +93,11 @@ static inline void force_uaccess_end(mm_segment_t oldfs)
* Biarch ones should also provide raw_copy_in_user() - similar to the above,
* but both source and destination are __user pointers (affected by set_fs()
* as usual) and both source and destination can trigger faults.
+ *
+ * Architectures can also provide raw_copy_{from,to}_user_key variants that take
+ * an additional key argument that can be used for additional memory protection
+ * checks. If these variants are not provided, ...copy_{from,to}_user_key are
+ * identical to their non key counterparts.
*/

static __always_inline __must_check unsigned long
@@ -201,6 +206,108 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
return n;
}

+/*
+ * ...copy_{from,to}_user_key variants
+ * must be kept in sync with their non key counterparts.
+ */
+#ifndef raw_copy_from_user_key
+static __always_inline unsigned long __must_check
+raw_copy_from_user_key(void *to, const void __user *from, unsigned long n,
+ unsigned long key)
+{
+ return raw_copy_from_user(to, from, n);
+}
+#endif
+static __always_inline __must_check unsigned long
+__copy_from_user_key(void *to, const void __user *from, unsigned long n,
+ unsigned long key)
+{
+ might_fault();
+ if (should_fail_usercopy())
+ return n;
+ instrument_copy_from_user(to, from, n);
+ check_object_size(to, n, false);
+ return raw_copy_from_user_key(to, from, n, key);
+}
+
+#ifdef INLINE_COPY_FROM_USER_KEY
+static inline __must_check unsigned long
+_copy_from_user_key(void *to, const void __user *from, unsigned long n,
+ unsigned long key)
+{
+ unsigned long res = n;
+ might_fault();
+ if (!should_fail_usercopy() && likely(access_ok(from, n))) {
+ instrument_copy_from_user(to, from, n);
+ res = raw_copy_from_user_key(to, from, n, key);
+ }
+ if (unlikely(res))
+ memset(to + (n - res), 0, res);
+ return res;
+}
+#else
+extern __must_check unsigned long
+_copy_from_user_key(void *, const void __user *, unsigned long, unsigned long);
+#endif
+
+#ifndef raw_copy_to_user_key
+static __always_inline unsigned long __must_check
+raw_copy_to_user_key(void __user *to, const void *from, unsigned long n,
+ unsigned long key)
+{
+ return raw_copy_to_user(to, from, n);
+}
+#endif
+
+static __always_inline __must_check unsigned long
+__copy_to_user_key(void __user *to, const void *from, unsigned long n,
+ unsigned long key)
+{
+ might_fault();
+ if (should_fail_usercopy())
+ return n;
+ instrument_copy_to_user(to, from, n);
+ check_object_size(from, n, true);
+ return raw_copy_to_user_key(to, from, n, key);
+}
+
+#ifdef INLINE_COPY_TO_USER_KEY
+static inline __must_check unsigned long
+_copy_to_user_key(void __user *to, const void *from, unsigned long n,
+ unsigned long key)
+{
+ might_fault();
+ if (should_fail_usercopy())
+ return n;
+ if (access_ok(to, n)) {
+ instrument_copy_to_user(to, from, n);
+ n = raw_copy_to_user_key(to, from, n, key);
+ }
+ return n;
+}
+#else
+extern __must_check unsigned long
+_copy_to_user_key(void __user *, const void *, unsigned long, unsigned long);
+#endif
+
+static __always_inline unsigned long __must_check
+copy_from_user_key(void *to, const void __user *from, unsigned long n,
+ unsigned long key)
+{
+ if (likely(check_copy_size(to, n, false)))
+ n = _copy_from_user_key(to, from, n, key);
+ return n;
+}
+
+static __always_inline unsigned long __must_check
+copy_to_user_key(void __user *to, const void *from, unsigned long n,
+ unsigned long key)
+{
+ if (likely(check_copy_size(from, n, true)))
+ n = _copy_to_user_key(to, from, n, key);
+ return n;
+}
+
#ifndef copy_mc_to_kernel
/*
* Without arch opt-in this generic copy_mc_to_kernel() will not handle
diff --git a/lib/usercopy.c b/lib/usercopy.c
index 7413dd300516..c13394d0f306 100644
--- a/lib/usercopy.c
+++ b/lib/usercopy.c
@@ -37,6 +37,39 @@ unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
EXPORT_SYMBOL(_copy_to_user);
#endif

+#ifndef INLINE_COPY_FROM_USER_KEY
+unsigned long _copy_from_user_key(void *to, const void __user *from,
+ unsigned long n, unsigned long key)
+{
+ unsigned long res = n;
+ might_fault();
+ if (!should_fail_usercopy() && likely(access_ok(from, n))) {
+ instrument_copy_from_user(to, from, n);
+ res = raw_copy_from_user_key(to, from, n, key);
+ }
+ if (unlikely(res))
+ memset(to + (n - res), 0, res);
+ return res;
+}
+EXPORT_SYMBOL(_copy_from_user_key);
+#endif
+
+#ifndef INLINE_COPY_TO_USER_KEY
+unsigned long _copy_to_user_key(void __user *to, const void *from,
+ unsigned long n, unsigned long key)
+{
+ might_fault();
+ if (should_fail_usercopy())
+ return n;
+ if (likely(access_ok(to, n))) {
+ instrument_copy_to_user(to, from, n);
+ n = raw_copy_to_user_key(to, from, n, key);
+ }
+ return n;
+}
+EXPORT_SYMBOL(_copy_to_user_key);
+#endif
+
/**
* check_zeroed_user: check if a userspace buffer only contains zero bytes
* @from: Source address, in userspace.
--
2.32.0

2022-01-26 22:24:46

by Janis Schoetterl-Glausch

Subject: [RFC PATCH 2/2] s390/uaccess: Provide raw_copy_from/to_user_key

Make the user-access functions that perform storage-key checking
available, so they can be used by KVM for emulation.
Since the existing s390 uaccess implementation uses move instructions
that accept an additional access key, raw_copy_from/to_user_key can be
implemented by extending that implementation.

Signed-off-by: Janis Schoetterl-Glausch <[email protected]>
---
arch/s390/include/asm/uaccess.h | 22 +++++++++++++--
arch/s390/lib/uaccess.c | 48 +++++++++++++++++++--------------
2 files changed, 48 insertions(+), 22 deletions(-)

diff --git a/arch/s390/include/asm/uaccess.h b/arch/s390/include/asm/uaccess.h
index 147cb3534ce4..422066d7c5e2 100644
--- a/arch/s390/include/asm/uaccess.h
+++ b/arch/s390/include/asm/uaccess.h
@@ -33,15 +33,33 @@ static inline int __range_ok(unsigned long addr, unsigned long size)

#define access_ok(addr, size) __access_ok(addr, size)

+#define raw_copy_from_user_key raw_copy_from_user_key
unsigned long __must_check
-raw_copy_from_user(void *to, const void __user *from, unsigned long n);
+raw_copy_from_user_key(void *to, const void __user *from, unsigned long n,
+ unsigned long key);

+#define raw_copy_to_user_key raw_copy_to_user_key
unsigned long __must_check
-raw_copy_to_user(void __user *to, const void *from, unsigned long n);
+raw_copy_to_user_key(void __user *to, const void *from, unsigned long n,
+ unsigned long key);
+
+static __always_inline unsigned long __must_check
+raw_copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+ return raw_copy_from_user_key(to, from, n, 0);
+}
+
+static __always_inline unsigned long __must_check
+raw_copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+ return raw_copy_to_user_key(to, from, n, 0);
+}

#ifndef CONFIG_KASAN
#define INLINE_COPY_FROM_USER
#define INLINE_COPY_TO_USER
+#define INLINE_COPY_FROM_USER_KEY
+#define INLINE_COPY_TO_USER_KEY
#endif

int __put_user_bad(void) __attribute__((noreturn));
diff --git a/arch/s390/lib/uaccess.c b/arch/s390/lib/uaccess.c
index 8a5d21461889..689a5ab3121a 100644
--- a/arch/s390/lib/uaccess.c
+++ b/arch/s390/lib/uaccess.c
@@ -59,11 +59,13 @@ static inline int copy_with_mvcos(void)
#endif

static inline unsigned long copy_from_user_mvcos(void *x, const void __user *ptr,
- unsigned long size)
+ unsigned long size, unsigned long key)
{
unsigned long tmp1, tmp2;
union oac spec = {
+ .oac2.key = key,
.oac2.as = PSW_BITS_AS_SECONDARY,
+ .oac2.k = 1,
.oac2.a = 1,
};

@@ -94,19 +96,19 @@ static inline unsigned long copy_from_user_mvcos(void *x, const void __user *ptr
}

static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
- unsigned long size)
+ unsigned long size, unsigned long key)
{
unsigned long tmp1, tmp2;

tmp1 = -256UL;
asm volatile(
" sacf 0\n"
- "0: mvcp 0(%0,%2),0(%1),%3\n"
+ "0: mvcp 0(%0,%2),0(%1),%[key]\n"
"7: jz 5f\n"
"1: algr %0,%3\n"
" la %1,256(%1)\n"
" la %2,256(%2)\n"
- "2: mvcp 0(%0,%2),0(%1),%3\n"
+ "2: mvcp 0(%0,%2),0(%1),%[key]\n"
"8: jnz 1b\n"
" j 5f\n"
"3: la %4,255(%1)\n" /* %4 = ptr + 255 */
@@ -115,7 +117,7 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
" slgr %4,%1\n"
" clgr %0,%4\n" /* copy crosses next page boundary? */
" jnh 6f\n"
- "4: mvcp 0(%4,%2),0(%1),%3\n"
+ "4: mvcp 0(%4,%2),0(%1),%[key]\n"
"9: slgr %0,%4\n"
" j 6f\n"
"5: slgr %0,%0\n"
@@ -123,24 +125,28 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
EX_TABLE(0b,3b) EX_TABLE(2b,3b) EX_TABLE(4b,6b)
EX_TABLE(7b,3b) EX_TABLE(8b,3b) EX_TABLE(9b,6b)
: "+a" (size), "+a" (ptr), "+a" (x), "+a" (tmp1), "=a" (tmp2)
- : : "cc", "memory");
+ : [key] "d" (key << 4)
+ : "cc", "memory");
return size;
}

-unsigned long raw_copy_from_user(void *to, const void __user *from, unsigned long n)
+unsigned long raw_copy_from_user_key(void *to, const void __user *from,
+ unsigned long n, unsigned long key)
{
if (copy_with_mvcos())
- return copy_from_user_mvcos(to, from, n);
- return copy_from_user_mvcp(to, from, n);
+ return copy_from_user_mvcos(to, from, n, key);
+ return copy_from_user_mvcp(to, from, n, key);
}
-EXPORT_SYMBOL(raw_copy_from_user);
+EXPORT_SYMBOL(raw_copy_from_user_key);

static inline unsigned long copy_to_user_mvcos(void __user *ptr, const void *x,
- unsigned long size)
+ unsigned long size, unsigned long key)
{
unsigned long tmp1, tmp2;
union oac spec = {
+ .oac1.key = key,
.oac1.as = PSW_BITS_AS_SECONDARY,
+ .oac1.k = 1,
.oac1.a = 1,
};

@@ -171,19 +177,19 @@ static inline unsigned long copy_to_user_mvcos(void __user *ptr, const void *x,
}

static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
- unsigned long size)
+ unsigned long size, unsigned long key)
{
unsigned long tmp1, tmp2;

tmp1 = -256UL;
asm volatile(
" sacf 0\n"
- "0: mvcs 0(%0,%1),0(%2),%3\n"
+ "0: mvcs 0(%0,%1),0(%2),%[key]\n"
"7: jz 5f\n"
"1: algr %0,%3\n"
" la %1,256(%1)\n"
" la %2,256(%2)\n"
- "2: mvcs 0(%0,%1),0(%2),%3\n"
+ "2: mvcs 0(%0,%1),0(%2),%[key]\n"
"8: jnz 1b\n"
" j 5f\n"
"3: la %4,255(%1)\n" /* %4 = ptr + 255 */
@@ -192,7 +198,7 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
" slgr %4,%1\n"
" clgr %0,%4\n" /* copy crosses next page boundary? */
" jnh 6f\n"
- "4: mvcs 0(%4,%1),0(%2),%3\n"
+ "4: mvcs 0(%4,%1),0(%2),%[key]\n"
"9: slgr %0,%4\n"
" j 6f\n"
"5: slgr %0,%0\n"
@@ -200,17 +206,19 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
EX_TABLE(0b,3b) EX_TABLE(2b,3b) EX_TABLE(4b,6b)
EX_TABLE(7b,3b) EX_TABLE(8b,3b) EX_TABLE(9b,6b)
: "+a" (size), "+a" (ptr), "+a" (x), "+a" (tmp1), "=a" (tmp2)
- : : "cc", "memory");
+ : [key] "d" (key << 4)
+ : "cc", "memory");
return size;
}

-unsigned long raw_copy_to_user(void __user *to, const void *from, unsigned long n)
+unsigned long raw_copy_to_user_key(void __user *to, const void *from,
+ unsigned long n, unsigned long key)
{
if (copy_with_mvcos())
- return copy_to_user_mvcos(to, from, n);
- return copy_to_user_mvcs(to, from, n);
+ return copy_to_user_mvcos(to, from, n, key);
+ return copy_to_user_mvcs(to, from, n, key);
}
-EXPORT_SYMBOL(raw_copy_to_user);
+EXPORT_SYMBOL(raw_copy_to_user_key);

static inline unsigned long clear_user_mvcos(void __user *to, unsigned long size)
{
--
2.32.0

2022-02-01 20:39:47

by Christian Borntraeger

Subject: Re: [RFC PATCH 0/2] uaccess: Add mechanism for key checked access to user memory

On 26.01.22 at 18:33, Janis Schoetterl-Glausch wrote:
> Something like this patch series is required as part of KVM supporting
> storage keys on s390.
> See https://lore.kernel.org/kvm/[email protected]/

Just to give some more context. In theory we could confine the alternative
uaccess functions to s390x architecture code; after all, we only have one
place in KVM code where we call them. But this would very likely
result in future changes not being synced. It would very likely also
continue to work, but it might miss security and functionality enhancements.
And I think we want our KVM uaccess to also do the KASAN, error-injection and
similar checks. After all, there is a reason why all copy_*user functions were
merged and architectures now only provide raw_*_user functions.

>
> On s390 each physical page is associated with 4 access control bits.
> On access, these are compared with an access key, which is either
> provided by the instruction or taken from the CPU state.
> Based on that comparison, the access either succeeds or is prevented.
>
> KVM on s390 needs to be able to emulate this behavior, for example during
> instruction emulation, when it makes accesses on behalf of the guest.
> In order to do that, we need variants of __copy_from/to_user that pass
> along an access key to the architecture-specific implementation of
> __copy_from/to_user. That is the only difference; the variants make the
> same might_fault(), instrument_copy_to_user(), etc. calls as the normal
> functions and need to be kept in sync with them.
> If these __copy_from/to_user_key functions were maintained
> in architecture-specific code, they would be prone to going out of sync
> with their non-key counterparts as code changes.
> So, instead, add these variants to include/linux/uaccess.h.
>
> Considerations:
> * The key argument is an unsigned long, in order to make the functions
> less specific to s390, which would only need a u8.
> This could also be generalized further, e.g. by having the type be
> defined by the architecture, with the default being a struct without
> any members.
> Also the functions could be renamed ..._opaque, ..._arg, or similar.
> * Which functions do we provide _key variants for? Just defining
> __copy_from/to_user_key would make it rather specific to our use
> case.
> * Should the ...copy_from/to_user_key functions be callable from common
> code? The patch defines the functions to be functionally identical
> to the normal functions if the architecture does not define
> raw_copy_from/to_user_key, so that this would be possible; however, it
> is not required for our use case.
>
> For the minimal functionality we require, see the diff below.
>
> bloat-o-meter reported a 0.03% kernel size increase.
>
> Comments are much appreciated.
>
> Janis Schoetterl-Glausch (2):
> uaccess: Add mechanism for key checked access to user memory
> s390/uaccess: Provide raw_copy_from/to_user_key
>
> arch/s390/include/asm/uaccess.h | 22 ++++++-
> arch/s390/lib/uaccess.c | 48 ++++++++------
> include/linux/uaccess.h | 107 ++++++++++++++++++++++++++++++++
> lib/usercopy.c | 33 ++++++++++
> 4 files changed, 188 insertions(+), 22 deletions(-)
>
>
> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
> index ac0394087f7d..b3c58b7605d6 100644
> --- a/include/linux/uaccess.h
> +++ b/include/linux/uaccess.h
> @@ -114,6 +114,20 @@ __copy_from_user(void *to, const void __user *from, unsigned long n)
> return raw_copy_from_user(to, from, n);
> }
>
> +#ifdef raw_copy_from_user_key
> +static __always_inline __must_check unsigned long
> +__copy_from_user_key(void *to, const void __user *from, unsigned long n,
> + unsigned long key)
> +{
> + might_fault();
> + if (should_fail_usercopy())
> + return n;
> + instrument_copy_from_user(to, from, n);
> + check_object_size(to, n, false);
> + return raw_copy_from_user_key(to, from, n, key);
> +}
> +#endif /* raw_copy_from_user_key */
> +
> /**
> * __copy_to_user_inatomic: - Copy a block of data into user space, with less checking.
> * @to: Destination address, in user space.
> @@ -148,6 +162,20 @@ __copy_to_user(void __user *to, const void *from, unsigned long n)
> return raw_copy_to_user(to, from, n);
> }
>
> +#ifdef raw_copy_to_user_key
> +static __always_inline __must_check unsigned long
> +__copy_to_user_key(void __user *to, const void *from, unsigned long n,
> + unsigned long key)
> +{
> + might_fault();
> + if (should_fail_usercopy())
> + return n;
> + instrument_copy_to_user(to, from, n);
> + check_object_size(from, n, true);
> + return raw_copy_to_user_key(to, from, n, key);
> +}
> +#endif /* raw_copy_to_user_key */
> +
> #ifdef INLINE_COPY_FROM_USER
> static inline __must_check unsigned long
> _copy_from_user(void *to, const void __user *from, unsigned long n)
>
> base-commit: 0280e3c58f92b2fe0e8fbbdf8d386449168de4a8

2022-02-07 18:14:07

by Janis Schoetterl-Glausch

Subject: Re: [RFC PATCH 0/2] uaccess: Add mechanism for key checked access to user memory

> Considerations:
> * The key argument is an unsigned long, in order to make the functions
> less specific to s390, which would only need a u8.
> This could also be generalized further, e.g. by having the type be
> defined by the architecture, with the default being a struct without
> any members.
> Also the functions could be renamed ..._opaque, ..._arg, or similar.
> * Which functions do we provide _key variants for? Just defining
> __copy_from/to_user_key would make it rather specific to our use
> case.
> * Should the ...copy_from/to_user_key functions be callable from common
> code? The patch defines the functions to be functionally identical
> to the normal functions if the architecture does not define
> raw_copy_from/to_user_key, so that this would be possible; however, it
> is not required for our use case.
>
After thinking about it some more, this variant seems an attractive
compromise between the different dimensions.
It maximises extensibility by having the additional argument and its
semantics completely architecture-defined.
At the same time it keeps the changes to a minimum, which reduces the
maintenance cost of keeping the functions in sync.
It is also clear how other use cases can be supported when they arise.
Calling the functions from common code would be supported by defining
the opaque argument as an empty struct by default and defaulting to
raw_copy_from/to_user. If other variants of copy to/from user with an
additional argument are required, they can be added in the same manner
as is done here for __copy_from/to_user.
>
> Comments are much appreciated.

Janis Schoetterl-Glausch (2):
uaccess: Add mechanism for arch specific user access with argument
s390/uaccess: Provide raw_copy_from/to_user_opaque

arch/s390/include/asm/uaccess.h | 27 ++++++++++++++--
arch/s390/lib/uaccess.c | 56 ++++++++++++++++++++-------------
include/linux/uaccess.h | 28 +++++++++++++++++
3 files changed, 88 insertions(+), 23 deletions(-)

--
2.32.0