2021-03-10 17:50:30

by Christophe Leroy

Subject: [PATCH v2 00/15] powerpc: Cleanup of uaccess.h and adding asm goto for get_user()

This series cleans up uaccess.h and adds asm goto for get_user()

v2:
- Further clean ups
- asm goto for get_user()
- Move a few patches unrelated to put_user/get_user into another misc series.
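
To illustrate the direction (a rough, hypothetical caller-side sketch,
not code from this series; see patch 03 for the real conversion of
emulate_spe()): with the classic accessors every access produces an
error code that the caller accumulates and tests, whereas the
unsafe_get_user()/asm goto style branches straight to a caller-supplied
label on fault:

#include <linux/errno.h>
#include <linux/types.h>
#include <linux/uaccess.h>

/* old style: accumulate and test an error code */
static int read_pair_old(u32 __user *p, u32 *a, u32 *b)
{
	int ret = 0;

	ret |= __get_user(*a, p);
	ret |= __get_user(*b, p + 1);
	return ret ? -EFAULT : 0;
}

/* new style: a fault jumps straight to the label, so the fast path
 * carries no error code at all */
static int read_pair_new(u32 __user *p, u32 *a, u32 *b)
{
	if (!user_read_access_begin(p, 2 * sizeof(u32)))
		return -EFAULT;
	unsafe_get_user(*a, p, Efault);
	unsafe_get_user(*b, p + 1, Efault);
	user_read_access_end();
	return 0;

Efault:
	user_read_access_end();
	return -EFAULT;
}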

Christophe Leroy (15):
powerpc/uaccess: Remove __get_user_allowed() and unsafe_op_wrap()
powerpc/uaccess: Define ___get_user_instr() for ppc32
powerpc/align: Convert emulate_spe() to user_access_begin
powerpc/uaccess: Remove __get/put_user_inatomic()
powerpc/uaccess: Move get_user_instr helpers in asm/inst.h
powerpc/align: Don't use __get_user_instr() on kernel addresses
powerpc/uaccess: Call might_fault() inconditionaly
powerpc/uaccess: Remove __unsafe_put_user_goto()
powerpc/uaccess: Remove __chk_user_ptr() in __get/put_user
powerpc/uaccess: Remove calls to __get_user_bad() and __put_user_bad()
powerpc/uaccess: Split out __get_user_nocheck()
powerpc/uaccess: Rename __get/put_user_check/nocheck
powerpc/uaccess: Refactor get/put_user() and __get/put_user()
powerpc/uaccess: Introduce __get_user_size_goto()
powerpc/uaccess: Use asm goto for get_user when compiler supports it

arch/powerpc/include/asm/inst.h | 34 ++
arch/powerpc/include/asm/uaccess.h | 293 +++++++-----------
arch/powerpc/kernel/align.c | 67 ++--
.../kernel/hw_breakpoint_constraints.c | 2 +-
arch/powerpc/kernel/traps.c | 2 +-
5 files changed, 187 insertions(+), 211 deletions(-)

--
2.25.0


2021-03-10 17:51:09

by Christophe Leroy

Subject: [PATCH v2 01/15] powerpc/uaccess: Remove __get_user_allowed() and unsafe_op_wrap()

Those two macros have only one user which is unsafe_get_user().

Put everything in one place and remove them.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/uaccess.h | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 78e2a3990eab..8cbf3e3874f1 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -53,9 +53,6 @@ static inline bool __access_ok(unsigned long addr, unsigned long size)
#define __put_user(x, ptr) \
__put_user_nocheck((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr)))

-#define __get_user_allowed(x, ptr) \
- __get_user_nocheck((x), (ptr), sizeof(*(ptr)), false)
-
#define __get_user_inatomic(x, ptr) \
__get_user_nosleep((x), (ptr), sizeof(*(ptr)))
#define __put_user_inatomic(x, ptr) \
@@ -482,8 +479,11 @@ user_write_access_begin(const void __user *ptr, size_t len)
#define user_write_access_begin user_write_access_begin
#define user_write_access_end prevent_current_write_to_user

-#define unsafe_op_wrap(op, err) do { if (unlikely(op)) goto err; } while (0)
-#define unsafe_get_user(x, p, e) unsafe_op_wrap(__get_user_allowed(x, p), e)
+#define unsafe_get_user(x, p, e) do { \
+ if (unlikely(__get_user_nocheck((x), (p), sizeof(*(p)), false)))\
+ goto e; \
+} while (0)
+
#define unsafe_put_user(x, p, e) \
__unsafe_put_user_goto((__typeof__(*(p)))(x), (p), sizeof(*(p)), e)

--
2.25.0

2021-03-10 17:51:48

by Christophe Leroy

Subject: [PATCH v2 03/15] powerpc/align: Convert emulate_spe() to user_access_begin

This patch converts emulate_spe() to using user_access_being
logic.

Since commit 662bbcb2747c ("mm, sched: Allow uaccess in atomic with
pagefault_disable()"), might_fault() doesn't fire when called from
sections where pagefaults are disabled, which must be the case
when using _inatomic variants of __get_user and __put_user. So
the might_fault() in user_access_begin() is not a problem.
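
For reference, a simplified sketch of why might_fault() is harmless in
atomic sections (condensed from mm/memory.c of that era, details
elided):

void __might_fault(const char *file, int line)
{
	if (uaccess_kernel())
		return;
	/* bails out when pagefaults are disabled, so no warning fires
	 * from sections running under pagefault_disable() */
	if (pagefault_disabled())
		return;
	__might_sleep(file, line, 0);
}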

There was a verification of user_mode() together with the access_ok(),
but the function returns in case !user_mode() immediately after
the access_ok() verification, so removing that test has no effect.

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/kernel/align.c | 61 ++++++++++++++++++++-----------------
1 file changed, 33 insertions(+), 28 deletions(-)

diff --git a/arch/powerpc/kernel/align.c b/arch/powerpc/kernel/align.c
index c7797eb958c7..c4d7b445b459 100644
--- a/arch/powerpc/kernel/align.c
+++ b/arch/powerpc/kernel/align.c
@@ -107,7 +107,6 @@ static struct aligninfo spe_aligninfo[32] = {
static int emulate_spe(struct pt_regs *regs, unsigned int reg,
struct ppc_inst ppc_instr)
{
- int ret;
union {
u64 ll;
u32 w[2];
@@ -127,11 +126,6 @@ static int emulate_spe(struct pt_regs *regs, unsigned int reg,
nb = spe_aligninfo[instr].len;
flags = spe_aligninfo[instr].flags;

- /* Verify the address of the operand */
- if (unlikely(user_mode(regs) &&
- !access_ok(addr, nb)))
- return -EFAULT;
-
/* userland only */
if (unlikely(!user_mode(regs)))
return 0;
@@ -169,26 +163,27 @@ static int emulate_spe(struct pt_regs *regs, unsigned int reg,
}
} else {
temp.ll = data.ll = 0;
- ret = 0;
p = addr;

+ if (!user_read_access_begin(addr, nb))
+ return -EFAULT;
+
switch (nb) {
case 8:
- ret |= __get_user_inatomic(temp.v[0], p++);
- ret |= __get_user_inatomic(temp.v[1], p++);
- ret |= __get_user_inatomic(temp.v[2], p++);
- ret |= __get_user_inatomic(temp.v[3], p++);
+ unsafe_get_user(temp.v[0], p++, Efault_read);
+ unsafe_get_user(temp.v[1], p++, Efault_read);
+ unsafe_get_user(temp.v[2], p++, Efault_read);
+ unsafe_get_user(temp.v[3], p++, Efault_read);
fallthrough;
case 4:
- ret |= __get_user_inatomic(temp.v[4], p++);
- ret |= __get_user_inatomic(temp.v[5], p++);
+ unsafe_get_user(temp.v[4], p++, Efault_read);
+ unsafe_get_user(temp.v[5], p++, Efault_read);
fallthrough;
case 2:
- ret |= __get_user_inatomic(temp.v[6], p++);
- ret |= __get_user_inatomic(temp.v[7], p++);
- if (unlikely(ret))
- return -EFAULT;
+ unsafe_get_user(temp.v[6], p++, Efault_read);
+ unsafe_get_user(temp.v[7], p++, Efault_read);
}
+ user_read_access_end();

switch (instr) {
case EVLDD:
@@ -255,31 +250,41 @@ static int emulate_spe(struct pt_regs *regs, unsigned int reg,

/* Store result to memory or update registers */
if (flags & ST) {
- ret = 0;
p = addr;
+
+ if (!user_read_access_begin(addr, nb))
+ return -EFAULT;
+
switch (nb) {
case 8:
- ret |= __put_user_inatomic(data.v[0], p++);
- ret |= __put_user_inatomic(data.v[1], p++);
- ret |= __put_user_inatomic(data.v[2], p++);
- ret |= __put_user_inatomic(data.v[3], p++);
+ unsafe_put_user(data.v[0], p++, Efault_write);
+ unsafe_put_user(data.v[1], p++, Efault_write);
+ unsafe_put_user(data.v[2], p++, Efault_write);
+ unsafe_put_user(data.v[3], p++, Efault_write);
fallthrough;
case 4:
- ret |= __put_user_inatomic(data.v[4], p++);
- ret |= __put_user_inatomic(data.v[5], p++);
+ unsafe_put_user(data.v[4], p++, Efault_write);
+ unsafe_put_user(data.v[5], p++, Efault_write);
fallthrough;
case 2:
- ret |= __put_user_inatomic(data.v[6], p++);
- ret |= __put_user_inatomic(data.v[7], p++);
+ unsafe_put_user(data.v[6], p++, Efault_write);
+ unsafe_put_user(data.v[7], p++, Efault_write);
}
- if (unlikely(ret))
- return -EFAULT;
+ user_write_access_end();
} else {
*evr = data.w[0];
regs->gpr[reg] = data.w[1];
}

return 1;
+
+Efault_read:
+ user_read_access_end();
+ return -EFAULT;
+
+Efault_write:
+ user_write_access_end();
+ return -EFAULT;
}
#endif /* CONFIG_SPE */

--
2.25.0

2021-03-10 17:52:10

by Christophe Leroy

Subject: [PATCH v2 08/15] powerpc/uaccess: Remove __unsafe_put_user_goto()

__unsafe_put_user_goto() is just an intermediate layer to
__put_user_size_goto() without added value other than doing
the __user pointer type checking.

Do the __user pointer type checking in __put_user_size_goto()
and remove __unsafe_put_user_goto().

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/uaccess.h | 20 +++++++-------------
1 file changed, 7 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index c4bbc64758a0..a6d3563cf3ee 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -130,23 +130,17 @@ __pu_failed: \

#define __put_user_size_goto(x, ptr, size, label) \
do { \
+ __typeof__(*(ptr)) __user *__pus_addr = (ptr); \
+ \
switch (size) { \
- case 1: __put_user_asm_goto(x, ptr, label, "stb"); break; \
- case 2: __put_user_asm_goto(x, ptr, label, "sth"); break; \
- case 4: __put_user_asm_goto(x, ptr, label, "stw"); break; \
- case 8: __put_user_asm2_goto(x, ptr, label); break; \
+ case 1: __put_user_asm_goto(x, __pus_addr, label, "stb"); break; \
+ case 2: __put_user_asm_goto(x, __pus_addr, label, "sth"); break; \
+ case 4: __put_user_asm_goto(x, __pus_addr, label, "stw"); break; \
+ case 8: __put_user_asm2_goto(x, __pus_addr, label); break; \
default: __put_user_bad(); \
} \
} while (0)

-#define __unsafe_put_user_goto(x, ptr, size, label) \
-do { \
- __typeof__(*(ptr)) __user *__pu_addr = (ptr); \
- __chk_user_ptr(ptr); \
- __put_user_size_goto((x), __pu_addr, (size), label); \
-} while (0)
-
-
extern long __get_user_bad(void);

/*
@@ -405,7 +399,7 @@ user_write_access_begin(const void __user *ptr, size_t len)
} while (0)

#define unsafe_put_user(x, p, e) \
- __unsafe_put_user_goto((__typeof__(*(p)))(x), (p), sizeof(*(p)), e)
+ __put_user_size_goto((__typeof__(*(p)))(x), (p), sizeof(*(p)), e)

#define unsafe_copy_to_user(d, s, l, e) \
do { \
--
2.25.0

2021-03-10 17:52:11

by Christophe Leroy

Subject: [PATCH v2 09/15] powerpc/uaccess: Remove __chk_user_ptr() in __get/put_user

Commit d02f6b7dab82 ("powerpc/uaccess: Evaluate macro arguments once,
before user access is allowed") changed the __chk_user_ptr()
argument from the passed ptr pointer to the locally
declared __gu_addr. But __gu_addr is locally declared as __user,
so the check is pointless.

During a kernel build __chk_user_ptr() expands to nothing and is only
evaluated during sparse checks, so it would have been harmless to
leave the original pointer check there.

Nevertheless, this check is indeed redundant with the assignment
above, which casts the ptr pointer to the local __user __gu_addr.
In case of mismatch, sparse will detect it there, so
__chk_user_ptr() is not needed anywhere other than in access_ok().
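
For illustration (a hypothetical stand-alone snippet, not part of this
patch), sparse already reports the mismatch at such an assignment,
exactly like the __typeof__(*(ptr)) __user *__gu_addr = (ptr); line:

static void demo(void)
{
	int kernel_var = 0;
	/* sparse: incorrect type in initializer (different address spaces) */
	int __user *uptr = &kernel_var;

	(void)uptr;
}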

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/uaccess.h | 3 ---
1 file changed, 3 deletions(-)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index a6d3563cf3ee..a9f2639ca3a8 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -78,7 +78,6 @@ __pu_failed: \
__typeof__(size) __pu_size = (size); \
\
might_fault(); \
- __chk_user_ptr(__pu_addr); \
__put_user_size(__pu_val, __pu_addr, __pu_size, __pu_err); \
\
__pu_err; \
@@ -197,7 +196,6 @@ extern long __get_user_bad(void);
#define __get_user_size_allowed(x, ptr, size, retval) \
do { \
retval = 0; \
- __chk_user_ptr(ptr); \
if (size > sizeof(x)) \
(x) = __get_user_bad(); \
switch (size) { \
@@ -230,7 +228,6 @@ do { \
__typeof__(*(ptr)) __user *__gu_addr = (ptr); \
__typeof__(size) __gu_size = (size); \
\
- __chk_user_ptr(__gu_addr); \
if (do_allow) { \
might_fault(); \
__get_user_size(__gu_val, __gu_addr, __gu_size, __gu_err); \
--
2.25.0

2021-03-10 17:52:12

by Christophe Leroy

Subject: [PATCH v2 13/15] powerpc/uaccess: Refactor get/put_user() and __get/put_user()

Make get_user() do the access_ok() check then call __get_user().
Make put_user() do the access_ok() check then call __put_user().

Then embed __get_user_size() and __put_user_size() in
__get_user() and __put_user().

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/uaccess.h | 66 +++++++++++-------------------
1 file changed, 23 insertions(+), 43 deletions(-)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 616a3a7928c2..671c083f2f2f 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -43,21 +43,6 @@ static inline bool __access_ok(unsigned long addr, unsigned long size)
* exception handling means that it's no longer "just"...)
*
*/
-#define __put_user_size(x, ptr, size, retval) \
-do { \
- __label__ __pu_failed; \
- \
- retval = 0; \
- allow_write_to_user(ptr, size); \
- __put_user_size_goto(x, ptr, size, __pu_failed); \
- prevent_write_to_user(ptr, size); \
- break; \
- \
-__pu_failed: \
- retval = -EFAULT; \
- prevent_write_to_user(ptr, size); \
-} while (0)
-
#define __put_user(x, ptr) \
({ \
long __pu_err; \
@@ -66,23 +51,29 @@ __pu_failed: \
__typeof__(sizeof(*(ptr))) __pu_size = sizeof(*(ptr)); \
\
might_fault(); \
- __put_user_size(__pu_val, __pu_addr, __pu_size, __pu_err); \
+ do { \
+ __label__ __pu_failed; \
+ \
+ allow_write_to_user(__pu_addr, __pu_size); \
+ __put_user_size_goto(__pu_val, __pu_addr, __pu_size, __pu_failed); \
+ prevent_write_to_user(__pu_addr, __pu_size); \
+ __pu_err = 0; \
+ break; \
+ \
+__pu_failed: \
+ prevent_write_to_user(__pu_addr, __pu_size); \
+ __pu_err = -EFAULT; \
+ } while (0); \
\
__pu_err; \
})

#define put_user(x, ptr) \
({ \
- long __pu_err = -EFAULT; \
- __typeof__(*(ptr)) __user *__pu_addr = (ptr); \
- __typeof__(*(ptr)) __pu_val = (__typeof__(*(ptr)))(x); \
- __typeof__(sizeof(*(ptr))) __pu_size = sizeof(*(ptr)); \
+ __typeof__(*(ptr)) __user *_pu_addr = (ptr); \
\
- might_fault(); \
- if (access_ok(__pu_addr, __pu_size)) \
- __put_user_size(__pu_val, __pu_addr, __pu_size, __pu_err); \
- \
- __pu_err; \
+ access_ok(_pu_addr, sizeof(*(ptr))) ? \
+ __put_user(x, _pu_addr) : -EFAULT; \
})

/*
@@ -192,13 +183,6 @@ do { \
} \
} while (0)

-#define __get_user_size(x, ptr, size, retval) \
-do { \
- allow_read_from_user(ptr, size); \
- __get_user_size_allowed(x, ptr, size, retval); \
- prevent_read_from_user(ptr, size); \
-} while (0)
-
/*
* This is a type: either unsigned long, if the argument fits into
* that type, or otherwise unsigned long long.
@@ -214,7 +198,9 @@ do { \
__typeof__(sizeof(*(ptr))) __gu_size = sizeof(*(ptr)); \
\
might_fault(); \
- __get_user_size(__gu_val, __gu_addr, __gu_size, __gu_err); \
+ allow_read_from_user(__gu_addr, __gu_size); \
+ __get_user_size_allowed(__gu_val, __gu_addr, __gu_size, __gu_err); \
+ prevent_read_from_user(__gu_addr, __gu_size); \
(x) = (__typeof__(*(ptr)))__gu_val; \
\
__gu_err; \
@@ -222,17 +208,11 @@ do { \

#define get_user(x, ptr) \
({ \
- long __gu_err = -EFAULT; \
- __long_type(*(ptr)) __gu_val = 0; \
- __typeof__(*(ptr)) __user *__gu_addr = (ptr); \
- __typeof__(sizeof(*(ptr))) __gu_size = sizeof(*(ptr)); \
- \
- might_fault(); \
- if (access_ok(__gu_addr, __gu_size)) \
- __get_user_size(__gu_val, __gu_addr, __gu_size, __gu_err); \
- (x) = (__force __typeof__(*(ptr)))__gu_val; \
+ __typeof__(*(ptr)) __user *_gu_addr = (ptr); \
\
- __gu_err; \
+ access_ok(_gu_addr, sizeof(*(ptr))) ? \
+ __get_user(x, _gu_addr) : \
+ ((x) = (__force __typeof__(*(ptr)))0, -EFAULT); \
})

/* more complex routines */
--
2.25.0

2021-03-10 17:52:17

by Christophe Leroy

Subject: [PATCH v2 14/15] powerpc/uaccess: Introduce __get_user_size_goto()

We have two places doing a goto based on the result
of __get_user_size_allowed().

Refactor that into __get_user_size_goto().

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/uaccess.h | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 671c083f2f2f..e3e53e88cb26 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -183,6 +183,15 @@ do { \
} \
} while (0)

+#define __get_user_size_goto(x, ptr, size, label) \
+do { \
+ long __gus_retval; \
+ \
+ __get_user_size_allowed(x, ptr, size, __gus_retval); \
+ if (__gus_retval) \
+ goto label; \
+} while (0)
+
/*
* This is a type: either unsigned long, if the argument fits into
* that type, or otherwise unsigned long long.
@@ -352,13 +361,10 @@ user_write_access_begin(const void __user *ptr, size_t len)
#define user_write_access_end prevent_current_write_to_user

#define unsafe_get_user(x, p, e) do { \
- long __gu_err; \
__long_type(*(p)) __gu_val; \
__typeof__(*(p)) __user *__gu_addr = (p); \
\
- __get_user_size_allowed(__gu_val, __gu_addr, sizeof(*(p)), __gu_err); \
- if (__gu_err) \
- goto e; \
+ __get_user_size_goto(__gu_val, __gu_addr, sizeof(*(p)), e); \
(x) = (__typeof__(*(p)))__gu_val; \
} while (0)

@@ -389,14 +395,8 @@ do { \
#define HAVE_GET_KERNEL_NOFAULT

#define __get_kernel_nofault(dst, src, type, err_label) \
-do { \
- int __kr_err; \
- \
- __get_user_size_allowed(*((type *)(dst)), (__force type __user *)(src),\
- sizeof(type), __kr_err); \
- if (unlikely(__kr_err)) \
- goto err_label; \
-} while (0)
+ __get_user_size_goto(*((type *)(dst)), \
+ (__force type __user *)(src), sizeof(type), err_label)

#define __put_kernel_nofault(dst, src, type, err_label) \
__put_user_size_goto(*((type *)(src)), \
--
2.25.0

2021-03-10 17:52:25

by Christophe Leroy

Subject: [PATCH v2 15/15] powerpc/uaccess: Use asm goto for get_user when compiler supports it

clang 11 and future GCC versions support asm goto with outputs.

Use it to implement get_user in order to get better generated code.

Note that clang requires x to be set in the default branch of
__get_user_size_goto(), otherwise it complains about x not being
initialised :puzzled:
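
That is the usual clang -Wsometimes-uninitialized behaviour; as a
hypothetical stand-alone example, unrelated to the kernel macro:

int pick(int size)
{
	int x;

	switch (size) {
	case 1: x = 1; break;
	case 2: x = 2; break;
	default: break;	/* without "x = 0;" here, clang warns that x may
			   be used uninitialized below */
	}
	return x;
}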

Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/uaccess.h | 55 ++++++++++++++++++++++++++++++
1 file changed, 55 insertions(+)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index e3e53e88cb26..960ab5c04b11 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -136,6 +136,59 @@ do { \
: "=r" (err) \
: "b" (uaddr), "b" (kaddr), "i" (-EFAULT), "0" (err))

+#ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
+
+#define __get_user_asm_goto(x, addr, label, op) \
+ asm_volatile_goto( \
+ "1: "op"%U1%X1 %0, %1 # get_user\n" \
+ EX_TABLE(1b, %l2) \
+ : "=r" (x) \
+ : "m"UPD_CONSTR (*addr) \
+ : \
+ : label)
+
+#ifdef __powerpc64__
+#define __get_user_asm2_goto(x, addr, label) \
+ __get_user_asm_goto(x, addr, label, "ld")
+#else /* __powerpc64__ */
+#define __get_user_asm2_goto(x, addr, label) \
+ asm_volatile_goto( \
+ "1: lwz%X1 %0, %1\n" \
+ "2: lwz%X1 %L0, %L1\n" \
+ EX_TABLE(1b, %l2) \
+ EX_TABLE(2b, %l2) \
+ : "=r" (x) \
+ : "m" (*addr) \
+ : \
+ : label)
+#endif /* __powerpc64__ */
+
+#define __get_user_size_goto(x, ptr, size, label) \
+do { \
+ BUILD_BUG_ON(size > sizeof(x)); \
+ switch (size) { \
+ case 1: __get_user_asm_goto(x, (u8 __user *)ptr, label, "lbz"); break; \
+ case 2: __get_user_asm_goto(x, (u16 __user *)ptr, label, "lhz"); break; \
+ case 4: __get_user_asm_goto(x, (u32 __user *)ptr, label, "lwz"); break; \
+ case 8: __get_user_asm2_goto(x, (u64 __user *)ptr, label); break; \
+ default: x = 0; BUILD_BUG(); \
+ } \
+} while (0)
+
+#define __get_user_size_allowed(x, ptr, size, retval) \
+do { \
+ __label__ __gus_failed; \
+ \
+ __get_user_size_goto(x, ptr, size, __gus_failed); \
+ retval = 0; \
+ break; \
+__gus_failed: \
+ x = 0; \
+ retval = -EFAULT; \
+} while (0)
+
+#else /* CONFIG_CC_HAS_ASM_GOTO_OUTPUT */
+
#define __get_user_asm(x, addr, err, op) \
__asm__ __volatile__( \
"1: "op"%U2%X2 %1, %2 # get_user\n" \
@@ -192,6 +245,8 @@ do { \
goto label; \
} while (0)

+#endif /* CONFIG_CC_HAS_ASM_GOTO_OUTPUT */
+
/*
* This is a type: either unsigned long, if the argument fits into
* that type, or otherwise unsigned long long.
--
2.25.0

2021-03-10 21:49:40

by Daniel Axtens

Subject: Re: [PATCH v2 01/15] powerpc/uaccess: Remove __get_user_allowed() and unsafe_op_wrap()

Hi Christophe,

Thanks for the answers to my questions on v1.

This all looks good to me.

Reviewed-by: Daniel Axtens <[email protected]>

Kind regards,
Daniel

> Those two macros have only one user which is unsafe_get_user().
>
> Put everything in one place and remove them.
>
> Signed-off-by: Christophe Leroy <[email protected]>
> ---
> arch/powerpc/include/asm/uaccess.h | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
> index 78e2a3990eab..8cbf3e3874f1 100644
> --- a/arch/powerpc/include/asm/uaccess.h
> +++ b/arch/powerpc/include/asm/uaccess.h
> @@ -53,9 +53,6 @@ static inline bool __access_ok(unsigned long addr, unsigned long size)
> #define __put_user(x, ptr) \
> __put_user_nocheck((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr)))
>
> -#define __get_user_allowed(x, ptr) \
> - __get_user_nocheck((x), (ptr), sizeof(*(ptr)), false)
> -
> #define __get_user_inatomic(x, ptr) \
> __get_user_nosleep((x), (ptr), sizeof(*(ptr)))
> #define __put_user_inatomic(x, ptr) \
> @@ -482,8 +479,11 @@ user_write_access_begin(const void __user *ptr, size_t len)
> #define user_write_access_begin user_write_access_begin
> #define user_write_access_end prevent_current_write_to_user
>
> -#define unsafe_op_wrap(op, err) do { if (unlikely(op)) goto err; } while (0)
> -#define unsafe_get_user(x, p, e) unsafe_op_wrap(__get_user_allowed(x, p), e)
> +#define unsafe_get_user(x, p, e) do { \
> + if (unlikely(__get_user_nocheck((x), (p), sizeof(*(p)), false)))\
> + goto e; \
> +} while (0)
> +
> #define unsafe_put_user(x, p, e) \
> __unsafe_put_user_goto((__typeof__(*(p)))(x), (p), sizeof(*(p)), e)
>
> --
> 2.25.0

2021-03-10 22:33:33

by Daniel Axtens

Subject: Re: [PATCH v2 03/15] powerpc/align: Convert emulate_spe() to user_access_begin

Hi Christophe,

> This patch converts emulate_spe() to using user_access_being
s/being/begin/ :)
> logic.
>
> Since commit 662bbcb2747c ("mm, sched: Allow uaccess in atomic with
> pagefault_disable()"), might_fault() doesn't fire when called from
> sections where pagefaults are disabled, which must be the case
> when using _inatomic variants of __get_user and __put_user. So
> the might_fault() in user_access_begin() is not a problem.
(likewise with the might_fault() in __get_user_nocheck, called from
unsafe_get_user())

> There was a verification of user_mode() together with the access_ok(),
> but the function returns in case !user_mode() immediately after
> the access_ok() verification, so removing that test has no effect.

I agree that removing the test is safe.

> - /* Verify the address of the operand */
> - if (unlikely(user_mode(regs) &&
> - !access_ok(addr, nb)))
> - return -EFAULT;
> -

I found the reasoning a bit confusing: I think it's safe to remove
because:

- we have the usermode check immediately following it:

> /* userland only */
> if (unlikely(!user_mode(regs)))
> return 0;

- and then we have the access_ok() check as part of
user_read_access_begin later on in the function:

> + if (!user_read_access_begin(addr, nb))
> + return -EFAULT;
> +


> switch (nb) {
> case 8:
> - ret |= __get_user_inatomic(temp.v[0], p++);
> - ret |= __get_user_inatomic(temp.v[1], p++);
> - ret |= __get_user_inatomic(temp.v[2], p++);
> - ret |= __get_user_inatomic(temp.v[3], p++);
> + unsafe_get_user(temp.v[0], p++, Efault_read);
> + unsafe_get_user(temp.v[1], p++, Efault_read);
> + unsafe_get_user(temp.v[2], p++, Efault_read);
> + unsafe_get_user(temp.v[3], p++, Efault_read);

This will bail early rather than trying every possible read. I think
that's OK. I can't think of a situation where we could fail to read the
first byte and then successfully read later bytes, for example. Also I
can't think of a sane way userspace could depend on that behaviour. So I
agree with this change (and the change to the write path).

> fallthrough;
> case 4:
> - ret |= __get_user_inatomic(temp.v[4], p++);
> - ret |= __get_user_inatomic(temp.v[5], p++);
> + unsafe_get_user(temp.v[4], p++, Efault_read);
> + unsafe_get_user(temp.v[5], p++, Efault_read);
> fallthrough;
> case 2:
> - ret |= __get_user_inatomic(temp.v[6], p++);
> - ret |= __get_user_inatomic(temp.v[7], p++);
> - if (unlikely(ret))
> - return -EFAULT;
> + unsafe_get_user(temp.v[6], p++, Efault_read);
> + unsafe_get_user(temp.v[7], p++, Efault_read);
> }
> + user_read_access_end();
>
> switch (instr) {
> case EVLDD:
> @@ -255,31 +250,41 @@ static int emulate_spe(struct pt_regs *regs, unsigned int reg,
>
> /* Store result to memory or update registers */
> if (flags & ST) {
> - ret = 0;
> p = addr;
> +
> + if (!user_read_access_begin(addr, nb))

That should be a user_write_access_begin.

> + return -EFAULT;
> +


>
> return 1;
> +
> +Efault_read:

Checkpatch complains that this is CamelCase, which seems like a
checkpatch problem. Efault_{read,write} seem like good labels to me.

(You don't need to change anything, I just like to check the checkpatch
results when reviewing a patch.)

> + user_read_access_end();
> + return -EFAULT;
> +
> +Efault_write:
> + user_write_access_end();
> + return -EFAULT;
> }
> #endif /* CONFIG_SPE */
>

With the user_write_access_begin change:
Reviewed-by: Daniel Axtens <[email protected]>

Kind regards,
Daniel

2021-03-11 05:47:45

by Christophe Leroy

Subject: Re: [PATCH v2 03/15] powerpc/align: Convert emulate_spe() to user_access_begin



On 10/03/2021 at 23:31, Daniel Axtens wrote:
> Hi Christophe,
>
>> This patch converts emulate_spe() to using user_access_being
> s/being/begin/ :)
>> logic.
>>
>> Since commit 662bbcb2747c ("mm, sched: Allow uaccess in atomic with
>> pagefault_disable()"), might_fault() doesn't fire when called from
>> sections where pagefaults are disabled, which must be the case
>> when using _inatomic variants of __get_user and __put_user. So
>> the might_fault() in user_access_begin() is not a problem.
> (likewise with the might_fault() in __get_user_nocheck, called from
> unsafe_get_user())

unsafe_get_user() calls __get_user_nocheck() with do_allow = false, so there is no might_fault() there.
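
For reference, the macro at this point in the series has roughly this
shape (simplified; see also the context lines in patch 09):

#define __get_user_nocheck(x, ptr, size, do_allow)			\
({									\
	long __gu_err;							\
	__long_type(*(ptr)) __gu_val;					\
	__typeof__(*(ptr)) __user *__gu_addr = (ptr);			\
	__typeof__(size) __gu_size = (size);				\
									\
	if (do_allow) {							\
		might_fault();						\
		__get_user_size(__gu_val, __gu_addr, __gu_size, __gu_err); \
	} else {							\
		__get_user_size_allowed(__gu_val, __gu_addr, __gu_size, __gu_err); \
	}								\
	(x) = (__typeof__(*(ptr)))__gu_val;				\
									\
	__gu_err;							\
})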

>
>> There was a verification of user_mode() together with the access_ok(),
>> but the function returns in case !user_mode() immediately after
>> the access_ok() verification, so removing that test has no effect.
>
> I agree that removing the test is safe.
>
>> - /* Verify the address of the operand */
>> - if (unlikely(user_mode(regs) &&
>> - !access_ok(addr, nb)))
>> - return -EFAULT;
>> -
>
> I found the reasoning a bit confusing: I think it's safe to remove
> because:

Ok, I'll see if I can rephrase it.

>
> - we have the usermode check immediately following it:
>
>> /* userland only */
>> if (unlikely(!user_mode(regs)))
>> return 0;
>
> - and then we have the access_ok() check as part of
> user_read_access_begin later on in the function:
>
>> + if (!user_read_access_begin(addr, nb))
>> + return -EFAULT;
>> +
>
>
>> switch (nb) {
>> case 8:
>> - ret |= __get_user_inatomic(temp.v[0], p++);
>> - ret |= __get_user_inatomic(temp.v[1], p++);
>> - ret |= __get_user_inatomic(temp.v[2], p++);
>> - ret |= __get_user_inatomic(temp.v[3], p++);
>> + unsafe_get_user(temp.v[0], p++, Efault_read);
>> + unsafe_get_user(temp.v[1], p++, Efault_read);
>> + unsafe_get_user(temp.v[2], p++, Efault_read);
>> + unsafe_get_user(temp.v[3], p++, Efault_read);
>
> This will bail early rather than trying every possible read. I think
> that's OK.

The old code tries every possible read, but in the end it bails out with EFAULT anyway, so I see no point in attempting them all.

> I can't think of a situation where we could fail to read the
> first byte and then successfully read later bytes, for example. Also I
> can't think of a sane way userspace could depend on that behaviour. So I
> agree with this change (and the change to the write path).
>
>> fallthrough;
>> case 4:
>> - ret |= __get_user_inatomic(temp.v[4], p++);
>> - ret |= __get_user_inatomic(temp.v[5], p++);
>> + unsafe_get_user(temp.v[4], p++, Efault_read);
>> + unsafe_get_user(temp.v[5], p++, Efault_read);
>> fallthrough;
>> case 2:
>> - ret |= __get_user_inatomic(temp.v[6], p++);
>> - ret |= __get_user_inatomic(temp.v[7], p++);
>> - if (unlikely(ret))
>> - return -EFAULT;
>> + unsafe_get_user(temp.v[6], p++, Efault_read);
>> + unsafe_get_user(temp.v[7], p++, Efault_read);
>> }
>> + user_read_access_end();
>>
>> switch (instr) {
>> case EVLDD:
>> @@ -255,31 +250,41 @@ static int emulate_spe(struct pt_regs *regs, unsigned int reg,
>>
>> /* Store result to memory or update registers */
>> if (flags & ST) {
>> - ret = 0;
>> p = addr;
>> +
>> + if (!user_read_access_begin(addr, nb))
>
> That should be a user_write_access_begin.

Good catch thanks.

>
>> + return -EFAULT;
>> +
>
>
>>
>> return 1;
>> +
>> +Efault_read:
>
> Checkpatch complains that this is CamelCase, which seems like a
> checkpatch problem. Efault_{read,write} seem like good labels to me.

I'm not keen on names mixing capital letters and lowercase, but Efault is the label that has been
used almost everywhere with unsafe_get/put_user(), so I fell in line.

>
> (You don't need to change anything, I just like to check the checkpatch
> results when reviewing a patch.)
>
>> + user_read_access_end();
>> + return -EFAULT;
>> +
>> +Efault_write:
>> + user_write_access_end();
>> + return -EFAULT;
>> }
>> #endif /* CONFIG_SPE */
>>
>
> With the user_write_access_begin change:
> Reviewed-by: Daniel Axtens <[email protected]>
>
> Kind regards,
> Daniel
>

2021-03-12 13:26:34

by Christophe Leroy

Subject: [PATCH v3 03/15] powerpc/align: Convert emulate_spe() to user_access_begin

This patch converts emulate_spe() to using user_access_begin
logic.

Since commit 662bbcb2747c ("mm, sched: Allow uaccess in atomic with
pagefault_disable()"), might_fault() doesn't fire when called from
sections where pagefaults are disabled, which must be the case
when using _inatomic variants of __get_user and __put_user. So
the might_fault() in user_access_begin() is not a problem.

There was a verification of user_mode() together with the access_ok(),
but there is a second verification of user_mode() just after it, which
leads to an immediate return. The access_ok() is now part of the
user_access_begin() called after that other user_mode() verification,
so there is no need to check user_mode() again.

Signed-off-by: Christophe Leroy <[email protected]>
Reviewed-by: Daniel Axtens <[email protected]>
---
v3:
- Changed the second user_read_access_begin() to user_write_access_begin()
- Reworded the explanation about the user_mode() check with access_ok() in the commit message
---
arch/powerpc/kernel/align.c | 61 ++++++++++++++++++++-----------------
1 file changed, 33 insertions(+), 28 deletions(-)

diff --git a/arch/powerpc/kernel/align.c b/arch/powerpc/kernel/align.c
index c7797eb958c7..f362c99213be 100644
--- a/arch/powerpc/kernel/align.c
+++ b/arch/powerpc/kernel/align.c
@@ -107,7 +107,6 @@ static struct aligninfo spe_aligninfo[32] = {
static int emulate_spe(struct pt_regs *regs, unsigned int reg,
struct ppc_inst ppc_instr)
{
- int ret;
union {
u64 ll;
u32 w[2];
@@ -127,11 +126,6 @@ static int emulate_spe(struct pt_regs *regs, unsigned int reg,
nb = spe_aligninfo[instr].len;
flags = spe_aligninfo[instr].flags;

- /* Verify the address of the operand */
- if (unlikely(user_mode(regs) &&
- !access_ok(addr, nb)))
- return -EFAULT;
-
/* userland only */
if (unlikely(!user_mode(regs)))
return 0;
@@ -169,26 +163,27 @@ static int emulate_spe(struct pt_regs *regs, unsigned int reg,
}
} else {
temp.ll = data.ll = 0;
- ret = 0;
p = addr;

+ if (!user_read_access_begin(addr, nb))
+ return -EFAULT;
+
switch (nb) {
case 8:
- ret |= __get_user_inatomic(temp.v[0], p++);
- ret |= __get_user_inatomic(temp.v[1], p++);
- ret |= __get_user_inatomic(temp.v[2], p++);
- ret |= __get_user_inatomic(temp.v[3], p++);
+ unsafe_get_user(temp.v[0], p++, Efault_read);
+ unsafe_get_user(temp.v[1], p++, Efault_read);
+ unsafe_get_user(temp.v[2], p++, Efault_read);
+ unsafe_get_user(temp.v[3], p++, Efault_read);
fallthrough;
case 4:
- ret |= __get_user_inatomic(temp.v[4], p++);
- ret |= __get_user_inatomic(temp.v[5], p++);
+ unsafe_get_user(temp.v[4], p++, Efault_read);
+ unsafe_get_user(temp.v[5], p++, Efault_read);
fallthrough;
case 2:
- ret |= __get_user_inatomic(temp.v[6], p++);
- ret |= __get_user_inatomic(temp.v[7], p++);
- if (unlikely(ret))
- return -EFAULT;
+ unsafe_get_user(temp.v[6], p++, Efault_read);
+ unsafe_get_user(temp.v[7], p++, Efault_read);
}
+ user_read_access_end();

switch (instr) {
case EVLDD:
@@ -255,31 +250,41 @@ static int emulate_spe(struct pt_regs *regs, unsigned int reg,

/* Store result to memory or update registers */
if (flags & ST) {
- ret = 0;
p = addr;
+
+ if (!user_write_access_begin(addr, nb))
+ return -EFAULT;
+
switch (nb) {
case 8:
- ret |= __put_user_inatomic(data.v[0], p++);
- ret |= __put_user_inatomic(data.v[1], p++);
- ret |= __put_user_inatomic(data.v[2], p++);
- ret |= __put_user_inatomic(data.v[3], p++);
+ unsafe_put_user(data.v[0], p++, Efault_write);
+ unsafe_put_user(data.v[1], p++, Efault_write);
+ unsafe_put_user(data.v[2], p++, Efault_write);
+ unsafe_put_user(data.v[3], p++, Efault_write);
fallthrough;
case 4:
- ret |= __put_user_inatomic(data.v[4], p++);
- ret |= __put_user_inatomic(data.v[5], p++);
+ unsafe_put_user(data.v[4], p++, Efault_write);
+ unsafe_put_user(data.v[5], p++, Efault_write);
fallthrough;
case 2:
- ret |= __put_user_inatomic(data.v[6], p++);
- ret |= __put_user_inatomic(data.v[7], p++);
+ unsafe_put_user(data.v[6], p++, Efault_write);
+ unsafe_put_user(data.v[7], p++, Efault_write);
}
- if (unlikely(ret))
- return -EFAULT;
+ user_write_access_end();
} else {
*evr = data.w[0];
regs->gpr[reg] = data.w[1];
}

return 1;
+
+Efault_read:
+ user_read_access_end();
+ return -EFAULT;
+
+Efault_write:
+ user_write_access_end();
+ return -EFAULT;
}
#endif /* CONFIG_SPE */

--
2.25.0

2021-04-10 14:31:36

by Michael Ellerman

Subject: Re: [PATCH v2 00/15] powerpc: Cleanup of uaccess.h and adding asm goto for get_user()

On Wed, 10 Mar 2021 17:46:39 +0000 (UTC), Christophe Leroy wrote:
> This series cleans up uaccess.h and adds asm goto for get_user()
>
> v2:
> - Further clean ups
> - asm goto for get_user()
> - Move a few patches unrelated to put_user/get_user into another misc series.
>
> [...]

Applied to powerpc/next.

[01/15] powerpc/uaccess: Remove __get_user_allowed() and unsafe_op_wrap()
https://git.kernel.org/powerpc/c/8cdf748d557f15ae6f9e0d4108cc3ea6e1ee4419
[02/15] powerpc/uaccess: Define ___get_user_instr() for ppc32
https://git.kernel.org/powerpc/c/9bd68dc5d7463cb959bff9ac4b6c7e578171de35
[03/15] powerpc/align: Convert emulate_spe() to user_access_begin
https://git.kernel.org/powerpc/c/3fa3db32956d74c0784171ae0334685502bb169a
[04/15] powerpc/uaccess: Remove __get/put_user_inatomic()
https://git.kernel.org/powerpc/c/bad956b8fe1a8b3b634d596ed2023ec30726cdf1
[05/15] powerpc/uaccess: Move get_user_instr helpers in asm/inst.h
https://git.kernel.org/powerpc/c/35506a3e2d7c4d93cb564e23471a448cbd98f085
[06/15] powerpc/align: Don't use __get_user_instr() on kernel addresses
https://git.kernel.org/powerpc/c/111631b5e9dae764754657aad00bd6cd1a805d0d
[07/15] powerpc/uaccess: Call might_fault() inconditionaly
https://git.kernel.org/powerpc/c/ed0d9c66f97c6865e87fa6e3631bbc3919a31ad6
[08/15] powerpc/uaccess: Remove __unsafe_put_user_goto()
https://git.kernel.org/powerpc/c/be15a165796598cd3929ca9aac56ba5ec69e41c1
[09/15] powerpc/uaccess: Remove __chk_user_ptr() in __get/put_user
https://git.kernel.org/powerpc/c/028e15616857add3ba4951f989027675370b0e82
[10/15] powerpc/uaccess: Remove calls to __get_user_bad() and __put_user_bad()
https://git.kernel.org/powerpc/c/9975f852ce1bf041a1a81bf882e29ee7a3b78ca6
[11/15] powerpc/uaccess: Split out __get_user_nocheck()
https://git.kernel.org/powerpc/c/f904c22f2a9fb09fe705efdedbe4af9a30bdf633
[12/15] powerpc/uaccess: Rename __get/put_user_check/nocheck
https://git.kernel.org/powerpc/c/17f8c0bc21bbb7d1fe729c7f656924a6ea72079b
[13/15] powerpc/uaccess: Refactor get/put_user() and __get/put_user()
https://git.kernel.org/powerpc/c/e72fcdb26cde72985c418b39f72ecaa222e1f4d5
[14/15] powerpc/uaccess: Introduce __get_user_size_goto()
https://git.kernel.org/powerpc/c/035785ab2826beb43cfa65a2df37d60074915a4d
[15/15] powerpc/uaccess: Use asm goto for get_user when compiler supports it
https://git.kernel.org/powerpc/c/5cd29b1fd3e8f2b45fe6d011588d832417defe31

cheers

2021-04-10 14:33:37

by Michael Ellerman

Subject: Re: [PATCH v3 03/15] powerpc/align: Convert emulate_spe() to user_access_begin

On Fri, 12 Mar 2021 13:25:11 +0000 (UTC), Christophe Leroy wrote:
> This patch converts emulate_spe() to using user_access_begin
> logic.
>
> Since commit 662bbcb2747c ("mm, sched: Allow uaccess in atomic with
> pagefault_disable()"), might_fault() doesn't fire when called from
> sections where pagefaults are disabled, which must be the case
> when using _inatomic variants of __get_user and __put_user. So
> the might_fault() in user_access_begin() is not a problem.
>
> [...]

Applied to powerpc/next.

[03/15] powerpc/align: Convert emulate_spe() to user_access_begin
https://git.kernel.org/powerpc/c/3fa3db32956d74c0784171ae0334685502bb169a

cheers