From: "H. Peter Anvin (Intel)" <[email protected]>
This patchset addresses several inconsistencies in the handling of
system call numbers in x86-64 (and x32).
Right now, *some* code will treat e.g. 0x00000001_00000001 as a system
call and some will not. Some of the code, notably in ptrace and
seccomp, will treat 0x00000001_ffffffff as a system call and some will
not.
Furthermore, right now, e.g. 335 for x86-64 will force the exit code
to be set to -ENOSYS even if poked by ptrace, but 548 will not,
because there is an observable difference between an out-of-range
system call and a system call number that falls outside the range of
the tables.
Both of these issues are visible to the user: for example, the
syscall_numbering_64 kernel selftest fails when run under strace for
this reason (system calls with the high bits set succeed while being
traced, whereas they fail when not traced).
The architecture-independent code in Linux expects "int" for the
system call number, per the API documented, but not implemented, in
<asm-generic/syscall.h>: system call numbers are expected to be
"int", with -1 as the only non-system-call sentinel.
Treating the same data in multiple ways in different contexts is at
the very best confusing, but it also has the potential to cause
security problems (no such security problems are known at this time,
however).
This is an ABI change, but it is in fact a return to the original
x86-64 ABI: the original assembly entry code would zero-extend the
system call number, so only the bottom 32 bits were examined.
1. Consistently treat the system call number as a signed int. This is
what syscall_get_nr() already does, and therefore what all
architecture-independent code (e.g. seccomp) already expects.
2. As per the defined semantics of syscall_get_nr(), only the value -1
is defined as a non-system call, so comparing >= 0 is
incorrect. Change to != -1.
3. Call sys_ni_syscall() for system calls which are out of range
   (except for -1, which is used by ptrace and seccomp as a "skip
   system call" marker), just as for system call numbers that
   correspond to holes in the table.
4. Update and extend the syscall_numbering_64 selftest.
Changes from v2:
* Factor out and split what was a single patch in the v2 patchset; the
rest of the patches have already been applied.
* Fix the syscall_numbering_64 selftest to match the definition
changes, make its output more informative, and extend it to more
tests. Avoid using the glibc syscall() wrapper to make sure we test
what we think we are testing.
* Better documentation of the changes.
Changes from v1:
* Only -1 should be a non-system call per the cross-architectural
definition of sys_ni_syscall().
* Fix/improve patch descriptions.
---
arch/x86/entry/common.c | 93 +++++---
arch/x86/entry/entry_64.S | 2 +-
arch/x86/include/asm/syscall.h | 2 +-
tools/testing/selftests/x86/syscall_numbering.c | 274 +++++++++++++++++++-----
4 files changed, 291 insertions(+), 80 deletions(-)
From: "H. Peter Anvin (Intel)" <[email protected]>
System call numbers are defined as int, so use int everywhere for
system call numbers. This patch is strictly a cleanup; it should not
change anything user-visible; all ABI changes have been done in the
preceding patches.
Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
---
arch/x86/entry/common.c | 93 ++++++++++++++++++++++++----------
arch/x86/include/asm/syscall.h | 2 +-
2 files changed, 66 insertions(+), 29 deletions(-)
diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index f51bc17262db..714804f0970c 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -36,49 +36,87 @@
#include <asm/irq_stack.h>
#ifdef CONFIG_X86_64
-__visible noinstr void do_syscall_64(struct pt_regs *regs, unsigned long nr)
+
+static __always_inline bool do_syscall_x64(struct pt_regs *regs, int nr)
+{
+ /*
+ * Convert negative numbers to very high and thus out of range
+ * numbers for comparisons. Use unsigned long to slightly
+ * improve the array_index_nospec() generated code.
+ */
+ unsigned long unr = nr;
+
+ if (likely(unr < NR_syscalls)) {
+ unr = array_index_nospec(unr, NR_syscalls);
+ regs->ax = sys_call_table[unr](regs);
+ return true;
+ }
+ return false;
+}
+
+static __always_inline bool do_syscall_x32(struct pt_regs *regs, int nr)
+{
+ /*
+ * Adjust the starting offset of the table, and convert numbers
+ * < __X32_SYSCALL_BIT to very high and thus out of range
+ * numbers for comparisons. Use unsigned long to slightly
+ * improve the array_index_nospec() generated code.
+ */
+ unsigned long xnr = nr - __X32_SYSCALL_BIT;
+
+ if (IS_ENABLED(CONFIG_X86_X32_ABI) &&
+ likely(xnr < X32_NR_syscalls)) {
+ xnr = array_index_nospec(xnr, X32_NR_syscalls);
+ regs->ax = x32_sys_call_table[xnr](regs);
+ return true;
+ }
+ return false;
+}
+
+__visible noinstr void do_syscall_64(struct pt_regs *regs, int nr)
{
add_random_kstack_offset();
nr = syscall_enter_from_user_mode(regs, nr);
instrumentation_begin();
- if (likely(nr < NR_syscalls)) {
- nr = array_index_nospec(nr, NR_syscalls);
- regs->ax = sys_call_table[nr](regs);
-#ifdef CONFIG_X86_X32_ABI
- } else if (likely((nr & __X32_SYSCALL_BIT) &&
- (nr & ~__X32_SYSCALL_BIT) < X32_NR_syscalls)) {
- nr = array_index_nospec(nr & ~__X32_SYSCALL_BIT,
- X32_NR_syscalls);
- regs->ax = x32_sys_call_table[nr](regs);
-#endif
- } else if (unlikely((int)nr != -1)) {
+
+ if (!do_syscall_x64(regs, nr) &&
+ !do_syscall_x32(regs, nr) &&
+ unlikely(nr != -1)) {
+ /* Invalid system call, but still a system call? */
regs->ax = __x64_sys_ni_syscall(regs);
}
+
instrumentation_end();
syscall_exit_to_user_mode(regs);
}
#endif
#if defined(CONFIG_X86_32) || defined(CONFIG_IA32_EMULATION)
-static __always_inline unsigned int syscall_32_enter(struct pt_regs *regs)
+static __always_inline int syscall_32_enter(struct pt_regs *regs)
{
if (IS_ENABLED(CONFIG_IA32_EMULATION))
current_thread_info()->status |= TS_COMPAT;
- return (unsigned int)regs->orig_ax;
+ return (int)regs->orig_ax;
}
/*
* Invoke a 32-bit syscall. Called with IRQs on in CONTEXT_KERNEL.
*/
-static __always_inline void do_syscall_32_irqs_on(struct pt_regs *regs,
- unsigned int nr)
+static __always_inline void do_syscall_32_irqs_on(struct pt_regs *regs, int nr)
{
- if (likely(nr < IA32_NR_syscalls)) {
- nr = array_index_nospec(nr, IA32_NR_syscalls);
- regs->ax = ia32_sys_call_table[nr](regs);
- } else if (unlikely((int)nr != -1)) {
+ /*
+ * Convert negative numbers to very high and thus out of range
+ * numbers for comparisons. Use unsigned long to slightly
+ * improve the array_index_nospec() generated code.
+ */
+ unsigned long unr = nr;
+
+ if (likely(unr < IA32_NR_syscalls)) {
+ unr = array_index_nospec(unr, IA32_NR_syscalls);
+ regs->ax = ia32_sys_call_table[unr](regs);
+ } else if (unlikely(nr != -1)) {
regs->ax = __ia32_sys_ni_syscall(regs);
}
}
@@ -86,15 +124,15 @@ static __always_inline void do_syscall_32_irqs_on(struct pt_regs *regs,
/* Handles int $0x80 */
__visible noinstr void do_int80_syscall_32(struct pt_regs *regs)
{
- unsigned int nr = syscall_32_enter(regs);
+ int nr = syscall_32_enter(regs);
add_random_kstack_offset();
/*
- * Subtlety here: if ptrace pokes something larger than 2^32-1 into
- * orig_ax, the unsigned int return value truncates it. This may
- * or may not be necessary, but it matches the old asm behavior.
+ * Subtlety here: if ptrace pokes something larger than 2^31-1 into
+ * orig_ax, the int return value truncates it. This matches
+ * the semantics of syscall_get_nr().
*/
- nr = (unsigned int)syscall_enter_from_user_mode(regs, nr);
+ nr = syscall_enter_from_user_mode(regs, nr);
instrumentation_begin();
do_syscall_32_irqs_on(regs, nr);
@@ -105,7 +143,7 @@ __visible noinstr void do_int80_syscall_32(struct pt_regs *regs)
static noinstr bool __do_fast_syscall_32(struct pt_regs *regs)
{
- unsigned int nr = syscall_32_enter(regs);
+ int nr = syscall_32_enter(regs);
int res;
add_random_kstack_offset();
@@ -140,8 +178,7 @@ static noinstr bool __do_fast_syscall_32(struct pt_regs *regs)
return false;
}
- /* The case truncates any ptrace induced syscall nr > 2^32 -1 */
- nr = (unsigned int)syscall_enter_from_user_mode_work(regs, nr);
+ nr = syscall_enter_from_user_mode_work(regs, nr);
/* Now this is just like a normal syscall. */
do_syscall_32_irqs_on(regs, nr);
diff --git a/arch/x86/include/asm/syscall.h b/arch/x86/include/asm/syscall.h
index f6593cafdbd9..f7e2d82d24fb 100644
--- a/arch/x86/include/asm/syscall.h
+++ b/arch/x86/include/asm/syscall.h
@@ -159,7 +159,7 @@ static inline int syscall_get_arch(struct task_struct *task)
? AUDIT_ARCH_I386 : AUDIT_ARCH_X86_64;
}
-void do_syscall_64(struct pt_regs *regs, unsigned long nr);
+void do_syscall_64(struct pt_regs *regs, int nr);
void do_int80_syscall_32(struct pt_regs *regs);
long do_fast_syscall_32(struct pt_regs *regs);
--
2.31.1
From: "H. Peter Anvin (Intel)" <[email protected]>
Update the syscall_numbering_64 selftest to reflect that the system
call number is sign-extended from 32 bits. Add a mix of tests for
valid and invalid system calls in 64-bit and x32 space.
Use an explicit system call instruction, because we cannot know if the
glibc syscall() wrapper intercepts some system calls, extends the
system call number independently, or anything similar.
Use long long instead of long to make it possible to compile this test
on x32 as well as 64 bits.
Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
---
.../testing/selftests/x86/syscall_numbering.c | 274 ++++++++++++++----
1 file changed, 222 insertions(+), 52 deletions(-)
diff --git a/tools/testing/selftests/x86/syscall_numbering.c b/tools/testing/selftests/x86/syscall_numbering.c
index d6b09cb1aa2c..b578cd3c6b3c 100644
--- a/tools/testing/selftests/x86/syscall_numbering.c
+++ b/tools/testing/selftests/x86/syscall_numbering.c
@@ -1,6 +1,8 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
- * syscall_arg_fault.c - tests faults 32-bit fast syscall stack args
+ * syscall_numbering.c - test calling the x86-64 kernel with various
+ * valid and invalid system call numbers.
+ *
* Copyright (c) 2018 Andrew Lutomirski
*/
@@ -11,79 +13,247 @@
#include <stdbool.h>
#include <errno.h>
#include <unistd.h>
-#include <syscall.h>
+#include <string.h>
+#include <fcntl.h>
+#include <limits.h>
-static int nerrs;
+/* Common system call numbers */
+#define SYS_READ 0
+#define SYS_WRITE 1
+#define SYS_GETPID 39
+/* x64-only system call numbers */
+#define X64_IOCTL 16
+#define X64_READV 19
+#define X64_WRITEV 20
+/* x32-only system call numbers (without X32_BIT) */
+#define X32_IOCTL 514
+#define X32_READV 515
+#define X32_WRITEV 516
-#define X32_BIT 0x40000000UL
+#define X32_BIT 0x40000000
-static void check_enosys(unsigned long nr, bool *ok)
+static unsigned int nerr = 0; /* Cumulative error count */
+static int nullfd = -1; /* File descriptor for /dev/null */
+
+/*
+ * Directly invokes the given syscall with nullfd as the first argument
+ * and the rest zero. Avoids involving glibc wrappers in case they ever
+ * end up intercepting some system calls for some reason, or modify
+ * the system call number itself.
+ */
+static inline long long probe_syscall(int msb, int lsb)
{
- /* If this fails, a segfault is reasonably likely. */
- fflush(stdout);
-
- long ret = syscall(nr, 0, 0, 0, 0, 0, 0);
- if (ret == 0) {
- printf("[FAIL]\tsyscall %lu succeeded, but it should have failed\n", nr);
- *ok = false;
- } else if (errno != ENOSYS) {
- printf("[FAIL]\tsyscall %lu had error code %d, but it should have reported ENOSYS\n", nr, errno);
- *ok = false;
- }
+ register long long arg1 asm("rdi") = nullfd;
+ register long long arg2 asm("rsi") = 0;
+ register long long arg3 asm("rdx") = 0;
+ register long long arg4 asm("r10") = 0;
+ register long long arg5 asm("r8") = 0;
+ register long long arg6 asm("r9") = 0;
+ long long nr = ((long long)msb << 32) | (unsigned int)lsb;
+ long long ret;
+
+ asm volatile("syscall"
+ : "=a" (ret)
+ : "a" (nr), "r" (arg1), "r" (arg2), "r" (arg3),
+ "r" (arg4), "r" (arg5), "r" (arg6)
+ : "rcx", "r11", "memory", "cc");
+
+ return ret;
}
-static void test_x32_without_x32_bit(void)
+static const char *syscall_str(int msb, int start, int end)
{
- bool ok = true;
+ static char buf[64];
+ const char * const type = (start & X32_BIT) ? "x32" : "x64";
+ int lsb = start;
/*
- * Syscalls 512-547 are "x32" syscalls. They are intended to be
- * called with the x32 (0x40000000) bit set. Calling them without
- * the x32 bit set is nonsense and should not work.
+ * Improve readability by stripping the x32 bit, but round
+ * toward zero so we don't display -1 as -1073741825.
*/
- printf("[RUN]\tChecking syscalls 512-547\n");
- for (int i = 512; i <= 547; i++)
- check_enosys(i, &ok);
+ if (lsb < 0)
+ lsb |= X32_BIT;
+ else
+ lsb &= ~X32_BIT;
+
+ if (start == end)
+ snprintf(buf, sizeof buf, "%s syscall %d:%d",
+ type, msb, lsb);
+ else
+ snprintf(buf, sizeof buf, "%s syscalls %d:%d..%d",
+ type, msb, lsb, lsb + (end-start));
+
+ return buf;
+}
+
+static unsigned int _check_for(int msb, int start, int end, long long expect,
+ const char *expect_str)
+{
+ unsigned int err = 0;
+
+ for (int nr = start; nr <= end; nr++) {
+ long long ret = probe_syscall(msb, nr);
+
+ if (ret != expect) {
+ printf("[FAIL]\t %s returned %lld, but it should have returned %s\n",
+ syscall_str(msb, nr, nr),
+ ret, expect_str);
+ err++;
+ }
+ }
+ if (err) {
+ nerr += err;
+ if (start != end)
+ printf("[FAIL]\t %s had %u failure%s\n",
+ syscall_str(msb, start, end),
+ err, (err == 1) ? "" : "s");
+ } else {
+ printf("[OK]\t %s returned %s as expected\n",
+ syscall_str(msb, start, end), expect_str);
+ }
+
+ return err;
+}
+
+#define check_for(msb,start,end,expect) \
+ _check_for(msb,start,end,expect,#expect)
+
+static bool check_zero(int msb, int nr)
+{
+ return check_for(msb, nr, nr, 0);
+}
+
+static bool check_enosys(int msb, int nr)
+{
+ return check_for(msb, nr, nr, -ENOSYS);
+}
+
+/*
+ * Anyone diagnosing a failure will want to know whether the kernel
+ * supports x32. Tell them. This can also be used to conditionalize
+ * tests based on existence or nonexistence of x32.
+ */
+static bool test_x32(void)
+{
+ long long ret;
+ long long mypid = getpid();
+
+ printf("[RUN]\tChecking for x32 by calling x32 getpid()\n");
+ ret = probe_syscall(0, SYS_GETPID | X32_BIT);
+
+ if (ret == mypid) {
+ printf("[INFO]\t x32 is supported\n");
+ return true;
+ } else if (ret == -ENOSYS) {
+ printf("[INFO]\t x32 is not supported\n");
+ return false;
+ } else {
+ printf("[FAIL]\t x32 getpid() returned %lld, but it should have returned either %lld or -ENOSYS\n", ret, mypid);
+ nerr++;
+ return true; /* Proceed as if... */
+ }
+}
+
+static void test_syscalls_common(int msb)
+{
+ printf("[RUN]\t Checking some common syscalls as 64 bit\n");
+ check_zero(msb, SYS_READ);
+ check_zero(msb, SYS_WRITE);
+
+ printf("[RUN]\t Checking some 64-bit only syscalls as 64 bit\n");
+ check_zero(msb, X64_READV);
+ check_zero(msb, X64_WRITEV);
+
+ printf("[RUN]\t Checking out of range system calls\n");
+ check_for(msb, -64, -1, -ENOSYS);
+ check_for(msb, X32_BIT-64, X32_BIT-1, -ENOSYS);
+ check_for(msb, -64-X32_BIT, -1-X32_BIT, -ENOSYS);
+ check_for(msb, INT_MAX-64, INT_MAX-1, -ENOSYS);
+}
+
+static void test_syscalls_with_x32(int msb)
+{
/*
- * Check that a handful of 64-bit-only syscalls are rejected if the x32
- * bit is set.
+ * Syscalls 512-547 are "x32" syscalls. They are
+ * intended to be called with the x32 (0x40000000) bit
+ * set. Calling them without the x32 bit set is
+ * nonsense and should not work.
*/
- printf("[RUN]\tChecking some 64-bit syscalls in x32 range\n");
- check_enosys(16 | X32_BIT, &ok); /* ioctl */
- check_enosys(19 | X32_BIT, &ok); /* readv */
- check_enosys(20 | X32_BIT, &ok); /* writev */
+ printf("[RUN]\t Checking x32 syscalls as 64 bit\n");
+ check_for(msb, 512 | X32_BIT, 547 | X32_BIT, -ENOSYS);
+
+ printf("[RUN]\t Checking some common syscalls as x32\n");
+ check_zero(msb, SYS_READ | X32_BIT);
+ check_zero(msb, SYS_WRITE | X32_BIT);
+
+ printf("[RUN]\t Checking some x32 syscalls as x32\n");
+ check_zero(msb, X32_READV | X32_BIT);
+ check_zero(msb, X32_WRITEV | X32_BIT);
+
+ printf("[RUN]\t Checking some 64-bit syscalls as x32\n");
+ check_enosys(msb, X64_IOCTL | X32_BIT);
+ check_enosys(msb, X64_READV | X32_BIT);
+ check_enosys(msb, X64_WRITEV | X32_BIT);
+}
+
+static void test_syscalls_without_x32(int msb)
+{
+ printf("[RUN]\t Checking for absence of x32 system calls\n");
+ check_for(msb, 0 | X32_BIT, 999 | X32_BIT, -ENOSYS);
+}
+
+static void test_syscall_numbering(void)
+{
+ static const int msbs[] = {
+ 0, 1, -1, X32_BIT-1, X32_BIT, X32_BIT+1, -X32_BIT, INT_MAX,
+ INT_MIN, INT_MIN+1
+ };
+ bool with_x32 = test_x32();
/*
- * Check some syscalls with high bits set.
+ * The MSB is supposed to be ignored, so we loop over a few
+ * to test that out.
*/
- printf("[RUN]\tChecking numbers above 2^32-1\n");
- check_enosys((1UL << 32), &ok);
- check_enosys(X32_BIT | (1UL << 32), &ok);
+ for (size_t i = 0; i < sizeof(msbs)/sizeof(msbs[0]); i++) {
+ int msb = msbs[i];
+ printf("[RUN]\tChecking system calls with msb = %d (0x%x)\n",
+ msb, msb);
- if (!ok)
- nerrs++;
- else
- printf("[OK]\tThey all returned -ENOSYS\n");
+ test_syscalls_common(msb);
+ if (with_x32)
+ test_syscalls_with_x32(msb);
+ else
+ test_syscalls_without_x32(msb);
+ }
}
-int main()
+int main(void)
{
/*
- * Anyone diagnosing a failure will want to know whether the kernel
- * supports x32. Tell them.
+ * A failure is quite likely to produce a segfault, so make
+ * sure the message gets out by setting stdout to unbuffered.
*/
- printf("\tChecking for x32...");
- fflush(stdout);
- if (syscall(39 | X32_BIT, 0, 0, 0, 0, 0, 0) >= 0) {
- printf(" supported\n");
- } else if (errno == ENOSYS) {
- printf(" not supported\n");
- } else {
- printf(" confused\n");
- }
+ setvbuf(stdout, NULL, _IONBF, 0);
- test_x32_without_x32_bit();
+ /*
+ * Harmless file descriptor to work on...
+ */
+ nullfd = open("/dev/null", O_RDWR);
+ if (nullfd < 0) {
+ printf("[FAIL]\tUnable to open /dev/null: %s\n",
+ strerror(errno));
+ printf("[SKIP]\tCannot execute test\n");
+ return 71; /* EX_OSERR */
+ }
- return nerrs ? 1 : 0;
+ test_syscall_numbering();
+ if (!nerr) {
+ printf("[OK]\tAll system calls succeeded or failed as expected\n");
+ return 0;
+ } else {
+ printf("[FAIL]\tA total of %u system call%s had incorrect behavior\n",
+ nerr, nerr != 1 ? "s" : "");
+ return 1;
+ }
}
--
2.31.1
From: "H. Peter Anvin (Intel)" <[email protected]>
Right now, *some* code will treat e.g. 0x00000001_00000001 as a system
call and some will not. Some of the code, notably in ptrace and
seccomp, will treat 0x00000001_ffffffff as a system call and some will
not.
This is visible to the user: for example, the syscall_numbering_64
test fails if run under strace, because strace uses ptrace and thus
ends up clobbering the upper half of the 64-bit system call number.
The arch-independent code all assumes that the system call number is
an "int", and that the value -1 specifically, not just any negative
value, is used for a non-system call. This is the case on x86 as well
when arch-independent code is involved. The arch-independent API is
defined/documented (but not *implemented*!) in
<asm-generic/syscall.h>.
This is an ABI change, but is in fact a revert to the original x86-64
ABI. The original assembly entry code would zero-extend the system
call number; this patch uses sign extension instead, to be explicit
that the number is treated as signed (although in practice it makes no
difference, of course) and to avoid people getting the idea of
"optimizing" it, as has happened on at least two(!) separate
occasions.
Do not store the extended value into regs->orig_ax, however: on
x86-64, the ABI is that the callee is responsible for extending
parameters, so only examining the lower 32 bits is fully consistent
with any "int" argument to any system call, e.g. regs->di for
write(2). The full value of %rax on entry to the kernel is thus still
available.
Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
---
arch/x86/entry/entry_64.S | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 1d9db15fdc69..85f04ea0e368 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -108,7 +108,7 @@ SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
/* IRQs are off. */
movq %rsp, %rdi
- movq %rax, %rsi
+ movslq %eax, %rsi
call do_syscall_64 /* returns with IRQs disabled */
/*
--
2.31.1
On 5/14/21 6:10 PM, H. Peter Anvin wrote:
> From: "H. Peter Anvin (Intel)" <[email protected]>
>
> System call numbers are defined as int, so use int everywhere for
> system call numbers. This patch is strictly a cleanup; it should not
> change anything user visible; all ABI changes have been done in the
> preceeding patches.
>
> Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
> ---
> arch/x86/entry/common.c | 93 ++++++++++++++++++++++++----------
> arch/x86/include/asm/syscall.h | 2 +-
> 2 files changed, 66 insertions(+), 29 deletions(-)
>
> diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> index f51bc17262db..714804f0970c 100644
> --- a/arch/x86/entry/common.c
> +++ b/arch/x86/entry/common.c
> @@ -36,49 +36,87 @@
> #include <asm/irq_stack.h>
>
> #ifdef CONFIG_X86_64
> -__visible noinstr void do_syscall_64(struct pt_regs *regs, unsigned long nr)
> +
> +static __always_inline bool do_syscall_x64(struct pt_regs *regs, int nr)
> +{
> + /*
> + * Convert negative numbers to very high and thus out of range
> + * numbers for comparisons. Use unsigned long to slightly
> + * improve the array_index_nospec() generated code.
> + */
> + unsigned long unr = nr;
> +
> + if (likely(unr < NR_syscalls)) {
> + unr = array_index_nospec(unr, NR_syscalls);
> + regs->ax = sys_call_table[unr](regs);
> + return true;
> + }
> + return false;
> +}
How much do you like micro-optimization? You could be silly^Wclever and
add a new syscall handler:
long skip_syscall(struct pt_regs *regs)
{
return regs->ax;
}
and prepend this to the syscall tables -- it would be a sort-of-real
syscall -1. Then the call sequence becomes:
int adjusted_nr = nr + 1 (or nr - x32bit + 1);
if (likely(nr < NR_adjusted_syscalls)) {
unr = array_index_nospec...;
regs->ax = sys_call_table[unr](regs); /* might be a no-op! */
} else {
regs->ax = -ENOSYS;
}
which removes a branch from the fast path.
Heh. You have reinvented the original implementation.
On May 15, 2021 8:37:12 AM PDT, Andy Lutomirski <[email protected]> wrote:
>On 5/14/21 6:10 PM, H. Peter Anvin wrote:
>> From: "H. Peter Anvin (Intel)" <[email protected]>
>>
>> System call numbers are defined as int, so use int everywhere for
>> system call numbers. This patch is strictly a cleanup; it should not
>> change anything user visible; all ABI changes have been done in the
>> preceeding patches.
>>
>> Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
>> ---
>> arch/x86/entry/common.c | 93
>++++++++++++++++++++++++----------
>> arch/x86/include/asm/syscall.h | 2 +-
>> 2 files changed, 66 insertions(+), 29 deletions(-)
>>
>> diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
>> index f51bc17262db..714804f0970c 100644
>> --- a/arch/x86/entry/common.c
>> +++ b/arch/x86/entry/common.c
>> @@ -36,49 +36,87 @@
>> #include <asm/irq_stack.h>
>>
>> #ifdef CONFIG_X86_64
>> -__visible noinstr void do_syscall_64(struct pt_regs *regs, unsigned
>long nr)
>> +
>> +static __always_inline bool do_syscall_x64(struct pt_regs *regs, int
>nr)
>> +{
>> + /*
>> + * Convert negative numbers to very high and thus out of range
>> + * numbers for comparisons. Use unsigned long to slightly
>> + * improve the array_index_nospec() generated code.
>> + */
>> + unsigned long unr = nr;
>> +
>> + if (likely(unr < NR_syscalls)) {
>> + unr = array_index_nospec(unr, NR_syscalls);
>> + regs->ax = sys_call_table[unr](regs);
>> + return true;
>> + }
>> + return false;
>> +}
>
>How much do you like micro-optimization? You could be silly^Wclever
>and
>add a new syscall handler:
>
>long skip_syscall(struct pt_regs *regs)
>{
> return regs->ax;
>}
>
>and prepend this to the syscall tables -- it would be a sort-of-real
>syscall -1. Then the call sequence becomes:
>
>int adjusted_nr = nr + 1 (or nr - x32bit + 1);
>
>if (likely(nr < NR_adjusted_syscalls)) {
> unr = array_index_nospec...;
> regs->ax = sys_call_table[unr](regs); /* might be a no-op! */
>} else {
> regs->ax = -ENOSYS;
>}
>
>which removes a branch from the fast path.
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
Answer: I don't think it is a good idea to have the system call table offset ... it seems like an unnecessary debugging headache.
On May 15, 2021 8:37:12 AM PDT, Andy Lutomirski <[email protected]> wrote:
>On 5/14/21 6:10 PM, H. Peter Anvin wrote:
>> From: "H. Peter Anvin (Intel)" <[email protected]>
>>
>> System call numbers are defined as int, so use int everywhere for
>> system call numbers. This patch is strictly a cleanup; it should not
>> change anything user visible; all ABI changes have been done in the
>> preceeding patches.
>>
>> Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
>> ---
>> arch/x86/entry/common.c | 93
>++++++++++++++++++++++++----------
>> arch/x86/include/asm/syscall.h | 2 +-
>> 2 files changed, 66 insertions(+), 29 deletions(-)
>>
>> diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
>> index f51bc17262db..714804f0970c 100644
>> --- a/arch/x86/entry/common.c
>> +++ b/arch/x86/entry/common.c
>> @@ -36,49 +36,87 @@
>> #include <asm/irq_stack.h>
>>
>> #ifdef CONFIG_X86_64
>> -__visible noinstr void do_syscall_64(struct pt_regs *regs, unsigned
>long nr)
>> +
>> +static __always_inline bool do_syscall_x64(struct pt_regs *regs, int
>nr)
>> +{
>> + /*
>> + * Convert negative numbers to very high and thus out of range
>> + * numbers for comparisons. Use unsigned long to slightly
>> + * improve the array_index_nospec() generated code.
>> + */
>> + unsigned long unr = nr;
>> +
>> + if (likely(unr < NR_syscalls)) {
>> + unr = array_index_nospec(unr, NR_syscalls);
>> + regs->ax = sys_call_table[unr](regs);
>> + return true;
>> + }
>> + return false;
>> +}
>
>How much do you like micro-optimization? You could be silly^Wclever
>and
>add a new syscall handler:
>
>long skip_syscall(struct pt_regs *regs)
>{
> return regs->ax;
>}
>
>and prepend this to the syscall tables -- it would be a sort-of-real
>syscall -1. Then the call sequence becomes:
>
>int adjusted_nr = nr + 1 (or nr - x32bit + 1);
>
>if (likely(nr < NR_adjusted_syscalls)) {
> unr = array_index_nospec...;
> regs->ax = sys_call_table[unr](regs); /* might be a no-op! */
>} else {
> regs->ax = -ENOSYS;
>}
>
>which removes a branch from the fast path.
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
* H. Peter Anvin <[email protected]> wrote:
> This is an ABI change, but is in fact a revert to the original x86-64
> ABI. The original assembly entry code would zero-extend the system
> call number; this patch uses sign extend to be explicit that this is
> treated as a signed number (although in practice it makes no
> difference, of course) and to avoid people getting the idea of
> "optimizing" it, as has happened on at least two(!) separate
> occasions.
The original x86-64 ABI as documented by AMD, as we (probably) never had
this in Linux, right?
Sounds sensible to do this, assuming nothing relies on the weirdness.
Thanks,
Ingo
Pretty soon you'll have the whole entry code rewritten back into assembly :)
On May 15, 2021 11:48:22 AM PDT, Andy Lutomirski <[email protected]> wrote:
>
>
>On Sat, May 15, 2021, at 10:42 AM, H. Peter Anvin wrote:
>> Answer: I don't think it is a good idea to have the system can table
>offset ... it seems like an unnecessary debugging headache.
>
>Emit it in asm:
>
>table_minus_one:
> .quad not_a_syscall
>table:
> (real table here)
>
>/me runs.
>
>
>
>>
>> On May 15, 2021 8:37:12 AM PDT, Andy Lutomirski <[email protected]
><mailto:luto%40kernel.org>> wrote:
>> >On 5/14/21 6:10 PM, H. Peter Anvin wrote:
>> >> From: "H. Peter Anvin (Intel)" <[email protected]
><mailto:hpa%40zytor.com>>
>> >>
>> >> System call numbers are defined as int, so use int everywhere for
>> >> system call numbers. This patch is strictly a cleanup; it should
>not
>> >> change anything user visible; all ABI changes have been done in
>the
>> >> preceeding patches.
>> >>
>> >> Signed-off-by: H. Peter Anvin (Intel) <[email protected]
><mailto:hpa%40zytor.com>>
>> >> ---
>> >> arch/x86/entry/common.c | 93
>> >++++++++++++++++++++++++----------
>> >> arch/x86/include/asm/syscall.h | 2 +-
>> >> 2 files changed, 66 insertions(+), 29 deletions(-)
>> >>
>> >> diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
>> >> index f51bc17262db..714804f0970c 100644
>> >> --- a/arch/x86/entry/common.c
>> >> +++ b/arch/x86/entry/common.c
>> >> @@ -36,49 +36,87 @@
>> >> #include <asm/irq_stack.h>
>> >>
>> >> #ifdef CONFIG_X86_64
>> >> -__visible noinstr void do_syscall_64(struct pt_regs *regs,
>unsigned
>> >long nr)
>> >> +
>> >> +static __always_inline bool do_syscall_x64(struct pt_regs *regs,
>int
>> >nr)
>> >> +{
>> >> + /*
>> >> + * Convert negative numbers to very high and thus out of range
>> >> + * numbers for comparisons. Use unsigned long to slightly
>> >> + * improve the array_index_nospec() generated code.
>> >> + */
>> >> + unsigned long unr = nr;
>> >> +
>> >> + if (likely(unr < NR_syscalls)) {
>> >> + unr = array_index_nospec(unr, NR_syscalls);
>> >> + regs->ax = sys_call_table[unr](regs);
>> >> + return true;
>> >> + }
>> >> + return false;
>> >> +}
>> >
>> >How much do you like micro-optimization? You could be silly^Wclever
>> >and
>> >add a new syscall handler:
>> >
>> >long skip_syscall(struct pt_regs *regs)
>> >{
>> > return regs->ax;
>> >}
>> >
>> >and prepend this to the syscall tables -- it would be a sort-of-real
>> >syscall -1. Then the call sequence becomes:
>> >
>> >int adjusted_nr = nr + 1 (or nr - x32bit + 1);
>> >
>> >if (likely(nr < NR_adjusted_syscalls)) {
>> > unr = array_index_nospec...;
>> > regs->ax = sys_call_table[unr](regs); /* might be a no-op! */
>> >} else {
>> > regs->ax = -ENOSYS;
>> >}
>> >
>> >which removes a branch from the fast path.
>>
>> --
>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>>
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
* H. Peter Anvin <[email protected]> wrote:
> From: "H. Peter Anvin (Intel)" <[email protected]>
>
> Update the syscall_numbering_64 selftest to reflect that a system call
> is to be extended from 32 bits. Add a mix of tests for valid and
> invalid system calls in 64-bit and x32 space.
>
> Use an explicit system call instruction, because we cannot know if the
> glibc syscall() wrapper intercepts instructions, extends the system
> call number independently, or anything similar.
>
> Use long long instead of long to make it possible to compile this test
> on x32 as well as 64 bits.
>
> Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
> ---
> .../testing/selftests/x86/syscall_numbering.c | 274 ++++++++++++++----
> 1 file changed, 222 insertions(+), 52 deletions(-)
Small request: I'd suggest moving this to the first place - so that we can
easily test before/after effects of (current) patch #1/4.
Thanks,
Ingo
No, this is what the Linux kernel did for a very long time.
On May 16, 2021 12:48:19 AM PDT, Ingo Molnar <[email protected]> wrote:
>
>* H. Peter Anvin <[email protected]> wrote:
>
>> This is an ABI change, but is in fact a revert to the original x86-64
>> ABI. The original assembly entry code would zero-extend the system
>> call number; this patch uses sign extend to be explicit that this is
>> treated as a signed number (although in practice it makes no
>> difference, of course) and to avoid people getting the idea of
>> "optimizing" it, as has happened on at least two(!) separate
>> occasions.
>
>The original x86-64 ABI as documented by AMD, as we (probably) never
>had
>this in Linux, right?
>
>Sounds sensible to do this, assuming nothing relies on the weirdness.
>
>Thanks,
>
> Ingo
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
Sure. I am also working on implementing Andy's request of adding a ptracer; it will probably take me a few days to free up the time to do so.
On May 16, 2021 12:52:06 AM PDT, Ingo Molnar <[email protected]> wrote:
>
>* H. Peter Anvin <[email protected]> wrote:
>
>> From: "H. Peter Anvin (Intel)" <[email protected]>
>>
>> Update the syscall_numbering_64 selftest to reflect that a system
>call
>> is to be extended from 32 bits. Add a mix of tests for valid and
>> invalid system calls in 64-bit and x32 space.
>>
>> Use an explicit system call instruction, because we cannot know if
>the
>> glibc syscall() wrapper intercepts instructions, extends the system
>> call number independently, or anything similar.
>>
>> Use long long instead of long to make it possible to compile this
>test
>> on x32 as well as 64 bits.
>>
>> Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
>> ---
>> .../testing/selftests/x86/syscall_numbering.c | 274
>++++++++++++++----
>> 1 file changed, 222 insertions(+), 52 deletions(-)
>
>Small request: I'd suggest moving this to the first place - so that we
>can
>easily test before/after effects of (current) patch #1/4.
>
>Thanks,
>
> Ingo
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
Well, I finished the ptracer addition to the test. It was *interesting*: it turns out that ptracing system calls, *even without modifying the state in any way*, just being a passive observer, sign-extends the system call numbers *on current kernels*.
This means that on current kernels passively tracing a process changes the syscall behavior. I think we can all agree that that is not acceptable.
I will do a couple of cleanups and add this to a v4 patchset.
On May 16, 2021 12:52:06 AM PDT, Ingo Molnar <[email protected]> wrote:
>
>* H. Peter Anvin <[email protected]> wrote:
>
>> From: "H. Peter Anvin (Intel)" <[email protected]>
>>
>> Update the syscall_numbering_64 selftest to reflect that a system
>call
>> is to be extended from 32 bits. Add a mix of tests for valid and
>> invalid system calls in 64-bit and x32 space.
>>
>> Use an explicit system call instruction, because we cannot know if
>the
>> glibc syscall() wrapper intercepts instructions, extends the system
>> call number independently, or anything similar.
>>
>> Use long long instead of long to make it possible to compile this
>test
>> on x32 as well as 64 bits.
>>
>> Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
>> ---
>> .../testing/selftests/x86/syscall_numbering.c | 274
>++++++++++++++----
>> 1 file changed, 222 insertions(+), 52 deletions(-)
>
>Small request: I'd suggest moving this to the first place - so that we
>can
>easily test before/after effects of (current) patch #1/4.
>
>Thanks,
>
> Ingo
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.