2018-07-22 17:47:20

by Andy Lutomirski

Subject: [RFC 0/2] Get rid of the entry trampoline

Hi all-

I think there's general agreement that the current entry trampoline
sucks, mainly because it's dog-slow. Thanks, Spectre.

There are three possible fixes I know of:

a) Linus' hack: use R11 for scratch space. This doesn't actually
speed it up, but it improves the addressing situation a bit.
I don't like it, though: it causes the SYSCALL64 path to forget
the arithmetic flags and all of the MSR_SYSCALL_MASK flags. The
latter may be a showstopper, given that we've seen some evidence
of nasty Wine use cases that expect setting EFLAGS.NT and doing
a system call to actually do something intelligent. Similarly,
there could easily be user programs out there that set AC because
they want alignment checking and expect AC to remain set across
system calls (see the sketch just after this list).

b) Move the trampoline within 2G of the entry text and copy it for
each CPU. This is certainly possible, but it's a bit gross,
and it uses num_possible_cpus() * 64 bytes of memory (rounded
up to a page). It will also result in more complicated code.

c) This series. Just make %gs work in the entry trampoline. It's
actually a net code deletion.
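
As an illustration of the AC concern with (a), here's a minimal user-space
sketch (not part of this series) that sets EFLAGS.AC, makes a system call,
and reports whether AC survived:

#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static unsigned long eflags(void)
{
	unsigned long f;
	asm volatile ("pushfq; popq %0" : "=r" (f));
	return f;
}

int main(void)
{
	unsigned long before = eflags(), after;

	/* AC is EFLAGS bit 18; POPF at CPL 3 is allowed to toggle it. */
	asm volatile ("pushq %0; popfq" : : "r" (before | (1UL << 18)) : "cc");
	syscall(SYS_getpid);			/* any real system call */
	after = eflags();
	/* put the old flags back before calling into libc */
	asm volatile ("pushq %0; popfq" : : "r" (before) : "cc");

	printf("AC survived: %ld\n", (after >> 18) & 1);
	return 0;
}

Today this prints 1, because SYSCALL stashes the user flags in R11 and the
return path restores them from there; using R11 as scratch would lose that.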

I suspect that (b) would be faster in code that does a lot of system
calls and doesn't totally blow away the cache or the TLB between
system calls. I suspect that (c) is faster in code that does
cache-cold system calls.
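
(For reference, a trivial cache-hot harness, sketch only; measuring the
cache-cold case needs something that thrashes the caches and TLB between
calls:)

#include <stdio.h>
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	struct timespec t0, t1;
	long i, iters = 10 * 1000 * 1000;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < iters; i++)
		syscall(SYS_getppid);	/* cheap and never handled in the vDSO */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%.1f ns per syscall\n",
	       ((t1.tv_sec - t0.tv_sec) * 1e9 +
		(t1.tv_nsec - t0.tv_nsec)) / iters);
	return 0;
}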

Andy Lutomirski (2):
x86/entry/64: Use the TSS sp2 slot for rsp_scratch
x86/pti/64: Remove the SYSCALL64 entry trampoline

arch/x86/entry/entry_64.S | 66 +-----------------------------
arch/x86/include/asm/processor.h | 5 +++
arch/x86/include/asm/thread_info.h | 1 +
arch/x86/kernel/asm-offsets_64.c | 1 +
arch/x86/kernel/cpu/common.c | 11 +----
arch/x86/kernel/kprobes/core.c | 10 +----
arch/x86/kernel/process_64.c | 2 -
arch/x86/kernel/vmlinux.lds.S | 10 -----
arch/x86/mm/cpu_entry_area.c | 5 ---
arch/x86/mm/pti.c | 24 ++++++++++-
10 files changed, 33 insertions(+), 102 deletions(-)

--
2.17.1



2018-07-22 17:46:45

by Andy Lutomirski

Subject: [RFC 1/2] x86/entry/64: Use the TSS sp2 slot for rsp_scratch

In the non-trampoline SYSCALL64 path, we use a percpu variable to
temporarily store the user RSP value. Instead of a separate
variable, use the otherwise unused sp2 slot in the TSS. This will
improve cache locality, as the sp1 slot is already used in the same
code to find the kernel stack. It will also simplify a future
change to make the non-trampoline path work in PTI mode.
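
For illustration (sketch, not kernel code): the two hot SYSCALL64 slots end
up side by side in the hardware TSS, which is cache-line aligned, so the
scratch spill and the stack pointer load touch the same cache line:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct hw_tss_excerpt {
	uint32_t reserved1;
	uint64_t sp0;
	uint64_t sp1;	/* cpu_current_top_of_stack: loaded into %rsp on entry */
	uint64_t sp2;	/* rsp_scratch: user %rsp stashed across the stack switch */
} __attribute__((packed));

int main(void)
{
	/* both offsets land in the TSS's first 64-byte cache line */
	printf("sp1 at %zu, sp2 at %zu\n",
	       offsetof(struct hw_tss_excerpt, sp1),
	       offsetof(struct hw_tss_excerpt, sp2));
	return 0;
}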

Signed-off-by: Andy Lutomirski <[email protected]>
---
arch/x86/include/asm/processor.h | 5 +++++
arch/x86/include/asm/thread_info.h | 1 +
arch/x86/kernel/asm-offsets_64.c | 1 +
arch/x86/kernel/process_64.c | 2 --
4 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index cfd29ee8c3da..2ef4c39ded45 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -308,7 +308,12 @@ struct x86_hw_tss {
*/
u64 sp1;

+ /*
+ * sp2 is scratch space used by the SYSCALL64 handler. Linux does
+ * not use rung 2, so sp2 is not otherwise needed.
+ */
u64 sp2;
+
u64 reserved2;
u64 ist[7];
u32 reserved3;
diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 2ff2a30a264f..9a2f84233e39 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -209,6 +209,7 @@ static inline int arch_within_stack_frames(const void * const stack,

#ifdef CONFIG_X86_64
# define cpu_current_top_of_stack (cpu_tss_rw + TSS_sp1)
+# define rsp_scratch (cpu_tss_rw + TSS_sp2)
#endif

#endif
diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c
index b2dcd161f514..621bf6b5a63b 100644
--- a/arch/x86/kernel/asm-offsets_64.c
+++ b/arch/x86/kernel/asm-offsets_64.c
@@ -67,6 +67,7 @@ int main(void)
OFFSET(TSS_ist, tss_struct, x86_tss.ist);
OFFSET(TSS_sp0, tss_struct, x86_tss.sp0);
OFFSET(TSS_sp1, tss_struct, x86_tss.sp1);
+ OFFSET(TSS_sp2, tss_struct, x86_tss.sp2);
BLANK();

#ifdef CONFIG_STACKPROTECTOR
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 12bb445fb98d..3ed5fed181cc 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -59,8 +59,6 @@
#include <asm/unistd_32_ia32.h>
#endif

-__visible DEFINE_PER_CPU(unsigned long, rsp_scratch);
-
/* Prints also some state that isn't saved in the pt_regs */
void __show_regs(struct pt_regs *regs, int all)
{
--
2.17.1


2018-07-22 17:46:45

by Andy Lutomirski

Subject: [RFC 2/2] x86/pti/64: Remove the SYSCALL64 entry trampoline

TODO: benchmark!

The SYSCALL64 trampoline has a couple of nice properties:

- The usual sequence of SWAPGS followed by two GS-relative accesses to
set up RSP is somewhat slow because the GS-relative accesses need
to wait for SWAPGS to finish. The trampoline approach allows
RIP-relative accesses to set up RSP, which avoids the stall.

- The trampoline avoids any percpu access before CR3 is set up,
which means that no percpu memory needs to be mapped in the user
page tables. This prevents using Meltdown to read any percpu memory
outside the cpu_entry_area and prevents using timing leaks
to directly locate the percpu areas.

The downsides of using a trampoline may outweigh the upsides, however.
It adds an extra non-contiguous I$ cache line to system calls, and it
forces an indirect jump to transfer control back to the normal kernel
text after CR3 is set up. The latter is because x86 lacks a 64-bit
direct jump instruction that could jump from the trampoline to the entry
text. With retpolines enabled, the indirect jump is extremely slow.

This patch changes the code to map the percpu TSS into the user page
tables to allow the non-trampoline SYSCALL64 path to work under PTI.
This does not add a new direct information leak, since the TSS is
readable by Meltdown from the cpu_entry_area alias regardless. It
does allow a timing attack to locate the percpu area, but KASLR is
more or less a lost cause against local attack on CPUs vulnerable to
Meltdown regardless. As far as I'm concerned, on current hardware,
KASLR is only useful to mitigate remote attacks that try to attack
the kernel without first gaining RCE against a vulnerable user
process.

There is a possible alternative approach: we could instead move the
trampoline within 2G of the entry text and make a separate copy for
each CPU. Then we could use a direct jump to rejoin the normal
entry path.

Signed-off-by: Andy Lutomirski <[email protected]>
---
arch/x86/entry/entry_64.S | 66 +---------------------------------
arch/x86/kernel/cpu/common.c | 11 +-----
arch/x86/kernel/kprobes/core.c | 10 +-----
arch/x86/kernel/vmlinux.lds.S | 10 ------
arch/x86/mm/cpu_entry_area.c | 5 ---
arch/x86/mm/pti.c | 24 ++++++++++++-
6 files changed, 26 insertions(+), 100 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 1f7cf8407cfb..ab19f47ae187 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -142,67 +142,6 @@ END(native_usergs_sysret64)
* with them due to bugs in both AMD and Intel CPUs.
*/

- .pushsection .entry_trampoline, "ax"
-
-/*
- * The code in here gets remapped into cpu_entry_area's trampoline. This means
- * that the assembler and linker have the wrong idea as to where this code
- * lives (and, in fact, it's mapped more than once, so it's not even at a
- * fixed address). So we can't reference any symbols outside the entry
- * trampoline and expect it to work.
- *
- * Instead, we carefully abuse %rip-relative addressing.
- * _entry_trampoline(%rip) refers to the start of the remapped) entry
- * trampoline. We can thus find cpu_entry_area with this macro:
- */
-
-#define CPU_ENTRY_AREA \
- _entry_trampoline - CPU_ENTRY_AREA_entry_trampoline(%rip)
-
-/* The top word of the SYSENTER stack is hot and is usable as scratch space. */
-#define RSP_SCRATCH CPU_ENTRY_AREA_entry_stack + \
- SIZEOF_entry_stack - 8 + CPU_ENTRY_AREA
-
-ENTRY(entry_SYSCALL_64_trampoline)
- UNWIND_HINT_EMPTY
- swapgs
-
- /* Stash the user RSP. */
- movq %rsp, RSP_SCRATCH
-
- /* Note: using %rsp as a scratch reg. */
- SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
-
- /* Load the top of the task stack into RSP */
- movq CPU_ENTRY_AREA_tss + TSS_sp1 + CPU_ENTRY_AREA, %rsp
-
- /* Start building the simulated IRET frame. */
- pushq $__USER_DS /* pt_regs->ss */
- pushq RSP_SCRATCH /* pt_regs->sp */
- pushq %r11 /* pt_regs->flags */
- pushq $__USER_CS /* pt_regs->cs */
- pushq %rcx /* pt_regs->ip */
-
- /*
- * x86 lacks a near absolute jump, and we can't jump to the real
- * entry text with a relative jump. We could push the target
- * address and then use retq, but this destroys the pipeline on
- * many CPUs (wasting over 20 cycles on Sandy Bridge). Instead,
- * spill RDI and restore it in a second-stage trampoline.
- */
- pushq %rdi
- movq $entry_SYSCALL_64_stage2, %rdi
- JMP_NOSPEC %rdi
-END(entry_SYSCALL_64_trampoline)
-
- .popsection
-
-ENTRY(entry_SYSCALL_64_stage2)
- UNWIND_HINT_EMPTY
- popq %rdi
- jmp entry_SYSCALL_64_after_hwframe
-END(entry_SYSCALL_64_stage2)
-
ENTRY(entry_SYSCALL_64)
UNWIND_HINT_EMPTY
/*
@@ -212,11 +151,8 @@ ENTRY(entry_SYSCALL_64)
*/

swapgs
- /*
- * This path is only taken when PAGE_TABLE_ISOLATION is disabled so it
- * is not required to switch CR3.
- */
movq %rsp, PER_CPU_VAR(rsp_scratch)
+ SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp

/* Construct struct pt_regs on stack */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index eb4cb3efd20e..0209af82de53 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1513,19 +1513,10 @@ EXPORT_PER_CPU_SYMBOL(__preempt_count);
/* May not be marked __init: used by software suspend */
void syscall_init(void)
{
- extern char _entry_trampoline[];
- extern char entry_SYSCALL_64_trampoline[];
-
int cpu = smp_processor_id();
- unsigned long SYSCALL64_entry_trampoline =
- (unsigned long)get_cpu_entry_area(cpu)->entry_trampoline +
- (entry_SYSCALL_64_trampoline - _entry_trampoline);

wrmsr(MSR_STAR, 0, (__USER32_CS << 16) | __KERNEL_CS);
- if (static_cpu_has(X86_FEATURE_PTI))
- wrmsrl(MSR_LSTAR, SYSCALL64_entry_trampoline);
- else
- wrmsrl(MSR_LSTAR, (unsigned long)entry_SYSCALL_64);
+ wrmsrl(MSR_LSTAR, (unsigned long)entry_SYSCALL_64);

#ifdef CONFIG_IA32_EMULATION
wrmsrl(MSR_CSTAR, (unsigned long)entry_SYSCALL_compat);
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 6f4d42377fe5..09693d89757d 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -1172,18 +1172,10 @@ NOKPROBE_SYMBOL(longjmp_break_handler);

bool arch_within_kprobe_blacklist(unsigned long addr)
{
- bool is_in_entry_trampoline_section = false;
-
-#ifdef CONFIG_X86_64
- is_in_entry_trampoline_section =
- (addr >= (unsigned long)__entry_trampoline_start &&
- addr < (unsigned long)__entry_trampoline_end);
-#endif
return (addr >= (unsigned long)__kprobes_text_start &&
addr < (unsigned long)__kprobes_text_end) ||
(addr >= (unsigned long)__entry_text_start &&
- addr < (unsigned long)__entry_text_end) ||
- is_in_entry_trampoline_section;
+ addr < (unsigned long)__entry_text_end);
}

int __init arch_init_kprobes(void)
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 5e1458f609a1..ba5e7d896eca 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -115,16 +115,6 @@ SECTIONS
*(.fixup)
*(.gnu.warning)

-#ifdef CONFIG_X86_64
- . = ALIGN(PAGE_SIZE);
- __entry_trampoline_start = .;
- _entry_trampoline = .;
- *(.entry_trampoline)
- . = ALIGN(PAGE_SIZE);
- __entry_trampoline_end = .;
- ASSERT(. - _entry_trampoline == PAGE_SIZE, "entry trampoline is too big");
-#endif
-
#ifdef CONFIG_RETPOLINE
__indirect_thunk_start = .;
*(.text.__x86.indirect_thunk)
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index b45f5aaefd74..419d203d8520 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -80,8 +80,6 @@ static void percpu_setup_debug_store(int cpu)
static void __init setup_cpu_entry_area(int cpu)
{
#ifdef CONFIG_X86_64
- extern char _entry_trampoline[];
-
/* On 64-bit systems, we use a read-only fixmap GDT and TSS. */
pgprot_t gdt_prot = PAGE_KERNEL_RO;
pgprot_t tss_prot = PAGE_KERNEL_RO;
@@ -143,9 +141,6 @@ static void __init setup_cpu_entry_area(int cpu)
cea_map_percpu_pages(&get_cpu_entry_area(cpu)->exception_stacks,
&per_cpu(exception_stacks, cpu),
sizeof(exception_stacks) / PAGE_SIZE, PAGE_KERNEL);
-
- cea_set_pte(&get_cpu_entry_area(cpu)->entry_trampoline,
- __pa_symbol(_entry_trampoline), PAGE_KERNEL_RX);
#endif
percpu_setup_debug_store(cpu);
}
diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
index 4d418e705878..5d5d512f6d14 100644
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -360,11 +360,33 @@ static void __init pti_clone_p4d(unsigned long addr)
}

/*
- * Clone the CPU_ENTRY_AREA into the user space visible page table.
+ * Clone the CPU_ENTRY_AREA and associated data into the user space visible
+ * page table.
*/
static void __init pti_clone_user_shared(void)
{
+ unsigned cpu;
+
pti_clone_p4d(CPU_ENTRY_AREA_BASE);
+
+ for_each_possible_cpu(cpu) {
+ /*
+ * The SYSCALL64 entry code needs to be able to find the
+ * thread stack and needs one word of scratch space in which
+ * to spill a register. All of this lives in the TSS, in
+ * the sp1 and sp2 slots.
+ */
+
+ unsigned long va = (unsigned long)&per_cpu(cpu_tss_rw, cpu);
+ phys_addr_t pa = per_cpu_ptr_to_phys((void *)va);
+ pte_t *target_pte;
+
+ target_pte = pti_user_pagetable_walk_pte(va);
+ if (WARN_ON(!target_pte))
+ return;
+
+ set_pte(target_pte, pfn_pte(pa >> PAGE_SHIFT, PAGE_KERNEL));
+ }
}

/*
--
2.17.1


2018-07-22 18:29:06

by Linus Torvalds

Subject: Re: [RFC 2/2] x86/pti/64: Remove the SYSCALL64 entry trampoline

On Sun, Jul 22, 2018 at 10:45 AM Andy Lutomirski <[email protected]> wrote:
>
> This patch changes the code to map the percpu TSS into the user page
> tables to allow the non-trampoline SYSCALL64 path to work under PTI.

Me likey.

However:

> This does not add a new direct information leak, since the TSS is
> readable by Meltdown from the cpu_entry_area alias regardless.

Afaik, it does now potentially expose through meltdown the per-thread
entry stack info, which is new.

But I don't think that's a show-stopper.

> static void __init pti_clone_user_shared(void)
> {
> + for_each_possible_cpu(cpu) {

But this code is pretty disgusting and seems wrong.

Do you really want to do all the _possible_ cpu's, not just the
online ones? I'd rather expose less (think MAXCPU) and then have the
CPU hotplug code expose the page as the CPU comes up?

> + unsigned long va = (unsigned long)&per_cpu(cpu_tss_rw, cpu);
> + phys_addr_t pa = per_cpu_ptr_to_phys((void *)va);
> + pte_t *target_pte;
> +
> + target_pte = pti_user_pagetable_walk_pte(va);

This function only exists if CONFIG_X86_VSYSCALL_EMULATION, so it
won't even compile under (very unusual) configurations.

The "disgusting" part is that I think it could/should share more code
with the vsyscall case, and the whole target-pte checking and setting
should be shared too.

Because it's not being shared, I react to this:

> + set_pte(target_pte, pfn_pte(pa >> PAGE_SHIFT, PAGE_KERNEL));

Hmm. The vsyscall code just does

*target_pte = ..

without any set_pte() stuff. Do we want/need the PVOP cases, and if
so, why doesn't the vsyscall case need it?

Anyway, I love the approach, and how this gets rid of the nasty
trampoline, so no real complaints, just "this needs some fixups".

Linus

2018-07-22 20:22:45

by Ingo Molnar

Subject: Re: [RFC 1/2] x86/entry/64: Use the TSS sp2 slot for rsp_scratch


* Andy Lutomirski <[email protected]> wrote:

> In the non-trampoline SYSCALL64 path, we use a percpu variable to
> temporarily store the user RSP value. Instead of a separate
> variable, use the otherwise unused sp2 slot in the TSS. This will
> improve cache locality, as the sp1 slot is already used in the same
> code to find the kernel stack. It will also simplify a future
> change to make the non-trampoline path work in PTI mode.
>
> Signed-off-by: Andy Lutomirski <[email protected]>
> ---
> arch/x86/include/asm/processor.h | 5 +++++
> arch/x86/include/asm/thread_info.h | 1 +
> arch/x86/kernel/asm-offsets_64.c | 1 +
> arch/x86/kernel/process_64.c | 2 --
> 4 files changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
> index cfd29ee8c3da..2ef4c39ded45 100644
> --- a/arch/x86/include/asm/processor.h
> +++ b/arch/x86/include/asm/processor.h
> @@ -308,7 +308,12 @@ struct x86_hw_tss {
> */
> u64 sp1;
>
> + /*
> + * sp2 is scratch space used by the SYSCALL64 handler. Linux does
> + * not use rung 2, so sp2 is not otherwise needed.
> + */
> u64 sp2;

s/rung/ring

Thanks,

Ingo

2018-07-22 21:01:47

by Andy Lutomirski

Subject: Re: [RFC 2/2] x86/pti/64: Remove the SYSCALL64 entry trampoline


> On Jul 22, 2018, at 11:27 AM, Linus Torvalds <[email protected]> wrote:
>
>> On Sun, Jul 22, 2018 at 10:45 AM Andy Lutomirski <[email protected]> wrote:
>>
>> This patch changes the code to map the percpu TSS into the user page
>> tables to allow the non-trampoline SYSCALL64 path to work under PTI.
>
> Me likey.
>
> However:
>
>> This does not add a new direct information leak, since the TSS is
>> readable by Meltdown from the cpu_entry_area alias regardless.
>
> Afaik, it does now potentially expose through meltdown the per-thread
> entry stack info, which is new.

It’s always been exposed through the RO alias. The only new exposure is the *address* of the RW alias, I think.

>
> But I don't think that's a show-stopper.
>
>> static void __init pti_clone_user_shared(void)
>> {
>> + for_each_possible_cpu(cpu) {
>
> But this code is pretty disgusting and seems wrong.
>
> Do you really want to do all the _possible_ cpu's, not just the
> online ones? I'd rather expose less (think MAXCPU) and then have the
> CPU hotplug code expose the page as the CPU comes up?

We already have exactly the same issue for cpu_entry_area. If we change it, I think we should do cpu_entry_area at the same time. But that’s awkward because cpu_entry_area is mapped one PMD at a time right now.

It’s also awkward to expose a percpu page dynamically, because (I think) percpu data isn’t guaranteed to all be in the same PGD-sized area. A vmalloc fault in the early SYSCALL64 path is fatal.

>
>> + unsigned long va = (unsigned long)&per_cpu(cpu_tss_rw, cpu);
>> + phys_addr_t pa = per_cpu_ptr_to_phys((void *)va);
>> + pte_t *target_pte;
>> +
>> + target_pte = pti_user_pagetable_walk_pte(va);
>
> This function only exists if CONFIG_X86_VSYSCALL_EMULATION, so it
> won't even compile under (very unusual) configurations.

Oops.

>
> The "disgusting" part is that I think it could/should share more code
> with the vsyscall case, and the whole target-pte checking and setting
> should be shared too.

I tried that. It was uglier. The percpu code wants to make up a new PTE because the real kernel mapping uses large pages. The vsyscall code wants to copy a PTE because it’s really a PTE and it has unusual permissions.
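
Roughly, the two shapes look like this (sketch only, not the actual code in
either place):

	/* vsyscall: the kernel mapping is already a 4k PTE, so copy it */
	pte = lookup_address(VSYSCALL_ADDR, &level);	/* PG_LEVEL_4K */
	*target_pte = *pte;

	/* percpu TSS: the kernel mapping is a large page, so build a PTE */
	pa = per_cpu_ptr_to_phys(&per_cpu(cpu_tss_rw, cpu));
	set_pte(target_pte, pfn_pte(pa >> PAGE_SHIFT, PAGE_KERNEL));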

>
> Because it's not being shared, I react to this:
>
>> + set_pte(target_pte, pfn_pte(pa >> PAGE_SHIFT, PAGE_KERNEL));
>
> Hmm. The vsyscall code just does
>
> *target_pte = ..
>
> without any set_pte() stuff. Do we want/need the PVOP cases, and if
> so, why doesn't the vsyscall case need it?

It doesn’t need it. I could use plain assignment.

>
> Anyway, I love the approach, and how this gets rid of the nasty
> trampoline, so no real complaints, just "this needs some fixups".
>
>

I’ll do the fixups. I think that, if we want to unmap the pages for CPUs that aren’t present, that should be a separate patch. I’m also not convinced it adds much value.

In general, PTI is fairly crappy, and it leaks all kinds of information. I suspect the worst leak is the NMI stack for local and remote CPUs. Fixing *that* is going to be fugly, but may actually be important, because I can easily imagine malicious user code that causes arbitrary kernel memory to get read and spilled on the NMI stack.

What we *should* do IMO is defer allocation of percpu space for not-present CPUs to save a bunch of memory. But that’s a major change and will probably break things.

2018-07-23 12:39:47

by Dave Hansen

Subject: Re: [RFC 1/2] x86/entry/64: Use the TSS sp2 slot for rsp_scratch

On 07/22/2018 10:45 AM, Andy Lutomirski wrote:
> + /*
> + * sp2 is scratch space used by the SYSCALL64 handler. Linux does
> + * not use rung 2, so sp2 is not otherwise needed.
> + */
> u64 sp2;

Could we call out the actual thing that we use this slot for, and the
symbol name so folks can find the corresponding code that does this?
While I know the spot in entry_64 you're talking about, it might not be
patently obvious to everyone, and it's also a bit more challenging to
grep for than normal C code.

Maybe:

/*
* Since Linux does not use ring 2, the 'sp2' slot is unused.
* entry_SYSCALL_64 uses this as scratch space to stash the user
* %RSP value.
*/

2018-07-23 13:01:10

by Dave Hansen

Subject: Re: [RFC 2/2] x86/pti/64: Remove the SYSCALL64 entry trampoline

On 07/22/2018 10:45 AM, Andy Lutomirski wrote:
> static void __init pti_clone_user_shared(void)
> {
> + unsigned cpu;
> +
> pti_clone_p4d(CPU_ENTRY_AREA_BASE);
> +
> + for_each_possible_cpu(cpu) {
> + /*
> + * The SYSCALL64 entry code needs to be able to find the
> + * thread stack and needs one word of scratch space in which
> + * to spill a register. All of this lives in the TSS, in
> + * the sp1 and sp2 slots.
> + */

I had to remind myself about the r/o cpu_entry_area alias here. Should
we maybe call it out explicitly?

/*
* The TSS is also mapped read-only into the cpu_entry_area.
* The cpu_entry_area copy is used r/o by the hardware for the
* hardware stack switching, like interrupt entry.
*
* The copies being mapped here are the normal r/w per-cpu
* areas. We need r/w because we spill a register here.
*/

BTW, since we have this alias, do we still *need* the r/o cpu_entry_area
alias? Or do we still get some value from keeping the thing referenced
by hardware r/o?

2018-07-24 02:38:51

by Andy Lutomirski

Subject: Re: [RFC 1/2] x86/entry/64: Use the TSS sp2 slot for rsp_scratch

On Mon, Jul 23, 2018 at 5:38 AM, Dave Hansen
<[email protected]> wrote:
> On 07/22/2018 10:45 AM, Andy Lutomirski wrote:
>> + /*
>> + * sp2 is scratch space used by the SYSCALL64 handler. Linux does
>> + * not use rung 2, so sp2 is not otherwise needed.
>> + */
>> u64 sp2;
>
> Could we call out the actual thing that we use this slot for, and the
> symbol name so folks can find the corresponding code that does this?
> While I know the spot in entry_64 you're talking about, it might not be
> patently obvious to everyone, and it's also a bit more challenging to
> grep for than normal C code.
>
> Maybe:
>
> /*
> * Since Linux does not use ring 2, the 'sp2' slot is unused.
> * entry_SYSCALL_64 uses this as scratch space to stash the user
> * %RSP value.
> */

I'll improve this for v2.