From: Andy Lutomirski
Date: Wed, 26 Aug 2015 10:10:43 -0700
Subject: Re: Proposal for finishing the 64-bit x86 syscall cleanup
To: Brian Gerst
Cc: X86 ML, Denys Vlasenko, Borislav Petkov, Linus Torvalds,
 "linux-kernel@vger.kernel.org", Jan Beulich
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Aug 25, 2015 at 10:20 PM, Brian Gerst wrote:
>>>> Thing 2: vdso compilation with binutils that doesn't support .cfi directives
>>>>
>>>> Userspace debuggers really like having the vdso properly
>>>> CFI-annotated, and the 32-bit fast syscall entries are annotated
>>>> manually in hexadecimal.  AFAIK Jan Beulich is the only person who
>>>> understands it.
>>>>
>>>> I want to be able to change the entries a little bit to clean them up
>>>> (and possibly rework the SYSCALL32 and SYSENTER register tricks, which
>>>> currently suck), but it's really, really messy right now because of
>>>> the hex CFI stuff.  Could we just drop the CFI annotations if the
>>>> binutils version is too old, or even just require new-enough binutils
>>>> to build 32-bit and compat kernels?
>>>
>>> One thing I want to do is rework the 32-bit VDSO into a single image,
>>> using alternatives to handle the selection of entry method.  The
>>> open-coded CFI crap has made that nearly impossible to do.
>>>
>>
>> Yes please!
>>
>> But please don't change the actual instruction ordering at all yet,
>> since the SYSCALL case seems to be buggy right now.
>> (If you want to be really fancy, don't use alternatives.  Instead,
>> teach vdso2c to annotate the actual dynamic table function pointers so
>> we can rewrite the pointers at boot time.  That will save a cycle or
>> two.)
>
> The easiest way to select the right entry code is by changing the ELF
> AUX vector.  That covers normal usage, but there are two additional
> cases that need addressing.
>
> 1) Some code could possibly look up the __kernel_vsyscall symbol
> directly and call it, but that's non-standard.  If there is code out
> there that does this, we could update the ELF symbol table to point
> __kernel_vsyscall to the chosen entry point, or just remove the symbol
> and let the caller fall back to INT80.

Here's an alternate proposal, mostly copied from what I posted
yesterday afternoon:

https://bugzilla.kernel.org/show_bug.cgi?id=101061

I think we should consider doing this:

__kernel_vsyscall:
	push %ecx
	push %edx
	movl %esp, %edx
	ALTERNATIVE (Intel with SEP):
		sysenter
	ALTERNATIVE (AMD with SYSCALL32 on 64-bit kernels):
		syscall
		hlt	/* keep weird binary tracers from utterly screwing up */
	ALTERNATIVE (if neither of the other cases applies):
		nops
	movl %edx, %esp
	movl (%esp), %edx
	movl 4(%esp), %ecx
	int $0x80
vsyscall_after_int80:
	popl %edx
	popl %ecx
	ret

First, in the case where we have neither SEP nor SYSCALL32, I claim
that this Just Works.  We push a couple of regs, pointlessly shuffle
esp, restore the regs, do int $0x80 with the same regs we started
with, and then (again, pointlessly) pop the regs we pushed.

Now we make the semantics of *both* syscall32 and sysenter that they
load edx into regs->sp, fetch regs->dx and regs->cx from user memory,
and set regs->ip to vsyscall_after_int80.  (This is a wee bit slower
than the current sysenter code because it does two memory fetches
instead of one.)  Then they proceed just like int80.  In particular,
anything that does "ip -= 2" works exactly like int80, because ip - 2
points at an actual int80 instruction.
Note that sysenter's slow path already works sort of like this.

Now we've fixed the correctness issues, but we've killed performance:
we'll use IRET instead of SYSRET to get back out.  We can fix that
using opportunistic SYSRET.  If we're returning from a compat syscall
that entered via sysenter or syscall32, and regs->ip ==
vsyscall_after_int80, regs->r11 == regs->flags, regs->ss == __USER_DS,
regs->cs == __USER32_CS, and the flags are sane, then return using
SYSRETL.  (The r11 check is probably unnecessary.)

This is not quite as elegant as 64-bit opportunistic SYSRET, since
we're zapping ecx.  That should be unobservable except by debuggers,
since we already know we're returning to a 'pop ecx' instruction.

NB: I don't think we can enable SYSCALL on 32-bit kernels.  IIRC
there's no MSR_SYSCALL_MASK support, which makes the whole thing
basically useless, since we can't mask TF and we don't control ESP.

> 2) The sigreturn trampolines.  These are tricky because the sigreturn
> syscalls implicitly use regs->sp to find the signal frame.  That
> interacts badly with the SYSENTER/SYSCALL entries, which save
> registers on the stack.  It currently uses a bare SYSCALL instruction
> (no pushes to the stack), but falls back to INT80 for SYSENTER.  One
> option is to create new syscalls that take a pointer to the signal
> frame as arg1.

I'm not sure this is worth doing, since IIRC modern userspace doesn't
use the vdso sigreturn trampolines at all.  We could just continue
using int80 for now.

--Andy