Date: Sun, 18 Oct 2020 00:02:04 +0300
From: Jarkko Sakkinen
To: Andy Lutomirski
Cc: X86 ML, linux-sgx@vger.kernel.org, LKML, Sean Christopherson,
    Jethro Beekman, Cedric Xing, Andrew Morton, Andy Shevchenko,
    asapek@google.com, Borislav Petkov, chenalexchen@google.com,
    Conrad Parker, cyhanish@google.com, Dave Hansen, "Huang, Haitao",
    "Huang, Kai", "Svahn, Kai", Keith Moyer, Christian Ludloff,
    Andrew Lutomirski, Neil Horman, Nathaniel McCallum,
    Patrick Uiterwijk, David Rientjes, Thomas Gleixner,
    yaozhangx@google.com, mikko.ylinen@intel.com
Subject: Re: [PATCH v39 21/24] x86/vdso: Implement a vDSO for Intel SGX enclave call
Message-ID: <20201017210204.GA64894@linux.intel.com>
References: <20201003045059.665934-1-jarkko.sakkinen@linux.intel.com>
 <20201003045059.665934-22-jarkko.sakkinen@linux.intel.com>
In-Reply-To:
Organization: Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Oct 16, 2020 at 06:48:53PM -0700, Andy Lutomirski wrote:
> On Fri, Oct 2, 2020 at 9:51 PM Jarkko Sakkinen wrote:
> >
> > From: Sean Christopherson
> >
> > An SGX runtime must be aware of the exceptions that happen inside an
> > enclave. Introduce a vDSO call that wraps the EENTER/ERESUME cycle and
> > returns the CPU exception back to the caller exactly when it happens.
> >
> > The kernel fixes up the exception information into RDI, RSI and RDX.
> > The SGX call vDSO handler fills this information into the user-provided
> > buffer or alternatively triggers a user-provided callback at the time
> > of the exception.
> >
> > The calling convention supports providing the parameters in the
> > standard RDI, RSI, RDX, RCX, R8 and R9 registers, i.e. it is possible
> > to declare the vDSO as a C prototype, but other than that there is no
> > specific support for the System V ABI. Storing XSAVE state etc. is all
> > the responsibility of the enclave and the associated run-time.
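As an aside, since the parameters ride in the standard argument
registers, userspace can treat the entry point as an ordinary C
function. A rough sketch of what that prototype and the leaf-range
check look like from C (the typedef and parameter names here are my
own shorthand, not the patch's; the leaf numbers are the architectural
ENCLU values):

```c
#include <errno.h>

/* Architectural ENCLU leaf numbers. */
#define EENTER  2
#define ERESUME 3

/*
 * Hypothetical C-visible shape of the vDSO entry; the real symbol is
 * looked up from the vDSO at runtime rather than linked against.
 */
struct sgx_enclave_run;
typedef int (*vdso_sgx_enter_enclave_t)(unsigned long rdi, unsigned long rsi,
                                        unsigned long rdx, unsigned int leaf,
                                        unsigned long r8, unsigned long r9,
                                        struct sgx_enclave_run *run);

/* Mirror of the "EENTER <= leaf <= ERESUME" range check in the asm. */
static int validate_leaf(unsigned int leaf)
{
        if (leaf < EENTER || leaf > ERESUME)
                return -EINVAL;
        return 0;
}
```

Anything outside that two-value range takes the .Linvalid_input path
and comes back as -EINVAL.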
> >
> > Suggested-by: Andy Lutomirski
> > Acked-by: Jethro Beekman
> > Tested-by: Jethro Beekman
> > Signed-off-by: Sean Christopherson
> > Co-developed-by: Cedric Xing
> > Signed-off-by: Cedric Xing
> > Co-developed-by: Jarkko Sakkinen
> > Signed-off-by: Jarkko Sakkinen

> > +SYM_FUNC_START(__vdso_sgx_enter_enclave)
> > +	/* Prolog */
> > +	.cfi_startproc
> > +	push	%rbp
> > +	.cfi_adjust_cfa_offset	8
> > +	.cfi_rel_offset		%rbp, 0
> > +	mov	%rsp, %rbp
> > +	.cfi_def_cfa_register	%rbp
> > +	push	%rbx
> > +	.cfi_rel_offset		%rbx, -8
>
> This *looks* right, but I'm not really an expert.

I did not change this from earlier versions.

> > +
> > +	mov	%ecx, %eax
> > +.Lenter_enclave:
> > +	/* EENTER <= leaf <= ERESUME */
> > +	cmp	$EENTER, %eax
> > +	jb	.Linvalid_input
> > +	cmp	$ERESUME, %eax
> > +	ja	.Linvalid_input
> > +
> > +	mov	SGX_ENCLAVE_OFFSET_OF_RUN(%rbp), %rcx
> > +
> > +	/* Validate that the reserved area contains only zeros. */
> > +	push	%rax
> > +	push	%rbx
>
> This could use a .cfi_register_something_or_other for rbx

Sean pointed out that saving %rbx is not necessary here:

https://lore.kernel.org/linux-sgx/20201006025703.GG15803@linux.intel.com/

> > +	mov	$SGX_ENCLAVE_RUN_RESERVED_START, %rbx
> > +1:
> > +	mov	(%rcx, %rbx), %rax
> > +	cmpq	$0, %rax
> > +	jne	.Linvalid_input
> > +
> > +	add	$8, %rbx
> > +	cmpq	$SGX_ENCLAVE_RUN_RESERVED_END, %rbx
> > +	jne	1b
> > +	pop	%rbx
>
> This should undo it.

Given private feedback from Sean, I'm replacing this with:

	mov	$SGX_ENCLAVE_RUN_RESERVED_START, %rbx
1:
	cmpq	$0, (%rcx, %rbx)
	jne	.Linvalid_input

	add	$8, %rbx
	cmpq	$SGX_ENCLAVE_RUN_RESERVED_END, %rbx
	jne	1b

There was a bug in the error path: %rax was not popped. I did negative
testing (testing both branches) for this, but it went clean.

I guess if I fix this, that will deal with all of your comments?
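In C terms, both the original loop and the simplified one are just a
qword-at-a-time scan of the reserved area for non-zero bits. A sketch
(the offsets below are made up for illustration; the real ones come
from asm-offsets generation):

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative stand-ins for the asm-offsets constants. */
#define RUN_RESERVED_START 32
#define RUN_RESERVED_END   64

/* Equivalent of the "cmpq $0, (%rcx, %rbx)" loop over the reserved area. */
static int check_reserved(const uint8_t *run)
{
        size_t off;

        for (off = RUN_RESERVED_START; off != RUN_RESERVED_END; off += 8) {
                uint64_t qword;

                memcpy(&qword, run + off, sizeof(qword));
                if (qword)
                        return -EINVAL; /* .Linvalid_input */
        }
        return 0;
}
```

The simplified version just drops the intermediate load into %rax and
compares memory against zero directly, which is why the extra
push/pop of %rax and %rbx becomes unnecessary.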
> > +	pop	%rax
> > +
> > +	/* Load TCS and AEP */
> > +	mov	SGX_ENCLAVE_RUN_TCS(%rcx), %rbx
> > +	lea	.Lasync_exit_pointer(%rip), %rcx
> > +
> > +	/* Single ENCLU serving as both EENTER and AEP (ERESUME) */
> > +.Lasync_exit_pointer:
> > +.Lenclu_eenter_eresume:
> > +	enclu
> > +
> > +	/* EEXIT jumps here unless the enclave is doing something fancy. */
> > +	mov	SGX_ENCLAVE_OFFSET_OF_RUN(%rbp), %rbx
> > +
> > +	/* Set exit_reason. */
> > +	movl	$EEXIT, SGX_ENCLAVE_RUN_LEAF(%rbx)
> > +
> > +	/* Invoke userspace's exit handler if one was provided. */
> > +.Lhandle_exit:
> > +	cmpq	$0, SGX_ENCLAVE_RUN_USER_HANDLER(%rbx)
> > +	jne	.Linvoke_userspace_handler
> > +
> > +	/* Success, in the sense that ENCLU was attempted. */
> > +	xor	%eax, %eax
> > +
> > +.Lout:
> > +	pop	%rbx
>
> and this should undo the .cfi_register.
>
> > +	leave
> > +	.cfi_def_cfa	%rsp, 8
> > +	ret
> > +
> > +	/* The out-of-line code runs with the pre-leave stack frame. */
> > +	.cfi_def_cfa	%rbp, 16
> > +
> > +.Linvalid_input:
>
> Here rbx and rax are pushed, and I guess pop rbx and leave fixes that
> up, so okay.
>
> > +	mov	$(-EINVAL), %eax
> > +	jmp	.Lout
> > +
> > +.Lhandle_exception:
> > +	mov	SGX_ENCLAVE_OFFSET_OF_RUN(%rbp), %rbx
> > +
> > +	/* Set the exception info. */
> > +	mov	%eax, (SGX_ENCLAVE_RUN_LEAF)(%rbx)
> > +	mov	%di, (SGX_ENCLAVE_RUN_EXCEPTION_VECTOR)(%rbx)
> > +	mov	%si, (SGX_ENCLAVE_RUN_EXCEPTION_ERROR_CODE)(%rbx)
> > +	mov	%rdx, (SGX_ENCLAVE_RUN_EXCEPTION_ADDR)(%rbx)
> > +	jmp	.Lhandle_exit
> > +
> > +.Linvoke_userspace_handler:
> > +	/* Pass the untrusted RSP (at exit) to the callback via %rcx. */
> > +	mov	%rsp, %rcx
> > +
> > +	/* Save struct sgx_enclave_exception; %rbx is about to be clobbered. */
> > +	mov	%rbx, %rax
> > +
> > +	/* Save the untrusted RSP offset in %rbx (non-volatile register). */
> > +	mov	%rsp, %rbx
> > +	and	$0xf, %rbx
> > +
> > +	/*
> > +	 * Align stack per x86_64 ABI.
> > +	 * Note, %rsp needs to be 16-byte aligned _after_ pushing the
> > +	 * parameters on the stack, hence the bonus push.
> > +	 */
> > +	and	$-0x10, %rsp
> > +	push	%rax
> > +
> > +	/* Push struct sgx_enclave_exception as a param to the callback. */
> > +	push	%rax
> > +
> > +	/* Clear RFLAGS.DF per x86_64 ABI */
> > +	cld
> > +
> > +	/*
> > +	 * Load the callback pointer to %rax and lfence for LVI (load value
> > +	 * injection) protection before making the call.
> > +	 */
> > +	mov	SGX_ENCLAVE_RUN_USER_HANDLER(%rax), %rax
> > +	lfence
> > +	call	*%rax
> > +
> > +	/* Undo the post-exit %rsp adjustment. */
> > +	lea	0x10(%rsp, %rbx), %rsp
> > +
> > +	/*
> > +	 * If the return from the callback is zero or negative, return
> > +	 * immediately, else re-execute ENCLU with the positive return
> > +	 * value interpreted as the requested ENCLU leaf.
> > +	 */
> > +	cmp	$0, %eax
> > +	jle	.Lout
> > +	jmp	.Lenter_enclave
> > +
> > +	.cfi_endproc
> > +
> > +_ASM_VDSO_EXTABLE_HANDLE(.Lenclu_eenter_eresume, .Lhandle_exception)

/Jarkko
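P.S. The alignment dance in .Linvoke_userspace_handler can be
sanity-checked with plain integer arithmetic: stash the low four bits
of %rsp, align down to 16, push two qwords, and the single
"lea 0x10(%rsp, %rbx), %rsp" at the end undoes all three steps at
once. A sketch (modeling the registers as integers, no real stack
involved):

```c
#include <stdint.h>

/* Model of: and $0xf, %rbx; and $-0x10, %rsp; push %rax; push %rax. */
static uint64_t adjust_for_callback(uint64_t rsp, uint64_t *saved_bits)
{
        *saved_bits = rsp & 0xf;        /* misalignment saved in %rbx */
        rsp &= ~(uint64_t)0xf;          /* align down to 16 bytes */
        rsp -= 8;                       /* bonus push %rax */
        rsp -= 8;                       /* push of the callback param */
        return rsp;
}

/* Model of: lea 0x10(%rsp, %rbx), %rsp. */
static uint64_t undo_adjustment(uint64_t rsp, uint64_t saved_bits)
{
        return rsp + 0x10 + saved_bits;
}
```

Since the two pushes subtract exactly 16 from an already 16-byte
aligned value, the callback is entered aligned per the psABI, and the
lea restores the original %rsp for any starting misalignment.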