From: Darren Kenny
To: Jarkko Sakkinen , x86@kernel.org, linux-sgx@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Jarkko Sakkinen , linux-security-module@vger.kernel.org,
    linux-mm@kvack.org, Andrew Morton , Matthew Wilcox , Jethro Beekman ,
    Haitao Huang , Chunyang Hui , Jordan Hand , Nathaniel McCallum , Seth Moore ,
    Sean Christopherson , Suresh Siddha , andriy.shevchenko@linux.intel.com,
    asapek@google.com, bp@alien8.de, cedric.xing@intel.com, chenalexchen@google.com,
    conradparker@google.com, cyhanish@google.com, dave.hansen@intel.com,
    haitao.huang@intel.com, josh@joshtriplett.org, kai.huang@intel.com,
    kai.svahn@intel.com, kmoy@google.com, ludloff@google.com, luto@kernel.org,
    nhorman@redhat.com, puiterwijk@redhat.com, rientjes@google.com,
    tglx@linutronix.de, yaozhangx@google.com
Subject: Re: [PATCH v36 11/24] x86/sgx: Add SGX enclave driver
In-Reply-To: <20200716135303.276442-12-jarkko.sakkinen@linux.intel.com>
References: <20200716135303.276442-1-jarkko.sakkinen@linux.intel.com>
            <20200716135303.276442-12-jarkko.sakkinen@linux.intel.com>
Date: Thu, 06 Aug 2020 14:59:12 +0100

On Thursday, 2020-07-16 at 16:52:50 +03, Jarkko Sakkinen wrote:

> Intel Software Guard eXtensions (SGX) is a set of CPU instructions that can
> be used by applications to set aside private regions of code and data. The
> code outside the SGX hosted software entity is prevented from accessing the
> memory inside the enclave by the CPU. We call these entities enclaves.
>
> Add a driver that provides an ioctl API to construct and run enclaves.
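
(Not part of the quoted patch: a minimal userspace sketch of the open()/mmap()
flow the driver below exposes, assuming only the /dev/sgx/enclave node that
sgx_drv_init() registers. The helper name is made up for illustration, the
enclave build and init ioctls arrive later in the series and are not shown,
and error handling is trimmed.)

    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>

    /* Illustrative only: map an address range backed by the SGX device. */
    static void *sgx_map_enclave_range(size_t size)
    {
            int fd = open("/dev/sgx/enclave", O_RDWR);

            if (fd < 0)
                    return MAP_FAILED;

            /*
             * MAP_SHARED is required; sgx_get_unmapped_area() rejects
             * MAP_PRIVATE. sgx_encl_may_map() also requires that the
             * requested PROT_* bits do not exceed vm_max_prot_bits of any
             * enclave page in the range.
             */
            return mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_EXEC,
                        MAP_SHARED, fd, 0);
    }
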
> Enclaves are constructed from pages residing in reserved physical memory
> areas. The contents of these pages can only be accessed when they are
> mapped as part of an enclave, by a hardware thread running inside the
> enclave.
>
> The starting state of an enclave consists of a fixed measured set of
> pages that are copied to the EPC during the construction process by
> using ENCLS leaf functions and Software Enclave Control Structure (SECS)
> that defines the enclave properties.
>
> Enclaves are constructed by using ENCLS leaf functions ECREATE, EADD and
> EINIT. ECREATE initializes SECS, EADD copies pages from system memory to
> the EPC and EINIT checks a given signed measurement and moves the enclave
> into a state ready for execution.
>
> An initialized enclave can only be accessed through special Thread Control
> Structure (TCS) pages by using ENCLU (ring-3 only) leaf EENTER. This leaf
> function converts a thread into enclave mode and continues the execution in
> the offset defined by the TCS provided to EENTER. An enclave is exited
> through syscall, exception, interrupts or by explicitly calling another
> ENCLU leaf EEXIT.
>
> The mmap() permissions are capped by the contained enclave page
> permissions. The mapped areas must also be opaque, i.e. each page address
> must contain a page. This logic is implemented in sgx_encl_may_map().
>
> Cc: linux-security-module@vger.kernel.org
> Cc: linux-mm@kvack.org
> Cc: Andrew Morton
> Cc: Matthew Wilcox
> Acked-by: Jethro Beekman
> Tested-by: Jethro Beekman
> Tested-by: Haitao Huang
> Tested-by: Chunyang Hui
> Tested-by: Jordan Hand
> Tested-by: Nathaniel McCallum
> Tested-by: Seth Moore

Tested-by: Darren Kenny
Reviewed-by: Darren Kenny

> Co-developed-by: Sean Christopherson
> Signed-off-by: Sean Christopherson
> Co-developed-by: Suresh Siddha
> Signed-off-by: Suresh Siddha
> Signed-off-by: Jarkko Sakkinen
> ---
>  arch/x86/kernel/cpu/sgx/Makefile |   2 +
>  arch/x86/kernel/cpu/sgx/driver.c | 177 ++++++++++++++++
>  arch/x86/kernel/cpu/sgx/driver.h |  29 +++
>  arch/x86/kernel/cpu/sgx/encl.c   | 333 +++++++++++++++++++++++++++++++
>  arch/x86/kernel/cpu/sgx/encl.h   |  87 ++++++++
>  arch/x86/kernel/cpu/sgx/main.c   |  11 +
>  6 files changed, 639 insertions(+)
>  create mode 100644 arch/x86/kernel/cpu/sgx/driver.c
>  create mode 100644 arch/x86/kernel/cpu/sgx/driver.h
>  create mode 100644 arch/x86/kernel/cpu/sgx/encl.c
>  create mode 100644 arch/x86/kernel/cpu/sgx/encl.h
>
> diff --git a/arch/x86/kernel/cpu/sgx/Makefile b/arch/x86/kernel/cpu/sgx/Makefile
> index 79510ce01b3b..3fc451120735 100644
> --- a/arch/x86/kernel/cpu/sgx/Makefile
> +++ b/arch/x86/kernel/cpu/sgx/Makefile
> @@ -1,2 +1,4 @@
>  obj-y += \
> +        driver.o \
> +        encl.o \
>          main.o
> diff --git a/arch/x86/kernel/cpu/sgx/driver.c b/arch/x86/kernel/cpu/sgx/driver.c
> new file mode 100644
> index 000000000000..b52520407f5b
> --- /dev/null
> +++ b/arch/x86/kernel/cpu/sgx/driver.c
> @@ -0,0 +1,177 @@
> +// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
> +// Copyright(c) 2016-18 Intel Corporation.
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include "driver.h"
> +#include "encl.h"
> +
> +MODULE_DESCRIPTION("Intel SGX Enclave Driver");
> +MODULE_AUTHOR("Jarkko Sakkinen ");
> +MODULE_LICENSE("Dual BSD/GPL");
> +
> +u64 sgx_encl_size_max_32;
> +u64 sgx_encl_size_max_64;
> +u32 sgx_misc_reserved_mask;
> +u64 sgx_attributes_reserved_mask;
> +u64 sgx_xfrm_reserved_mask = ~0x3;
> +u32 sgx_xsave_size_tbl[64];
> +
> +static int sgx_open(struct inode *inode, struct file *file)
> +{
> +        struct sgx_encl *encl;
> +        int ret;
> +
> +        encl = kzalloc(sizeof(*encl), GFP_KERNEL);
> +        if (!encl)
> +                return -ENOMEM;
> +
> +        atomic_set(&encl->flags, 0);
> +        kref_init(&encl->refcount);
> +        xa_init(&encl->page_array);
> +        mutex_init(&encl->lock);
> +        INIT_LIST_HEAD(&encl->mm_list);
> +        spin_lock_init(&encl->mm_lock);
> +
> +        ret = init_srcu_struct(&encl->srcu);
> +        if (ret) {
> +                kfree(encl);
> +                return ret;
> +        }
> +
> +        file->private_data = encl;
> +
> +        return 0;
> +}
> +
> +static int sgx_release(struct inode *inode, struct file *file)
> +{
> +        struct sgx_encl *encl = file->private_data;
> +        struct sgx_encl_mm *encl_mm;
> +
> +        for ( ; ; ) {
> +                spin_lock(&encl->mm_lock);
> +
> +                if (list_empty(&encl->mm_list)) {
> +                        encl_mm = NULL;
> +                } else {
> +                        encl_mm = list_first_entry(&encl->mm_list,
> +                                                   struct sgx_encl_mm, list);
> +                        list_del_rcu(&encl_mm->list);
> +                }
> +
> +                spin_unlock(&encl->mm_lock);
> +
> +                /* The list is empty, ready to go. */
> +                if (!encl_mm)
> +                        break;
> +
> +                synchronize_srcu(&encl->srcu);
> +                mmu_notifier_unregister(&encl_mm->mmu_notifier, encl_mm->mm);
> +                kfree(encl_mm);
> +        }
> +
> +        mutex_lock(&encl->lock);
> +        atomic_or(SGX_ENCL_DEAD, &encl->flags);
> +        mutex_unlock(&encl->lock);
> +
> +        kref_put(&encl->refcount, sgx_encl_release);
> +        return 0;
> +}
> +
> +static int sgx_mmap(struct file *file, struct vm_area_struct *vma)
> +{
> +        struct sgx_encl *encl = file->private_data;
> +        int ret;
> +
> +        ret = sgx_encl_may_map(encl, vma->vm_start, vma->vm_end, vma->vm_flags);
> +        if (ret)
> +                return ret;
> +
> +        ret = sgx_encl_mm_add(encl, vma->vm_mm);
> +        if (ret)
> +                return ret;
> +
> +        vma->vm_ops = &sgx_vm_ops;
> +        vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO;
> +        vma->vm_private_data = encl;
> +
> +        return 0;
> +}
> +
> +static unsigned long sgx_get_unmapped_area(struct file *file,
> +                                           unsigned long addr,
> +                                           unsigned long len,
> +                                           unsigned long pgoff,
> +                                           unsigned long flags)
> +{
> +        if (flags & MAP_PRIVATE)
> +                return -EINVAL;
> +
> +        if (flags & MAP_FIXED)
> +                return addr;
> +
> +        return current->mm->get_unmapped_area(file, addr, len, pgoff, flags);
> +}
> +
> +static const struct file_operations sgx_encl_fops = {
> +        .owner = THIS_MODULE,
> +        .open = sgx_open,
> +        .release = sgx_release,
> +        .mmap = sgx_mmap,
> +        .get_unmapped_area = sgx_get_unmapped_area,
> +};
> +
> +static struct miscdevice sgx_dev_enclave = {
> +        .minor = MISC_DYNAMIC_MINOR,
> +        .name = "enclave",
> +        .nodename = "sgx/enclave",
> +        .fops = &sgx_encl_fops,
> +};
> +
> +int __init sgx_drv_init(void)
> +{
> +        unsigned int eax, ebx, ecx, edx;
> +        u64 attr_mask, xfrm_mask;
> +        int ret;
> +        int i;
> +
> +        if (!boot_cpu_has(X86_FEATURE_SGX_LC)) {
> +                pr_info("The public key MSRs are not writable.\n");
> +                return -ENODEV;
> +        }
> +
> +        cpuid_count(SGX_CPUID, 0, &eax, &ebx, &ecx, &edx);
> +        sgx_misc_reserved_mask = ~ebx | SGX_MISC_RESERVED_MASK;
> +        sgx_encl_size_max_64 = 1ULL << ((edx >> 8) & 0xFF);
> +        sgx_encl_size_max_32 = 1ULL << (edx & 0xFF);
> +
> +        cpuid_count(SGX_CPUID, 1, &eax, &ebx, &ecx, &edx);
> +
> +        attr_mask = (((u64)ebx) << 32) + (u64)eax;
> +        sgx_attributes_reserved_mask = ~attr_mask | SGX_ATTR_RESERVED_MASK;
> +
> +        if (boot_cpu_has(X86_FEATURE_OSXSAVE)) {
> +                xfrm_mask = (((u64)edx) << 32) + (u64)ecx;
> +
> +                for (i = 2; i < 64; i++) {
> +                        cpuid_count(0x0D, i, &eax, &ebx, &ecx, &edx);
> +                        if ((1 << i) & xfrm_mask)
> +                                sgx_xsave_size_tbl[i] = eax + ebx;
> +                }
> +
> +                sgx_xfrm_reserved_mask = ~xfrm_mask;
> +        }
> +
> +        ret = misc_register(&sgx_dev_enclave);
> +        if (ret) {
> +                pr_err("Creating /dev/sgx/enclave failed with %d.\n", ret);
> +                return ret;
> +        }
> +
> +        return 0;
> +}
> diff --git a/arch/x86/kernel/cpu/sgx/driver.h b/arch/x86/kernel/cpu/sgx/driver.h
> new file mode 100644
> index 000000000000..f7ce40dedc91
> --- /dev/null
> +++ b/arch/x86/kernel/cpu/sgx/driver.h
> @@ -0,0 +1,29 @@
> +/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */
> +#ifndef __ARCH_SGX_DRIVER_H__
> +#define __ARCH_SGX_DRIVER_H__
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include "sgx.h"
> +
> +#define SGX_EINIT_SPIN_COUNT 20
> +#define SGX_EINIT_SLEEP_COUNT 50
> +#define SGX_EINIT_SLEEP_TIME 20
> +
> +extern u64 sgx_encl_size_max_32;
> +extern u64 sgx_encl_size_max_64;
> +extern u32 sgx_misc_reserved_mask;
> +extern u64 sgx_attributes_reserved_mask;
> +extern u64 sgx_xfrm_reserved_mask;
> +extern u32 sgx_xsave_size_tbl[64];
> +
> +long sgx_ioctl(struct file *filep, unsigned int cmd, unsigned long arg);
> +
> +int sgx_drv_init(void);
> +
> +#endif /* __ARCH_X86_SGX_DRIVER_H__ */
> diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
> new file mode 100644
> index 000000000000..af5df6bc58f3
> --- /dev/null
> +++ b/arch/x86/kernel/cpu/sgx/encl.c
> @@ -0,0 +1,333 @@
> +// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
> +// Copyright(c) 2016-18 Intel Corporation.
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include "arch.h"
> +#include "encl.h"
> +#include "encls.h"
> +#include "sgx.h"
> +
> +static struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
> +                                                unsigned long addr)
> +{
> +        struct sgx_encl_page *entry;
> +        unsigned int flags;
> +
> +        /* If process was forked, VMA is still there but vm_private_data is set
> +         * to NULL.
> +         */
> +        if (!encl)
> +                return ERR_PTR(-EFAULT);
> +
> +        flags = atomic_read(&encl->flags);
> +
> +        if ((flags & SGX_ENCL_DEAD) || !(flags & SGX_ENCL_INITIALIZED))
> +                return ERR_PTR(-EFAULT);
> +
> +        entry = xa_load(&encl->page_array, PFN_DOWN(addr));
> +        if (!entry)
> +                return ERR_PTR(-EFAULT);
> +
> +        /* Page is already resident in the EPC. */
> +        if (entry->epc_page)
> +                return entry;
> +
> +        return ERR_PTR(-EFAULT);
> +}
> +
> +static void sgx_mmu_notifier_release(struct mmu_notifier *mn,
> +                                     struct mm_struct *mm)
> +{
> +        struct sgx_encl_mm *encl_mm =
> +                container_of(mn, struct sgx_encl_mm, mmu_notifier);
> +        struct sgx_encl_mm *tmp = NULL;
> +
> +        /*
> +         * The enclave itself can remove encl_mm. Note, objects can't be moved
> +         * off an RCU protected list, but deletion is ok.
> +         */
> +        spin_lock(&encl_mm->encl->mm_lock);
> +        list_for_each_entry(tmp, &encl_mm->encl->mm_list, list) {
> +                if (tmp == encl_mm) {
> +                        list_del_rcu(&encl_mm->list);
> +                        break;
> +                }
> +        }
> +        spin_unlock(&encl_mm->encl->mm_lock);
> +
> +        if (tmp == encl_mm) {
> +                synchronize_srcu(&encl_mm->encl->srcu);
> +                mmu_notifier_put(mn);
> +        }
> +}
> +
> +static void sgx_mmu_notifier_free(struct mmu_notifier *mn)
> +{
> +        struct sgx_encl_mm *encl_mm =
> +                container_of(mn, struct sgx_encl_mm, mmu_notifier);
> +
> +        kfree(encl_mm);
> +}
> +
> +static const struct mmu_notifier_ops sgx_mmu_notifier_ops = {
> +        .release = sgx_mmu_notifier_release,
> +        .free_notifier = sgx_mmu_notifier_free,
> +};
> +
> +static struct sgx_encl_mm *sgx_encl_find_mm(struct sgx_encl *encl,
> +                                            struct mm_struct *mm)
> +{
> +        struct sgx_encl_mm *encl_mm = NULL;
> +        struct sgx_encl_mm *tmp;
> +        int idx;
> +
> +        idx = srcu_read_lock(&encl->srcu);
> +
> +        list_for_each_entry_rcu(tmp, &encl->mm_list, list) {
> +                if (tmp->mm == mm) {
> +                        encl_mm = tmp;
> +                        break;
> +                }
> +        }
> +
> +        srcu_read_unlock(&encl->srcu, idx);
> +
> +        return encl_mm;
> +}
> +
> +int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm)
> +{
> +        struct sgx_encl_mm *encl_mm;
> +        int ret;
> +
> +        /* mm_list can be accessed only by a single thread at a time. */
> +        mmap_assert_write_locked(mm);
> +
> +        if (atomic_read(&encl->flags) & SGX_ENCL_DEAD)
> +                return -EINVAL;
> +
> +        /*
> +         * mm_structs are kept on mm_list until the mm or the enclave dies,
> +         * i.e. once an mm is off the list, it's gone for good, therefore it's
> +         * impossible to get a false positive on @mm due to a stale mm_list.
> +         */
> +        if (sgx_encl_find_mm(encl, mm))
> +                return 0;
> +
> +        encl_mm = kzalloc(sizeof(*encl_mm), GFP_KERNEL);
> +        if (!encl_mm)
> +                return -ENOMEM;
> +
> +        encl_mm->encl = encl;
> +        encl_mm->mm = mm;
> +        encl_mm->mmu_notifier.ops = &sgx_mmu_notifier_ops;
> +
> +        ret = __mmu_notifier_register(&encl_mm->mmu_notifier, mm);
> +        if (ret) {
> +                kfree(encl_mm);
> +                return ret;
> +        }
> +
> +        spin_lock(&encl->mm_lock);
> +        list_add_rcu(&encl_mm->list, &encl->mm_list);
> +        spin_unlock(&encl->mm_lock);
> +
> +        return 0;
> +}
> +
> +static void sgx_vma_open(struct vm_area_struct *vma)
> +{
> +        struct sgx_encl *encl = vma->vm_private_data;
> +
> +        if (!encl)
> +                return;
> +
> +        if (sgx_encl_mm_add(encl, vma->vm_mm))
> +                vma->vm_private_data = NULL;
> +}
> +
> +static unsigned int sgx_vma_fault(struct vm_fault *vmf)
> +{
> +        unsigned long addr = (unsigned long)vmf->address;
> +        struct vm_area_struct *vma = vmf->vma;
> +        struct sgx_encl *encl = vma->vm_private_data;
> +        struct sgx_encl_page *entry;
> +        int ret = VM_FAULT_NOPAGE;
> +        unsigned long pfn;
> +
> +        if (!encl)
> +                return VM_FAULT_SIGBUS;
> +
> +        mutex_lock(&encl->lock);
> +
> +        entry = sgx_encl_load_page(encl, addr);
> +        if (IS_ERR(entry)) {
> +                if (unlikely(PTR_ERR(entry) != -EBUSY))
> +                        ret = VM_FAULT_SIGBUS;
> +
> +                goto out;
> +        }
> +
> +        if (!follow_pfn(vma, addr, &pfn))
> +                goto out;
> +
> +        ret = vmf_insert_pfn(vma, addr, PFN_DOWN(entry->epc_page->desc));
> +        if (ret != VM_FAULT_NOPAGE) {
> +                ret = VM_FAULT_SIGBUS;
> +                goto out;
> +        }
> +
> +out:
> +        mutex_unlock(&encl->lock);
> +        return ret;
> +}
> +
> +/**
> + * sgx_encl_may_map() - Check if a requested VMA mapping is allowed
> + * @encl: an enclave
> + * @start: lower bound of the address range, inclusive
> + * @end: upper bound of the address range, exclusive
> + * @vm_prot_bits: requested protections of the address range
> + *
> + * Iterate through the enclave pages contained within [@start, @end) to verify
> + * the permissions requested by @vm_prot_bits do not exceed that of any enclave
> + * page to be mapped.
> + *
> + * Return:
> + * 0 on success,
> + * -EACCES if VMA permissions exceed enclave page permissions
> + */
> +int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
> +                     unsigned long end, unsigned long vm_flags)
> +{
> +        unsigned long vm_prot_bits = vm_flags & (VM_READ | VM_WRITE | VM_EXEC);
> +        unsigned long idx_start = PFN_DOWN(start);
> +        unsigned long idx_end = PFN_DOWN(end - 1);
> +        struct sgx_encl_page *page;
> +        XA_STATE(xas, &encl->page_array, idx_start);
> +
> +        /*
> +         * Disallow RIE tasks as their VMA permissions might conflict with the
> +         * enclave page permissions.
> +         */
> +        if (!!(current->personality & READ_IMPLIES_EXEC))
> +                return -EACCES;
> +
> +        xas_for_each(&xas, page, idx_end)
> +                if (!page || (~page->vm_max_prot_bits & vm_prot_bits))
> +                        return -EACCES;
> +
> +        return 0;
> +}
> +
> +static int sgx_vma_mprotect(struct vm_area_struct *vma,
> +                            struct vm_area_struct **pprev, unsigned long start,
> +                            unsigned long end, unsigned long newflags)
> +{
> +        int ret;
> +
> +        ret = sgx_encl_may_map(vma->vm_private_data, start, end, newflags);
> +        if (ret)
> +                return ret;
> +
> +        return mprotect_fixup(vma, pprev, start, end, newflags);
> +}
> +
> +const struct vm_operations_struct sgx_vm_ops = {
> +        .open = sgx_vma_open,
> +        .fault = sgx_vma_fault,
> +        .mprotect = sgx_vma_mprotect,
> +};
> +
> +/**
> + * sgx_encl_find - find an enclave
> + * @mm: mm struct of the current process
> + * @addr: address in the ELRANGE
> + * @vma: the resulting VMA
> + *
> + * Find an enclave identified by the given address. Give back a VMA that is
> + * part of the enclave and located in that address. The VMA is given back if it
> + * is a proper enclave VMA even if an &sgx_encl instance does not exist yet
> + * (enclave creation has not been performed).
> + *
> + * Return:
> + * 0 on success,
> + * -EINVAL if an enclave was not found,
> + * -ENOENT if the enclave has not been created yet
> + */
> +int sgx_encl_find(struct mm_struct *mm, unsigned long addr,
> +                  struct vm_area_struct **vma)
> +{
> +        struct vm_area_struct *result;
> +        struct sgx_encl *encl;
> +
> +        result = find_vma(mm, addr);
> +        if (!result || result->vm_ops != &sgx_vm_ops || addr < result->vm_start)
> +                return -EINVAL;
> +
> +        encl = result->vm_private_data;
> +        *vma = result;
> +
> +        return encl ? 0 : -ENOENT;
> +}
> +
> +/**
> + * sgx_encl_destroy() - destroy enclave resources
> + * @encl: an &sgx_encl instance
> + */
> +void sgx_encl_destroy(struct sgx_encl *encl)
> +{
> +        struct sgx_encl_page *entry;
> +        unsigned long index;
> +
> +        atomic_or(SGX_ENCL_DEAD, &encl->flags);
> +
> +        xa_for_each(&encl->page_array, index, entry) {
> +                if (entry->epc_page) {
> +                        sgx_free_epc_page(entry->epc_page);
> +                        encl->secs_child_cnt--;
> +                        entry->epc_page = NULL;
> +                }
> +
> +                kfree(entry);
> +        }
> +
> +        xa_destroy(&encl->page_array);
> +
> +        if (!encl->secs_child_cnt && encl->secs.epc_page) {
> +                sgx_free_epc_page(encl->secs.epc_page);
> +                encl->secs.epc_page = NULL;
> +        }
> +}
> +
> +/**
> + * sgx_encl_release - Destroy an enclave instance
> + * @kref: address of a kref inside &sgx_encl
> + *
> + * Used together with kref_put(). Frees all the resources associated with the
> + * enclave and the instance itself.
> + */
> +void sgx_encl_release(struct kref *ref)
> +{
> +        struct sgx_encl *encl = container_of(ref, struct sgx_encl, refcount);
> +
> +        sgx_encl_destroy(encl);
> +
> +        if (encl->backing)
> +                fput(encl->backing);
> +
> +        cleanup_srcu_struct(&encl->srcu);
> +
> +        WARN_ON_ONCE(!list_empty(&encl->mm_list));
> +
> +        /* Detect EPC page leak's. */
> +        WARN_ON_ONCE(encl->secs_child_cnt);
> +        WARN_ON_ONCE(encl->secs.epc_page);
> +
> +        kfree(encl);
> +}
> diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
> new file mode 100644
> index 000000000000..74ad6c4da783
> --- /dev/null
> +++ b/arch/x86/kernel/cpu/sgx/encl.h
> @@ -0,0 +1,87 @@
> +/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */
> +/**
> + * Copyright(c) 2016-19 Intel Corporation.
> + */
> +#ifndef _X86_ENCL_H
> +#define _X86_ENCL_H
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include "sgx.h"
> +
> +/**
> + * enum sgx_encl_page_desc - defines bits for an enclave page's descriptor
> + * %SGX_ENCL_PAGE_ADDR_MASK: Holds the virtual address of the page.
> + *
> + * The page address for SECS is zero and is used by the subsystem to recognize
> + * the SECS page.
> + */
> +enum sgx_encl_page_desc {
> +        /* Bits 11:3 are available when the page is not swapped. */
> +        SGX_ENCL_PAGE_ADDR_MASK = PAGE_MASK,
> +};
> +
> +#define SGX_ENCL_PAGE_ADDR(page) \
> +        ((page)->desc & SGX_ENCL_PAGE_ADDR_MASK)
> +
> +struct sgx_encl_page {
> +        unsigned long desc;
> +        unsigned long vm_max_prot_bits;
> +        struct sgx_epc_page *epc_page;
> +        struct sgx_encl *encl;
> +};
> +
> +enum sgx_encl_flags {
> +        SGX_ENCL_CREATED = BIT(0),
> +        SGX_ENCL_INITIALIZED = BIT(1),
> +        SGX_ENCL_DEBUG = BIT(2),
> +        SGX_ENCL_DEAD = BIT(3),
> +        SGX_ENCL_IOCTL = BIT(4),
> +};
> +
> +struct sgx_encl_mm {
> +        struct sgx_encl *encl;
> +        struct mm_struct *mm;
> +        struct list_head list;
> +        struct mmu_notifier mmu_notifier;
> +};
> +
> +struct sgx_encl {
> +        atomic_t flags;
> +        u64 secs_attributes;
> +        u64 allowed_attributes;
> +        unsigned int page_cnt;
> +        unsigned int secs_child_cnt;
> +        struct mutex lock;
> +        struct list_head mm_list;
> +        spinlock_t mm_lock;
> +        struct file *backing;
> +        struct kref refcount;
> +        struct srcu_struct srcu;
> +        unsigned long base;
> +        unsigned long size;
> +        unsigned long ssaframesize;
> +        struct xarray page_array;
> +        struct sgx_encl_page secs;
> +        cpumask_t cpumask;
> +};
> +
> +extern const struct vm_operations_struct sgx_vm_ops;
> +
> +int sgx_encl_find(struct mm_struct *mm, unsigned long addr,
> +                  struct vm_area_struct **vma);
> +void sgx_encl_destroy(struct sgx_encl *encl);
> +void sgx_encl_release(struct kref *ref);
> +int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm);
> +int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
> +                     unsigned long end, unsigned long vm_flags);
> +
> +#endif /* _X86_ENCL_H */
> diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> index 97c6895fb6c9..4137254fb29e 100644
> --- a/arch/x86/kernel/cpu/sgx/main.c
> +++ b/arch/x86/kernel/cpu/sgx/main.c
> @@ -9,6 +9,8 @@
>  #include
>  #include
>  #include
> +#include "driver.h"
> +#include "encl.h"
>  #include "encls.h"
>
>  struct sgx_epc_section sgx_epc_sections[SGX_MAX_EPC_SECTIONS];
> @@ -260,6 +262,8 @@ static bool __init sgx_page_cache_init(void)
>
>  static void __init sgx_init(void)
>  {
> +        int ret;
> +
>          if (!boot_cpu_has(X86_FEATURE_SGX))
>                  return;
>
> @@ -269,8 +273,15 @@ static void __init sgx_init(void)
>          if (!sgx_page_reclaimer_init())
>                  goto err_page_cache;
>
> +        ret = sgx_drv_init();
> +        if (ret)
> +                goto err_kthread;
> +
>          return;
>
> +err_kthread:
> +        kthread_stop(ksgxswapd_tsk);
> +
>  err_page_cache:
>          sgx_page_cache_teardown();
>  }
> --
> 2.25.1