From: Reinette Chatre
To: dave.hansen@linux.intel.com, jarkko@kernel.org, tglx@linutronix.de,
	bp@alien8.de, luto@kernel.org, mingo@redhat.com,
	linux-sgx@vger.kernel.org, x86@kernel.org
Cc: seanjc@google.com, kai.huang@intel.com, cathy.zhang@intel.com,
	cedric.xing@intel.com, haitao.huang@intel.com, mark.shanahan@intel.com,
	hpa@zytor.com, linux-kernel@vger.kernel.org
Subject: [PATCH V3 05/30] x86/sgx: Support loading enclave page without VMA permissions check
Date: Mon, 4 Apr 2022 09:49:13 -0700
Message-Id: 
X-Mailer: git-send-email 2.25.1
In-Reply-To: 
References: 
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
sgx_encl_load_page() is used to find and load an enclave page into enclave
(EPC) memory, potentially loading it from the backing storage. Both existing
usages of sgx_encl_load_page() occur during an access to the enclave page from
a VMA, and thus the permissions of the VMA are considered before the enclave
page is loaded.

SGX2 functions operating on enclave pages belonging to an initialized enclave
require the page to be in the EPC. It is thus required to support loading
enclave pages into the EPC independent of a VMA.

Split the current sgx_encl_load_page() to support the two usages: a new call,
sgx_encl_load_page_in_vma(), behaves exactly like the current
sgx_encl_load_page() that takes VMA permissions into account, while
sgx_encl_load_page() just loads an enclave page into the EPC.

VMA, PTE, and EPCM permissions continue to dictate whether the pages can be
accessed from within an enclave.

Signed-off-by: Reinette Chatre
---
Changes since V2:
- New patch

 arch/x86/kernel/cpu/sgx/encl.c | 57 ++++++++++++++++++++++------------
 arch/x86/kernel/cpu/sgx/encl.h |  2 ++
 2 files changed, 40 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 7c63a1911fae..05ae1168391c 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -131,25 +131,10 @@ static struct sgx_epc_page *sgx_encl_eldu(struct sgx_encl_page *encl_page,
 	return epc_page;
 }
 
-static struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
-						unsigned long addr,
-						unsigned long vm_flags)
+static struct sgx_encl_page *__sgx_encl_load_page(struct sgx_encl *encl,
+						  struct sgx_encl_page *entry)
 {
-	unsigned long vm_prot_bits = vm_flags & (VM_READ | VM_WRITE | VM_EXEC);
 	struct sgx_epc_page *epc_page;
-	struct sgx_encl_page *entry;
-
-	entry = xa_load(&encl->page_array, PFN_DOWN(addr));
-	if (!entry)
-		return ERR_PTR(-EFAULT);
-
-	/*
-	 * Verify that the faulted page has equal or higher build time
-	 * permissions than the VMA permissions (i.e. the subset of {VM_READ,
-	 * VM_WRITE, VM_EXECUTE} in vma->vm_flags).
-	 */
-	if ((entry->vm_max_prot_bits & vm_prot_bits) != vm_prot_bits)
-		return ERR_PTR(-EFAULT);
 
 	/* Entry successfully located. */
 	if (entry->epc_page) {
@@ -175,6 +160,40 @@ static struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
 	return entry;
 }
 
+static struct sgx_encl_page *sgx_encl_load_page_in_vma(struct sgx_encl *encl,
+						       unsigned long addr,
+						       unsigned long vm_flags)
+{
+	unsigned long vm_prot_bits = vm_flags & (VM_READ | VM_WRITE | VM_EXEC);
+	struct sgx_encl_page *entry;
+
+	entry = xa_load(&encl->page_array, PFN_DOWN(addr));
+	if (!entry)
+		return ERR_PTR(-EFAULT);
+
+	/*
+	 * Verify that the page has equal or higher build time
+	 * permissions than the VMA permissions (i.e. the subset of {VM_READ,
+	 * VM_WRITE, VM_EXECUTE} in vma->vm_flags).
+	 */
+	if ((entry->vm_max_prot_bits & vm_prot_bits) != vm_prot_bits)
+		return ERR_PTR(-EFAULT);
+
+	return __sgx_encl_load_page(encl, entry);
+}
+
+struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
+					 unsigned long addr)
+{
+	struct sgx_encl_page *entry;
+
+	entry = xa_load(&encl->page_array, PFN_DOWN(addr));
+	if (!entry)
+		return ERR_PTR(-EFAULT);
+
+	return __sgx_encl_load_page(encl, entry);
+}
+
 static vm_fault_t sgx_vma_fault(struct vm_fault *vmf)
 {
 	unsigned long addr = (unsigned long)vmf->address;
@@ -196,7 +215,7 @@ static vm_fault_t sgx_vma_fault(struct vm_fault *vmf)
 
 	mutex_lock(&encl->lock);
 
-	entry = sgx_encl_load_page(encl, addr, vma->vm_flags);
+	entry = sgx_encl_load_page_in_vma(encl, addr, vma->vm_flags);
 	if (IS_ERR(entry)) {
 		mutex_unlock(&encl->lock);
 
@@ -344,7 +363,7 @@ static struct sgx_encl_page *sgx_encl_reserve_page(struct sgx_encl *encl,
 	for ( ; ; ) {
 		mutex_lock(&encl->lock);
 
-		entry = sgx_encl_load_page(encl, addr, vm_flags);
+		entry = sgx_encl_load_page_in_vma(encl, addr, vm_flags);
 		if (PTR_ERR(entry) != -EBUSY)
 			break;
 
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index fec43ca65065..6b34efba1602 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -116,5 +116,7 @@ unsigned int sgx_alloc_va_slot(struct sgx_va_page *va_page);
 void sgx_free_va_slot(struct sgx_va_page *va_page, unsigned int offset);
 bool sgx_va_page_full(struct sgx_va_page *va_page);
 void sgx_encl_free_epc_page(struct sgx_epc_page *page);
+struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
+					 unsigned long addr);
 
 #endif /* _X86_ENCL_H */
-- 
2.25.1
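
[Editor's illustrative note, not part of the patch: a minimal sketch of how a
later SGX2 flow that has no VMA at hand might call the new
sgx_encl_load_page(). The function sgx2_modify_page() and its surrounding flow
are hypothetical; only sgx_encl_load_page(), encl->lock, and the -EBUSY retry
convention visible in sgx_encl_reserve_page() above are taken from the patch.]

/* Hypothetical SGX2-style caller: load a page into the EPC without a VMA. */
static int sgx2_modify_page(struct sgx_encl *encl, unsigned long addr)
{
	struct sgx_encl_page *entry;
	int ret = 0;

	mutex_lock(&encl->lock);

	/* No VMA permission check here; EPCM and PTE permissions still apply. */
	entry = sgx_encl_load_page(encl, addr);
	if (IS_ERR(entry)) {
		/*
		 * As in sgx_encl_reserve_page(), -EBUSY indicates the page is
		 * being reclaimed and the caller may retry.
		 */
		ret = PTR_ERR(entry);
		goto out_unlock;
	}

	/* ... operate on entry->epc_page here ... */

out_unlock:
	mutex_unlock(&encl->lock);
	return ret;
}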