From: Reinette Chatre
To: dave.hansen@linux.intel.com, jarkko@kernel.org, tglx@linutronix.de,
    bp@alien8.de, luto@kernel.org, mingo@redhat.com,
    linux-sgx@vger.kernel.org, x86@kernel.org
Cc: seanjc@google.com, kai.huang@intel.com, cathy.zhang@intel.com,
    cedric.xing@intel.com, haitao.huang@intel.com, mark.shanahan@intel.com,
    hpa@zytor.com, linux-kernel@vger.kernel.org
Subject: [PATCH V2 11/32] x86/sgx: Move PTE zap code to new sgx_zap_enclave_ptes()
Date: Mon, 7 Feb 2022 16:45:33 -0800
Message-Id: <43414b1c9e21d1201ca09300eb75bf9c000457e1.1644274683.git.reinette.chatre@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The SGX reclaimer removes page table entries pointing to pages that are
moved to swap.

SGX2 enables changes to pages belonging to an initialized enclave, so an
enclave page may have its permissions or type changed while the enclave is
accessing it. Supporting SGX2 requires the page table entries of a changed
page to be removed so that no stale cached mappings to it remain. For
example, with the ability to change enclave page types, a regular enclave
page may be changed to a Thread Control Structure (TCS) page, which may not
be accessed by the enclave.

Factor out the code that removes page table entries into a new function,
sgx_zap_enclave_ptes(), fixing the accuracy of its comments in the process,
and make it available to the upcoming SGX2 code.

Place sgx_zap_enclave_ptes() in encl.c with the rest of the enclave code
that interacts with the page table, since this code is no longer unique to
the reclaimer.

Signed-off-by: Reinette Chatre
---
Changes since V1:
- Elaborate why SGX2 needs this ability (Jarkko).
- More specific subject.
- Fix kernel-doc to have brackets in function name.

 arch/x86/kernel/cpu/sgx/encl.c | 45 +++++++++++++++++++++++++++++++++-
 arch/x86/kernel/cpu/sgx/encl.h |  2 +-
 arch/x86/kernel/cpu/sgx/main.c | 31 ++---------------------
 3 files changed, 47 insertions(+), 31 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 6f5d01121766..8da813504249 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -589,7 +589,7 @@ int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm)
 
 	spin_lock(&encl->mm_lock);
 	list_add_rcu(&encl_mm->list, &encl->mm_list);
-	/* Pairs with smp_rmb() in sgx_reclaimer_block(). */
+	/* Pairs with smp_rmb() in sgx_zap_enclave_ptes(). */
 	smp_wmb();
 	encl->mm_list_version++;
 	spin_unlock(&encl->mm_lock);
@@ -778,6 +778,49 @@ int sgx_encl_test_and_clear_young(struct mm_struct *mm,
 	return ret;
 }
 
+/**
+ * sgx_zap_enclave_ptes() - remove PTEs mapping the address from enclave
+ * @encl: the enclave
+ * @addr: page aligned pointer to single page for which PTEs will be removed
+ *
+ * Multiple VMAs may have an enclave page mapped. Remove the PTE mapping
+ * @addr from each VMA. Ensure that page fault handler is ready to handle
+ * new mappings of @addr before calling this function.
+ */
+void sgx_zap_enclave_ptes(struct sgx_encl *encl, unsigned long addr)
+{
+	unsigned long mm_list_version;
+	struct sgx_encl_mm *encl_mm;
+	struct vm_area_struct *vma;
+	int idx, ret;
+
+	do {
+		mm_list_version = encl->mm_list_version;
+
+		/* Pairs with smp_wmb() in sgx_encl_mm_add(). */
+		smp_rmb();
+
+		idx = srcu_read_lock(&encl->srcu);
+
+		list_for_each_entry_rcu(encl_mm, &encl->mm_list, list) {
+			if (!mmget_not_zero(encl_mm->mm))
+				continue;
+
+			mmap_read_lock(encl_mm->mm);
+
+			ret = sgx_encl_find(encl_mm->mm, addr, &vma);
+			if (!ret && encl == vma->vm_private_data)
+				zap_vma_ptes(vma, addr, PAGE_SIZE);
+
+			mmap_read_unlock(encl_mm->mm);
+
+			mmput_async(encl_mm->mm);
+		}
+
+		srcu_read_unlock(&encl->srcu, idx);
+	} while (unlikely(encl->mm_list_version != mm_list_version));
+}
+
 /**
  * sgx_alloc_va_page() - Allocate a Version Array (VA) page
  *
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index becb68503baa..82e21088e68b 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -112,7 +112,7 @@ int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
 void sgx_encl_put_backing(struct sgx_backing *backing, bool do_write);
 int sgx_encl_test_and_clear_young(struct mm_struct *mm,
 				  struct sgx_encl_page *page);
-
+void sgx_zap_enclave_ptes(struct sgx_encl *encl, unsigned long addr);
 struct sgx_epc_page *sgx_alloc_va_page(void);
 unsigned int sgx_alloc_va_slot(struct sgx_va_page *va_page);
 void sgx_free_va_slot(struct sgx_va_page *va_page, unsigned int offset);
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index fa33922879bf..ce9e87d5f8ec 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -137,36 +137,9 @@ static void sgx_reclaimer_block(struct sgx_epc_page *epc_page)
 	struct sgx_encl_page *page = epc_page->owner;
 	unsigned long addr = page->desc & PAGE_MASK;
 	struct sgx_encl *encl = page->encl;
-	unsigned long mm_list_version;
-	struct sgx_encl_mm *encl_mm;
-	struct vm_area_struct *vma;
-	int idx, ret;
-
-	do {
-		mm_list_version = encl->mm_list_version;
-
-		/* Pairs with smp_rmb() in sgx_encl_mm_add(). */
-		smp_rmb();
-
-		idx = srcu_read_lock(&encl->srcu);
-
-		list_for_each_entry_rcu(encl_mm, &encl->mm_list, list) {
-			if (!mmget_not_zero(encl_mm->mm))
-				continue;
-
-			mmap_read_lock(encl_mm->mm);
-
-			ret = sgx_encl_find(encl_mm->mm, addr, &vma);
-			if (!ret && encl == vma->vm_private_data)
-				zap_vma_ptes(vma, addr, PAGE_SIZE);
-
-			mmap_read_unlock(encl_mm->mm);
-
-			mmput_async(encl_mm->mm);
-		}
+	int ret;
 
-		srcu_read_unlock(&encl->srcu, idx);
-	} while (unlikely(encl->mm_list_version != mm_list_version));
+	sgx_zap_enclave_ptes(encl, addr);
 
 	mutex_lock(&encl->lock);
 
-- 
2.25.1
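
For context, here is a minimal sketch of how a later SGX2 page-modification
path might call the new helper. It is not part of this patch: the function
sgx2_modify_page() is an invented name, and the ENCLS and TLB-shootdown steps
are only summarized as comments; the sketch shows just the calling convention
(page-aligned address, fault handler already prepared for new mappings).

/*
 * Hypothetical sketch, not part of this patch. Assumes "encl.h" from this
 * series; sgx2_modify_page() exists only to illustrate how a caller would
 * use sgx_zap_enclave_ptes().
 */
#include <linux/mm.h>
#include "encl.h"

static void sgx2_modify_page(struct sgx_encl *encl, struct sgx_encl_page *entry)
{
	/* sgx_zap_enclave_ptes() expects the page-aligned address. */
	unsigned long addr = entry->desc & PAGE_MASK;

	/*
	 * Drop every cached mapping of the page before its permissions or
	 * type change. The page fault handler must already be able to
	 * service new faults on addr, since mappings may reappear at once.
	 */
	sgx_zap_enclave_ptes(encl, addr);

	/* The actual EMODPR/EMODT and TLB shootdown would follow here. */
}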