From: Kristen Carlson Accardi
To: linux-kernel@vger.kernel.org, Jarkko Sakkinen, Dave Hansen,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org,
	"H. Peter Anvin"
Cc: Kristen Carlson Accardi, linux-sgx@vger.kernel.org
Subject: [PATCH] x86/sgx: Improve comments for sgx_encl_lookup/alloc_backing()
Date: Wed, 20 Jul 2022 11:21:19 -0700
Message-Id: <20220720182120.1160956-1-kristen@linux.intel.com>

Modify the comments for sgx_encl_lookup_backing() and
sgx_encl_alloc_backing() to indicate that they take a reference which
must be dropped with a call to sgx_encl_put_backing().
Make sgx_encl_lookup_backing() static for now, and rename
sgx_encl_get_backing() to __sgx_encl_get_backing() to make it clearer
that it is an internal function.

Signed-off-by: Kristen Carlson Accardi
---
 arch/x86/kernel/cpu/sgx/encl.c | 21 ++++++++++++++-------
 arch/x86/kernel/cpu/sgx/encl.h |  2 --
 2 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 19876ebfb504..325c2d59e6b4 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -12,6 +12,9 @@
 #include "encls.h"
 #include "sgx.h"
 
+static int sgx_encl_lookup_backing(struct sgx_encl *encl, unsigned long page_index,
+				   struct sgx_backing *backing);
+
 #define PCMDS_PER_PAGE (PAGE_SIZE / sizeof(struct sgx_pcmd))
 /*
  * 32 PCMD entries share a PCMD page. PCMD_FIRST_MASK is used to
@@ -706,7 +709,7 @@ static struct page *sgx_encl_get_backing_page(struct sgx_encl *encl,
 }
 
 /**
- * sgx_encl_get_backing() - Pin the backing storage
+ * __sgx_encl_get_backing() - Pin the backing storage
 * @encl:	an enclave pointer
 * @page_index:	enclave page index
 * @backing:	data for accessing backing storage for the page
@@ -718,7 +721,7 @@ static struct page *sgx_encl_get_backing_page(struct sgx_encl *encl,
 *   0 on success,
 *   -errno otherwise.
 */
-static int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
+static int __sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
			 struct sgx_backing *backing)
 {
	pgoff_t page_pcmd_off = sgx_encl_get_backing_page_pcmd_offset(encl, page_index);
@@ -794,7 +797,7 @@ static struct mem_cgroup *sgx_encl_get_mem_cgroup(struct sgx_encl *encl)
 }
 
 /**
- * sgx_encl_alloc_backing() - allocate a new backing storage page
+ * sgx_encl_alloc_backing() - create a new backing storage page
 * @encl:	an enclave pointer
 * @page_index:	enclave page index
 * @backing:	data for accessing backing storage for the page
@@ -802,7 +805,9 @@ static struct mem_cgroup *sgx_encl_get_mem_cgroup(struct sgx_encl *encl)
 * When called from ksgxd, sets the active memcg from one of the
 * mms in the enclave's mm_list prior to any backing page allocation,
 * in order to ensure that shmem page allocations are charged to the
- * enclave.
+ * enclave. Create a backing page for loading data back into an EPC page with
+ * ELDU. This function takes a reference on a new backing page which
+ * must be dropped with a corresponding call to sgx_encl_put_backing().
 *
 * Return:
 *   0 on success,
@@ -815,7 +820,7 @@ int sgx_encl_alloc_backing(struct sgx_encl *encl, unsigned long page_index,
	struct mem_cgroup *memcg = set_active_memcg(encl_memcg);
	int ret;
 
-	ret = sgx_encl_get_backing(encl, page_index, backing);
+	ret = __sgx_encl_get_backing(encl, page_index, backing);
	set_active_memcg(memcg);
	mem_cgroup_put(encl_memcg);
 
@@ -833,15 +838,17 @@ int sgx_encl_alloc_backing(struct sgx_encl *encl, unsigned long page_index,
 * It is the caller's responsibility to ensure that it is appropriate to use
 * sgx_encl_lookup_backing() rather than sgx_encl_alloc_backing(). If lookup is
 * not used correctly, this will cause an allocation which is not accounted for.
+ * This function takes a reference on an existing backing page which must be
+ * dropped with a corresponding call to sgx_encl_put_backing().
 *
 * Return:
 *   0 on success,
 *   -errno otherwise.
 */
-int sgx_encl_lookup_backing(struct sgx_encl *encl, unsigned long page_index,
+static int sgx_encl_lookup_backing(struct sgx_encl *encl, unsigned long page_index,
			    struct sgx_backing *backing)
 {
-	return sgx_encl_get_backing(encl, page_index, backing);
+	return __sgx_encl_get_backing(encl, page_index, backing);
 }
 
 /**
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index 332ef3568267..d731ef53f815 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -106,8 +106,6 @@ int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
 bool current_is_ksgxd(void);
 void sgx_encl_release(struct kref *ref);
 int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm);
-int sgx_encl_lookup_backing(struct sgx_encl *encl, unsigned long page_index,
-			    struct sgx_backing *backing);
 int sgx_encl_alloc_backing(struct sgx_encl *encl, unsigned long page_index,
			   struct sgx_backing *backing);
 void sgx_encl_put_backing(struct sgx_backing *backing);
-- 
2.36.1
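
For reviewers unfamiliar with the backing-store API, below is a minimal
caller-side sketch of the take/drop pattern the updated kernel-doc
comments describe. It is illustrative only and not part of the patch:
example_write_backing() and its body are assumptions made up for the
example; only sgx_encl_alloc_backing(), sgx_encl_put_backing(), and
struct sgx_backing come from the kernel sources.

/*
 * Illustrative sketch (not part of this patch): how a caller is
 * expected to pair sgx_encl_alloc_backing() with sgx_encl_put_backing().
 */
static int example_write_backing(struct sgx_encl *encl,
				 unsigned long page_index)
{
	struct sgx_backing backing;
	int ret;

	/* Takes a reference on the backing page. */
	ret = sgx_encl_alloc_backing(encl, page_index, &backing);
	if (ret)
		return ret;

	/* ... write the page data via backing.contents (the pinned page) ... */

	/* Drop the reference taken by sgx_encl_alloc_backing(). */
	sgx_encl_put_backing(&backing);

	return 0;
}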