From: Jarkko Sakkinen
To: x86@kernel.org, linux-sgx@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Jarkko Sakkinen, Jethro Beekman,
    Darren Kenny, Sean Christopherson, akpm@linux-foundation.org,
    andriy.shevchenko@linux.intel.com, asapek@google.com, bp@alien8.de,
    cedric.xing@intel.com, chenalexchen@google.com, conradparker@google.com,
    cyhanish@google.com, dave.hansen@intel.com, haitao.huang@intel.com,
    kai.huang@intel.com, kai.svahn@intel.com, kmoy@google.com,
    ludloff@google.com, luto@kernel.org, nhorman@redhat.com,
    npmccallum@redhat.com, puiterwijk@redhat.com, rientjes@google.com,
    tglx@linutronix.de, yaozhangx@google.com, mikko.ylinen@intel.com
Subject: [PATCH v40 09/24] x86/sgx: Add SGX page allocator functions
Date: Wed, 4 Nov 2020 16:54:15 +0200
Message-Id: <20201104145430.300542-10-jarkko.sakkinen@linux.intel.com>
In-Reply-To: <20201104145430.300542-1-jarkko.sakkinen@linux.intel.com>
References: <20201104145430.300542-1-jarkko.sakkinen@linux.intel.com>

The previous patch initialized a simple SGX page allocator. Add functions
for runtime allocation and freeing.

This allocator and its algorithms are as simple as it gets. They do a
linear search across all EPC sections and find the first free page. They
are not NUMA-aware and only hand out individual pages. The SGX hardware
does not support large pages, so something more complicated like a buddy
allocator is unwarranted.

The free function (sgx_free_epc_page()) implicitly calls ENCLS[EREMOVE],
which returns the page to the uninitialized state. This ensures that the
page is ready for use at the next allocation.

Acked-by: Jethro Beekman
Reviewed-by: Darren Kenny
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
Signed-off-by: Jarkko Sakkinen
---
 arch/x86/kernel/cpu/sgx/main.c | 62 ++++++++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/sgx/sgx.h  |  3 ++
 2 files changed, 65 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 956055a0eff6..b9ac438a13a4 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -85,6 +85,68 @@ static bool __init sgx_page_reclaimer_init(void)
 	return true;
 }
 
+static struct sgx_epc_page *__sgx_alloc_epc_page_from_section(struct sgx_epc_section *section)
+{
+	struct sgx_epc_page *page;
+
+	if (list_empty(&section->page_list))
+		return NULL;
+
+	page = list_first_entry(&section->page_list, struct sgx_epc_page, list);
+	list_del_init(&page->list);
+
+	return page;
+}
+
+/**
+ * __sgx_alloc_epc_page() - Allocate an EPC page
+ *
+ * Iterate through EPC sections and borrow a free EPC page to the caller. When a
+ * page is no longer needed it must be released with sgx_free_epc_page().
+ *
+ * Return:
+ *   an EPC page,
+ *   -errno on error
+ */
+struct sgx_epc_page *__sgx_alloc_epc_page(void)
+{
+	struct sgx_epc_section *section;
+	struct sgx_epc_page *page;
+	int i;
+
+	for (i = 0; i < sgx_nr_epc_sections; i++) {
+		section = &sgx_epc_sections[i];
+
+		spin_lock(&section->lock);
+		page = __sgx_alloc_epc_page_from_section(section);
+		spin_unlock(&section->lock);
+
+		if (page)
+			return page;
+	}
+
+	return ERR_PTR(-ENOMEM);
+}
+
+/**
+ * sgx_free_epc_page() - Free an EPC page
+ * @page:	an EPC page
+ *
+ * Call EREMOVE for an EPC page and insert it back to the list of free pages.
+ */
+void sgx_free_epc_page(struct sgx_epc_page *page)
+{
+	struct sgx_epc_section *section = &sgx_epc_sections[page->section];
+	int ret;
+
+	ret = __eremove(sgx_get_epc_virt_addr(page));
+	if (WARN_ONCE(ret, "EREMOVE returned %d (0x%x)", ret, ret))
+		return;
+
+	spin_lock(&section->lock);
+	list_add_tail(&page->list, &section->page_list);
+	spin_unlock(&section->lock);
+}
+
 static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
 					 unsigned long index,
 					 struct sgx_epc_section *section)
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 02afa84dd8fd..bd9dcb1ffcfa 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -57,4 +57,7 @@ static inline void *sgx_get_epc_virt_addr(struct sgx_epc_page *page)
 	return section->virt_addr + index * PAGE_SIZE;
 }
 
+struct sgx_epc_page *__sgx_alloc_epc_page(void);
+void sgx_free_epc_page(struct sgx_epc_page *page);
+
 #endif /* _X86_SGX_H */
-- 
2.27.0