From: "Longpeng(Mike)"
Subject: [PATCH v3 1/4] nitro_enclaves: Merge contiguous physical memory regions
Date: Sat, 9 Oct 2021 09:32:45 +0800
Message-ID: <20211009013248.1174-2-longpeng2@huawei.com>
In-Reply-To: <20211009013248.1174-1-longpeng2@huawei.com>
References: <20211009013248.1174-1-longpeng2@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Longpeng

There can be cases when more memory regions need to be set for an enclave
than the maximum supported number of memory regions per enclave. One example
is when the memory regions are backed by 2 MiB hugepages (the minimum
supported hugepage size).

Let's merge the adjacent regions if they are physically contiguous. This
way, the final number of memory regions is smaller than before merging,
which can help avoid reaching the per-enclave maximum.

Signed-off-by: Longpeng
---
Changes v2 -> v3:
 - update the commit title and commit message. [Andra]
 - use 'struct range' instead of 'struct phys_mem_region'. [Andra, Greg KH]
 - add comments before the function definition. [Andra]
 - rename several variables, parameters and functions. [Andra]
---
 drivers/virt/nitro_enclaves/ne_misc_dev.c | 79 +++++++++++++++++++++----------
 1 file changed, 55 insertions(+), 24 deletions(-)

diff --git a/drivers/virt/nitro_enclaves/ne_misc_dev.c b/drivers/virt/nitro_enclaves/ne_misc_dev.c
index e21e1e8..eea53e9 100644
--- a/drivers/virt/nitro_enclaves/ne_misc_dev.c
+++ b/drivers/virt/nitro_enclaves/ne_misc_dev.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 #include

 #include "ne_misc_dev.h"

@@ -126,6 +127,16 @@ struct ne_cpu_pool {
 static struct ne_cpu_pool ne_cpu_pool;

 /**
+ * struct phys_contig_mem_regions - Physically contiguous memory regions.
+ * @num:    The number of regions currently in the @region array.
+ * @region: The array of physically contiguous memory regions.
+ */
+struct phys_contig_mem_regions {
+	unsigned long num;
+	struct range region[0];
+};
+
+/**
  * ne_check_enclaves_created() - Verify if at least one enclave has been created.
  * @void: No parameters provided.
 *
@@ -825,6 +836,33 @@ static int ne_sanity_check_user_mem_region_page(struct ne_enclave *ne_enclave,
 }

 /**
+ * ne_merge_phys_contig_memory_regions() - Add a memory region and merge the
+ *					   adjacent regions if they are
+ *					   physically contiguous.
+ * @regions   : Private data associated with the physically contiguous memory regions.
+ * @page_paddr: Physical start address of the region to be added.
+ * @page_size : Length of the region to be added.
+ *
+ * Return:
+ * * No return value.
+ */
+static void
+ne_merge_phys_contig_memory_regions(struct phys_contig_mem_regions *regions,
+				    u64 page_paddr, u64 page_size)
+{
+	/* Physically contiguous with the previous region: just extend it. */
+	if (regions->num &&
+	    (regions->region[regions->num - 1].end + 1) == page_paddr) {
+		regions->region[regions->num - 1].end += page_size;
+
+		return;
+	}
+
+	/* Otherwise start a new region; struct range bounds are inclusive. */
+	regions->region[regions->num].start = page_paddr;
+	regions->region[regions->num].end = page_paddr + page_size - 1;
+	regions->num++;
+}
+
+/**
  * ne_set_user_memory_region_ioctl() - Add user space memory region to the slot
  *				       associated with the current enclave.
  * @ne_enclave : Private data associated with the current enclave.
@@ -843,9 +881,9 @@ static int ne_set_user_memory_region_ioctl(struct ne_enclave *ne_enclave,
 	unsigned long max_nr_pages = 0;
 	unsigned long memory_size = 0;
 	struct ne_mem_region *ne_mem_region = NULL;
-	unsigned long nr_phys_contig_mem_regions = 0;
 	struct pci_dev *pdev = ne_devs.ne_pci_dev->pdev;
-	struct page **phys_contig_mem_regions = NULL;
+	struct phys_contig_mem_regions *phys_contig_mem_regions = NULL;
+	size_t size_to_alloc = 0;
 	int rc = -EINVAL;

 	rc = ne_sanity_check_user_mem_region(ne_enclave, mem_region);
@@ -866,8 +904,9 @@ static int ne_set_user_memory_region_ioctl(struct ne_enclave *ne_enclave,
 		goto free_mem_region;
 	}

-	phys_contig_mem_regions = kcalloc(max_nr_pages, sizeof(*phys_contig_mem_regions),
-					  GFP_KERNEL);
+	size_to_alloc = sizeof(*phys_contig_mem_regions) +
+			max_nr_pages * sizeof(struct range);
+	phys_contig_mem_regions = kzalloc(size_to_alloc, GFP_KERNEL);
 	if (!phys_contig_mem_regions) {
 		rc = -ENOMEM;
@@ -901,26 +940,16 @@ static int ne_set_user_memory_region_ioctl(struct ne_enclave *ne_enclave,
 		if (rc < 0)
 			goto put_pages;

-		/*
-		 * TODO: Update once handled non-contiguous memory regions
-		 * received from user space or contiguous physical memory regions
-		 * larger than 2 MiB e.g. 8 MiB.
-		 */
-		phys_contig_mem_regions[i] = ne_mem_region->pages[i];
+		ne_merge_phys_contig_memory_regions(phys_contig_mem_regions,
+						    page_to_phys(ne_mem_region->pages[i]),
+						    page_size(ne_mem_region->pages[i]));

 		memory_size += page_size(ne_mem_region->pages[i]);

 		ne_mem_region->nr_pages++;
 	} while (memory_size < mem_region.memory_size);

-	/*
-	 * TODO: Update once handled non-contiguous memory regions received
-	 * from user space or contiguous physical memory regions larger than
-	 * 2 MiB e.g. 8 MiB.
-	 */
-	nr_phys_contig_mem_regions = ne_mem_region->nr_pages;
-
-	if ((ne_enclave->nr_mem_regions + nr_phys_contig_mem_regions) >
+	if ((ne_enclave->nr_mem_regions + phys_contig_mem_regions->num) >
 	    ne_enclave->max_mem_regions) {
 		dev_err_ratelimited(ne_misc_dev.this_device,
 				    "Reached max memory regions %lld\n",
@@ -931,9 +960,10 @@ static int ne_set_user_memory_region_ioctl(struct ne_enclave *ne_enclave,
 		goto put_pages;
 	}

-	for (i = 0; i < nr_phys_contig_mem_regions; i++) {
-		u64 phys_region_addr = page_to_phys(phys_contig_mem_regions[i]);
-		u64 phys_region_size = page_size(phys_contig_mem_regions[i]);
+	for (i = 0; i < phys_contig_mem_regions->num; i++) {
+		struct range *range = phys_contig_mem_regions->region + i;
+		u64 phys_region_addr = range->start;
+		u64 phys_region_size = range_len(range);

 		if (phys_region_size & (NE_MIN_MEM_REGION_SIZE - 1)) {
 			dev_err_ratelimited(ne_misc_dev.this_device,
@@ -959,13 +989,14 @@ static int ne_set_user_memory_region_ioctl(struct ne_enclave *ne_enclave,
 	list_add(&ne_mem_region->mem_region_list_entry,
 		 &ne_enclave->mem_regions_list);

-	for (i = 0; i < nr_phys_contig_mem_regions; i++) {
+	for (i = 0; i < phys_contig_mem_regions->num; i++) {
 		struct ne_pci_dev_cmd_reply cmd_reply = {};
 		struct slot_add_mem_req slot_add_mem_req = {};
+		struct range *range = phys_contig_mem_regions->region + i;

 		slot_add_mem_req.slot_uid = ne_enclave->slot_uid;
-		slot_add_mem_req.paddr = page_to_phys(phys_contig_mem_regions[i]);
-		slot_add_mem_req.size = page_size(phys_contig_mem_regions[i]);
+		slot_add_mem_req.paddr = range->start;
+		slot_add_mem_req.size = range_len(range);

 		rc = ne_do_request(pdev, SLOT_ADD_MEM, &slot_add_mem_req,
 				   sizeof(slot_add_mem_req),
--
1.8.3.1