From: "Longpeng(Mike)" <longpeng2@huawei.com>
To: , ,
Cc: , , "Longpeng(Mike)"
Subject: [PATCH] nitro_enclaves: merge contiguous physical memory regions
Date: Tue, 14 Sep 2021 11:14:50 +0800
Message-ID: <20210914031450.919-1-longpeng2@huawei.com>
X-Mailer: git-send-email 2.25.0.windows.1
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7BIT
X-Mailing-List: linux-kernel@vger.kernel.org

There may be too many physical memory regions when the memory region is
backed by 2 MiB hugetlb pages, since each page currently becomes its own
region even when neighbouring pages are physically contiguous. Let's merge
adjacent regions if they are physically contiguous.

Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
---
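Note (not part of the commit): for review convenience, below is a minimal
standalone sketch of the merging loop added by this patch. It models hugetlb
pages as bare physical addresses, so page_to_phys()/page_size() become plain
values, and the sample addresses are made up for illustration:

#include <stdio.h>
#include <stdint.h>

/* Mirrors the struct added by this patch. */
struct phys_contig_mem_region {
	uint64_t paddr;
	uint64_t size;
};

#define HUGE_PAGE_SIZE	(2ULL << 20)	/* 2 MiB hugetlb page */

int main(void)
{
	/*
	 * Physical addresses of four 2 MiB hugetlb pages; the first three
	 * are contiguous, the fourth is not (illustrative values only).
	 */
	uint64_t page_paddr[] = { 0x40000000ULL, 0x40200000ULL,
				  0x40400000ULL, 0x80000000ULL };
	unsigned long nr_pages = sizeof(page_paddr) / sizeof(page_paddr[0]);
	struct phys_contig_mem_region phys_regions[4];
	unsigned long nr_phys_regions = 0;
	uint64_t prev_phys_region_end = 0;
	unsigned long i;

	for (i = 0; i < nr_pages; i++) {
		if (nr_phys_regions && prev_phys_region_end == page_paddr[i]) {
			/* Page starts where the last region ends: extend it. */
			phys_regions[nr_phys_regions - 1].size += HUGE_PAGE_SIZE;
		} else {
			/* Physical gap (or first page): open a new region. */
			phys_regions[nr_phys_regions].paddr = page_paddr[i];
			phys_regions[nr_phys_regions].size = HUGE_PAGE_SIZE;
			nr_phys_regions++;
		}

		prev_phys_region_end = phys_regions[nr_phys_regions - 1].paddr +
				       phys_regions[nr_phys_regions - 1].size;
	}

	/* Expect 2 regions: 6 MiB at 0x40000000 and 2 MiB at 0x80000000. */
	for (i = 0; i < nr_phys_regions; i++)
		printf("region %lu: paddr=0x%llx size=%llu MiB\n", i,
		       (unsigned long long)phys_regions[i].paddr,
		       (unsigned long long)(phys_regions[i].size >> 20));

	return 0;
}

With 2 MiB hugetlb pages this turns one SLOT_ADD_MEM request per page into
one request per physically contiguous run, e.g. a single 6 MiB request for
the first three pages above.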
 drivers/virt/nitro_enclaves/ne_misc_dev.c | 64 +++++++++++++++++--------------
 1 file changed, 35 insertions(+), 29 deletions(-)

diff --git a/drivers/virt/nitro_enclaves/ne_misc_dev.c b/drivers/virt/nitro_enclaves/ne_misc_dev.c
index e21e1e8..2920f26 100644
--- a/drivers/virt/nitro_enclaves/ne_misc_dev.c
+++ b/drivers/virt/nitro_enclaves/ne_misc_dev.c
@@ -824,6 +824,11 @@ static int ne_sanity_check_user_mem_region_page(struct ne_enclave *ne_enclave,
 	return 0;
 }
 
+struct phys_contig_mem_region {
+	u64 paddr;
+	u64 size;
+};
+
 /**
  * ne_set_user_memory_region_ioctl() - Add user space memory region to the slot
  *				       associated with the current enclave.
@@ -843,9 +848,10 @@ static int ne_set_user_memory_region_ioctl(struct ne_enclave *ne_enclave,
 	unsigned long max_nr_pages = 0;
 	unsigned long memory_size = 0;
 	struct ne_mem_region *ne_mem_region = NULL;
-	unsigned long nr_phys_contig_mem_regions = 0;
 	struct pci_dev *pdev = ne_devs.ne_pci_dev->pdev;
-	struct page **phys_contig_mem_regions = NULL;
+	struct phys_contig_mem_region *phys_regions = NULL;
+	unsigned long nr_phys_regions = 0;
+	u64 prev_phys_region_end;
 	int rc = -EINVAL;
 
 	rc = ne_sanity_check_user_mem_region(ne_enclave, mem_region);
@@ -866,9 +872,8 @@ static int ne_set_user_memory_region_ioctl(struct ne_enclave *ne_enclave,
 		goto free_mem_region;
 	}
 
-	phys_contig_mem_regions = kcalloc(max_nr_pages, sizeof(*phys_contig_mem_regions),
-					  GFP_KERNEL);
-	if (!phys_contig_mem_regions) {
+	phys_regions = kcalloc(max_nr_pages, sizeof(*phys_regions), GFP_KERNEL);
+	if (!phys_regions) {
 		rc = -ENOMEM;
 
 		goto free_mem_region;
@@ -903,25 +908,29 @@ static int ne_set_user_memory_region_ioctl(struct ne_enclave *ne_enclave,
 
 		/*
 		 * TODO: Update once handled non-contiguous memory regions
-		 * received from user space or contiguous physical memory regions
-		 * larger than 2 MiB e.g. 8 MiB.
+		 * received from user space.
 		 */
-		phys_contig_mem_regions[i] = ne_mem_region->pages[i];
+		if (nr_phys_regions &&
+		    prev_phys_region_end == page_to_phys(ne_mem_region->pages[i]))
+			phys_regions[nr_phys_regions - 1].size +=
+				page_size(ne_mem_region->pages[i]);
+		else {
+			phys_regions[nr_phys_regions].paddr =
+				page_to_phys(ne_mem_region->pages[i]);
+			phys_regions[nr_phys_regions].size =
+				page_size(ne_mem_region->pages[i]);
+			nr_phys_regions++;
+		}
+
+		prev_phys_region_end = phys_regions[nr_phys_regions - 1].paddr +
+				       phys_regions[nr_phys_regions - 1].size;
 
 		memory_size += page_size(ne_mem_region->pages[i]);
 
 		ne_mem_region->nr_pages++;
 	} while (memory_size < mem_region.memory_size);
 
-	/*
-	 * TODO: Update once handled non-contiguous memory regions received
-	 * from user space or contiguous physical memory regions larger than
-	 * 2 MiB e.g. 8 MiB.
-	 */
-	nr_phys_contig_mem_regions = ne_mem_region->nr_pages;
-
-	if ((ne_enclave->nr_mem_regions + nr_phys_contig_mem_regions) >
-	    ne_enclave->max_mem_regions) {
+	if ((ne_enclave->nr_mem_regions + nr_phys_regions) > ne_enclave->max_mem_regions) {
 		dev_err_ratelimited(ne_misc_dev.this_device,
 				    "Reached max memory regions %lld\n",
 				    ne_enclave->max_mem_regions);
@@ -931,11 +940,8 @@ static int ne_set_user_memory_region_ioctl(struct ne_enclave *ne_enclave,
 		goto put_pages;
 	}
 
-	for (i = 0; i < nr_phys_contig_mem_regions; i++) {
-		u64 phys_region_addr = page_to_phys(phys_contig_mem_regions[i]);
-		u64 phys_region_size = page_size(phys_contig_mem_regions[i]);
-
-		if (phys_region_size & (NE_MIN_MEM_REGION_SIZE - 1)) {
+	for (i = 0; i < nr_phys_regions; i++) {
+		if (phys_regions[i].size & (NE_MIN_MEM_REGION_SIZE - 1)) {
 			dev_err_ratelimited(ne_misc_dev.this_device,
 					    "Physical mem region size is not multiple of 2 MiB\n");
 
@@ -944,7 +950,7 @@ static int ne_set_user_memory_region_ioctl(struct ne_enclave *ne_enclave,
 			goto put_pages;
 		}
 
-		if (!IS_ALIGNED(phys_region_addr, NE_MIN_MEM_REGION_SIZE)) {
+		if (!IS_ALIGNED(phys_regions[i].paddr, NE_MIN_MEM_REGION_SIZE)) {
 			dev_err_ratelimited(ne_misc_dev.this_device,
 					    "Physical mem region address is not 2 MiB aligned\n");
 
@@ -959,13 +965,13 @@ static int ne_set_user_memory_region_ioctl(struct ne_enclave *ne_enclave,
 	list_add(&ne_mem_region->mem_region_list_entry, &ne_enclave->mem_regions_list);
 
-	for (i = 0; i < nr_phys_contig_mem_regions; i++) {
+	for (i = 0; i < nr_phys_regions; i++) {
 		struct ne_pci_dev_cmd_reply cmd_reply = {};
 		struct slot_add_mem_req slot_add_mem_req = {};
 
 		slot_add_mem_req.slot_uid = ne_enclave->slot_uid;
-		slot_add_mem_req.paddr = page_to_phys(phys_contig_mem_regions[i]);
-		slot_add_mem_req.size = page_size(phys_contig_mem_regions[i]);
+		slot_add_mem_req.paddr = phys_regions[i].paddr;
+		slot_add_mem_req.size = phys_regions[i].size;
 
 		rc = ne_do_request(pdev, SLOT_ADD_MEM,
 				   &slot_add_mem_req, sizeof(slot_add_mem_req),
@@ -974,7 +980,7 @@ static int ne_set_user_memory_region_ioctl(struct ne_enclave *ne_enclave,
 			dev_err_ratelimited(ne_misc_dev.this_device,
 					    "Error in slot add mem [rc=%d]\n", rc);
 
-			kfree(phys_contig_mem_regions);
+			kfree(phys_regions);
 
 			/*
 			 * Exit here without put pages as memory regions may
@@ -987,7 +993,7 @@ static int ne_set_user_memory_region_ioctl(struct ne_enclave *ne_enclave,
 		ne_enclave->nr_mem_regions++;
 	}
 
-	kfree(phys_contig_mem_regions);
+	kfree(phys_regions);
 
 	return 0;
 
@@ -995,7 +1001,7 @@ static int ne_set_user_memory_region_ioctl(struct ne_enclave *ne_enclave,
 	for (i = 0; i < ne_mem_region->nr_pages; i++)
 		put_page(ne_mem_region->pages[i]);
 free_mem_region:
-	kfree(phys_contig_mem_regions);
+	kfree(phys_regions);
 	kfree(ne_mem_region->pages);
 	kfree(ne_mem_region);
 
-- 
1.8.3.1