From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Alexander Viro, Andrew Morton, Andy Lutomirski, Arnd Bergmann,
    Borislav Petkov, Catalin Marinas, Christopher Lameter, Dan Williams,
    Dave Hansen, Elena Reshetova, "H. Peter Anvin", Idan Yaniv,
    Ingo Molnar, James Bottomley, "Kirill A. Shutemov", Matthew Wilcox,
    Mike Rapoport, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
    Thomas Gleixner, Tycho Andersen, Will Deacon,
    linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-nvdimm@lists.01.org,
    linux-riscv@lists.infradead.org, x86@kernel.org
Subject: [PATCH 6/6] mm: secretmem: add ability to reserve memory at boot
Date: Mon, 20 Jul 2020 12:24:35 +0300
Message-Id: <20200720092435.17469-7-rppt@kernel.org>
In-Reply-To: <20200720092435.17469-1-rppt@kernel.org>
References: <20200720092435.17469-1-rppt@kernel.org>

From: Mike Rapoport

Taking pages out of the direct map and bringing them back may fragment
the direct mapping of physical memory and force the use of smaller pages
there. This can be avoided if a sufficiently large area of physical
memory is reserved for secretmem at boot time.

Add the ability to reserve physical memory for secretmem at boot time
using the "secretmem" kernel parameter, and use that reserved memory as
a global pool for secret memory needs.
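For example (an illustrative invocation, not a value mandated by this
patch), booting with

	secretmem=1G

reserves 1G for the global pool. The size is parsed with memparse(), so
the usual K/M/G/T suffixes work; when the requested size is larger than
half of PUD_SIZE, the reservation is aligned to PUD_SIZE rather than
PMD_SIZE so that the reserved area can be mapped with the largest page
size available.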
Signed-off-by: Mike Rapoport
---
 mm/secretmem.c | 134 ++++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 126 insertions(+), 8 deletions(-)

diff --git a/mm/secretmem.c b/mm/secretmem.c
index dce56f84968f..322f425dbb22 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -8,6 +8,7 @@
 #include <linux/mm.h>
 #include <linux/fs.h>
 #include <linux/mount.h>
+#include <linux/memblock.h>
 #include <linux/memfd.h>
 #include <linux/printk.h>
 #include <linux/pagemap.h>
@@ -30,6 +31,39 @@ struct secretmem_ctx {
 	unsigned int mode;
 };
 
+struct secretmem_pool {
+	struct gen_pool *pool;
+	unsigned long reserved_size;
+	void *reserved;
+};
+
+static struct secretmem_pool secretmem_pool;
+
+static struct page *secretmem_alloc_huge_page(gfp_t gfp)
+{
+	struct gen_pool *pool = secretmem_pool.pool;
+	unsigned long addr = 0;
+	struct page *page = NULL;
+
+	if (pool) {
+		if (gen_pool_avail(pool) < PMD_SIZE)
+			return NULL;
+
+		addr = gen_pool_alloc(pool, PMD_SIZE);
+		if (!addr)
+			return NULL;
+
+		page = virt_to_page(addr);
+	} else {
+		page = alloc_pages(gfp, PMD_PAGE_ORDER);
+
+		if (page)
+			split_page(page, PMD_PAGE_ORDER);
+	}
+
+	return page;
+}
+
 static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 {
 	unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
@@ -38,12 +72,11 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 	struct page *page;
 	int err;
 
-	page = alloc_pages(gfp, PMD_PAGE_ORDER);
+	page = secretmem_alloc_huge_page(gfp);
 	if (!page)
 		return -ENOMEM;
 
 	addr = (unsigned long)page_address(page);
-	split_page(page, PMD_PAGE_ORDER);
 
 	err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
 	if (err) {
@@ -266,11 +299,13 @@ SYSCALL_DEFINE1(secretmemfd, unsigned long, flags)
 	return err;
 }
 
-static void secretmem_cleanup_chunk(struct gen_pool *pool,
-				    struct gen_pool_chunk *chunk, void *data)
+static void secretmem_recycle_range(unsigned long start, unsigned long end)
+{
+	gen_pool_free(secretmem_pool.pool, start, PMD_SIZE);
+}
+
+static void secretmem_release_range(unsigned long start, unsigned long end)
 {
-	unsigned long start = chunk->start_addr;
-	unsigned long end = chunk->end_addr;
 	unsigned long nr_pages, addr;
 
 	nr_pages = (end - start + 1) / PAGE_SIZE;
@@ -280,6 +315,18 @@ static void secretmem_cleanup_chunk(struct gen_pool *pool,
 		put_page(virt_to_page(addr));
 }
 
+static void secretmem_cleanup_chunk(struct gen_pool *pool,
+				    struct gen_pool_chunk *chunk, void *data)
+{
+	unsigned long start = chunk->start_addr;
+	unsigned long end = chunk->end_addr;
+
+	if (secretmem_pool.pool)
+		secretmem_recycle_range(start, end);
+	else
+		secretmem_release_range(start, end);
+}
+
 static void secretmem_cleanup_pool(struct secretmem_ctx *ctx)
 {
 	struct gen_pool *pool = ctx->pool;
@@ -319,14 +366,85 @@ static struct file_system_type secretmem_fs = {
 	.kill_sb	= kill_anon_super,
 };
 
+static int secretmem_reserved_mem_init(void)
+{
+	struct gen_pool *pool;
+	struct page *page;
+	void *addr;
+	int err;
+
+	if (!secretmem_pool.reserved)
+		return 0;
+
+	pool = gen_pool_create(PMD_SHIFT, NUMA_NO_NODE);
+	if (!pool)
+		return -ENOMEM;
+
+	err = gen_pool_add(pool, (unsigned long)secretmem_pool.reserved,
+			   secretmem_pool.reserved_size, NUMA_NO_NODE);
+	if (err)
+		goto err_destroy_pool;
+
+	for (addr = secretmem_pool.reserved;
+	     addr < secretmem_pool.reserved + secretmem_pool.reserved_size;
+	     addr += PAGE_SIZE) {
+		page = virt_to_page(addr);
+		__ClearPageReserved(page);
+		set_page_count(page, 1);
+	}
+
+	secretmem_pool.pool = pool;
+	page = virt_to_page(secretmem_pool.reserved);
+	__kernel_map_pages(page, secretmem_pool.reserved_size / PAGE_SIZE, 0);
+	return 0;
+
+err_destroy_pool:
+	gen_pool_destroy(pool);
+	return err;
+}
+
 static int secretmem_init(void)
 {
-	int ret = 0;
+	int ret;
+
+	ret = secretmem_reserved_mem_init();
+	if (ret)
+		return ret;
 
 	secretmem_mnt = kern_mount(&secretmem_fs);
-	if (IS_ERR(secretmem_mnt))
+	if (IS_ERR(secretmem_mnt)) {
+		gen_pool_destroy(secretmem_pool.pool);
 		ret = PTR_ERR(secretmem_mnt);
+	}
 
 	return ret;
 }
 fs_initcall(secretmem_init);
+
+static int __init secretmem_setup(char *str)
+{
+	phys_addr_t align = PMD_SIZE;
+	unsigned long reserved_size;
+	void *reserved;
+
+	reserved_size = memparse(str, NULL);
+	if (!reserved_size)
+		return 0;
+
+	if (reserved_size * 2 > PUD_SIZE)
+		align = PUD_SIZE;
+
+	reserved = memblock_alloc(reserved_size, align);
+	if (!reserved) {
+		pr_err("failed to reserve %lu bytes\n", reserved_size);
+		return 0;
+	}
+
+	secretmem_pool.reserved_size = reserved_size;
+	secretmem_pool.reserved = reserved;
+
+	pr_info("reserved %luM\n", reserved_size >> 20);
+
+	return 1;
+}
+__setup("secretmem=", secretmem_setup);
--
2.26.2
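
As a usage illustration (a sketch appended for this posting, not part of
the patch itself): a userspace consumer of the boot-time pool might look
roughly like the program below. The syscall number is a made-up
placeholder and the zero flags value is an assumption; both depend on
the earlier patches in this series that introduce and wire up the
secretmemfd() syscall.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/syscall.h>

/* Placeholder number; the real value is assigned when the syscall is
 * wired up for the target architecture earlier in this series. */
#define __NR_secretmemfd	439

int main(void)
{
	int fd = syscall(__NR_secretmemfd, 0UL);	/* flags: assumed 0 */
	if (fd < 0) {
		perror("secretmemfd");
		return 1;
	}

	/* Size the file, then map it; when the kernel was booted with
	 * "secretmem=", the backing pages come from the global pool. */
	if (ftruncate(fd, 4096) < 0) {
		perror("ftruncate");
		return 1;
	}

	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	strcpy(p, "top secret");	/* not visible via the kernel direct map */

	munmap(p, 4096);
	close(fd);
	return 0;
}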