Date: Mon, 13 Jul 2020 18:32:34 +0300
From: Mike Rapoport
To: "Kirill A.
 Shutemov"
Cc: linux-kernel@vger.kernel.org, Alan Cox, Andrew Morton, Andy Lutomirski,
    Christopher Lameter, Dave Hansen, Idan Yaniv, James Bottomley,
    Matthew Wilcox, Peter Zijlstra, "Reshetova, Elena", Thomas Gleixner,
    Tycho Andersen, linux-api@vger.kernel.org, linux-mm@kvack.org,
    Mike Rapoport
Subject: Re: [RFC PATCH v2 4/5] mm: secretmem: use PMD-size pages to amortize direct map fragmentation
Message-ID: <20200713153234.GC707159@kernel.org>
References: <20200706172051.19465-1-rppt@kernel.org>
 <20200706172051.19465-5-rppt@kernel.org>
 <20200713110505.mesvinqjbj7imsdz@box>
In-Reply-To: <20200713110505.mesvinqjbj7imsdz@box>

On Mon, Jul 13, 2020 at 02:05:05PM +0300, Kirill A. Shutemov wrote:
> On Mon, Jul 06, 2020 at 08:20:50PM +0300, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > Removing a PAGE_SIZE page from the direct map every time such page is
> > allocated for a secret memory mapping will cause severe fragmentation of
> > the direct map. This fragmentation can be reduced by using PMD-size pages
> > as a pool for small pages for secret memory mappings.
> >
> > Add a gen_pool per secretmem inode and lazily populate this pool with
> > PMD-size pages.
> >
> > Signed-off-by: Mike Rapoport
> > ---
> >  mm/secretmem.c | 107 ++++++++++++++++++++++++++++++++++++++++---------
> >  1 file changed, 88 insertions(+), 19 deletions(-)
> >
> > diff --git a/mm/secretmem.c b/mm/secretmem.c
> > index df8f8c958cc2..c6fcf6d76951 100644
> > --- a/mm/secretmem.c
> > +++ b/mm/secretmem.c
> > @@ -5,6 +5,7 @@
> >  #include
> >  #include
> >  #include
> > +#include <linux/genalloc.h>
> >  #include
> >  #include
> >  #include
> > @@ -23,24 +24,66 @@
> >  #define SECRETMEM_UNCACHED	0x2
> >
> >  struct secretmem_ctx {
> > +	struct gen_pool *pool;
> >  	unsigned int mode;
> >  };
> >
> > -static struct page *secretmem_alloc_page(gfp_t gfp)
> > +static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
> >  {
> > -	/*
> > -	 * FIXME: use a cache of large pages to reduce the direct map
> > -	 * fragmentation
> > -	 */
> > -	return alloc_page(gfp);
> > +	unsigned long nr_pages = (1 << HPAGE_PMD_ORDER);
> > +	struct gen_pool *pool = ctx->pool;
> > +	unsigned long addr;
> > +	struct page *page;
> > +	int err;
> > +
> > +	page = alloc_pages(gfp, HPAGE_PMD_ORDER);
> > +	if (!page)
> > +		return -ENOMEM;
> > +
> > +	addr = (unsigned long)page_address(page);
> > +	split_page(page, HPAGE_PMD_ORDER);
> > +
> > +	err = gen_pool_add(pool, addr, HPAGE_PMD_SIZE, NUMA_NO_NODE);
> > +	if (err) {
> > +		__free_pages(page, HPAGE_PMD_ORDER);
> > +		return err;
> > +	}
> > +
> > +	__kernel_map_pages(page, nr_pages, 0);
>
> It's worth noting that unlike flush_tlb_kernel_range(),
> __kernel_map_pages() only flushes the local TLB, so other CPUs may
> still have access to the page. It shouldn't be a blocker, but it
> deserves a comment.

Sure.

> > +
> > +	return 0;
> > +}
> > +

-- 
Sincerely yours,
Mike.