Date: Tue, 29 Sep 2020 16:07:23 +0300
From: Mike Rapoport
To: Peter Zijlstra
Cc: David Hildenbrand, Andrew Morton, Alexander Viro, Andy Lutomirski,
	Arnd Bergmann, Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, "H. Peter Anvin",
	Idan Yaniv, Ingo Molnar, James Bottomley, "Kirill A. Shutemov",
	Matthew Wilcox, Mark Rutland, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Shuah Khan,
	Tycho Andersen, Will Deacon, linux-api@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-riscv@lists.infradead.org,
	x86@kernel.org
Subject: Re: [PATCH v6 5/6] mm: secretmem: use PMD-size pages to amortize direct map fragmentation
Message-ID: <20200929130723.GH2142832@kernel.org>
References: <20200924132904.1391-1-rppt@kernel.org>
	<20200924132904.1391-6-rppt@kernel.org>
	<20200925074125.GQ2628@hirez.programming.kicks-ass.net>
	<8435eff6-7fa9-d923-45e5-d8850e4c6d73@redhat.com>
	<20200925095029.GX2628@hirez.programming.kicks-ass.net>
In-Reply-To: <20200925095029.GX2628@hirez.programming.kicks-ass.net>

On Fri, Sep 25, 2020 at 11:50:29AM +0200, Peter Zijlstra wrote:
> On Fri, Sep 25, 2020 at 11:00:30AM +0200, David Hildenbrand wrote:
> > On 25.09.20 09:41, Peter Zijlstra wrote:
> > > On Thu, Sep 24, 2020 at 04:29:03PM +0300, Mike Rapoport wrote:
> > >> From: Mike Rapoport
> > >>
> > >> Removing a PAGE_SIZE page from the direct map every time such a page is
> > >> allocated for a secret memory mapping will cause severe fragmentation of
> > >> the direct map. This fragmentation can be reduced by using PMD-size pages
> > >> as a pool of small pages for secret memory mappings.
> > >>
> > >> Add a gen_pool per secretmem inode and lazily populate this pool with
> > >> PMD-size pages.
> > >
> > > What's the actual efficacy of this? Since the pmd is per inode, all I
> > > need is a lot of inodes and we're in business to destroy the directmap,
> > > no?
> > > Afaict there's no privs needed to use this; all a process needs is to
> > > stay below the mlock limit, so a 'fork-bomb' that maps a single secret
> > > page will utterly destroy the direct map.
> > >
> > > I really don't like this, at all.
> >
> > As I expressed earlier, I would prefer allowing allocation of secretmem
> > only from a previously defined CMA area. This would physically limit
> > the pain locally.
>
> Given that this thing doesn't have a migrate hook, that seems like an
> eminently reasonable constraint. Because not only will it mess up the
> directmap, it will also destroy the ability of the page-allocator /
> compaction to re-form high-order blocks by sprinkling holes throughout.
>
> Also, this is all very close to XPFO, yet I don't see that mentioned
> anywhere.

It is close to XPFO in the sense that it removes pages from the kernel
page table. But unlike XPFO, memfd_secret() never maps these pages back
into the kernel while they are allocated; they only return to the kernel
page table once the user frees them. And, unlike XPFO, it does not
require TLB flushing all over the place.

> Further still, it has this HAVE_SECRETMEM_UNCACHED nonsense which is
> completely unused. I'm not at all sure exposing UNCACHED to random
> userspace is a sane idea.

The uncached mappings were originally proposed as a means "... to prevent
or considerably restrict speculation on such pages" [1] in a comment on my
initial proposal to use mmap(MAP_EXCLUSIVE). I added the ability to create
uncached mappings to the fd-based implementation of the exclusive mappings
because it can indeed reduce the availability of side channels and the
implementation was quite straightforward.

[1] https://lore.kernel.org/linux-mm/2236FBA76BA1254E88B949DDB74E612BA4EEC0CE@IRSMSX102.ger.corp.intel.com/

-- 
Sincerely yours,
Mike.