Date: Mon, 2 Nov 2020 19:43:08 +0200
From: Mike Rapoport
To: David Hildenbrand
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter, Dan Williams,
	Dave Hansen, Elena Reshetova, "H. Peter Anvin", Idan Yaniv,
	Ingo Molnar, James Bottomley, "Kirill A. Shutemov", Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Shuah Khan,
	Tycho Andersen, Will Deacon, linux-api@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-riscv@lists.infradead.org,
	x86@kernel.org
Subject: Re: [PATCH v6 0/6] mm: introduce memfd_secret system call to create
	"secret" memory areas
Message-ID: <20201102174308.GF4879@kernel.org>
References: <20200924132904.1391-1-rppt@kernel.org>
	<9c38ac3b-c677-6a87-ce82-ec53b69eaf71@redhat.com>
In-Reply-To: <9c38ac3b-c677-6a87-ce82-ec53b69eaf71@redhat.com>

On Mon, Nov 02, 2020 at 10:11:12AM +0100, David Hildenbrand wrote:
> On 24.09.20 15:28, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > Hi,
> >
> > This is an implementation of "secret" mappings backed by a file
> > descriptor. I've dropped the boot time reservation patch for now, as
> > it is not strictly required for the basic usage and can be easily
> > added later, either with or without CMA.
> 
> Hi Mike,
> 
> I'd like to stress again that I'd prefer *any* secretmem allocations
> going via CMA as long as these pages are unmovable. The user can create
> a non-insignificant amount of unmovable allocations fenced only by the
> mlock limit, and these behave very differently from mlocked pages -
> they are not movable for page compaction/migration.
> 
> Assume you have a system with quite some ZONE_MOVABLE memory (esp. in
> virtualized environments); eating up a significant amount of
> !ZONE_MOVABLE memory dynamically at runtime can lead to non-obvious
> issues. It looks like you have plenty of free memory, but the kernel
> might still OOM when trying to do kernel allocations, e.g. for page
> tables. With CMA we at least know what we're dealing with - it behaves
> like ZONE_MOVABLE except for the owner, who can place unmovable pages
> there. We can use it to compute statically the amount of ZONE_MOVABLE
> memory we can have in the system without doing harm to the system.

Why would you say that secretmem allocates from !ZONE_MOVABLE? If we put
boot time reservations aside, the memory allocation for secretmem follows
the same rules as the memory allocations for any file descriptor. That
means we allocate memory with GFP_HIGHUSER_MOVABLE. After the allocation
the memory indeed becomes unmovable, but it's not like we are eating
memory from other zones here. Maybe I'm missing something, but it seems
to me that using CMA for any secretmem allocation would needlessly
complicate things.

> Ideally, we would want to support page migration/compaction and allow
> for allocation from ZONE_MOVABLE as well. It would involve temporarily
> mapping, copying, and unmapping. Sounds feasible, but I'm not sure
> which roadblocks we would find on the way.

We can support migration/compaction with a temporary mapping. The first
roadblock I've hit there was that migration allocates a 4K destination
page, and if we use it in a secret map we are back to scrambling the
direct map into 4K pieces. It still sounds feasible, but not as
trivial :)

But again, there is nothing in the current form of secretmem that
prevents allocation from ZONE_MOVABLE.

> [...]
> 
> > I've hesitated whether to continue to use new flags to memfd_create()
> > or to add a new system call, and I've decided to use a new system
> > call after I started to look into the man pages update. There would
> > have been two completely independent descriptions and I think it
> > would have been very confusing.
> 
> This was also raised on lwn.net by "dullfire" [1].
> I do wonder if it would be the right place as well.

I lean towards a dedicated syscall because, as I said, to me it would
seem less confusing.

> [1] https://lwn.net/Articles/835342/#Comments
> 
> > Hiding secret memory mappings behind an anonymous file allows (ab)use
> > of the page cache for tracking pages allocated for the "secret"
> > mappings, as well as using address_space_operations for e.g. page
> > migration callbacks.
> >
> > The anonymous file may also be used implicitly, like hugetlb files,
> > to implement mmap(MAP_SECRET) and use the secret memory areas with
> > "native" mm ABIs in the future.
> >
> > As the fragmentation of the direct map was one of the major concerns
> > raised during the previous postings, I've added an amortizing cache
> > of PMD-size pages to each file descriptor that is used as an
> > allocation pool for the secret memory areas.

-- 
Sincerely yours,
Mike.