Date: Wed, 21 Dec 2022 21:39:05 +0800
From: Chao Peng
To: "Huang, Kai"
Cc: "tglx@linutronix.de", "linux-arch@vger.kernel.org", "kvm@vger.kernel.org",
	"jmattson@google.com", "Lutomirski, Andy", "ak@linux.intel.com",
	"kirill.shutemov@linux.intel.com", "Hocko, Michal", "qemu-devel@nongnu.org",
	"tabba@google.com", "david@redhat.com", "michael.roth@amd.com",
	"corbet@lwn.net", "bfields@fieldses.org", "dhildenb@redhat.com",
	"linux-kernel@vger.kernel.org", "linux-fsdevel@vger.kernel.org",
	"x86@kernel.org", "bp@alien8.de", "linux-api@vger.kernel.org",
	"rppt@kernel.org", "shuah@kernel.org", "vkuznets@redhat.com",
	"vbabka@suse.cz", "mail@maciej.szmigiero.name", "ddutile@redhat.com",
	"qperret@google.com", "arnd@arndb.de", "pbonzini@redhat.com",
	"vannapurve@google.com", "naoya.horiguchi@nec.com",
	"Christopherson,, Sean", "wanpengli@tencent.com",
	"yu.c.zhang@linux.intel.com", "hughd@google.com", "aarcange@redhat.com",
	"mingo@redhat.com", "hpa@zytor.com", "Nakajima, Jun",
	"jlayton@kernel.org", "joro@8bytes.org", "linux-mm@kvack.org",
	"Wang, Wei W", "steven.price@arm.com", "linux-doc@vger.kernel.org",
	"Hansen, Dave",
, "akpm@linux-foundation.org" , "linmiaohe@huawei.com" Subject: Re: [PATCH v10 1/9] mm: Introduce memfd_restricted system call to create restricted user memory Message-ID: <20221221133905.GA1766136@chaop.bj.intel.com> Reply-To: Chao Peng References: <20221202061347.1070246-1-chao.p.peng@linux.intel.com> <20221202061347.1070246-2-chao.p.peng@linux.intel.com> <5c6e2e516f19b0a030eae9bf073d555c57ca1f21.camel@intel.com> <20221219075313.GB1691829@chaop.bj.intel.com> <20221220072228.GA1724933@chaop.bj.intel.com> <126046ce506df070d57e6fe5ab9c92cdaf4cf9b7.camel@intel.com> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: <126046ce506df070d57e6fe5ab9c92cdaf4cf9b7.camel@intel.com> X-Spam-Status: No, score=-4.3 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Tue, Dec 20, 2022 at 08:33:05AM +0000, Huang, Kai wrote: > On Tue, 2022-12-20 at 15:22 +0800, Chao Peng wrote: > > On Mon, Dec 19, 2022 at 08:48:10AM +0000, Huang, Kai wrote: > > > On Mon, 2022-12-19 at 15:53 +0800, Chao Peng wrote: > > > > > > > > > > [...] > > > > > > > > > > > + > > > > > > + /* > > > > > > + * These pages are currently unmovable so don't place them into > > > > > > movable > > > > > > + * pageblocks (e.g. CMA and ZONE_MOVABLE). > > > > > > + */ > > > > > > + mapping = memfd->f_mapping; > > > > > > + mapping_set_unevictable(mapping); > > > > > > + mapping_set_gfp_mask(mapping, > > > > > > + ???? mapping_gfp_mask(mapping) & ~__GFP_MOVABLE); > > > > > > > > > > But, IIUC removing __GFP_MOVABLE flag here only makes page allocation from > > > > > non- > > > > > movable zones, but doesn't necessarily prevent page from being migrated.? My > > > > > first glance is you need to implement either a_ops->migrate_folio() or just > > > > > get_page() after faulting in the page to prevent. > > > > > > > > The current api restrictedmem_get_page() already does this, after the > > > > caller calling it, it holds a reference to the page. The caller then > > > > decides when to call put_page() appropriately. > > > > > > I tried to dig some history. Perhaps I am missing something, but it seems Kirill > > > said in v9 that this code doesn't prevent page migration, and we need to > > > increase page refcount in restrictedmem_get_page(): > > > > > > https://lore.kernel.org/linux-mm/20221129112139.usp6dqhbih47qpjl@box.shutemov.name/ > > > > > > But looking at this series it seems restrictedmem_get_page() in this v10 is > > > identical to the one in v9 (except v10 uses 'folio' instead of 'page')? > > > > restrictedmem_get_page() increases page refcount several versions ago so > > no change in v10 is needed. You probably missed my reply: > > > > https://lore.kernel.org/linux-mm/20221129135844.GA902164@chaop.bj.intel.com/ > > But for non-restricted-mem case, it is correct for KVM to decrease page's > refcount after setting up mapping in the secondary mmu, otherwise the page will > be pinned by KVM for normal VM (since KVM uses GUP to get the page). That's true. Actually even true for restrictedmem case, most likely we will still need the kvm_release_pfn_clean() for KVM generic code. On one side, other restrictedmem users like pKVM may not require page pinning at all. 
>
> So what we are expecting is: for KVM, if the page comes from restricted
> mem, then KVM cannot decrease the refcount; otherwise, for a normal page
> obtained via GUP, KVM should.

I would argue that this page pinning (or page-migration prevention) is not
tied to where the page comes from, but rather to how the page will be used.
Whether the page is restrictedmem-backed or GUP()-backed, once it is used by
the current version of TDX, the page pinning is needed. So such page-migration
prevention is really a TDX thing, not even a KVM-generic thing (which is why I
think we don't need to change the existing logic of kvm_release_pfn_clean()).
Wouldn't it be better to let the TDX code (or whoever requires it) increase
and decrease the refcount when it populates and drops the secure EPT entries?
This is exactly what the current TDX code does (a rough sketch of the pairing
follows at the end of this mail):

  get_page(): https://github.com/intel/tdx/blob/kvm-upstream/arch/x86/kvm/vmx/tdx.c#L1217
  put_page(): https://github.com/intel/tdx/blob/kvm-upstream/arch/x86/kvm/vmx/tdx.c#L1334

Thanks,
Chao
>
> >
> > The current solution is clear: unless we have a better approach, we will
> > let the restrictedmem user (KVM in this case) hold the refcount to
> > prevent page migration.
> >
>
> OK. Will leave to others :)
>
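P.S. As promised above, a rough sketch of that populate/drop pairing. The
function names and bodies here are illustrative, modeled on the linked tdx.c
rather than copied from it:

  /* Pin the page for as long as it is mapped in the secure EPT. */
  static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
                                       enum pg_level level, kvm_pfn_t pfn)
  {
          /*
           * TDX cannot tolerate the page being migrated while it is
           * mapped in the secure EPT, so take a reference here...
           */
          get_page(pfn_to_page(pfn));

          /* ...then install the secure EPT entry via SEAMCALL. */
          return 0;
  }

  static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
                                        enum pg_level level, kvm_pfn_t pfn)
  {
          /* Zap the secure EPT entry via SEAMCALL first... */

          /* ...and unpin only once TDX is done with the page. */
          put_page(pfn_to_page(pfn));
          return 0;
  }

The point is that the get_page()/put_page() pair lives entirely in the TDX
code, so neither restrictedmem nor the generic KVM path needs to know about
the pinning.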