Date: Wed, 19 Oct 2022 18:32:25 +0300
From: "Kirill A. Shutemov"
To: Vishal Annapurve
Cc: "Gupta, Pankaj", Vlastimil Babka, Chao Peng, kvm@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org,
 Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov,
 Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
 Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Hugh Dickins, Jeff Layton,
 "J. Bruce Fields", Andrew Morton, Shuah Khan, Mike Rapoport, Steven Price,
 "Maciej S. Szmigiero", Yu Zhang, luto@kernel.org, jun.nakajima@intel.com,
 dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com,
 aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret,
 Michael Roth, mhocko@suse.com, Muchun Song, wei.w.wang@intel.com
Subject: Re: [PATCH v8 1/8] mm/memfd: Introduce userspace inaccessible memfd
Message-ID: <20221019153225.njvg45glehlnjgc7@box.shutemov.name>
References: <20220915142913.2213336-1-chao.p.peng@linux.intel.com>
 <20220915142913.2213336-2-chao.p.peng@linux.intel.com>
 <20221017161955.t4gditaztbwijgcn@box.shutemov.name>
 <20221017215640.hobzcz47es7dq2bi@box.shutemov.name>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Oct 18, 2022 at 07:12:10PM +0530, Vishal Annapurve wrote:
> On Tue, Oct 18, 2022 at 3:27 AM Kirill A . Shutemov wrote:
> >
> > On Mon, Oct 17, 2022 at 06:39:06PM +0200, Gupta, Pankaj wrote:
> > > On 10/17/2022 6:19 PM, Kirill A . Shutemov wrote:
> > > > On Mon, Oct 17, 2022 at 03:00:21PM +0200, Vlastimil Babka wrote:
> > > > > On 9/15/22 16:29, Chao Peng wrote:
> > > > > > From: "Kirill A. Shutemov"
> > > > > >
> > > > > > KVM can use memfd-provided memory for guest memory. For normal
> > > > > > userspace-accessible memory, KVM userspace (e.g. QEMU) mmaps the
> > > > > > memfd into its virtual address space and then tells KVM to use
> > > > > > the virtual address to set up the mapping in the secondary page
> > > > > > table (e.g. EPT).
> > > > > >
> > > > > > With confidential computing technologies like Intel TDX, the
> > > > > > memfd-provided memory may be encrypted with a special key for a
> > > > > > special software domain (e.g. a KVM guest) and is not expected
> > > > > > to be accessed directly by userspace. More precisely, userspace
> > > > > > access to such encrypted memory may lead to a host crash, so it
> > > > > > must be prevented.
> > > > > >
> > > > > > This patch introduces a userspace-inaccessible memfd (created
> > > > > > with MFD_INACCESSIBLE). Its memory is inaccessible from
> > > > > > userspace through ordinary MMU access (e.g. read/write/mmap)
> > > > > > but can be accessed via an in-kernel interface, so KVM can
> > > > > > directly interact with core-mm without the need to map the
> > > > > > memory into KVM userspace.
> > > > > >
> > > > > > It provides the semantics required for KVM guest private
> > > > > > (encrypted) memory support: a file descriptor with this flag
> > > > > > set is going to be used as the source of guest memory in
> > > > > > confidential computing environments such as Intel TDX/AMD SEV.
> > > > > >
> > > > > > KVM userspace is still in charge of the lifecycle of the memfd.
> > > > > > It should pass the opened fd to KVM. KVM uses the kernel APIs
> > > > > > newly added in this patch to obtain the physical memory address
> > > > > > and then populate the secondary page table entries.
> > > > > >
> > > > > > The userspace-inaccessible memfd can be fallocate-ed and
> > > > > > hole-punched from userspace. When hole punching happens, KVM
> > > > > > gets notified through inaccessible_notifier; it then gets a
> > > > > > chance to remove any mapped entries for the range in the
> > > > > > secondary page tables.
> > > > > >
> > > > > > The userspace-inaccessible memfd itself is implemented as a
> > > > > > shim layer on top of real memory file systems like
> > > > > > tmpfs/hugetlbfs, but this patch only implements tmpfs.
> > > > > > The allocated memory is currently marked as unmovable and
> > > > > > unevictable; this is required for the current confidential
> > > > > > usage. But in the future this might change.
> > > > > >
> > > > > > Signed-off-by: Kirill A. Shutemov
> > > > > > Signed-off-by: Chao Peng
> > > > > > ---
> > > > >
> > > > > ...
> > > > >
> > > > > > +static long inaccessible_fallocate(struct file *file, int mode,
> > > > > > +				   loff_t offset, loff_t len)
> > > > > > +{
> > > > > > +	struct inaccessible_data *data = file->f_mapping->private_data;
> > > > > > +	struct file *memfd = data->memfd;
> > > > > > +	int ret;
> > > > > > +
> > > > > > +	if (mode & FALLOC_FL_PUNCH_HOLE) {
> > > > > > +		if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
> > > > > > +			return -EINVAL;
> > > > > > +	}
> > > > > > +
> > > > > > +	ret = memfd->f_op->fallocate(memfd, mode, offset, len);
> > > > > > +	inaccessible_notifier_invalidate(data, offset, offset + len);
> > > > >
> > > > > Wonder if invalidate should precede the actual hole punch,
> > > > > otherwise we open a window where the page tables point to memory
> > > > > that is no longer valid?
> > > >
> > > > Yes, you are right. Thanks for catching this.
> > >
> > > I also noticed this. But then I thought the memory would be zeroed
> > > (hole punched) anyway before this call?
> >
> > Hole punching can free pages, given that offset/len covers a full page.
> >
> > --
> >  Kiryl Shutsemau / Kirill A. Shutemov
>
> I think moving this notifier_invalidate before fallocate may not solve
> the problem completely. Is it possible that, between invalidate and
> fallocate, KVM tries to handle a page fault for the guest VM from
> another vcpu and uses the pages about to be freed to back gpa ranges?
> Should hole punching here also update mem_attr first, to say that KVM
> should consider the corresponding gpa ranges to no longer be backed by
> the inaccessible memfd?

We rely on external synchronization to prevent this. See the code around
mmu_invalidate_retry_hva().

-- 
 Kiryl Shutsemau / Kirill A. Shutemov