Date: Thu, 2 Sep 2021 16:34:53 +0800
From: Yu Zhang
To: David Hildenbrand
Cc: Andy Lutomirski, Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov,
    Wanpeng Li, Jim Mattson, Joerg Roedel, kvm list,
    Linux Kernel Mailing List, Borislav Petkov, Andrew Morton, Andi Kleen,
    David Rientjes, Vlastimil Babka, Tom Lendacky, Thomas Gleixner,
    "Peter Zijlstra (Intel)", Ingo Molnar, Varad Gautam, Dario Faggioli,
    the arch/x86 maintainers, linux-mm@kvack.org, linux-coco@lists.linux.dev,
    "Kirill A. Shutemov", Sathyanarayanan Kuppuswamy, Dave Hansen
Subject: Re: [RFC] KVM: mm: fd-based approach for supporting KVM guest private memory
Message-ID: <20210902083453.aeouc6fob53ydtc2@linux.intel.com>
References: <20210824005248.200037-1-seanjc@google.com>
 <307d385a-a263-276f-28eb-4bc8dd287e32@redhat.com>
 <20210827023150.jotwvom7mlsawjh4@linux.intel.com>
 <8f3630ff-bd6d-4d57-8c67-6637ea2c9560@www.fastmail.com>
 <20210901102437.g5wrgezmrjqn3mvy@linux.intel.com>
 <2b2740ec-fa89-e4c3-d175-824e439874a6@redhat.com>
In-Reply-To: <2b2740ec-fa89-e4c3-d175-824e439874a6@redhat.com>

On Wed, Sep 01, 2021 at 06:27:20PM +0200, David Hildenbrand wrote:
> On 01.09.21 18:07, Andy Lutomirski wrote:
> > On 9/1/21 3:24 AM, Yu Zhang wrote:
> > > On Tue, Aug 31, 2021 at 09:53:27PM -0700, Andy Lutomirski wrote:
> > > >
> > > > On Thu, Aug 26, 2021, at 7:31 PM, Yu Zhang wrote:
> > > > > On Thu, Aug 26, 2021 at 12:15:48PM +0200, David Hildenbrand wrote:
> > > > >
> > > > > Thanks a lot for this summary. A question about the requirements:
> > > > > do we or do we not plan to support assigning devices to the
> > > > > protected VM?
> > > > >
> > > > > If yes, the fd-based solution may need to change the VFIO
> > > > > interface as well (though the fake-swap-entry solution needs to
> > > > > mess with VFIO too), because:
> > > > >
> > > > > 1> KVM uses VFIO when assigning devices into a VM.
> > > > >
> > > > > 2> Not knowing which GPA ranges may be used by the VM as DMA
> > > > > buffers, all guest pages have to be mapped to host pages in the
> > > > > host IOMMU page table, and those pages are pinned for the whole
> > > > > life cycle of the VM.
> > > > >
> > > > > 3> The IOMMU mapping is done at VM creation time by VFIO and the
> > > > > IOMMU driver, in vfio_dma_do_map().
> > > > >
> > > > > 4> However, vfio_dma_do_map() needs the HVA to perform a GUP to
> > > > > get the HPA and pin the page.
> > > > >
> > > > > But if we are using the fd-based solution, not every GPA can have
> > > > > an HVA, so the current VFIO interface to map and pin the GPA
> > > > > (IOVA) won't work. And I doubt VFIO can be modified to support
> > > > > this easily.
> > > >
> > > > Do you mean assigning a normal device to a protected VM or a
> > > > hypothetical protected-MMIO device?
> > > >
> > > > If the former, it should work more or less like with a
> > > > non-protected VM: mmap the VFIO device, set up a memslot, and use
> > > > it. I'm not sure whether anyone will actually do this, but it
> > > > should be possible, at least in principle. Maybe someone will want
> > > > to assign a NIC to a TDX guest. An NVMe device, with the
> > > > understanding that the guest can't trust it, wouldn't be entirely
> > > > crazy either.
> > > >
> > > > If the latter, AFAIK there is no spec for how it would work even
> > > > in principle. Presumably it wouldn't work quite like VFIO --
> > > > instead, the kernel could have a protection-virtual-io-fd
> > > > mechanism, and that fd could be bound to a memslot in whatever way
> > > > we settle on for binding secure memory to a memslot.
> > >
> > > Thanks Andy. I was asking about the first scenario.
> > >
> > > Well, I agree it is doable if someone really wants an assigned
> > > device in a TD guest. As Kevin mentioned in his reply, the HPA can
> > > be obtained by extending VFIO with a new mapping protocol which
> > > uses fd+offset instead of the HVA.
> >
> > I'm confused. I don't see why any new code is needed for this at
> > all. Every proposal I've seen for handling TDX memory continues to
> > handle TDX *shared* memory exactly like regular guest memory today.
> > The only differences are that more hole punching will be needed,
> > which will require lightweight memslots (to have many of them),
> > memslots with holes, or mappings backing memslots with holes (which
> > can be done with munmap() on current kernels).
> >
> > So you can literally just mmap a VFIO device and expect it to work,
> > exactly like it does right now. Whether the guest will be willing to
> > use the device will depend on the guest security policy (all kinds of
> > patches about that are flying around), but if the guest tries to use
> > it, it really should just work.
>
> ... but if you end up mapping private memory into the IOMMU of the
> device and the device ends up accessing that memory, we're in the same
> position that the host might get capped, just like access from user
> space, no?

Well, according to the spec:

  - If the result of the translation is a physical address with a TD
    private key ID, then the IOMMU will abort the transaction and report
    a VT-d DMA remapping failure.

  - If the GPA in the transaction that is input to the IOMMU is private
    (SHARED bit is 0), then the IOMMU may abort the transaction and
    report a VT-d DMA remapping failure.

So I guess mapping private GPAs in the IOMMU is not as dangerous as
mapping them into userspace. Though still wrong.

> Sure, you can map only the complete duplicate shared-memory region
> into the IOMMU of the device, that would work. Shame vfio mostly
> always pins all guest memory and you essentially cannot punch holes
> into the shared memory anymore -- resulting in the worst case in a
> duplicate memory consumption for your VM.
>
> So you'd actually want to map only the *currently* shared pieces into
> the IOMMU and update the mappings on demand. Having worked on
> something related,

Exactly. On-demand mapping and page pinning for shared memory is
necessary.
> I can only say that 64k individual mappings, and not being able to
> modify existing mappings except by completely deleting them and
> replacing them with something new (!atomic), can be quite an issue for
> bigger VMs.

Do you mean that the atomicity of the mappings can hardly be guaranteed
during the shared <-> private transition? May I ask for elaboration?
Thanks!

B.R.
Yu