Date: Wed, 2 Jun 2021 22:14:55 -0600
From: Alex Williamson
To: "Tian, Kevin"
Cc: Jason Gunthorpe, Jean-Philippe Brucker, "Jiang, Dave", "Raj, Ashok",
 kvm@vger.kernel.org, Jonathan Corbet, Robin Murphy, LKML,
 iommu@lists.linux-foundation.org, David Gibson, Kirti Wankhede,
 David Woodhouse, Jason Wang
Subject: Re: [RFC] /dev/ioasid uAPI proposal
Message-ID: <20210602221455.79a42878.alex.williamson@redhat.com>
In-Reply-To:
References: <20210601162225.259923bc.alex.williamson@redhat.com>
 <20210602160140.GV1002214@nvidia.com>
 <20210602111117.026d4a26.alex.williamson@redhat.com>
 <20210602173510.GE1002214@nvidia.com>
 <20210602120111.5e5bcf93.alex.williamson@redhat.com>
 <20210602180925.GH1002214@nvidia.com>
 <20210602130053.615db578.alex.williamson@redhat.com>
 <20210602195404.GI1002214@nvidia.com>
 <20210602143734.72fb4fa4.alex.williamson@redhat.com>
 <20210602224536.GJ1002214@nvidia.com>
 <20210602205054.3505c9c3.alex.williamson@redhat.com>
X-Mailer: Claws Mail 3.17.8 (GTK+ 2.24.33; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, 3 Jun 2021 03:22:27 +0000
"Tian, Kevin" wrote:

> > From: Alex Williamson
> > Sent: Thursday, June 3, 2021 10:51 AM
> >
> > On Wed, 2 Jun 2021 19:45:36 -0300
> > Jason Gunthorpe wrote:
> >
> > > On Wed, Jun 02, 2021 at 02:37:34PM -0600, Alex Williamson wrote:
> > >
> > > > Right. I don't follow where you're jumping to relaying DMA_PTE_SNP
> > > > from the guest page table... what page table?
> > >
> > > I see my confusion now, the phrasing in your earlier remark led me to
> > > think this was about allowing the no-snoop performance enhancement in
> > > some restricted way.
> > >
> > > It is really about blocking no-snoop 100% of the time and then
> > > disabling the dangerous wbinvd when the block is successful.
> > >
> > > Didn't closely read the kvm code :\
> > >
> > > If it was about allowing the optimization then I'd expect the guest to
> > > enable no-snoopable regions via its vIOMMU and realize them to the
> > > hypervisor and plumb the whole thing through. Hence my remark about
> > > the guest page tables..
> > >
> > > So really the test is just 'were we able to block it' ?
> >
> > Yup. Do we really still consider that there's some performance benefit
> > to be had by enabling a device to use no-snoop? This seems largely a
> > legacy thing.
>
> Yes, there is indeed a performance benefit for a device to use no-snoop,
> e.g. 8K display and some image processing paths, etc. The problem is
> that the IOMMU for such devices is typically a different one from the
> default IOMMU for most devices.
> This special IOMMU may not have
> the ability to enforce snoop on no-snoop PCI traffic, so this fact
> must be understood by KVM to do proper mtrr/pat/wbinvd virtualization
> for such devices to work correctly.

The case where the IOMMU does not support snoop-control for such a
device already works fine; we can't prevent no-snoop, so KVM will
emulate wbinvd. The harder question is whether we should opt to allow
no-snoop even when the IOMMU does support snoop-control.

> > > > This support existed before mdev, IIRC we needed it for direct
> > > > assignment of NVIDIA GPUs.
> > >
> > > Probably because they ignored the disable no-snoop bits in the control
> > > block, or reset them in some insane way to "fix" broken bioses and
> > > kept using it even though by all rights qemu would have tried hard to
> > > turn it off via the config space. Processing no-snoop without a
> > > working wbinvd would be fatal. Yeesh
> > >
> > > But Ok, back to /dev/ioasid. This answers a few lingering questions I
> > > had..
> > >
> > > 1) Mixing IOMMU_CAP_CACHE_COHERENCY and !IOMMU_CAP_CACHE_COHERENCY
> > >    domains.
> > >
> > > This doesn't actually matter. If you mix them together then kvm
> > > will turn on wbinvd anyhow, so we don't need to use the DMA_PTE_SNP
> > > anywhere in this VM.
> > >
> > > Thus if two IOMMUs are joined together into a single /dev/ioasid
> > > then we can just make them both pretend to be
> > > !IOMMU_CAP_CACHE_COHERENCY and both not set IOMMU_CACHE.
> >
> > Yes and no. Yes, if any domain is !IOMMU_CAP_CACHE_COHERENCY then we
> > need to emulate wbinvd, but no, we'll use IOMMU_CACHE any time it's
> > available based on the per-domain support available. That gives us the
> > most consistent behavior, ie. we don't have VMs emulating wbinvd
> > because they used to have a device attached where the domain required
> > it and we can't atomically remap with new flags to perform the same as
> > a VM that never had that device attached in the first place.
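[Editorial aside: the policy described above, each domain sets IOMMU_CACHE
whenever its hardware supports snoop control, while wbinvd emulation is a
VM-wide decision triggered by any non-coherent domain, can be sketched in
plain C. All struct and function names below are invented for illustration;
they are not the kernel's actual interfaces.]

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative only: "example_domain" stands in for an IOMMU domain.
 * snoop_control models IOMMU_CAP_CACHE_COHERENCY, i.e. the ability to
 * force-snoop no-snoop TLPs. */
#define IOMMU_READ  (1 << 0)
#define IOMMU_WRITE (1 << 1)
#define IOMMU_CACHE (1 << 2)

struct example_domain {
    bool snoop_control;
};

/* Per-domain: use IOMMU_CACHE (DMA_PTE_SNP in VT-d page tables) any
 * time the domain supports it, independent of other domains. */
int example_map_prot(const struct example_domain *d)
{
    int prot = IOMMU_READ | IOMMU_WRITE;
    if (d->snoop_control)
        prot |= IOMMU_CACHE;
    return prot;
}

/* VM-wide: the test is just "were we able to block no-snoop
 * everywhere?" -- one non-coherent domain forces wbinvd emulation. */
bool example_kvm_needs_wbinvd(const struct example_domain *doms, int n)
{
    for (int i = 0; i < n; i++)
        if (!doms[i].snoop_control)
            return true;
    return false;
}
```

This mirrors the "yes and no" point: mapping flags are chosen per domain,
but a single domain that cannot block no-snoop flips the VM into wbinvd
emulation.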
> > > 2) How to fit this part of kvm in some new /dev/ioasid world
> > >
> > > What we want to do here is iterate over every ioasid associated
> > > with the group fd that is passed into kvm.
> >
> > Yeah, we need some better names: binding a device to an ioasid (fd) but
> > then attaching a device to an allocated ioasid (non-fd)... I assume
> > you're talking about the latter ioasid.
> >
> > > Today the group fd has a single container which specifies the
> > > single ioasid so this is being done trivially.
> > >
> > > To reorg we want to get the ioasid from the device not the
> > > group (see my note to David about the groups vs device rationale)
> > >
> > > This is just iterating over each vfio_device in the group and
> > > querying the ioasid it is using.
> >
> > The IOMMU API group interface is largely iommu_group_for_each_dev()
> > anyway; we still need to account for all the RIDs and aliases of a
> > group.
> >
> > > Or perhaps more directly: an op attaching the vfio_device to the
> > > kvm and having some simple helper
> > >    '(un)register ioasid with kvm (kvm, ioasid)'
> > > that the vfio_device driver can call that just sorts this out.
> >
> > We could almost eliminate the device notion altogether here and use an
> > ioasidfd_for_each_ioasid(), but we really want a way to trigger on each
> > change to the composition of the device set for the ioasid, which is
> > why we currently do it on addition or removal of a group, where the
> > group has a consistent set of IOMMU properties. Register a notifier
> > callback via the ioasidfd? Thanks,
>
> When discussing I/O page fault support in another thread, the consensus
> is that a device handle will be registered (by user) or allocated (returned
> to user) in /dev/ioasid when binding the device to the ioasid fd. From this
> angle we can register {ioasid_fd, device_handle} with KVM and then call
> something like ioasidfd_device_is_coherent() to get the property.
> Anyway the coherency is a per-device property which is not changed
> by how many I/O page tables are attached to it.

The mechanics are different, but this is pretty similar in concept to
KVM learning coherence using the groupfd today. Do we want to
compromise on kernel control of wbinvd emulation to allow userspace to
make such decisions? Ownership of a device might be reason enough to
allow the user that privilege. Thanks,

Alex
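[Editorial aside: the {ioasid_fd, device_handle} registration Kevin proposes
could look roughly like the sketch below. Every name here is hypothetical,
including ioasidfd_device_is_coherent(), which is only a suggested helper
from the thread; none of this uAPI exists.]

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for a device bound to an ioasid fd; coherency is a fixed
 * per-device property per Kevin's note, regardless of how many I/O
 * page tables are attached. */
struct example_device {
    int handle;
    bool coherent;
};

/* Hypothetical helper KVM would call after userspace registers
 * {ioasid_fd, device_handle}. */
bool ioasidfd_device_is_coherent(const struct example_device *dev)
{
    return dev->coherent;
}

struct example_kvm {
    bool wbinvd_needed;
};

/* On registration, a single non-coherent device forces KVM to emulate
 * wbinvd for the guest. */
void example_kvm_register_device(struct example_kvm *kvm,
                                 const struct example_device *dev)
{
    if (!ioasidfd_device_is_coherent(dev))
        kvm->wbinvd_needed = true;
}
```

Note the sketch deliberately omits unregistration: as Alex points out, the
hard part is reacting to changes in the composition of the device set,
since dropping the last non-coherent device cannot atomically restore a
"never needed wbinvd" state.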