From: Sethi Varun-B16395
To: Alex Williamson
Cc: Joerg Roedel, Yoder Stuart-B08248, Wood Scott-B07421,
 iommu@lists.linux-foundation.org, linuxppc-dev@lists.ozlabs.org,
 linux-kernel@vger.kernel.org, galak@kernel.crashing.org,
 benh@kernel.crashing.org
Subject: RE: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu implementation.
Date: Fri, 5 Apr 2013 00:01:38 +0000
In-Reply-To: <1365093819.2882.301.camel@bling.home>

> -----Original Message-----
> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Thursday, April 04, 2013 10:14 PM
> To: Sethi Varun-B16395
> Cc: Joerg Roedel; Yoder Stuart-B08248; Wood Scott-B07421;
> iommu@lists.linux-foundation.org; linuxppc-dev@lists.ozlabs.org;
> linux-kernel@vger.kernel.org; galak@kernel.crashing.org;
> benh@kernel.crashing.org
> Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and iommu
> implementation.
>
> On Thu, 2013-04-04 at 16:35 +0000, Sethi Varun-B16395 wrote:
> >
> > > -----Original Message-----
> > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > Sent: Thursday, April 04, 2013 8:52 PM
> > > To: Sethi Varun-B16395
> > > Cc: Joerg Roedel; Yoder Stuart-B08248; Wood Scott-B07421;
> > > iommu@lists.linux-foundation.org; linuxppc-dev@lists.ozlabs.org;
> > > linux-kernel@vger.kernel.org; galak@kernel.crashing.org;
> > > benh@kernel.crashing.org
> > > Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver and
> > > iommu implementation.
> > >
> > > On Thu, 2013-04-04 at 13:00 +0000, Sethi Varun-B16395 wrote:
> > > >
> > > > > -----Original Message-----
> > > > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > > > Sent: Wednesday, April 03, 2013 11:32 PM
> > > > > To: Joerg Roedel
> > > > > Cc: Sethi Varun-B16395; Yoder Stuart-B08248; Wood Scott-B07421;
> > > > > iommu@lists.linux-foundation.org; linuxppc-dev@lists.ozlabs.org;
> > > > > linux-kernel@vger.kernel.org; galak@kernel.crashing.org;
> > > > > benh@kernel.crashing.org
> > > > > Subject: Re: [PATCH 5/5 v11] iommu/fsl: Freescale PAMU driver
> > > > > and iommu implementation.
> > > > >
> > > > > On Tue, 2013-04-02 at 18:18 +0200, Joerg Roedel wrote:
> > > > > > Cc'ing Alex Williamson
> > > > > >
> > > > > > Alex, can you please review the iommu-group part of this patch?
> > > > >
> > > > > Sure, it looks pretty reasonable.  AIUI, all PCI devices are
> > > > > below some kind of host bridge that is either new and supports
> > > > > partitioning or old and doesn't.  I don't know if that's a
> > > > > visibility or isolation requirement, perhaps PCI ACS-ish.  In
> > > > > the new host bridge case, each device gets a group.  This seems
> > > > > not to have any quirks for multifunction devices though.  On AMD
> > > > > and Intel IOMMUs we test multifunction device ACS support to
> > > > > determine whether all the functions should be in the same group.
> > > > > Is there any reason to trust multifunction devices on PAMU?
> > > >
> > > > [Sethi Varun-B16395] In the case where we can partition endpoints,
> > > > we can distinguish transactions based on the bus, device, function
> > > > number combination.  This support is available in the PCIe
> > > > controller (host bridge).
> > >
> > > So can x86 IOMMUs, that's the visibility aspect of IOMMU groups.
> > > Visibility alone doesn't necessarily imply that a device is isolated
> > > though.  A multifunction PCI device that doesn't expose ACS support
> > > may not isolate functions from each other.  For example a
> > > peer-to-peer DMA between functions may not be translated by the
> > > upstream IOMMU.  IOMMU groups should encompass both visibility and
> > > isolation.
> >
> > [Sethi Varun-B16395] We can isolate DMA access to the host based on
> > the PCI bus, device, function number.
>
> The IOMMU can only isolate DMA that it can see.  A multifunction device
> may never expose peer-to-peer DMA to the upstream device, it's
> implementation specific.  The ACS flags allow that possibility to be
> controlled and prevented.
>
> > I thought that was enough to put devices into separate iommu groups.
> > This is a PCIe controller property which allows us to partition PCIe
> > devices.  But, what I can understand from your point is that we also
> > need to consider isolation at the PCIe device level as well.  I will
> > check for the case of multifunction devices.
> >
> > > > > I also find it curious what happens to the iommu group of the
> > > > > host bridge.  In the partitionable case the host bridge group is
> > > > > removed, in the non-partitionable case the host bridge group
> > > > > becomes the group for the children, removing the host bridge.
> > > > > It's unique to PAMU so far that these host bridges are even in
> > > > > an iommu group (x86 only adds pci devices), but I don't see it
> > > > > as necessarily wrong leaving it in either scenario.  Does it
> > > > > solve some problem to remove them from the groups?
> > > > >
> > > > > Thanks,
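For reference, the multifunction ACS test described above for the AMD and
Intel IOMMU drivers looks roughly like the sketch below.  This is only an
illustration of the idea, not code from either driver or from this patch:
the function name example_get_mf_group() is made up, and the exact flag
set checked is assumed here.  The point is that a multifunction device
without ACS may route peer-to-peer DMA between its functions without the
upstream IOMMU ever seeing it, so all of its functions must share one
iommu_group.

#include <linux/pci.h>
#include <linux/iommu.h>

/* Assumed ACS capabilities required for a function to be considered isolated. */
#define REQ_ACS_FLAGS	(PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF)

static struct iommu_group *example_get_mf_group(struct pci_dev *pdev)
{
	struct pci_dev *tmp = NULL;
	struct iommu_group *group;

	/* Isolated (single-function or ACS-capable) devices get their own group. */
	if (!pdev->multifunction || pci_acs_enabled(pdev, REQ_ACS_FLAGS))
		return NULL;

	/* Otherwise reuse the group of a sibling function in the same slot. */
	for_each_pci_dev(tmp) {
		if (tmp == pdev || tmp->bus != pdev->bus ||
		    PCI_SLOT(tmp->devfn) != PCI_SLOT(pdev->devfn))
			continue;

		group = iommu_group_get(&tmp->dev);
		if (group) {
			pci_dev_put(tmp);
			return group;	/* caller adds pdev to this group */
		}
	}

	return NULL;	/* no sibling has a group yet; caller allocates one */
}

A caller would typically pass any group returned here to
iommu_group_add_device(), or fall back to iommu_group_alloc() when NULL is
returned, so that the grouping decision reflects isolation rather than
mere visibility.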
> > > >
> > > > [Sethi Varun-B16395] The PCIe controller isn't a partitionable
> > > > entity, it would always be owned by the host.
> > >
> > > Ownership of a device shouldn't play into the group context.  An
> > > IOMMU group should be defined by its visibility and isolation from
> > > other devices.  Whether the PCIe controller is allowed to be handed
> > > to userspace is a question for VFIO.
> >
> > [Sethi Varun-B16395] The problem is in the case where we can't
> > partition PCIe devices.  The PCIe devices share the same device group
> > as the PCIe controller.  This becomes a problem while assigning the
> > devices to the guest, as you are required to unbind all the PCIe
> > devices, including the controller, from the host.  The PCIe controller
> > can't be unbound from the host, so we simply delete the controller
> > iommu_group.
>
> Unbinding devices is a VFIO implementation detail, it shouldn't leak
> into IOMMU groups.  Also note that VFIO has a driver whitelist where we
> can have exceptions to the rule.  I recently added pciehp to that list
> because the host driver provides functionality.  Being attached to the
> host driver means the device is not accessible to the user through
> VFIO, but other devices in the group are.  Thanks,

Also, as Stuart pointed out, the PCIe controllers aren't the actual DMA
devices (the endpoints are the actual DMA devices).  So, we remove the
device group allocated for the PCIe controllers.

-Varun
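As a rough illustration of what "deleting the controller iommu_group"
amounts to with the generic IOMMU group API: the sketch below is not the
code from this patch, and example_remove_ctrl_group() is a made-up name;
only the iommu_group_get()/iommu_group_remove_device()/iommu_group_put()
calls are the real kernel interfaces.

#include <linux/device.h>
#include <linux/iommu.h>

/*
 * Sketch: drop the iommu_group that was created for a PCIe controller
 * once it is known that the child endpoints can be partitioned
 * individually.  The controller itself is not a DMA endpoint, so it
 * keeps no group.
 */
static void example_remove_ctrl_group(struct device *ctrl_dev)
{
	struct iommu_group *group = iommu_group_get(ctrl_dev);

	if (!group)
		return;

	/* Detach the controller from its group... */
	iommu_group_remove_device(ctrl_dev);

	/*
	 * ...and drop the reference taken by iommu_group_get() above.
	 * The group itself goes away once its last device and last
	 * reference are gone.
	 */
	iommu_group_put(group);
}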