From: "Liu, Yi L"
To: Alex Williamson
Cc: "eric.auger@redhat.com", "Tian, Kevin", "jacob.jun.pan@linux.intel.com",
    "joro@8bytes.org", "Raj, Ashok", "Tian, Jun J", "Sun, Yi Y",
    "jean-philippe@linaro.org", "peterx@redhat.com",
    "iommu@lists.linux-foundation.org", "kvm@vger.kernel.org",
    "linux-kernel@vger.kernel.org", "Wu, Hao"
Subject: RE: [PATCH v1 6/8] vfio/type1: Bind guest page tables to host
Date: Sat, 11 Apr 2020 05:52:22 +0000
In-Reply-To: <20200402135700.0da30021@w520.home>
References: <1584880325-10561-1-git-send-email-yi.l.liu@intel.com>
    <1584880325-10561-7-git-send-email-yi.l.liu@intel.com>
    <20200402135700.0da30021@w520.home>
List-ID: linux-kernel@vger.kernel.org

Hi Alex,

> From: Alex Williamson
> Sent: Friday, April 3, 2020 3:57 AM
> To: Liu, Yi L
> Subject: Re: [PATCH v1 6/8] vfio/type1: Bind guest page tables to host
>
> On Sun, 22 Mar 2020 05:32:03 -0700
> "Liu, Yi L" wrote:
>
> > From: Liu Yi L
> >
> > VFIO_TYPE1_NESTING_IOMMU is an IOMMU type which is backed by

[...]
> > +/**
> > + * Unbind a specific gpasid. The caller of this function is
> > + * required to hold vfio_iommu->lock.
> > + */
> > +static long vfio_iommu_type1_do_guest_unbind(struct vfio_iommu *iommu,
> > +		struct iommu_gpasid_bind_data *gbind_data)
> > +{
> > +	return vfio_iommu_for_each_dev(iommu,
> > +			vfio_unbind_gpasid_fn, gbind_data);
> > +}
> > +
> > +static long vfio_iommu_type1_bind_gpasid(struct vfio_iommu *iommu,
> > +		struct iommu_gpasid_bind_data *gbind_data)
> > +{
> > +	int ret = 0;
> > +
> > +	mutex_lock(&iommu->lock);
> > +	if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)) {
> > +		ret = -EINVAL;
> > +		goto out_unlock;
> > +	}
> > +
> > +	ret = vfio_iommu_for_each_dev(iommu,
> > +			vfio_bind_gpasid_fn, gbind_data);
> > +	/*
> > +	 * If bind failed, it may not be a total failure. Some devices
> > +	 * within the iommu group may have bound successfully. Although
> > +	 * we don't enable the pasid capability for non-singleton iommu
> > +	 * groups, an unbind operation is helpful to ensure there is no
> > +	 * partial binding for an iommu group.
>
> Where was the non-singleton group restriction done, I missed that.
>
> > +	 */
> > +	if (ret)
> > +		/*
> > +		 * Undo all binds that already succeeded. No need to
> > +		 * check the return value here, since some device within
> > +		 * the group had no successful bind when we reach this
> > +		 * point.
> > +		 */
> > +		vfio_iommu_type1_do_guest_unbind(iommu, gbind_data);
>
> However, the for_each_dev function stops when the callback function
> returns error, are we just assuming we stop at the same device as we
> faulted on the first time and that we traverse the same set of devices
> the second time?  It seems strange to me that unbind should be able to
> fail.

I think the code needs enhancement. Although one group per container and
one device per group is the most typical and desired case, the code here
loops over domains, groups, and devices. It should be able to unwind the
prior successful binds when the loop fails for a device.
So I plan to make the below change to the bind path. (The reverse-domain
unwind here iterates each prior domain's groups, since the unbind must
reach every device that bound.)

	list_for_each_entry(domain, &iommu->domain_list, next) {
		list_for_each_entry(group, &domain->group_list, next) {
			/*
			 * If bind fails on a certain device, unbind the
			 * prior successful binds. iommu_group_for_each_dev()
			 * would be modified to take two callbacks, one for
			 * the forward loop and one for the reverse loop on
			 * failure. "return_upon_failure" indicates whether
			 * to return upon failure during the forward loop;
			 * if so, iommu_group_for_each_dev() unwinds the
			 * prior binds in this iommu group before returning.
			 */
			ret = iommu_group_for_each_dev(group->iommu_group,
					bind_gpasid_fn, unbind_gpasid_fn,
					data, return_upon_failure);
			if (ret)
				break;
		}
		if (ret) {
			/* unwind bindings with prior groups */
			list_for_each_entry_continue_reverse(group,
					&domain->group_list, next) {
				iommu_group_for_each_dev(group->iommu_group,
						unbind_gpasid_fn, NULL, data,
						ignore_intermediate_failure);
			}
			break;
		}
	}

	if (ret) {
		/* unwind bindings with prior domains */
		list_for_each_entry_continue_reverse(domain,
				&iommu->domain_list, next) {
			list_for_each_entry(group, &domain->group_list, next)
				iommu_group_for_each_dev(group->iommu_group,
						unbind_gpasid_fn, NULL, data,
						ignore_intermediate_failure);
		}
	}

	return ret;

Regards,
Yi Liu