Date: Mon, 28 Oct 2019 15:38:31 -0700
From: Jacob Pan
To: Lu Baolu
Cc: iommu@lists.linux-foundation.org, LKML, Joerg Roedel, David Woodhouse,
 Alex Williamson, Jean-Philippe Brucker, Yi Liu, "Tian, Kevin", Raj Ashok,
 Christoph Hellwig, Jonathan Cameron, Eric Auger, jacob.jun.pan@linux.intel.com
Subject: Re: [PATCH v7 08/11] iommu/vt-d: Misc macro clean up for SVM
Message-ID: <20191028153831.0594d56e@jacob-builder>
References: <1571946904-86776-1-git-send-email-jacob.jun.pan@linux.intel.com>
 <1571946904-86776-9-git-send-email-jacob.jun.pan@linux.intel.com>

On Sat, 26 Oct 2019 09:00:51 +0800 Lu Baolu wrote:

> Hi,
>
> On 10/25/19 3:55 AM, Jacob Pan wrote:
> > Use combined macros for_each_svm_dev() to simplify SVM device
> > iteration and error checking.
> >
> > Suggested-by: Andy Shevchenko
> > Signed-off-by: Jacob Pan
> > Reviewed-by: Eric Auger
> > ---
> >  drivers/iommu/intel-svm.c | 89 ++++++++++++++++++++++-------------------
> >  1 file changed, 42 insertions(+), 47 deletions(-)
> >
> > diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu/intel-svm.c
> > index a9a7f85a09bc..a18b02a9709d 100644
> > --- a/drivers/iommu/intel-svm.c
> > +++ b/drivers/iommu/intel-svm.c
> > @@ -212,6 +212,10 @@ static const struct mmu_notifier_ops intel_mmuops = {
> >  static DEFINE_MUTEX(pasid_mutex);
> >  static LIST_HEAD(global_svm_list);
> >
> > +#define for_each_svm_dev(svm, dev) \
> > +	list_for_each_entry(sdev, &svm->devs, list) \
> > +	if (dev == sdev->dev) \
> > +
> >  int intel_svm_bind_mm(struct device *dev, int *pasid, int flags, struct svm_dev_ops *ops)
> >  {
> >  	struct intel_iommu *iommu = intel_svm_device_to_iommu(dev);
> > @@ -257,15 +261,13 @@ int intel_svm_bind_mm(struct device *dev, int *pasid, int flags, struct svm_dev_
> >  				goto out;
> >  			}
> >
> > -			list_for_each_entry(sdev, &svm->devs, list) {
> > -				if (dev == sdev->dev) {
> > -					if (sdev->ops != ops) {
> > -						ret = -EBUSY;
> > -						goto out;
> > -					}
> > -					sdev->users++;
> > -					goto success;
> > +			for_each_svm_dev(svm, dev) {
> > +				if (sdev->ops != ops) {
> > +					ret = -EBUSY;
> > +					goto out;
> >  				}
> > +				sdev->users++;
> > +				goto success;
> >  			}
> >
> >  			break;
> > @@ -402,50 +404,43 @@ int intel_svm_unbind_mm(struct device *dev, int pasid)
> >  		goto out;
> >
> >  	svm = ioasid_find(NULL, pasid, NULL);
> > -	if (IS_ERR(svm)) {
> > +	if (IS_ERR_OR_NULL(svm)) {
> >  		ret = PTR_ERR(svm);
> >  		goto out;
> >  	}
> >
> > -	if (!svm)
> > -		goto out;
>
> If svm == NULL here, this function will return success. This isn't
> expected, right?
>
You are right, the NULL case should be handled separately (a sketch
follows at the end of this mail). Thanks!

> Others looks good to me.
>
> Reviewed-by: Lu Baolu
>
> Best regards,
> baolu
>
> > -
> > -	list_for_each_entry(sdev, &svm->devs, list) {
> > -		if (dev == sdev->dev) {
> > -			ret = 0;
> > -			sdev->users--;
> > -			if (!sdev->users) {
> > -				list_del_rcu(&sdev->list);
> > -				/* Flush the PASID cache and IOTLB for this device.
> > -				 * Note that we do depend on the hardware *not* using
> > -				 * the PASID any more. Just as we depend on other
> > -				 * devices never using PASIDs that they have no right
> > -				 * to use. We have a *shared* PASID table, because it's
> > -				 * large and has to be physically contiguous. So it's
> > -				 * hard to be as defensive as we might like. */
> > -				intel_pasid_tear_down_entry(iommu, dev, svm->pasid);
> > -				intel_flush_svm_range_dev(svm, sdev, 0, -1, 0);
> > -				kfree_rcu(sdev, rcu);
> > -
> > -				if (list_empty(&svm->devs)) {
> > -					/* Clear private data so that free pass check */
> > -					ioasid_set_data(svm->pasid, NULL);
> > -					ioasid_free(svm->pasid);
> > -					if (svm->mm)
> > -						mmu_notifier_unregister(&svm->notifier, svm->mm);
> > -
> > -					list_del(&svm->list);
> > -
> > -					/* We mandate that no page faults may be outstanding
> > -					 * for the PASID when intel_svm_unbind_mm() is called.
> > -					 * If that is not obeyed, subtle errors will happen.
> > -					 * Let's make them less subtle... */
> > -					memset(svm, 0x6b, sizeof(*svm));
> > -					kfree(svm);
> > -				}
> > +	for_each_svm_dev(svm, dev) {
> > +		ret = 0;
> > +		sdev->users--;
> > +		if (!sdev->users) {
> > +			list_del_rcu(&sdev->list);
> > +			/* Flush the PASID cache and IOTLB for this device.
> > +			 * Note that we do depend on the hardware *not* using
> > +			 * the PASID any more. Just as we depend on other
> > +			 * devices never using PASIDs that they have no right
> > +			 * to use. We have a *shared* PASID table, because it's
> > +			 * large and has to be physically contiguous. So it's
> > +			 * hard to be as defensive as we might like. */
> > +			intel_pasid_tear_down_entry(iommu, dev, svm->pasid);
> > +			intel_flush_svm_range_dev(svm, sdev, 0, -1, 0);
> > +			kfree_rcu(sdev, rcu);
> > +
> > +			if (list_empty(&svm->devs)) {
> > +				/* Clear private data so that free pass check */
> > +				ioasid_set_data(svm->pasid, NULL);
> > +				ioasid_free(svm->pasid);
> > +				if (svm->mm)
> > +					mmu_notifier_unregister(&svm->notifier, svm->mm);
> > +				list_del(&svm->list);
> > +				/* We mandate that no page faults may be outstanding
> > +				 * for the PASID when intel_svm_unbind_mm() is called.
> > +				 * If that is not obeyed, subtle errors will happen.
> > +				 * Let's make them less subtle... */
> > +				memset(svm, 0x6b, sizeof(*svm));
> > +				kfree(svm);
> >  			}
> > -			break;
> >  		}
> > +		break;
> >  	}
> >  out:
> >  	mutex_unlock(&pasid_mutex);
> > @@ -581,7 +576,7 @@ static irqreturn_t prq_event_thread(int irq, void *d)
> >  			 * to unbind the mm while any page faults are outstanding.
> >  			 * So we only need RCU to protect the internal idr code. */
> >  			rcu_read_unlock();
> > -			if (IS_ERR(svm) || !svm) {
> > +			if (IS_ERR_OR_NULL(svm)) {
> >  				pr_err("%s: Page request for invalid PASID %d: %08llx %08llx\n",
> >  				       iommu->name, req->pasid, ((unsigned long long *)req)[0],
> >  				       ((unsigned long long *)req)[1]);

[Jacob Pan]
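For reference, a minimal sketch of the separate NULL handling discussed
above, in the context of intel_svm_unbind_mm(); returning -EINVAL for a
PASID with no bound SVM is an assumption for illustration, not something
settled in this thread:

	svm = ioasid_find(NULL, pasid, NULL);
	if (!svm) {
		/* No SVM bound to this PASID: fail instead of silently
		 * returning 0. The -EINVAL choice is assumed here. */
		ret = -EINVAL;
		goto out;
	}
	if (IS_ERR(svm)) {
		/* ioasid_find() returned an error pointer. */
		ret = PTR_ERR(svm);
		goto out;
	}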