Date: Wed, 6 Dec 2017 10:23:11 -0800
From: "Raj, Ashok"
To: Jerry Snitselaar
Cc: iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
	Alex Williamson, Joerg Roedel, Ashok Raj
Subject: Re: [PATCH] iommu/vt-d: clean up pr_irq if request_threaded_irq fails
Message-ID: <20171206182311.GA124624@otc-nc-03>
In-Reply-To: <20171206164959.23794-1-jsnitsel@redhat.com>

On Wed, Dec 06, 2017 at 09:49:59AM -0700, Jerry Snitselaar wrote:
> It is unlikely request_threaded_irq will fail, but if it does for some
> reason we should clear iommu->pr_irq in the error path. Also
> intel_svm_finish_prq shouldn't try to clean up the page request
> interrupt if pr_irq is 0. Without these, if request_threaded_irq were
> to fail the following occurs:

Looks good.

Reviewed-by: Ashok Raj

Cheers,
Ashok

>
> Cc: Alex Williamson
> Cc: Joerg Roedel
> Cc: Ashok Raj
> Signed-off-by: Jerry Snitselaar
> ---
>  drivers/iommu/intel-svm.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu/intel-svm.c
> index ed1cf7c5a43b..6643277e321e 100644
> --- a/drivers/iommu/intel-svm.c
> +++ b/drivers/iommu/intel-svm.c
> @@ -129,6 +129,7 @@ int intel_svm_enable_prq(struct intel_iommu *iommu)
>  		pr_err("IOMMU: %s: Failed to request IRQ for page request queue\n",
>  		       iommu->name);
>  		dmar_free_hwirq(irq);
> +		iommu->pr_irq = 0;
>  		goto err;
>  	}
>  	dmar_writeq(iommu->reg + DMAR_PQH_REG, 0ULL);
> @@ -144,9 +145,11 @@ int intel_svm_finish_prq(struct intel_iommu *iommu)
>  	dmar_writeq(iommu->reg + DMAR_PQT_REG, 0ULL);
>  	dmar_writeq(iommu->reg + DMAR_PQA_REG, 0ULL);
>
> -	free_irq(iommu->pr_irq, iommu);
> -	dmar_free_hwirq(iommu->pr_irq);
> -	iommu->pr_irq = 0;
> +	if (iommu->pr_irq) {
> +		free_irq(iommu->pr_irq, iommu);
> +		dmar_free_hwirq(iommu->pr_irq);
> +		iommu->pr_irq = 0;
> +	}
>
>  	free_pages((unsigned long)iommu->prq, PRQ_ORDER);
>  	iommu->prq = NULL;
> --
> 2.14.3
>
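[Editor's note: below is a minimal, self-contained sketch of the idiom the patch applies: clear the stored IRQ number in the setup error path, and guard teardown on it so cleanup never acts on an IRQ that was never requested. This is plain userspace C for illustration only, not the kernel code; the names (struct demo_iommu, fake_request_irq, fake_free_irq, demo_enable_prq, demo_finish_prq) are hypothetical stand-ins.]

/*
 * Illustration of the "clear on failure, guard on cleanup" pattern from
 * the patch above. Hypothetical names; not intel-svm.c itself.
 */
#include <stdio.h>

struct demo_iommu {
	int pr_irq;		/* 0 means "no page-request IRQ held" */
};

/* Stand-in for request_threaded_irq(); returns -1 to simulate failure. */
static int fake_request_irq(int irq)
{
	(void)irq;
	return -1;
}

/* Stand-in for free_irq()/dmar_free_hwirq(). */
static void fake_free_irq(int irq)
{
	printf("freeing irq %d\n", irq);
}

static int demo_enable_prq(struct demo_iommu *iommu, int irq)
{
	iommu->pr_irq = irq;
	if (fake_request_irq(irq) < 0) {
		/* Error path: release the IRQ and forget it, as the patch does. */
		fake_free_irq(irq);
		iommu->pr_irq = 0;
		return -1;
	}
	return 0;
}

static void demo_finish_prq(struct demo_iommu *iommu)
{
	/* Guarded teardown: only release an IRQ that is actually held. */
	if (iommu->pr_irq) {
		fake_free_irq(iommu->pr_irq);
		iommu->pr_irq = 0;
	}
}

int main(void)
{
	struct demo_iommu iommu = { .pr_irq = 0 };

	if (demo_enable_prq(&iommu, 42))
		printf("enable failed, pr_irq is now %d\n", iommu.pr_irq);

	demo_finish_prq(&iommu);	/* no-op because pr_irq was cleared */
	return 0;
}

Without the cleared field, demo_finish_prq() would attempt to free an IRQ that request had already given back, which is the double-free the real patch prevents.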