From: Lu Baolu
To: iommu@lists.linux.dev, dmaengine@vger.kernel.org
Cc: Joerg Roedel, Will Deacon, Robin Murphy, Kevin Tian, Fenghua Yu,
	Dave Jiang, Vinod Koul, linux-kernel@vger.kernel.org, Lu Baolu
Subject: [PATCH 1/2] dmaengine: idxd: Add enable/disable device IOPF feature
Date: Fri, 3 Feb 2023 16:44:55 +0800
Message-Id: <20230203084456.469641-1-baolu.lu@linux.intel.com>

The IOMMU subsystem requires that IOMMU_DEV_FEAT_IOPF be enabled before,
and disabled after, IOMMU_DEV_FEAT_SVA when a device's I/O page faults
rely on the IOMMU. Add explicit IOMMU_DEV_FEAT_IOPF enabling and
disabling to this driver.

At present, the missing IOPF enabling/disabling causes no real problem,
because the IOMMU driver happens to enable and disable IOPF on its SVA
feature-handling path. That behavior may change, so the driver should
not depend on it.

Signed-off-by: Lu Baolu
---
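Note for reviewers: the ordering rule above can be summarized with the
sketch below, written against the generic iommu_dev_enable_feature()/
iommu_dev_disable_feature() API. It mirrors the helpers this patch adds,
generalized to a bare struct device; the example_* names are hypothetical
and only illustrate the required enable order and error unwinding.

#include <linux/iommu.h>

/* Enable IOPF first; if SVA then fails, unwind IOPF. */
static int example_enable_sva(struct device *dev)
{
	int ret;

	ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_IOPF);
	if (ret)
		return ret;

	ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
	if (ret)
		iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_IOPF);

	return ret;
}

/* Disable in reverse order: SVA first, then IOPF. */
static void example_disable_sva(struct device *dev)
{
	iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_SVA);
	iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_IOPF);
}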
 drivers/dma/idxd/init.c | 31 +++++++++++++++++++++++++------
 1 file changed, 25 insertions(+), 6 deletions(-)

diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
index 529ea09c9094..d5a709a842a8 100644
--- a/drivers/dma/idxd/init.c
+++ b/drivers/dma/idxd/init.c
@@ -511,6 +511,27 @@ static void idxd_disable_system_pasid(struct idxd_device *idxd)
 	idxd->sva = NULL;
 }
 
+static int idxd_enable_sva(struct pci_dev *pdev)
+{
+	int ret;
+
+	ret = iommu_dev_enable_feature(&pdev->dev, IOMMU_DEV_FEAT_IOPF);
+	if (ret)
+		return ret;
+
+	ret = iommu_dev_enable_feature(&pdev->dev, IOMMU_DEV_FEAT_SVA);
+	if (ret)
+		iommu_dev_disable_feature(&pdev->dev, IOMMU_DEV_FEAT_IOPF);
+
+	return ret;
+}
+
+static void idxd_disable_sva(struct pci_dev *pdev)
+{
+	iommu_dev_disable_feature(&pdev->dev, IOMMU_DEV_FEAT_SVA);
+	iommu_dev_disable_feature(&pdev->dev, IOMMU_DEV_FEAT_IOPF);
+}
+
 static int idxd_probe(struct idxd_device *idxd)
 {
 	struct pci_dev *pdev = idxd->pdev;
@@ -525,7 +546,7 @@ static int idxd_probe(struct idxd_device *idxd)
 	dev_dbg(dev, "IDXD reset complete\n");
 
 	if (IS_ENABLED(CONFIG_INTEL_IDXD_SVM) && sva) {
-		if (iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA)) {
+		if (idxd_enable_sva(pdev)) {
 			dev_warn(dev, "Unable to turn on user SVA feature.\n");
 		} else {
 			set_bit(IDXD_FLAG_USER_PASID_ENABLED, &idxd->flags);
@@ -573,21 +594,19 @@ static int idxd_probe(struct idxd_device *idxd)
 	if (device_pasid_enabled(idxd))
 		idxd_disable_system_pasid(idxd);
 	if (device_user_pasid_enabled(idxd))
-		iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_SVA);
+		idxd_disable_sva(pdev);
 	return rc;
 }
 
 static void idxd_cleanup(struct idxd_device *idxd)
 {
-	struct device *dev = &idxd->pdev->dev;
-
 	perfmon_pmu_remove(idxd);
 	idxd_cleanup_interrupts(idxd);
 	idxd_cleanup_internals(idxd);
 	if (device_pasid_enabled(idxd))
 		idxd_disable_system_pasid(idxd);
 	if (device_user_pasid_enabled(idxd))
-		iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_SVA);
+		idxd_disable_sva(idxd->pdev);
 }
 
 static int idxd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
@@ -705,7 +724,7 @@ static void idxd_remove(struct pci_dev *pdev)
 	pci_free_irq_vectors(pdev);
 	pci_iounmap(pdev, idxd->reg_base);
 	if (device_user_pasid_enabled(idxd))
-		iommu_dev_disable_feature(&pdev->dev, IOMMU_DEV_FEAT_SVA);
+		idxd_disable_sva(pdev);
 	pci_disable_device(pdev);
 	destroy_workqueue(idxd->wq);
 	perfmon_pmu_remove(idxd);
-- 
2.34.1