From: Lu Baolu
To: Joerg Roedel, Jason Gunthorpe, Christoph Hellwig, Kevin Tian,
	Will Deacon, Robin Murphy, Jean-Philippe Brucker
Cc: Suravee Suthikulpanit, Hector Martin, Sven Peter, Rob Clark,
	Marek Szyprowski, Krzysztof Kozlowski, Andy Gross, Bjorn Andersson,
	Yong Wu, Matthias Brugger, Heiko Stuebner, Matthew Rosato,
	Orson Zhai, Baolin Wang, Chunyan Zhang, Chen-Yu Tsai,
	Thierry Reding, iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
	Lu Baolu
Subject: [PATCH v4 18/19] iommu: Remove deferred attach check from
	__iommu_detach_device()
Date: Wed, 4 Jan 2023 20:57:24 +0800
Message-Id: <20230104125725.271850-19-baolu.lu@linux.intel.com>
In-Reply-To: <20230104125725.271850-1-baolu.lu@linux.intel.com>
References: <20230104125725.271850-1-baolu.lu@linux.intel.com>

From: Jason Gunthorpe

At the current moment, __iommu_detach_device() is only called via call
chains that run after the device driver is attached - e.g. via explicit
attach APIs called by the device driver.
Commit bd421264ed30 ("iommu: Fix deferred domain attachment") removed
the deferred domain attachment check from the __iommu_attach_device()
path, so detach should just unconditionally work in the
__iommu_detach_device() path. It actually looks like a bug that we were
blocking detach on these paths, since the attach was unconditional and
the caller is going to free the (probably) UNMANAGED domain once this
returns.

The only place we should be testing for deferred attach is during the
initial point the DMA device is linked to the group, and then again
during the DMA API calls.

Signed-off-by: Jason Gunthorpe
Signed-off-by: Lu Baolu
---
 include/linux/iommu.h |  2 ++
 drivers/iommu/iommu.c | 70 ++++++++++++++++++++++---------------------
 2 files changed, 38 insertions(+), 34 deletions(-)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 7b3e3775b069..0d10566b3cb2 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -405,6 +405,7 @@ struct iommu_fault_param {
  * @iommu_dev:	IOMMU device this device is linked to
  * @priv:	IOMMU Driver private data
  * @max_pasids:  number of PASIDs this device can consume
+ * @attach_deferred: the dma domain attachment is deferred
  *
  * TODO: migrate other per device data pointers under iommu_dev_data, e.g.
  *	struct iommu_group	*iommu_group;
@@ -417,6 +418,7 @@ struct dev_iommu {
 	struct iommu_device		*iommu_dev;
 	void				*priv;
 	u32				max_pasids;
+	u32				attach_deferred:1;
 };

 int iommu_device_register(struct iommu_device *iommu,
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 4e35a9f94873..c7bd8663f1f5 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -371,6 +371,30 @@ static int __iommu_probe_device(struct device *dev, struct list_head *group_list
 	return ret;
 }

+static bool iommu_is_attach_deferred(struct device *dev)
+{
+	const struct iommu_ops *ops = dev_iommu_ops(dev);
+
+	if (ops->is_attach_deferred)
+		return ops->is_attach_deferred(dev);
+
+	return false;
+}
+
+static int iommu_group_do_dma_first_attach(struct device *dev, void *data)
+{
+	struct iommu_domain *domain = data;
+
+	lockdep_assert_held(&dev->iommu_group->mutex);
+
+	if (iommu_is_attach_deferred(dev)) {
+		dev->iommu->attach_deferred = 1;
+		return 0;
+	}
+
+	return __iommu_attach_device(domain, dev);
+}
+
 int iommu_probe_device(struct device *dev)
 {
 	const struct iommu_ops *ops;
@@ -401,7 +425,7 @@ int iommu_probe_device(struct device *dev)
 	 * attach the default domain.
 	 */
 	if (group->default_domain && !group->owner) {
-		ret = __iommu_attach_device(group->default_domain, dev);
+		ret = iommu_group_do_dma_first_attach(dev, group->default_domain);
 		if (ret) {
 			mutex_unlock(&group->mutex);
 			iommu_group_put(group);
@@ -947,16 +971,6 @@ static int iommu_create_device_direct_mappings(struct iommu_group *group,
 	return ret;
 }

-static bool iommu_is_attach_deferred(struct device *dev)
-{
-	const struct iommu_ops *ops = dev_iommu_ops(dev);
-
-	if (ops->is_attach_deferred)
-		return ops->is_attach_deferred(dev);
-
-	return false;
-}
-
 /**
  * iommu_group_add_device - add a device to an iommu group
  * @group: the group into which to add the device (reference should be held)
@@ -1009,8 +1023,8 @@ int iommu_group_add_device(struct iommu_group *group, struct device *dev)

 	mutex_lock(&group->mutex);
 	list_add_tail(&device->list, &group->devices);
-	if (group->domain && !iommu_is_attach_deferred(dev))
-		ret = __iommu_attach_device(group->domain, dev);
+	if (group->domain)
+		ret = iommu_group_do_dma_first_attach(dev, group->domain);
 	mutex_unlock(&group->mutex);
 	if (ret)
 		goto err_put_group;
@@ -1776,21 +1790,10 @@ static void probe_alloc_default_domain(struct bus_type *bus,

 }

-static int iommu_group_do_dma_attach(struct device *dev, void *data)
-{
-	struct iommu_domain *domain = data;
-	int ret = 0;
-
-	if (!iommu_is_attach_deferred(dev))
-		ret = __iommu_attach_device(domain, dev);
-
-	return ret;
-}
-
-static int __iommu_group_dma_attach(struct iommu_group *group)
+static int __iommu_group_dma_first_attach(struct iommu_group *group)
 {
 	return __iommu_group_for_each_dev(group, group->default_domain,
-					  iommu_group_do_dma_attach);
+					  iommu_group_do_dma_first_attach);
 }

 static int iommu_group_do_probe_finalize(struct device *dev, void *data)
@@ -1855,7 +1858,7 @@ int bus_iommu_probe(struct bus_type *bus)

 		iommu_group_create_direct_mappings(group);

-		ret = __iommu_group_dma_attach(group);
+		ret = __iommu_group_dma_first_attach(group);

 		mutex_unlock(&group->mutex);

@@ -1987,9 +1990,11 @@ static int __iommu_attach_device(struct iommu_domain *domain,
 		return -ENODEV;

 	ret = domain->ops->attach_dev(domain, dev);
-	if (!ret)
-		trace_attach_device_to_domain(dev);
-	return ret;
+	if (ret)
+		return ret;
+	dev->iommu->attach_deferred = 0;
+	trace_attach_device_to_domain(dev);
+	return 0;
 }

 /**
@@ -2034,7 +2039,7 @@ EXPORT_SYMBOL_GPL(iommu_attach_device);

 int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain)
 {
-	if (iommu_is_attach_deferred(dev))
+	if (dev->iommu && dev->iommu->attach_deferred)
 		return __iommu_attach_device(domain, dev);

 	return 0;
@@ -2043,9 +2048,6 @@ int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain)
 static void __iommu_detach_device(struct iommu_domain *domain,
 				  struct device *dev)
 {
-	if (iommu_is_attach_deferred(dev))
-		return;
-
 	domain->ops->detach_dev(domain, dev);
 	trace_detach_device_from_domain(dev);
 }
--
2.34.1