From: Lu Baolu <baolu.lu@linux.intel.com>
To: Joerg Roedel, Jason Gunthorpe, Christoph Hellwig, Ben Skeggs,
	Kevin Tian, Ashok Raj, Will Deacon, Robin Murphy
Cc: Alex Williamson, Eric Auger, Liu Yi L, Jacob jun Pan,
	David Airlie, Daniel Vetter, Thierry Reding, Jonathan Hunter,
	iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
	Lu Baolu
Subject: [PATCH 6/7] iommu: Use right way to retrieve iommu_ops
Date: Mon, 24 Jan 2022 15:11:01 +0800
Message-Id: <20220124071103.2097118-7-baolu.lu@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220124071103.2097118-1-baolu.lu@linux.intel.com>
References: <20220124071103.2097118-1-baolu.lu@linux.intel.com>

The common iommu_ops is hooked to both the device and the domain. When a
helper has both a device and a domain pointer, the way it retrieves the
iommu_ops looks messy in the iommu core. This sorts out how the iommu_ops
is retrieved: device-related helpers go through the device pointer, while
domain-related ones go through the domain pointer.
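
As an illustration of the intended pattern (a sketch only, not part of
this patch; ->example_op stands in for any per-device callback and is
hypothetical):

	/*
	 * Sketch: a helper that receives a device resolves the ops
	 * through the device via dev_iommu_ops_get(), not through
	 * domain->ops. ->example_op is a made-up callback used only
	 * for illustration.
	 */
	static int iommu_example_helper(struct device *dev)
	{
		const struct iommu_ops *ops = dev_iommu_ops_get(dev);

		if (!ops || !ops->example_op)
			return -ENODEV;

		return ops->example_op(dev);
	}

Domain-path helpers, by contrast, keep using domain->ops directly.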
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
---
 include/linux/iommu.h |  8 ++++++++
 drivers/iommu/iommu.c | 25 ++++++++++++++-----------
 2 files changed, 22 insertions(+), 11 deletions(-)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index aa5486243892..111b3e9c79bb 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -385,6 +385,14 @@ static inline void iommu_iotlb_gather_init(struct iommu_iotlb_gather *gather)
 	};
 }
 
+static inline const struct iommu_ops *dev_iommu_ops_get(struct device *dev)
+{
+	if (dev && dev->iommu && dev->iommu->iommu_dev)
+		return dev->iommu->iommu_dev->ops;
+
+	return NULL;
+}
+
 #define IOMMU_BUS_NOTIFY_PRIORITY		0
 #define IOMMU_GROUP_NOTIFY_ADD_DEVICE		1 /* Device added */
 #define IOMMU_GROUP_NOTIFY_DEL_DEVICE		2 /* Pre Device removed */
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 5230c6d90ece..6631e2ea44df 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -764,6 +764,7 @@ EXPORT_SYMBOL_GPL(iommu_group_set_name);
 static int iommu_create_device_direct_mappings(struct iommu_group *group,
 					       struct device *dev)
 {
+	const struct iommu_ops *ops = dev_iommu_ops_get(dev);
 	struct iommu_domain *domain = group->default_domain;
 	struct iommu_resv_region *entry;
 	struct list_head mappings;
@@ -785,8 +786,8 @@ static int iommu_create_device_direct_mappings(struct iommu_group *group,
 		dma_addr_t start, end, addr;
 		size_t map_size = 0;
 
-		if (domain->ops->apply_resv_region)
-			domain->ops->apply_resv_region(dev, domain, entry);
+		if (ops->apply_resv_region)
+			ops->apply_resv_region(dev, domain, entry);
 
 		start = ALIGN(entry->start, pg_size);
 		end   = ALIGN(entry->start + entry->length, pg_size);
@@ -831,8 +832,10 @@ static int iommu_create_device_direct_mappings(struct iommu_group *group,
 static bool iommu_is_attach_deferred(struct iommu_domain *domain,
 				     struct device *dev)
 {
-	if (domain->ops->is_attach_deferred)
-		return domain->ops->is_attach_deferred(domain, dev);
+	const struct iommu_ops *ops = dev_iommu_ops_get(dev);
+
+	if (ops->is_attach_deferred)
+		return ops->is_attach_deferred(domain, dev);
 
 	return false;
 }
@@ -1251,10 +1254,10 @@ int iommu_page_response(struct device *dev,
 	struct iommu_fault_event *evt;
 	struct iommu_fault_page_request *prm;
 	struct dev_iommu *param = dev->iommu;
+	const struct iommu_ops *ops = dev_iommu_ops_get(dev);
 	bool has_pasid = msg->flags & IOMMU_PAGE_RESP_PASID_VALID;
-	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
 
-	if (!domain || !domain->ops->page_response)
+	if (!ops || !ops->page_response)
 		return -ENODEV;
 
 	if (!param || !param->fault_param)
@@ -1295,7 +1298,7 @@ int iommu_page_response(struct device *dev,
 			msg->pasid = 0;
 		}
 
-		ret = domain->ops->page_response(dev, evt, msg);
+		ret = ops->page_response(dev, evt, msg);
 		list_del(&evt->list);
 		kfree(evt);
 		break;
@@ -1758,10 +1761,10 @@ static int __iommu_group_dma_attach(struct iommu_group *group)
 
 static int iommu_group_do_probe_finalize(struct device *dev, void *data)
 {
-	struct iommu_domain *domain = data;
+	const struct iommu_ops *ops = dev_iommu_ops_get(dev);
 
-	if (domain->ops->probe_finalize)
-		domain->ops->probe_finalize(dev);
+	if (ops->probe_finalize)
+		ops->probe_finalize(dev);
 
 	return 0;
 }
@@ -2020,7 +2023,7 @@ EXPORT_SYMBOL_GPL(iommu_attach_device);
 
 int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain)
 {
-	const struct iommu_ops *ops = domain->ops;
+	const struct iommu_ops *ops = dev_iommu_ops_get(dev);
 
 	if (ops->is_attach_deferred && ops->is_attach_deferred(domain, dev))
 		return __iommu_attach_device(domain, dev);
-- 
2.25.1