From: John Garry <john.garry@huawei.com>
Subject: [PATCH v5 4/5] iommu: Allow max opt DMA len be set for a group via sysfs
Date: Tue, 15 Feb 2022 01:29:05 +0800
Message-ID: <1644859746-20279-5-git-send-email-john.garry@huawei.com>
X-Mailer: git-send-email 2.8.1
In-Reply-To: <1644859746-20279-1-git-send-email-john.garry@huawei.com>
References: <1644859746-20279-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Add support to allow the maximum optimised DMA length to be set for an
IOMMU group via sysfs. This works in much the same way as changing the
default domain type for a group.

Signed-off-by: John Garry <john.garry@huawei.com>
---
 .../ABI/testing/sysfs-kernel-iommu_groups | 16 +++++
 drivers/iommu/iommu.c                     | 59 ++++++++++++++++++-
 include/linux/iommu.h                     |  6 ++
 3 files changed, 79 insertions(+), 2 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-kernel-iommu_groups b/Documentation/ABI/testing/sysfs-kernel-iommu_groups
index b15af6a5bc08..ed6f72794f6c 100644
--- a/Documentation/ABI/testing/sysfs-kernel-iommu_groups
+++ b/Documentation/ABI/testing/sysfs-kernel-iommu_groups
@@ -63,3 +63,19 @@ Description:	/sys/kernel/iommu_groups/<grp_id>/type shows the type of default
 		system could lead to catastrophic effects (the users might
 		need to reboot the machine to get it to normal state). So,
 		it's expected that the users understand what they're doing.
+
+What:		/sys/kernel/iommu_groups/<grp_id>/max_opt_dma_size
+Date:		Feb 2022
+KernelVersion:	v5.18
+Contact:	iommu@lists.linux-foundation.org
+Description:	/sys/kernel/iommu_groups/<grp_id>/max_opt_dma_size shows the
+		max optimised DMA size for the default IOMMU domain associated
+		with the group.
+		Each IOMMU domain has an IOVA domain. The IOVA domain caches
+		IOVAs up to a certain size as a performance optimisation.
+		This sysfs file allows the range of the IOVA domain caching to
+		be set, such that larger than default IOVAs may be cached.
+		A value of 0 means that the default caching range is chosen.
+		A privileged user can request that the kernel change the range
+		by writing to this file. For this to happen, the same rules
+		and procedure apply as when changing the default domain type.
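As an aside, the interface documented above is exercised from user space by
writing a byte count to the new sysfs file. The sketch below is illustrative
only and is not part of this patch: the group number (0) and the 65536-byte
value are assumptions, and, as with changing the default domain type, the
device in the group must be unbound from its driver for the write to be
accepted.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Assumed group number and size; adjust for the real system. */
	const char *path = "/sys/kernel/iommu_groups/0/max_opt_dma_size";
	const char *val = "65536";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* The kernel rejects the write if the device is still bound. */
	if (write(fd, val, strlen(val)) != (ssize_t)strlen(val)) {
		perror("write");
		close(fd);
		return 1;
	}
	close(fd);
	return 0;
}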
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index df9ffd76c184..79f5cbea5c95 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -48,6 +48,7 @@ struct iommu_group {
 	struct iommu_domain *default_domain;
 	struct iommu_domain *domain;
 	struct list_head entry;
+	size_t max_opt_dma_size;
 };
 
 struct group_device {
@@ -89,6 +90,9 @@ static int iommu_create_device_direct_mappings(struct iommu_group *group,
 static struct iommu_group *iommu_group_get_for_dev(struct device *dev);
 static ssize_t iommu_group_store_type(struct iommu_group *group,
 				      const char *buf, size_t count);
+static ssize_t iommu_group_store_max_opt_dma_size(struct iommu_group *group,
+						   const char *buf,
+						   size_t count);
 
 #define IOMMU_GROUP_ATTR(_name, _mode, _show, _store)		\
 struct iommu_group_attribute iommu_group_attr_##_name =	\
@@ -570,6 +574,12 @@ static ssize_t iommu_group_show_type(struct iommu_group *group,
 	return strlen(type);
 }
 
+static ssize_t iommu_group_show_max_opt_dma_size(struct iommu_group *group,
+						 char *buf)
+{
+	return sprintf(buf, "%zu\n", group->max_opt_dma_size);
+}
+
 static IOMMU_GROUP_ATTR(name, S_IRUGO, iommu_group_show_name, NULL);
 
 static IOMMU_GROUP_ATTR(reserved_regions, 0444,
@@ -578,6 +588,9 @@ static IOMMU_GROUP_ATTR(reserved_regions, 0444,
 static IOMMU_GROUP_ATTR(type, 0644, iommu_group_show_type,
 			iommu_group_store_type);
 
+static IOMMU_GROUP_ATTR(max_opt_dma_size, 0644, iommu_group_show_max_opt_dma_size,
+			iommu_group_store_max_opt_dma_size);
+
 static void iommu_group_release(struct kobject *kobj)
 {
 	struct iommu_group *group = to_iommu_group(kobj);
@@ -664,6 +677,10 @@ struct iommu_group *iommu_group_alloc(void)
 	if (ret)
 		return ERR_PTR(ret);
 
+	ret = iommu_group_create_file(group, &iommu_group_attr_max_opt_dma_size);
+	if (ret)
+		return ERR_PTR(ret);
+
 	pr_debug("Allocated group %d\n", group->id);
 
 	return group;
@@ -2302,6 +2319,11 @@ struct iommu_domain *iommu_get_dma_domain(struct device *dev)
 	return dev->iommu_group->default_domain;
 }
 
+size_t iommu_group_get_max_opt_dma_size(struct iommu_group *group)
+{
+	return group->max_opt_dma_size;
+}
+
 /*
  * IOMMU groups are really the natural working unit of the IOMMU, but
  * the IOMMU API works on domains and devices.  Bridge that gap by
@@ -3132,12 +3154,14 @@ EXPORT_SYMBOL_GPL(iommu_sva_get_pasid);
  * @prev_dev: The device in the group (this is used to make sure that the device
  *	 hasn't changed after the caller has called this function)
  * @type: The type of the new default domain that gets associated with the group
+ * @max_opt_dma_size: Set the IOMMU group max_opt_dma_size if non-zero
  *
  * Returns 0 on success and error code on failure
  *
  */
 static int iommu_change_dev_def_domain(struct iommu_group *group,
-				       struct device *prev_dev, int type)
+				       struct device *prev_dev, int type,
+				       unsigned long max_opt_dma_size)
 {
 	struct iommu_domain *prev_dom;
 	struct group_device *grp_dev;
@@ -3238,6 +3262,9 @@ static int iommu_change_dev_def_domain(struct iommu_group *group,
 
 	group->domain = group->default_domain;
 
+	if (max_opt_dma_size)
+		group->max_opt_dma_size = max_opt_dma_size;
+
 	/*
 	 * Release the mutex here because ops->probe_finalize() call-back of
 	 * some vendor IOMMU drivers calls arm_iommu_attach_device() which
@@ -3264,6 +3291,7 @@ static int iommu_change_dev_def_domain(struct iommu_group *group,
 
 enum iommu_group_op {
 	CHANGE_GROUP_TYPE,
+	CHANGE_DMA_OPT_SIZE,
 };
 
 static int __iommu_group_store_type(const char *buf, struct iommu_group *group,
@@ -3292,7 +3320,24 @@ static int __iommu_group_store_type(const char *buf, struct iommu_group *group,
 		return -EINVAL;
 	}
 
-	return iommu_change_dev_def_domain(group, dev, type);
+	return iommu_change_dev_def_domain(group, dev, type, 0);
+}
+
+static int __iommu_group_store_max_opt_dma_size(const char *buf,
+						struct iommu_group *group,
+						struct device *dev)
+{
+	unsigned long val;
+
+	if (kstrtoul(buf, 0, &val) || !val)
+		return -EINVAL;
+
+	if (device_is_bound(dev)) {
+		pr_err_ratelimited("Device is still bound to driver\n");
+		return -EINVAL;
+	}
+
+	return iommu_change_dev_def_domain(group, dev, __IOMMU_DOMAIN_SAME, val);
 }
 
 /*
@@ -3369,6 +3414,9 @@ static ssize_t iommu_group_store_common(struct iommu_group *group,
 	case CHANGE_GROUP_TYPE:
 		ret = __iommu_group_store_type(buf, group, dev);
 		break;
+	case CHANGE_DMA_OPT_SIZE:
+		ret = __iommu_group_store_max_opt_dma_size(buf, group, dev);
+		break;
 	default:
 		ret = -EINVAL;
 	}
@@ -3385,3 +3433,10 @@ static ssize_t iommu_group_store_type(struct iommu_group *group,
 {
 	return iommu_group_store_common(group, CHANGE_GROUP_TYPE, buf, count);
 }
+
+static ssize_t iommu_group_store_max_opt_dma_size(struct iommu_group *group,
+						  const char *buf,
+						  size_t count)
+{
+	return iommu_group_store_common(group, CHANGE_DMA_OPT_SIZE, buf, count);
+}
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index d242fccc7c2d..f7f1799fb07a 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -434,6 +434,7 @@ extern int iommu_sva_unbind_gpasid(struct iommu_domain *domain,
 				   struct device *dev, ioasid_t pasid);
 extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev);
 extern struct iommu_domain *iommu_get_dma_domain(struct device *dev);
+extern size_t iommu_group_get_max_opt_dma_size(struct iommu_group *group);
 extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
 		     phys_addr_t paddr, size_t size, int prot);
 extern int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova,
@@ -732,6 +733,11 @@ static inline struct iommu_domain *iommu_get_domain_for_dev(struct device *dev)
 	return NULL;
 }
 
+static inline size_t iommu_group_get_max_opt_dma_size(struct iommu_group *group)
+{
+	return 0;
+}
+
 static inline int iommu_map(struct iommu_domain *domain, unsigned long iova,
 			    phys_addr_t paddr, size_t size, int prot)
 {
-- 
2.26.2
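For context, the only accessor the patch adds on the kernel side is
iommu_group_get_max_opt_dma_size(), which simply reports the value stored for
the group. The sketch below shows how a DMA layer might consume it; it is a
hypothetical example, not code from this series, and the function name
example_iova_cache_limit() and the 128 KB fallback are assumptions.

#include <linux/iommu.h>
#include <linux/sizes.h>

/*
 * Hypothetical consumer: pick an IOVA caching limit for a group's default
 * domain, honouring a user override written via the new sysfs file.
 */
static size_t example_iova_cache_limit(struct iommu_group *group)
{
	size_t max_opt = iommu_group_get_max_opt_dma_size(group);

	/* 0 means no override was written; fall back to an assumed default. */
	if (!max_opt)
		return SZ_128K;

	return max_opt;
}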