Subject: Re: [PATCH v3 2/4] dma-iommu: Add iommu_dma_opt_mapping_size()
From: John Garry
Date: Tue, 14 Jun 2022 14:12:18 +0100
Message-ID: <4a3ab043-f609-22cb-895f-e67c8dd8f6ab@huawei.com>
In-Reply-To: <1654507822-168026-3-git-send-email-john.garry@huawei.com>
References: <1654507822-168026-1-git-send-email-john.garry@huawei.com>
 <1654507822-168026-3-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 06/06/2022 10:30, John Garry wrote:
> Add the IOMMU callback for DMA mapping API dma_opt_mapping_size(), which
> allows the drivers to know the optimal mapping limit and thus limit the
> requested IOVA lengths.
>
> This value is based on the IOVA rcache range limit, as IOVAs allocated
> above this limit must always be newly allocated, which may be quite slow.
>

Can I please get some sort of ack from the IOMMU people on this one?
Thanks,
John

EOM

> Signed-off-by: John Garry
> Reviewed-by: Damien Le Moal
> ---
>  drivers/iommu/dma-iommu.c | 6 ++++++
>  drivers/iommu/iova.c      | 5 +++++
>  include/linux/iova.h      | 2 ++
>  3 files changed, 13 insertions(+)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index f90251572a5d..9e1586447ee8 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -1459,6 +1459,11 @@ static unsigned long iommu_dma_get_merge_boundary(struct device *dev)
>  	return (1UL << __ffs(domain->pgsize_bitmap)) - 1;
>  }
>
> +static size_t iommu_dma_opt_mapping_size(void)
> +{
> +	return iova_rcache_range();
> +}
> +
>  static const struct dma_map_ops iommu_dma_ops = {
>  	.alloc = iommu_dma_alloc,
>  	.free = iommu_dma_free,
> @@ -1479,6 +1484,7 @@ static const struct dma_map_ops iommu_dma_ops = {
>  	.map_resource = iommu_dma_map_resource,
>  	.unmap_resource = iommu_dma_unmap_resource,
>  	.get_merge_boundary = iommu_dma_get_merge_boundary,
> +	.opt_mapping_size = iommu_dma_opt_mapping_size,
>  };
>
>  /*
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index db77aa675145..9f00b58d546e 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -26,6 +26,11 @@ static unsigned long iova_rcache_get(struct iova_domain *iovad,
>  static void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad);
>  static void free_iova_rcaches(struct iova_domain *iovad);
>
> +unsigned long iova_rcache_range(void)
> +{
> +	return PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1);
> +}
> +
>  static int iova_cpuhp_dead(unsigned int cpu, struct hlist_node *node)
>  {
>  	struct iova_domain *iovad;
> diff --git a/include/linux/iova.h b/include/linux/iova.h
> index 320a70e40233..c6ba6d95d79c 100644
> --- a/include/linux/iova.h
> +++ b/include/linux/iova.h
> @@ -79,6 +79,8 @@ static inline unsigned long iova_pfn(struct iova_domain *iovad, dma_addr_t iova)
>  int iova_cache_get(void);
>  void iova_cache_put(void);
>
> +unsigned long iova_rcache_range(void);
> +
>  void free_iova(struct iova_domain *iovad, unsigned long pfn);
>  void __free_iova(struct iova_domain *iovad, struct iova *iova);
>  struct iova *alloc_iova(struct iova_domain *iovad, unsigned long size,
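
For anyone reading along from the driver side, here is a minimal sketch of
how a consumer of the new limit could look. It assumes the
dma_opt_mapping_size() helper added earlier in this series takes a
struct device * and returns a size_t; the function name and the
block-queue plumbing below are purely illustrative and not taken from
these patches.

#include <linux/blkdev.h>
#include <linux/dma-mapping.h>

/*
 * Illustrative sketch only -- not part of this series.  A hypothetical
 * driver caps its per-request size with dma_opt_mapping_size() so that,
 * behind an IOMMU, IOVA allocations stay within the rcache range and
 * avoid the slower allocation path for larger lengths.
 */
static void example_cap_request_size(struct request_queue *q,
				     struct device *dma_dev)
{
	size_t opt = dma_opt_mapping_size(dma_dev);

	if (opt)
		blk_queue_max_hw_sectors(q, min_t(unsigned int,
						  queue_max_hw_sectors(q),
						  opt >> SECTOR_SHIFT));
}

The idea is simply that a driver queries the limit once at setup time and
folds it into whatever request-size cap it already maintains.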