From: Jia He
To: Christoph Hellwig, Marek Szyprowski, Robin Murphy, iommu@lists.linux.dev
Cc: linux-kernel@vger.kernel.org, nd@arm.com, Jia He
Subject: [PATCH v3 1/2] dma-mapping: export dma_addressing_limited()
Date: Mon, 16 Oct 2023 12:52:53 +0000
Message-Id: <20231016125254.1875-2-justin.he@arm.com>
In-Reply-To: <20231016125254.1875-1-justin.he@arm.com>
References: <20231016125254.1875-1-justin.he@arm.com>

This is a preparatory patch that moves dma_addressing_limited() out of
line and exports it, rather than introducing a new low-level helper.
Suggested-by: Christoph Hellwig
Signed-off-by: Jia He
---
 include/linux/dma-mapping.h | 19 +++++--------------
 kernel/dma/mapping.c        | 15 +++++++++++++++
 2 files changed, 20 insertions(+), 14 deletions(-)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index f0ccca16a0ac..4a658de44ee9 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -144,6 +144,7 @@ bool dma_pci_p2pdma_supported(struct device *dev);
 int dma_set_mask(struct device *dev, u64 mask);
 int dma_set_coherent_mask(struct device *dev, u64 mask);
 u64 dma_get_required_mask(struct device *dev);
+bool dma_addressing_limited(struct device *dev);
 size_t dma_max_mapping_size(struct device *dev);
 size_t dma_opt_mapping_size(struct device *dev);
 bool dma_need_sync(struct device *dev, dma_addr_t dma_addr);
@@ -264,6 +265,10 @@ static inline u64 dma_get_required_mask(struct device *dev)
 {
 	return 0;
 }
+static inline bool dma_addressing_limited(struct device *dev)
+{
+	return false;
+}
 static inline size_t dma_max_mapping_size(struct device *dev)
 {
 	return 0;
@@ -465,20 +470,6 @@ static inline int dma_coerce_mask_and_coherent(struct device *dev, u64 mask)
 	return dma_set_mask_and_coherent(dev, mask);
 }
 
-/**
- * dma_addressing_limited - return if the device is addressing limited
- * @dev: device to check
- *
- * Return %true if the devices DMA mask is too small to address all memory in
- * the system, else %false. Lack of addressing bits is the prime reason for
- * bounce buffering, but might not be the only one.
- */
-static inline bool dma_addressing_limited(struct device *dev)
-{
-	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
-		dma_get_required_mask(dev);
-}
-
 static inline unsigned int dma_get_max_seg_size(struct device *dev)
 {
 	if (dev->dma_parms && dev->dma_parms->max_segment_size)
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index e323ca48f7f2..5bfe782f9a7f 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -793,6 +793,21 @@ int dma_set_coherent_mask(struct device *dev, u64 mask)
 }
 EXPORT_SYMBOL(dma_set_coherent_mask);
 
+/**
+ * dma_addressing_limited - return if the device is addressing limited
+ * @dev: device to check
+ *
+ * Return %true if the devices DMA mask is too small to address all memory in
+ * the system, else %false. Lack of addressing bits is the prime reason for
+ * bounce buffering, but might not be the only one.
+ */
+bool dma_addressing_limited(struct device *dev)
+{
+	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
+		dma_get_required_mask(dev);
+}
+EXPORT_SYMBOL(dma_addressing_limited);
+
 size_t dma_max_mapping_size(struct device *dev)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
-- 
2.25.1
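
For illustration, a minimal sketch (not part of this patch) of how a modular
driver could use the helper once it is exported; example_probe() and the
module boilerplate around it are hypothetical:

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/module.h>

/*
 * Hypothetical probe path: request a full 64-bit DMA mask, then check
 * whether the effective mask still cannot address all system memory,
 * in which case bounce buffering is likely and the driver may want to
 * adapt its buffer strategy.
 */
static int example_probe(struct device *dev)
{
	int ret;

	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
	if (ret)
		return ret;

	/* Callable from modules thanks to the EXPORT_SYMBOL added above. */
	if (dma_addressing_limited(dev))
		dev_info(dev, "DMA addressing limited, expect bounce buffering\n");

	return 0;
}

MODULE_LICENSE("GPL");

A side benefit of moving the helper out of line is that its implementation
can grow later without every caller needing to be rebuilt against a changed
header.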