From: Christoph Hellwig
To: Thomas Hellstrom
Cc: Christian König, Tom Lendacky, iommu@lists.linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] dma-mapping: move dma_addressing_limited out of line
Date: Wed, 27 Nov 2019 15:40:05 +0100
Message-Id: <20191127144006.25998-2-hch@lst.de>
In-Reply-To: <20191127144006.25998-1-hch@lst.de>
References: <20191127144006.25998-1-hch@lst.de>

This function isn't used in the fast path, and moving it out of line
will reduce include clutter with the next change.

Signed-off-by: Christoph Hellwig
---
 include/linux/dma-mapping.h | 14 +-------------
 kernel/dma/mapping.c        | 15 +++++++++++++++
 2 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index c4d8741264bd..94ef74ecd18a 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -687,19 +687,7 @@ static inline int dma_coerce_mask_and_coherent(struct device *dev, u64 mask)
 	return dma_set_mask_and_coherent(dev, mask);
 }
 
-/**
- * dma_addressing_limited - return if the device is addressing limited
- * @dev: device to check
- *
- * Return %true if the devices DMA mask is too small to address all memory in
- * the system, else %false. Lack of addressing bits is the prime reason for
- * bounce buffering, but might not be the only one.
- */
-static inline bool dma_addressing_limited(struct device *dev)
-{
-	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
-			dma_get_required_mask(dev);
-}
+bool dma_addressing_limited(struct device *dev);
 
 #ifdef CONFIG_ARCH_HAS_SETUP_DMA_OPS
 void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 12ff766ec1fa..1dbe6d725962 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -405,3 +405,18 @@ unsigned long dma_get_merge_boundary(struct device *dev)
 	return ops->get_merge_boundary(dev);
 }
 EXPORT_SYMBOL_GPL(dma_get_merge_boundary);
+
+/**
+ * dma_addressing_limited - return if the device is addressing limited
+ * @dev: device to check
+ *
+ * Return %true if the devices DMA mask is too small to address all memory in
+ * the system, else %false. Lack of addressing bits is the prime reason for
+ * bounce buffering, but might not be the only one.
+ */
+bool dma_addressing_limited(struct device *dev)
+{
+	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
+			dma_get_required_mask(dev);
+}
+EXPORT_SYMBOL_GPL(dma_addressing_limited);
-- 
2.20.1
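
As context for reviewers, a minimal sketch of how a driver might consult
this helper to decide whether to fall back to its own bounce buffering.
The foo_probe() function and its dev_info() message are hypothetical and
not part of this patch; only dma_set_mask_and_coherent() and
dma_addressing_limited() are real kernel APIs used here:

#include <linux/device.h>
#include <linux/dma-mapping.h>

/*
 * Hypothetical example, not part of this patch: a probe routine that
 * checks whether the device can reach all of system memory and, if not,
 * switches this made-up driver to its own bounce-buffer path.
 */
static int foo_probe(struct device *dev)
{
	int ret;

	/* Request the widest mask the hardware supports. */
	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
	if (ret)
		return ret;

	/*
	 * dma_addressing_limited() compares the smaller of the device's
	 * DMA mask and dev->bus_dma_limit against the mask required to
	 * address all installed memory.  With this patch, callers only
	 * need the declaration from <linux/dma-mapping.h>; the body now
	 * lives in kernel/dma/mapping.c.
	 */
	if (dma_addressing_limited(dev))
		dev_info(dev, "DMA addressing limited, using bounce buffers\n");

	return 0;
}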