Date: Thu, 19 Dec 2019 16:02:59 +0100
From: Christoph Hellwig
To: Peter Ujfalusi
Cc: Christoph Hellwig, Russell King - ARM Linux admin, Roger Quadros,
	Vignesh Raghavendra, linux-arm-kernel@lists.infradead.org,
	iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
	Konrad Rzeszutek Wilk
Subject: Re: [PATCH 2/2] arm: use swiotlb for bounce buffer on LPAE configs
Message-ID: <20191219150259.GA3003@lst.de>
References: <20190709142011.24984-1-hch@lst.de> <20190709142011.24984-3-hch@lst.de> <9bbd87c2-5b6c-069c-dd22-5105dc827428@ti.com>
In-Reply-To: <9bbd87c2-5b6c-069c-dd22-5105dc827428@ti.com>
User-Agent: Mutt/1.5.17 (2007-11-01)
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Peter,

can you try the patch below (it will need to be split into two):
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index e822af0d9219..30b9c6786ce3 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -221,7 +221,8 @@ EXPORT_SYMBOL(arm_coherent_dma_ops);
 
 static int __dma_supported(struct device *dev, u64 mask, bool warn)
 {
-	unsigned long max_dma_pfn = min(max_pfn, arm_dma_pfn_limit);
+	unsigned long max_dma_pfn =
+		min_t(unsigned long, max_pfn, zone_dma_limit >> PAGE_SHIFT);
 
 	/*
 	 * Translate the device's DMA mask to a PFN limit. This
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 3ef204137e73..dd0e169a1bb1 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -19,6 +19,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <linux/dma-direct.h>
 #include <...>
 #include <...>
 #include <...>
@@ -84,15 +85,6 @@ static void __init find_limits(unsigned long *min, unsigned long *max_low,
 phys_addr_t arm_dma_zone_size __read_mostly;
 EXPORT_SYMBOL(arm_dma_zone_size);
 
-/*
- * The DMA mask corresponding to the maximum bus address allocatable
- * using GFP_DMA. The default here places no restriction on DMA
- * allocations. This must be the smallest DMA mask in the system,
- * so a successful GFP_DMA allocation will always satisfy this.
- */
-phys_addr_t arm_dma_limit;
-unsigned long arm_dma_pfn_limit;
-
 static void __init arm_adjust_dma_zone(unsigned long *size, unsigned long *hole,
 	unsigned long dma_size)
 {
@@ -108,14 +100,14 @@ static void __init arm_adjust_dma_zone(unsigned long *size, unsigned long *hole,
 
 void __init setup_dma_zone(const struct machine_desc *mdesc)
 {
-#ifdef CONFIG_ZONE_DMA
-	if (mdesc->dma_zone_size) {
+	if (!IS_ENABLED(CONFIG_ZONE_DMA)) {
+		zone_dma_limit = ((phys_addr_t)~0);
+	} else if (mdesc->dma_zone_size) {
 		arm_dma_zone_size = mdesc->dma_zone_size;
-		arm_dma_limit = PHYS_OFFSET + arm_dma_zone_size - 1;
-	} else
-		arm_dma_limit = 0xffffffff;
-	arm_dma_pfn_limit = arm_dma_limit >> PAGE_SHIFT;
-#endif
+		zone_dma_limit = PHYS_OFFSET + arm_dma_zone_size - 1;
+	} else {
+		zone_dma_limit = 0xffffffff;
+	}
 }
 
 static void __init zone_sizes_init(unsigned long min, unsigned long max_low,
@@ -279,7 +271,7 @@ void __init arm_memblock_init(const struct machine_desc *mdesc)
 	early_init_fdt_scan_reserved_mem();
 
 	/* reserve memory for DMA contiguous allocations */
-	dma_contiguous_reserve(arm_dma_limit);
+	dma_contiguous_reserve(zone_dma_limit);
 
 	arm_memblock_steal_permitted = false;
 	memblock_dump_all();
diff --git a/arch/arm/mm/mm.h b/arch/arm/mm/mm.h
index 88c121ac14b3..7dbd77554273 100644
--- a/arch/arm/mm/mm.h
+++ b/arch/arm/mm/mm.h
@@ -82,14 +82,6 @@ extern __init void add_static_vm_early(struct static_vm *svm);
 
 #endif
 
-#ifdef CONFIG_ZONE_DMA
-extern phys_addr_t arm_dma_limit;
-extern unsigned long arm_dma_pfn_limit;
-#else
-#define arm_dma_limit ((phys_addr_t)~0)
-#define arm_dma_pfn_limit (~0ul >> PAGE_SHIFT)
-#endif
-
 extern phys_addr_t arm_lowmem_limit;
 
 void __init bootmem_init(void);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index b65dffdfb201..7a7501acd763 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -441,7 +441,7 @@ void __init arm64_memblock_init(void)
 	early_init_fdt_scan_reserved_mem();
 
 	if (IS_ENABLED(CONFIG_ZONE_DMA)) {
-		zone_dma_bits = ARM64_ZONE_DMA_BITS;
+		zone_dma_limit = DMA_BIT_MASK(ARM64_ZONE_DMA_BITS);
 		arm64_dma_phys_limit = max_zone_phys(ARM64_ZONE_DMA_BITS);
 	}
 
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 9488b63dfc87..337ace03d3f0 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -223,7 +223,7 @@ static int __init mark_nonram_nosave(void)
  * everything else. GFP_DMA32 page allocations automatically fall back to
  * ZONE_DMA.
  *
- * By using 31-bit unconditionally, we can exploit zone_dma_bits to inform the
+ * By using 31-bit unconditionally, we can exploit zone_dma_limit to inform the
  * generic DMA mapping code. 32-bit only devices (if not handled by an IOMMU
  * anyway) will take a first dip into ZONE_NORMAL and get otherwise served by
  * ZONE_DMA.
@@ -257,18 +257,20 @@ void __init paging_init(void)
 	printk(KERN_DEBUG "Memory hole size: %ldMB\n",
 	       (long int)((top_of_ram - total_ram) >> 20));
 
+#ifdef CONFIG_ZONE_DMA
 	/*
 	 * Allow 30-bit DMA for very limited Broadcom wifi chips on many
	 * powerbooks.
	 */
-	if (IS_ENABLED(CONFIG_PPC32))
-		zone_dma_bits = 30;
-	else
-		zone_dma_bits = 31;
-
-#ifdef CONFIG_ZONE_DMA
-	max_zone_pfns[ZONE_DMA] = min(max_low_pfn,
-				      1UL << (zone_dma_bits - PAGE_SHIFT));
+	if (IS_ENABLED(CONFIG_PPC32)) {
+		zone_dma_limit = DMA_BIT_MASK(30);
+		max_zone_pfns[ZONE_DMA] = min(max_low_pfn,
+					      1UL << (30 - PAGE_SHIFT));
+	} else {
+		zone_dma_limit = DMA_BIT_MASK(31);
+		max_zone_pfns[ZONE_DMA] = min(max_low_pfn,
+					      1UL << (31 - PAGE_SHIFT));
+	}
 #endif
 	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
 #ifdef CONFIG_HIGHMEM
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index f0ce22220565..c403f61cb56b 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -118,7 +118,7 @@ void __init paging_init(void)
 
 	sparse_memory_present_with_active_regions(MAX_NUMNODES);
 	sparse_init();
-	zone_dma_bits = 31;
+	zone_dma_limit = DMA_BIT_MASK(31);
 	memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
 	max_zone_pfns[ZONE_DMA] = PFN_DOWN(MAX_DMA_ADDRESS);
 	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 24b8684aa21d..20d56d597506 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -6,7 +6,7 @@
 #include <linux/memblock.h> /* for min_low_pfn */
 #include <linux/mem_encrypt.h>
 
-extern unsigned int zone_dma_bits;
+extern phys_addr_t zone_dma_limit;
 
 #ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA
 #include <asm/dma-direct.h>
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 6af7ae83c4ad..5ea1bed2ba6f 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -21,7 +21,7 @@
  * it for entirely different regions. In that case the arch code needs to
  * override the variable below for dma-direct to work properly.
  */
-unsigned int zone_dma_bits __ro_after_init = 24;
+phys_addr_t zone_dma_limit __ro_after_init = DMA_BIT_MASK(24);
 
 static void report_addr(struct device *dev, dma_addr_t dma_addr, size_t size)
 {
@@ -74,7 +74,7 @@ static gfp_t __dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask,
 	 * Note that GFP_DMA32 and GFP_DMA are no ops without the corresponding
 	 * zones.
 	 */
-	if (*phys_limit <= DMA_BIT_MASK(zone_dma_bits))
+	if (*phys_limit <= zone_dma_limit)
 		return GFP_DMA;
 	if (*phys_limit <= DMA_BIT_MASK(32))
 		return GFP_DMA32;
@@ -483,7 +483,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
 	u64 min_mask;
 
 	if (IS_ENABLED(CONFIG_ZONE_DMA))
-		min_mask = DMA_BIT_MASK(zone_dma_bits);
+		min_mask = zone_dma_limit;
 	else
 		min_mask = DMA_BIT_MASK(32);
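For reference, a minimal stand-alone sketch (user space, not part of the patch
above) of the zone-selection check in __dma_direct_optimal_gfp_mask(), showing
that a zone_dma_limit initialised to DMA_BIT_MASK(24) behaves the same as the
old zone_dma_bits default of 24; the pick_zone_old()/pick_zone_new() helpers
are invented purely for illustration:

/*
 * Illustration only: the old "<= DMA_BIT_MASK(zone_dma_bits)" check and the
 * new "<= zone_dma_limit" check pick the same GFP zone.  DMA_BIT_MASK() is
 * copied from include/linux/dma-mapping.h.
 */
#include <stdint.h>
#include <stdio.h>

#define DMA_BIT_MASK(n)	(((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

static unsigned int zone_dma_bits = 24;			/* old representation */
static uint64_t zone_dma_limit = DMA_BIT_MASK(24);	/* new representation */

static const char *pick_zone_old(uint64_t phys_limit)
{
	if (phys_limit <= DMA_BIT_MASK(zone_dma_bits))
		return "GFP_DMA";
	if (phys_limit <= DMA_BIT_MASK(32))
		return "GFP_DMA32";
	return "GFP_KERNEL";
}

static const char *pick_zone_new(uint64_t phys_limit)
{
	if (phys_limit <= zone_dma_limit)
		return "GFP_DMA";
	if (phys_limit <= DMA_BIT_MASK(32))
		return "GFP_DMA32";
	return "GFP_KERNEL";
}

int main(void)
{
	uint64_t masks[] = {
		DMA_BIT_MASK(24), DMA_BIT_MASK(30), DMA_BIT_MASK(31),
		DMA_BIT_MASK(32), DMA_BIT_MASK(64),
	};

	/* Both helpers print the same zone for every device mask. */
	for (unsigned int i = 0; i < sizeof(masks) / sizeof(masks[0]); i++)
		printf("mask %#llx -> old %s, new %s\n",
		       (unsigned long long)masks[i],
		       pick_zone_old(masks[i]), pick_zone_new(masks[i]));
	return 0;
}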