Date: Fri, 14 Dec 2018 12:47:19 +0100
From: Christoph Hellwig
To: Geert Uytterhoeven
Cc: Christoph Hellwig, Linux IOMMU, Michal Simek, ashutosh.dixit@intel.com,
	alpha, arcml, linux-c6x-dev@linux-c6x.org, linux-m68k, Openrisc,
	Parisc List, linux-s390, sparclinux, linux-xtensa@linux-xtensa.org,
	Linux Kernel Mailing List
Subject: Re: [PATCH 1/2] dma-mapping: zero memory returned from dma_alloc_*
Message-ID: <20181214114719.GA3316@lst.de>
References: <20181214082515.14835-1-hch@lst.de> <20181214082515.14835-2-hch@lst.de>

On Fri, Dec 14, 2018 at 10:54:32AM +0100, Geert Uytterhoeven wrote:
> > -	page = alloc_pages(flag, order);
> > +	page = alloc_pages(flag | __GFP_ZERO, order);
> >  	if (!page)
> >  		return NULL;
>
> There's a second implementation below, which calls __get_free_pages() and
> does an explicit memset().  As __get_free_pages() calls alloc_pages(),
> perhaps it makes sense to replace the memset() by __GFP_ZERO, to increase
> consistency?
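(For concreteness, the consolidation suggested above would look roughly like
the following in the __get_free_pages()-based allocator.  This is only an
illustrative sketch: the function name, signature, and the virt_to_phys()
handling are assumed from the common arch_dma_alloc() pattern, not copied
from the m68k tree.)

/*
 * Sketch only -- not the actual arch/m68k code.  __get_free_pages()
 * bottoms out in alloc_pages(), so passing __GFP_ZERO lets the page
 * allocator do the zeroing instead of a separate memset() afterwards.
 * Assumes the usual dma.c context (<linux/gfp.h>, <asm/io.h>).
 */
void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
		gfp_t gfp, unsigned long attrs)
{
	void *ret;

	ret = (void *)__get_free_pages(gfp | __GFP_ZERO, get_order(size));
	if (!ret)
		return NULL;
	*dma_handle = virt_to_phys(ret);	/* pages already zeroed by the allocator */
	return ret;
}

The net effect is the same as the existing explicit memset(), but the
zeroing happens in one place, inside the page allocator.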
It would, but this patch really tries to be minimally invasive and just
provide the zeroing everywhere.  There is plenty of opportunity to improve
the m68k dma allocator if I can get enough reviewers/testers:

 - for one, the coldfire/nommu case absolutely does not make sense to me,
   as no work is done at all to make sure the memory is mapped uncached,
   despite the architecture implementing cache flushing for the map
   interface.  So this whole implementation looks broken to me and will
   need some major work (I had a previous discussion with Greg on that,
   which needs to be dug out).

 - the "regular" implementation in this patch should probably be replaced
   with the generic remapping helpers that have been added for the 4.21
   merge window:

   http://git.infradead.org/users/hch/dma-mapping.git/commitdiff/0c3b3171ceccb8830c2bb5adff1b4e9b204c1450

Compile tested only patch below:

--
From ade86dc75b9850daf9111ebf9ce15825a6144f2d Mon Sep 17 00:00:00 2001
From: Christoph Hellwig
Date: Fri, 14 Dec 2018 12:41:45 +0100
Subject: m68k: use the generic dma coherent remap allocator

This switches to using common code for the DMA allocations, including
potential use of the CMA allocator if configured.

Also add a few comments where the existing behavior seems to be lacking.

Signed-off-by: Christoph Hellwig
---
 arch/m68k/Kconfig      |  2 ++
 arch/m68k/kernel/dma.c | 64 ++++++++++++------------------------
 2 files changed, 20 insertions(+), 46 deletions(-)

diff --git a/arch/m68k/Kconfig b/arch/m68k/Kconfig
index 8a5868e9a3a0..60788cf02fbc 100644
--- a/arch/m68k/Kconfig
+++ b/arch/m68k/Kconfig
@@ -2,10 +2,12 @@ config M68K
 	bool
 	default y
+	select ARCH_HAS_DMA_MMAP_PGPROT if MMU && !COLDFIRE
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE if HAS_DMA
 	select ARCH_MIGHT_HAVE_PC_PARPORT if ISA
 	select ARCH_NO_COHERENT_DMA_MMAP if !MMU
 	select ARCH_NO_PREEMPT if !COLDFIRE
+	select DMA_DIRECT_REMAP if MMU && !COLDFIRE
 	select HAVE_IDE
 	select HAVE_AOUT if MMU
 	select HAVE_DEBUG_BUGVERBOSE
diff --git a/arch/m68k/kernel/dma.c b/arch/m68k/kernel/dma.c
index dafe99d08a6a..16da5d96e228 100644
--- a/arch/m68k/kernel/dma.c
+++ b/arch/m68k/kernel/dma.c
@@ -18,57 +18,29 @@
 #include

 #if defined(CONFIG_MMU) && !defined(CONFIG_COLDFIRE)
-
-void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
-		gfp_t flag, unsigned long attrs)
+void arch_dma_prep_coherent(struct page *page, size_t size)
 {
-	struct page *page, **map;
-	pgprot_t pgprot;
-	void *addr;
-	int i, order;
-
-	pr_debug("dma_alloc_coherent: %d,%x\n", size, flag);
-
-	size = PAGE_ALIGN(size);
-	order = get_order(size);
-
-	page = alloc_pages(flag | __GFP_ZERO, order);
-	if (!page)
-		return NULL;
-
-	*handle = page_to_phys(page);
-	map = kmalloc(sizeof(struct page *) << order, flag & ~__GFP_DMA);
-	if (!map) {
-		__free_pages(page, order);
-		return NULL;
-	}
-	split_page(page, order);
-
-	order = 1 << order;
-	size >>= PAGE_SHIFT;
-	map[0] = page;
-	for (i = 1; i < size; i++)
-		map[i] = page + i;
-	for (; i < order; i++)
-		__free_page(page + i);
-	pgprot = __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_DIRTY);
-	if (CPU_IS_040_OR_060)
-		pgprot_val(pgprot) |= _PAGE_GLOBAL040 | _PAGE_NOCACHE_S;
-	else
-		pgprot_val(pgprot) |= _PAGE_NOCACHE030;
-	addr = vmap(map, size, VM_MAP, pgprot);
-	kfree(map);
-
-	return addr;
+	/*
+	 * XXX: don't we need to flush and invalidate the caches before
+	 * creating a coherent mapping?
+	 * coherent?
+	 */
 }

-void arch_dma_free(struct device *dev, size_t size, void *addr,
-		dma_addr_t handle, unsigned long attrs)
+pgprot_t arch_dma_mmap_pgprot(struct device *dev, pgprot_t prot,
+		unsigned long attrs)
 {
-	pr_debug("dma_free_coherent: %p, %x\n", addr, handle);
-	vfree(addr);
+	/*
+	 * XXX: this doesn't seem to handle the sun3 MMU at all.
+	 */
+	if (CPU_IS_040_OR_060) {
+		pgprot_val(prot) &= ~_PAGE_CACHE040;
+		pgprot_val(prot) |= _PAGE_GLOBAL040 | _PAGE_NOCACHE_S;
+	} else {
+		pgprot_val(prot) |= _PAGE_NOCACHE030;
+	}
+	return prot;
 }
-
 #else

 #include
--
2.19.2
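(As a usage-level illustration of what the series guarantees: once dma_alloc_*
zeroes memory, any DMA API consumer may rely on freshly allocated coherent
buffers reading back as zero, regardless of which backend -- remapped, CMA, or
direct -- served the allocation.  The probe function and device below are
invented purely for illustration and are not part of the patch.)

#include <linux/dma-mapping.h>

/* Hypothetical example: verify that coherent allocations come back zeroed
 * now that the dma_alloc_* paths zero memory unconditionally. */
static int example_probe(struct device *dev)
{
	dma_addr_t dma;
	u32 *buf = dma_alloc_coherent(dev, PAGE_SIZE, &dma, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;
	WARN_ON(buf[0] != 0);	/* zeroing is now guaranteed by the DMA API */
	dma_free_coherent(dev, PAGE_SIZE, buf, dma);
	return 0;
}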