Subject: [mm PATCH v2 17/26] arch/parisc: Add option to skip DMA sync as a part of map and unmap
From: Alexander Duyck
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: netdev@vger.kernel.org, Helge Deller, "James E.J. Bottomley",
    linux-parisc@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Wed, 02 Nov 2016 07:15:08 -0400
Message-ID: <20161102111501.79519.99354.stgit@ahduyck-blue-test.jf.intel.com>
In-Reply-To: <20161102111031.79519.14741.stgit@ahduyck-blue-test.jf.intel.com>
References: <20161102111031.79519.14741.stgit@ahduyck-blue-test.jf.intel.com>
User-Agent: StGit/0.17.1-dirty

This change adds support for DMA_ATTR_SKIP_CPU_SYNC, which allows us to
avoid invoking cache line invalidation if the driver will just handle it
via a sync_for_cpu or sync_for_device call.

Cc: "James E.J. Bottomley"
Cc: Helge Deller
Cc: linux-parisc@vger.kernel.org
Signed-off-by: Alexander Duyck
---
 arch/parisc/kernel/pci-dma.c |   20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/arch/parisc/kernel/pci-dma.c b/arch/parisc/kernel/pci-dma.c
index 02d9ed0..be55ede 100644
--- a/arch/parisc/kernel/pci-dma.c
+++ b/arch/parisc/kernel/pci-dma.c
@@ -459,7 +459,9 @@ static dma_addr_t pa11_dma_map_page(struct device *dev, struct page *page,
 	void *addr = page_address(page) + offset;
 	BUG_ON(direction == DMA_NONE);
 
-	flush_kernel_dcache_range((unsigned long) addr, size);
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+		flush_kernel_dcache_range((unsigned long) addr, size);
+
 	return virt_to_phys(addr);
 }
 
@@ -469,8 +471,11 @@ static void pa11_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
 {
 	BUG_ON(direction == DMA_NONE);
 
+	if (attrs & DMA_ATTR_SKIP_CPU_SYNC)
+		return;
+
 	if (direction == DMA_TO_DEVICE)
-	    return;
+		return;
 
 	/*
 	 * For PCI_DMA_FROMDEVICE this flush is not necessary for the
@@ -479,7 +484,6 @@ static void pa11_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
 	 */
 
 	flush_kernel_dcache_range((unsigned long) phys_to_virt(dma_handle), size);
-	return;
 }
 
 static int pa11_dma_map_sg(struct device *dev, struct scatterlist *sglist,
@@ -496,6 +500,10 @@ static int pa11_dma_map_sg(struct device *dev, struct scatterlist *sglist,
 
 		sg_dma_address(sg) = (dma_addr_t) virt_to_phys(vaddr);
 		sg_dma_len(sg) = sg->length;
+
+		if (attrs & DMA_ATTR_SKIP_CPU_SYNC)
+			continue;
+
 		flush_kernel_dcache_range(vaddr, sg->length);
 	}
 	return nents;
@@ -510,14 +518,16 @@ static void pa11_dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
 
 	BUG_ON(direction == DMA_NONE);
 
+	if (attrs & DMA_ATTR_SKIP_CPU_SYNC)
+		return;
+
 	if (direction == DMA_TO_DEVICE)
-	    return;
+		return;
 
 	/* once we do combining we'll need to use phys_to_virt(sg_dma_address(sglist)) */
 
 	for_each_sg(sglist, sg, nents, i)
		flush_kernel_vmap_range(sg_virt(sg), sg->length);
-	return;
 }
 
 static void pa11_dma_sync_single_for_cpu(struct device *dev,
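
[Editor's note: the sketch below is illustrative only and is not part of the
patch. It shows roughly how a driver could take advantage of
DMA_ATTR_SKIP_CPU_SYNC once the arch map/unmap paths honor it: skip the
implicit cache maintenance at map and unmap time and perform the sync
explicitly around device accesses. The example_* function names and their
parameters are hypothetical; the dma_*_attrs and dma_sync_single_* calls are
the standard DMA-mapping API.]

#include <linux/dma-mapping.h>

/*
 * Illustrative sketch: map a receive buffer without the implicit dcache
 * flush and do the sync explicitly once the device has written data.
 */
static dma_addr_t example_map_rx(struct device *dev, void *buf, size_t len)
{
	/* Defer cache maintenance; we will sync explicitly below. */
	return dma_map_single_attrs(dev, buf, len, DMA_FROM_DEVICE,
				    DMA_ATTR_SKIP_CPU_SYNC);
}

static void example_complete_rx(struct device *dev, dma_addr_t handle,
				size_t len)
{
	/* Make the device-written data visible to the CPU before reading. */
	dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);

	/* ... consume the buffer ... */

	/* Unmap without repeating the cache maintenance done above. */
	dma_unmap_single_attrs(dev, handle, len, DMA_FROM_DEVICE,
			       DMA_ATTR_SKIP_CPU_SYNC);
}

[A driver that recycles its receive buffers this way only pays for the cache
maintenance it actually needs, rather than a full flush/invalidate on every
map and unmap.]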