Subject: [mm PATCH v2 08/26] arch/c6x: Add option to skip sync on DMA map and unmap
From: Alexander Duyck
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Mark Salter
Date: Wed, 02 Nov 2016 07:13:54 -0400
Message-ID: <20161102111342.79519.23149.stgit@ahduyck-blue-test.jf.intel.com>
In-Reply-To: <20161102111031.79519.14741.stgit@ahduyck-blue-test.jf.intel.com>
References: <20161102111031.79519.14741.stgit@ahduyck-blue-test.jf.intel.com>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

This change adds support for DMA_ATTR_SKIP_CPU_SYNC, which lets us avoid
invoking cache line invalidation if the driver will just handle it later
via a sync_for_cpu or sync_for_device call.

Acked-by: Mark Salter
Signed-off-by: Alexander Duyck
---
 arch/c6x/kernel/dma.c |   14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/c6x/kernel/dma.c b/arch/c6x/kernel/dma.c
index db4a6a3..6752df3 100644
--- a/arch/c6x/kernel/dma.c
+++ b/arch/c6x/kernel/dma.c
@@ -42,14 +42,17 @@ static dma_addr_t c6x_dma_map_page(struct device *dev, struct page *page,
 {
 	dma_addr_t handle = virt_to_phys(page_address(page) + offset);
 
-	c6x_dma_sync(handle, size, dir);
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+		c6x_dma_sync(handle, size, dir);
+
 	return handle;
 }
 
 static void c6x_dma_unmap_page(struct device *dev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	c6x_dma_sync(handle, size, dir);
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+		c6x_dma_sync(handle, size, dir);
 }
 
 static int c6x_dma_map_sg(struct device *dev, struct scatterlist *sglist,
@@ -60,7 +63,8 @@ static int c6x_dma_map_sg(struct device *dev, struct scatterlist *sglist,
 
 	for_each_sg(sglist, sg, nents, i) {
 		sg->dma_address = sg_phys(sg);
-		c6x_dma_sync(sg->dma_address, sg->length, dir);
+		if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+			c6x_dma_sync(sg->dma_address, sg->length, dir);
 	}
 
 	return nents;
@@ -72,9 +76,11 @@ static void c6x_dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
 	struct scatterlist *sg;
 	int i;
 
+	if (attrs & DMA_ATTR_SKIP_CPU_SYNC)
+		return;
+
 	for_each_sg(sglist, sg, nents, i)
 		c6x_dma_sync(sg_dma_address(sg), sg->length, dir);
-
 }
 
 static void c6x_dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
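
For context, the sketch below is illustrative only and is not part of this
patch; the device pointer, buffer, and length are hypothetical driver state.
It shows the usage pattern the commit message refers to: a receive path that
maps a buffer with DMA_ATTR_SKIP_CPU_SYNC and then performs the cache
maintenance explicitly around each use with the streaming DMA sync calls.

/*
 * Illustrative sketch only (not part of this patch). A driver that
 * recycles a receive buffer can skip the implicit sync at map time
 * and bracket each use with explicit sync_for_cpu/sync_for_device
 * calls. "dev", "buf" and "len" are hypothetical driver state.
 */
#include <linux/dma-mapping.h>

static int example_rx_setup(struct device *dev, void *buf, size_t len,
			    dma_addr_t *handle)
{
	/* Map without the implicit sync; we sync explicitly below. */
	*handle = dma_map_single_attrs(dev, buf, len, DMA_FROM_DEVICE,
				       DMA_ATTR_SKIP_CPU_SYNC);
	if (dma_mapping_error(dev, *handle))
		return -ENOMEM;

	/* Hand the buffer to the device for the first receive. */
	dma_sync_single_for_device(dev, *handle, len, DMA_FROM_DEVICE);
	return 0;
}

static void example_rx_complete(struct device *dev, dma_addr_t handle,
				size_t len)
{
	/* The CPU is about to read the received data. */
	dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);

	/* ... process the data, then give the buffer back to the device ... */
	dma_sync_single_for_device(dev, handle, len, DMA_FROM_DEVICE);
}

The trade-off is that skipping the sync at map/unmap time makes the driver
responsible for pairing every device access with dma_sync_single_for_device()
and every CPU access with dma_sync_single_for_cpu(); the c6x_dma_sync() calls
above are only bypassed when the caller opts into that contract.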