Date: Tue, 27 Aug 2019 08:37:54 +0200
From: Christoph Hellwig
To: Stefano Stabellini, Konrad Rzeszutek Wilk
Cc: xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org, x86@kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH 03/11] xen/arm: simplify dma_cache_maint
Message-ID: <20190827063754.GA32045@lst.de>
References: <20190826121944.515-1-hch@lst.de> <20190826121944.515-4-hch@lst.de>
In-Reply-To: <20190826121944.515-4-hch@lst.de>

And this was still buggy, I think; it really needs some real Xen/Arm testing, which I can't do.
Hopefully a better version below:

--
From 5ad4b6e291dbb49f65480c9b769414931cbd485a Mon Sep 17 00:00:00 2001
From: Christoph Hellwig
Date: Wed, 24 Jul 2019 15:26:08 +0200
Subject: xen/arm: simplify dma_cache_maint

Calculate the required operation in the caller, and pass it directly
instead of recalculating it for each page, and use simple arithmetic
to get from the physical address to Xen page size aligned chunks.

Signed-off-by: Christoph Hellwig
---
 arch/arm/xen/mm.c | 61 ++++++++++++++++------------------------------
 1 file changed, 21 insertions(+), 40 deletions(-)

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index 90574d89d0d4..2fde161733b0 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -35,64 +35,45 @@ unsigned long xen_get_swiotlb_free_pages(unsigned int order)
 	return __get_free_pages(flags, order);
 }
 
-enum dma_cache_op {
-	DMA_UNMAP,
-	DMA_MAP,
-};
 static bool hypercall_cflush = false;
 
-/* functions called by SWIOTLB */
-
-static void dma_cache_maint(dma_addr_t handle, unsigned long offset,
-	size_t size, enum dma_data_direction dir, enum dma_cache_op op)
+/* buffers in highmem or foreign pages cannot cross page boundaries */
+static void dma_cache_maint(dma_addr_t handle, size_t size, u32 op)
 {
 	struct gnttab_cache_flush cflush;
-	unsigned long xen_pfn;
-	size_t left = size;
 
-	xen_pfn = (handle >> XEN_PAGE_SHIFT) + offset / XEN_PAGE_SIZE;
-	offset %= XEN_PAGE_SIZE;
+	cflush.a.dev_bus_addr = handle & XEN_PAGE_MASK;
+	cflush.offset = xen_offset_in_page(handle);
+	cflush.op = op;
 
 	do {
-		size_t len = left;
-
-		/* buffers in highmem or foreign pages cannot cross page
-		 * boundaries */
-		if (len + offset > XEN_PAGE_SIZE)
-			len = XEN_PAGE_SIZE - offset;
-
-		cflush.op = 0;
-		cflush.a.dev_bus_addr = xen_pfn << XEN_PAGE_SHIFT;
-		cflush.offset = offset;
-		cflush.length = len;
-
-		if (op == DMA_UNMAP && dir != DMA_TO_DEVICE)
-			cflush.op = GNTTAB_CACHE_INVAL;
-		if (op == DMA_MAP) {
-			if (dir == DMA_FROM_DEVICE)
-				cflush.op = GNTTAB_CACHE_INVAL;
-			else
-				cflush.op = GNTTAB_CACHE_CLEAN;
-		}
-
-		if (cflush.op)
-			HYPERVISOR_grant_table_op(GNTTABOP_cache_flush, &cflush, 1);
+		if (size + cflush.offset > XEN_PAGE_SIZE)
+			cflush.length = XEN_PAGE_SIZE - cflush.offset;
+		else
+			cflush.length = size;
+
+		HYPERVISOR_grant_table_op(GNTTABOP_cache_flush, &cflush, 1);
 
-		offset = 0;
-		xen_pfn++;
-		left -= len;
-	} while (left);
+		cflush.offset = 0;
+		cflush.a.dev_bus_addr += cflush.length;
+		size -= cflush.length;
+	} while (size);
 }
 
 static void __xen_dma_page_dev_to_cpu(struct device *hwdev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir)
 {
-	dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, DMA_UNMAP);
+	if (dir != DMA_TO_DEVICE)
+		dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
 }
 
 static void __xen_dma_page_cpu_to_dev(struct device *hwdev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir)
 {
-	dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, DMA_MAP);
+	if (dir == DMA_FROM_DEVICE)
+		dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
+	else
+		dma_cache_maint(handle, size, GNTTAB_CACHE_CLEAN);
 }
 
 void __xen_dma_map_page(struct device *hwdev, struct page *page,
-- 
2.20.1