From: Christoph Hellwig <hch@lst.de>
To: Stefano Stabellini, Konrad Rzeszutek Wilk
Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org,
	xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 03/11] xen/arm: simplify dma_cache_maint
Date: Mon, 26 Aug 2019 14:19:36 +0200
Message-Id: <20190826121944.515-4-hch@lst.de>
In-Reply-To: <20190826121944.515-1-hch@lst.de>
References: <20190826121944.515-1-hch@lst.de>

Calculate the required cache operation in the caller and pass it down
directly instead of recalculating it for each page, and use simple
arithmetic to split the physical address range into Xen-page-aligned
chunks.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/arm/xen/mm.c | 62 +++++++++++++++++------------------------------
 1 file changed, 22 insertions(+), 40 deletions(-)

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index 90574d89d0d4..14210ebdea1a 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -35,64 +35,46 @@ unsigned long xen_get_swiotlb_free_pages(unsigned int order)
         return __get_free_pages(flags, order);
 }
 
-enum dma_cache_op {
-        DMA_UNMAP,
-        DMA_MAP,
-};
 static bool hypercall_cflush = false;
 
-/* functions called by SWIOTLB */
-
-static void dma_cache_maint(dma_addr_t handle, unsigned long offset,
-        size_t size, enum dma_data_direction dir, enum dma_cache_op op)
+/* buffers in highmem or foreign pages cannot cross page boundaries */
+static void dma_cache_maint(dma_addr_t handle, size_t size, u32 op)
 {
         struct gnttab_cache_flush cflush;
-        unsigned long xen_pfn;
-        size_t left = size;
 
-        xen_pfn = (handle >> XEN_PAGE_SHIFT) + offset / XEN_PAGE_SIZE;
-        offset %= XEN_PAGE_SIZE;
+        cflush.a.dev_bus_addr = handle & XEN_PAGE_MASK;
+        cflush.offset = xen_offset_in_page(handle);
+        cflush.op = op;
 
         do {
-                size_t len = left;
-
-                /* buffers in highmem or foreign pages cannot cross page
-                 * boundaries */
-                if (len + offset > XEN_PAGE_SIZE)
-                        len = XEN_PAGE_SIZE - offset;
-
-                cflush.op = 0;
-                cflush.a.dev_bus_addr = xen_pfn << XEN_PAGE_SHIFT;
-                cflush.offset = offset;
-                cflush.length = len;
-
-                if (op == DMA_UNMAP && dir != DMA_TO_DEVICE)
-                        cflush.op = GNTTAB_CACHE_INVAL;
-                if (op == DMA_MAP) {
-                        if (dir == DMA_FROM_DEVICE)
-                                cflush.op = GNTTAB_CACHE_INVAL;
-                        else
-                                cflush.op = GNTTAB_CACHE_CLEAN;
-                }
-                if (cflush.op)
-                        HYPERVISOR_grant_table_op(GNTTABOP_cache_flush, &cflush, 1);
+                if (size + cflush.offset > XEN_PAGE_SIZE)
+                        cflush.length = XEN_PAGE_SIZE - cflush.offset;
+                else
+                        cflush.length = size;
+
+                HYPERVISOR_grant_table_op(GNTTABOP_cache_flush, &cflush, 1);
+
+                handle += cflush.length;
+                size -= cflush.length;
 
-                offset = 0;
-                xen_pfn++;
-                left -= len;
-        } while (left);
+                cflush.offset = 0;
+        } while (size);
 }
 
 static void __xen_dma_page_dev_to_cpu(struct device *hwdev, dma_addr_t handle,
                 size_t size, enum dma_data_direction dir)
 {
-        dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, DMA_UNMAP);
+        if (dir != DMA_TO_DEVICE)
+                dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
 }
 
 static void __xen_dma_page_cpu_to_dev(struct device *hwdev, dma_addr_t handle,
                 size_t size, enum dma_data_direction dir)
 {
-        dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, DMA_MAP);
+        if (dir == DMA_FROM_DEVICE)
+                dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
+        else
+                dma_cache_maint(handle, size, GNTTAB_CACHE_CLEAN);
 }
 
 void __xen_dma_map_page(struct device *hwdev, struct page *page,
-- 
2.20.1
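
For reference, here is a minimal userspace sketch of the chunking
arithmetic the reworked dma_cache_maint() relies on: the caller picks the
cache operation from the DMA direction once, and the range is then walked
in pieces that never cross a Xen page boundary.  The XEN_PAGE_* constants,
the fake_cache_op enum and the printf() are stand-ins for the kernel
definitions and the GNTTABOP_cache_flush hypercall; this is a sketch of
the technique under those assumptions, not the kernel code itself.

/*
 * Standalone illustration of splitting a DMA range into chunks that
 * never cross a Xen page boundary.  All names are stand-ins; the real
 * code lives in arch/arm/xen/mm.c and issues cache-flush hypercalls
 * instead of printing.
 */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define XEN_PAGE_SIZE   ((uint64_t)4096)        /* assume 4K Xen pages */
#define XEN_PAGE_MASK   (~(XEN_PAGE_SIZE - 1))

enum fake_cache_op { FAKE_CACHE_CLEAN, FAKE_CACHE_INVAL };

static void cache_maint_sketch(uint64_t handle, size_t size,
                               enum fake_cache_op op)
{
        uint64_t page = handle & XEN_PAGE_MASK;          /* Xen page base */
        size_t offset = handle & (XEN_PAGE_SIZE - 1);    /* offset in page */
        size_t length;

        do {
                /* clamp each chunk so it never crosses a page boundary */
                if (size + offset > XEN_PAGE_SIZE)
                        length = XEN_PAGE_SIZE - offset;
                else
                        length = size;

                /* the kernel would issue one cache-flush hypercall here */
                printf("op=%d page=%#llx offset=%zu len=%zu\n",
                       op, (unsigned long long)page, offset, length);

                page += XEN_PAGE_SIZE;  /* later chunks start page-aligned */
                offset = 0;
                size -= length;
        } while (size);
}

int main(void)
{
        /* a 10000-byte buffer starting 100 bytes into a Xen page,
         * cleaned as if mapped for DMA_TO_DEVICE */
        cache_maint_sketch(0x40000064ULL, 10000, FAKE_CACHE_CLEAN);
        return 0;
}

The other half of the change is visible in the callers above:
__xen_dma_page_dev_to_cpu() and __xen_dma_page_cpu_to_dev() translate the
DMA direction into GNTTAB_CACHE_INVAL or GNTTAB_CACHE_CLEAN once, so
dma_cache_maint() no longer needs to know the direction at all.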