From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com, boris.ostrovsky@oracle.com, konrad.wilk@oracle.com
Cc: sstabellini@kernel.org, xen-devel@lists.xenproject.org,
    linux-kernel@vger.kernel.org, tamas@tklengyel.com, roman@zededa.com,
    Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 09/11] swiotlb-xen: rename xen_phys_to_bus to xen_phys_to_dma and xen_bus_to_phys to xen_dma_to_phys
Date: Wed, 3 Jun 2020 15:22:45 -0700
Message-Id: <20200603222247.11681-9-sstabellini@kernel.org>

so that their names can better describe their behavior. No functional changes.

Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
---
 drivers/xen/swiotlb-xen.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 60ef07440905..41129c02d59a 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -57,7 +57,7 @@ static unsigned long xen_io_tlb_nslabs;
  * can be 32bit when dma_addr_t is 64bit leading to a loss in
  * information if the shift is done before casting to 64bit.
  */
-static inline dma_addr_t xen_phys_to_bus(struct device *dev, phys_addr_t paddr)
+static inline dma_addr_t xen_phys_to_dma(struct device *dev, phys_addr_t paddr)
 {
        unsigned long bfn = pfn_to_bfn(XEN_PFN_DOWN(paddr));
        dma_addr_t dma = (dma_addr_t)bfn << XEN_PAGE_SHIFT;
@@ -67,7 +67,7 @@ static inline dma_addr_t xen_phys_to_bus(struct device *dev, phys_addr_t paddr)
        return phys_to_dma(dev, dma);
 }
 
-static inline phys_addr_t xen_bus_to_phys(struct device *dev,
+static inline phys_addr_t xen_dma_to_phys(struct device *dev,
                                          dma_addr_t dma_addr)
 {
        phys_addr_t baddr = dma_to_phys(dev, dma_addr);
@@ -80,7 +80,7 @@ static inline phys_addr_t xen_bus_to_phys(struct device *dev,
 
 static inline dma_addr_t xen_virt_to_bus(struct device *dev, void *address)
 {
-       return xen_phys_to_bus(dev, virt_to_phys(address));
+       return xen_phys_to_dma(dev, virt_to_phys(address));
 }
 
 static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
@@ -309,7 +309,7 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
         * Do not use virt_to_phys(ret) because on ARM it doesn't correspond
         * to *dma_handle. */
        phys = dma_to_phys(hwdev, *dma_handle);
-       dev_addr = xen_phys_to_bus(hwdev, phys);
+       dev_addr = xen_phys_to_dma(hwdev, phys);
        if (((dev_addr + size - 1 <= dma_mask)) &&
            !range_straddles_page_boundary(phys, size))
                *dma_handle = dev_addr;
@@ -340,7 +340,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 
        /* do not use virt_to_phys because on ARM it doesn't return you the
         * physical address */
-       phys = xen_bus_to_phys(hwdev, dev_addr);
+       phys = xen_dma_to_phys(hwdev, dev_addr);
 
        /* Convert the size to actually allocated. */
        size = 1UL << (order + XEN_PAGE_SHIFT);
@@ -369,7 +369,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
                                unsigned long attrs)
 {
        phys_addr_t map, phys = page_to_phys(page) + offset;
-       dma_addr_t dev_addr = xen_phys_to_bus(dev, phys);
+       dma_addr_t dev_addr = xen_phys_to_dma(dev, phys);
 
        BUG_ON(dir == DMA_NONE);
        /*
@@ -394,7 +394,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
                return DMA_MAPPING_ERROR;
 
        phys = map;
-       dev_addr = xen_phys_to_bus(dev, map);
+       dev_addr = xen_phys_to_dma(dev, map);
 
        /*
         * Ensure that the address returned is DMA'ble
@@ -422,7 +422,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
                size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-       phys_addr_t paddr = xen_bus_to_phys(hwdev, dev_addr);
+       phys_addr_t paddr = xen_dma_to_phys(hwdev, dev_addr);
 
        BUG_ON(dir == DMA_NONE);
 
@@ -438,7 +438,7 @@ static void
 xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
                size_t size, enum dma_data_direction dir)
 {
-       phys_addr_t paddr = xen_bus_to_phys(dev, dma_addr);
+       phys_addr_t paddr = xen_dma_to_phys(dev, dma_addr);
 
        if (!dev_is_dma_coherent(dev))
                xen_dma_sync_for_cpu(dev, dma_addr, paddr, size, dir);
@@ -451,7 +451,7 @@ static void
 xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
                size_t size, enum dma_data_direction dir)
 {
-       phys_addr_t paddr = xen_bus_to_phys(dev, dma_addr);
+       phys_addr_t paddr = xen_dma_to_phys(dev, dma_addr);
 
        if (is_xen_swiotlb_buffer(dev, dma_addr))
                swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_DEVICE);
-- 
2.17.1
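
For readers outside the Xen/swiotlb code, here is a small standalone sketch, not part of the patch, of the address round trip that motivates the new names: xen_phys_to_dma() composes the Xen pfn-to-bfn lookup with the architecture's phys_to_dma(), and xen_dma_to_phys() performs the reverse. The stub translations, the identity pfn/bfn mapping, and DEV_DMA_OFFSET below are invented for illustration only; the real helpers take a struct device * and use the kernel's pfn_to_bfn(), XEN_PFN_DOWN(), phys_to_dma() and dma_to_phys().

/*
 * Illustration only -- not part of the patch. A userspace model of the
 * renamed helpers, built on toy stand-ins for the kernel primitives.
 */
#include <stdint.h>
#include <stdio.h>

#define XEN_PAGE_SHIFT 12
#define XEN_PAGE_MASK  ((1ULL << XEN_PAGE_SHIFT) - 1)
#define DEV_DMA_OFFSET 0x80000000ULL   /* hypothetical per-device DMA offset */

/* Toy pfn<->bfn translation: identity, standing in for Xen's P2M lookup. */
static uint64_t pfn_to_bfn(uint64_t pfn) { return pfn; }
static uint64_t bfn_to_pfn(uint64_t bfn) { return bfn; }

/* Toy phys<->dma translation, standing in for the arch dma-direct helpers. */
static uint64_t phys_to_dma(uint64_t phys) { return phys + DEV_DMA_OFFSET; }
static uint64_t dma_to_phys(uint64_t dma)  { return dma - DEV_DMA_OFFSET; }

/* Models the renamed xen_phys_to_dma(): guest phys -> bus -> device DMA. */
static uint64_t xen_phys_to_dma(uint64_t paddr)
{
        uint64_t bfn = pfn_to_bfn(paddr >> XEN_PAGE_SHIFT);
        uint64_t baddr = (bfn << XEN_PAGE_SHIFT) | (paddr & XEN_PAGE_MASK);

        return phys_to_dma(baddr);
}

/* Models the renamed xen_dma_to_phys(): device DMA -> bus -> guest phys. */
static uint64_t xen_dma_to_phys(uint64_t dma_addr)
{
        uint64_t baddr = dma_to_phys(dma_addr);
        uint64_t pfn = bfn_to_pfn(baddr >> XEN_PAGE_SHIFT);

        return (pfn << XEN_PAGE_SHIFT) | (baddr & XEN_PAGE_MASK);
}

int main(void)
{
        uint64_t phys = 0x12345678ULL;
        uint64_t dma = xen_phys_to_dma(phys);

        printf("phys 0x%llx -> dma 0x%llx -> phys 0x%llx\n",
               (unsigned long long)phys, (unsigned long long)dma,
               (unsigned long long)xen_dma_to_phys(dma));
        return 0;
}

Compiled as an ordinary C program, it prints the same physical address after the phys -> dma -> phys round trip, which is the behavior the names xen_phys_to_dma/xen_dma_to_phys are meant to convey.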