From: Lu Baolu
To: David Woodhouse, Joerg Roedel
Cc: ashok.raj@intel.com, jacob.jun.pan@intel.com, alan.cox@intel.com,
    kevin.tian@intel.com, mika.westerberg@linux.intel.com,
    pengfei.xu@intel.com, Konrad Rzeszutek Wilk, Christoph Hellwig,
    Marek Szyprowski, Robin Murphy, iommu@lists.linux-foundation.org,
    linux-kernel@vger.kernel.org, Lu Baolu
Subject: [PATCH v3 02/10] swiotlb: Factor out slot allocation and free
Date: Sun, 21 Apr 2019 09:17:11 +0800
Message-Id: <20190421011719.14909-3-baolu.lu@linux.intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190421011719.14909-1-baolu.lu@linux.intel.com>
References:
<20190421011719.14909-1-baolu.lu@linux.intel.com>

Factor the slot allocation and free code out into two common helper
functions to avoid code duplication. No functional change intended.

Cc: Konrad Rzeszutek Wilk
Cc: Christoph Hellwig
Cc: Marek Szyprowski
Cc: Robin Murphy
Signed-off-by: Lu Baolu
---
 kernel/dma/swiotlb.c | 72 +++++++++++++++++++++++++++++---------------
 1 file changed, 47 insertions(+), 25 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 53012db1e53c..173122d16b7f 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -439,11 +439,9 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
 	}
 }
 
-phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
-				   dma_addr_t tbl_dma_addr,
-				   phys_addr_t orig_addr, size_t size,
-				   enum dma_data_direction dir,
-				   unsigned long attrs)
+static phys_addr_t
+swiotlb_tbl_alloc_tlb(struct device *hwdev, dma_addr_t tbl_dma_addr,
+		      phys_addr_t orig_addr, size_t size)
 {
 	unsigned long flags;
 	phys_addr_t tlb_addr;
@@ -539,8 +537,6 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
 not_found:
 	spin_unlock_irqrestore(&io_tlb_lock, flags);
-	if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit())
-		dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes)\n", size);
 	return DMA_MAPPING_ERROR;
 found:
 	io_tlb_used += nslots;
@@ -553,32 +549,16 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
 	 */
 	for (i = 0; i < nslots; i++)
 		io_tlb_orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(orig_addr, tlb_addr, size, DMA_TO_DEVICE);
 
 	return tlb_addr;
 }
 
-/*
- * tlb_addr is the physical address of the bounce buffer to unmap.
- */
-void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
-			      size_t size, enum dma_data_direction dir,
-			      unsigned long attrs)
+static void
+swiotlb_tbl_free_tlb(struct device *hwdev, phys_addr_t tlb_addr, size_t size)
 {
 	unsigned long flags;
 	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
 	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	phys_addr_t orig_addr = io_tlb_orig_addr[index];
-
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (orig_addr != INVALID_PHYS_ADDR &&
-	    !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
-		swiotlb_bounce(orig_addr, tlb_addr, size, DMA_FROM_DEVICE);
 
 	/*
 	 * Return the buffer to the free list by setting the corresponding
@@ -610,6 +590,48 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_unlock_irqrestore(&io_tlb_lock, flags);
 }
 
+phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
+				   dma_addr_t tbl_dma_addr,
+				   phys_addr_t orig_addr, size_t size,
+				   enum dma_data_direction dir,
+				   unsigned long attrs)
+{
+	phys_addr_t tlb_addr;
+
+	tlb_addr = swiotlb_tbl_alloc_tlb(hwdev, tbl_dma_addr, orig_addr, size);
+	if (tlb_addr == DMA_MAPPING_ERROR) {
+		if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit())
+			dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes)\n",
+				 size);
+	} else if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+		   (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)) {
+		swiotlb_bounce(orig_addr, tlb_addr, size, DMA_TO_DEVICE);
+	}
+
+	return tlb_addr;
+}
+
+/*
+ * tlb_addr is the physical address of the bounce buffer to unmap.
+ */
+void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
+			      size_t size, enum dma_data_direction dir,
+			      unsigned long attrs)
+{
+	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
+	phys_addr_t orig_addr = io_tlb_orig_addr[index];
+
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (orig_addr != INVALID_PHYS_ADDR &&
+	    !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
+		swiotlb_bounce(orig_addr, tlb_addr, size, DMA_FROM_DEVICE);
+
+	swiotlb_tbl_free_tlb(hwdev, tlb_addr, size);
+}
+
 void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 			     size_t size, enum dma_data_direction dir,
 			     enum dma_sync_target target)
-- 
2.17.1