From: Alexey Kardashevskiy
To: linuxppc-dev@lists.ozlabs.org
Cc: Christoph Hellwig, Michael Ellerman, iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org, Alexey Kardashevskiy
Subject: [PATCH kernel v2 1/2] dma: Allow mixing bypass and normal IOMMU operation
Date: Tue, 27 Oct 2020 21:18:40 +1100
Message-Id: <20201027101841.96056-2-aik@ozlabs.ru>
In-Reply-To: <20201027101841.96056-1-aik@ozlabs.ru>
References: <20201027101841.96056-1-aik@ozlabs.ru>

At the moment we allow bypassing DMA ops only when we can do so for the entire RAM. However there are configurations with mixed memory types where we could still bypass the IOMMU in most cases; POWERPC with persistent memory is one example.

This adds another check against the bus DMA limit to determine whether bypass can still work, in which case we invoke the direct DMA API; when the DMA handle is outside that limit, we fall back to DMA ops.
This adds a CONFIG_DMA_OPS_BYPASS_BUS_LIMIT config option which is off by default and will be enabled for PPC_PSERIES in the following patch.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
 kernel/dma/mapping.c | 61 ++++++++++++++++++++++++++++++++++++++++++--
 kernel/dma/Kconfig   |  4 +++
 2 files changed, 63 insertions(+), 2 deletions(-)

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 51bb8fa8eb89..0f4f998e6c72 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -137,6 +137,18 @@ static inline bool dma_map_direct(struct device *dev,
 	return dma_go_direct(dev, *dev->dma_mask, ops);
 }
 
+#ifdef CONFIG_DMA_OPS_BYPASS_BUS_LIMIT
+static inline bool can_map_direct(struct device *dev, phys_addr_t addr)
+{
+	return dev->bus_dma_limit >= phys_to_dma(dev, addr);
+}
+
+static inline bool dma_handle_direct(struct device *dev, dma_addr_t dma_handle)
+{
+	return dma_handle >= dev->archdata.dma_offset;
+}
+#endif
+
 dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		size_t offset, size_t size, enum dma_data_direction dir,
 		unsigned long attrs)
@@ -151,6 +163,11 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 
 	if (dma_map_direct(dev, ops))
 		addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
+#ifdef CONFIG_DMA_OPS_BYPASS_BUS_LIMIT
+	else if (dev->bus_dma_limit &&
+		 can_map_direct(dev, (phys_addr_t) page_to_phys(page) + offset + size))
+		addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
+#endif
 	else
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
 	debug_dma_map_page(dev, page, offset, size, dir, addr);
@@ -167,6 +184,10 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 	BUG_ON(!valid_dma_direction(dir));
 	if (dma_map_direct(dev, ops))
 		dma_direct_unmap_page(dev, addr, size, dir, attrs);
+#ifdef CONFIG_DMA_OPS_BYPASS_BUS_LIMIT
+	else if (dev->bus_dma_limit && dma_handle_direct(dev, addr + size))
+		dma_direct_unmap_page(dev, addr, size, dir, attrs);
+#endif
 	else if (ops->unmap_page)
 		ops->unmap_page(dev, addr, size, dir, attrs);
 	debug_dma_unmap_page(dev, addr, size, dir);
@@ -190,6 +211,23 @@ int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
 
 	if (dma_map_direct(dev, ops))
 		ents = dma_direct_map_sg(dev, sg, nents, dir, attrs);
+#ifdef CONFIG_DMA_OPS_BYPASS_BUS_LIMIT
+	else if (dev->bus_dma_limit) {
+		struct scatterlist *s;
+		bool direct = true;
+		int i;
+
+		for_each_sg(sg, s, nents, i) {
+			direct = can_map_direct(dev, sg_phys(s) + s->offset + s->length);
+			if (!direct)
+				break;
+		}
+		if (direct)
+			ents = dma_direct_map_sg(dev, sg, nents, dir, attrs);
+		else
+			ents = ops->map_sg(dev, sg, nents, dir, attrs);
+	}
+#endif
 	else
 		ents = ops->map_sg(dev, sg, nents, dir, attrs);
 	BUG_ON(ents < 0);
@@ -207,9 +245,28 @@ void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg,
 	BUG_ON(!valid_dma_direction(dir));
 	debug_dma_unmap_sg(dev, sg, nents, dir);
-	if (dma_map_direct(dev, ops))
+	if (dma_map_direct(dev, ops)) {
 		dma_direct_unmap_sg(dev, sg, nents, dir, attrs);
-	else if (ops->unmap_sg)
+		return;
+	}
+#ifdef CONFIG_DMA_OPS_BYPASS_BUS_LIMIT
+	if (dev->bus_dma_limit) {
+		struct scatterlist *s;
+		bool direct = true;
+		int i;
+
+		for_each_sg(sg, s, nents, i) {
+			direct = dma_handle_direct(dev, s->dma_address + s->length);
+			if (!direct)
+				break;
+		}
+		if (direct) {
+			dma_direct_unmap_sg(dev, sg, nents, dir, attrs);
+			return;
+		}
+	}
+#endif
+	if (ops->unmap_sg)
 		ops->unmap_sg(dev, sg, nents, dir, attrs);
 }
 EXPORT_SYMBOL(dma_unmap_sg_attrs);
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index c99de4a21458..02fa174fbdec 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -20,6 +20,10 @@ config DMA_OPS
 config DMA_OPS_BYPASS
 	bool
 
+# IOMMU driver limited by a DMA window size may switch between bypass and window
+config DMA_OPS_BYPASS_BUS_LIMIT
+	bool
+
 config NEED_SG_DMA_LENGTH
 	bool
-- 
2.17.1