Date: Mon, 14 Jun 2021 08:25:30 +0200
From: Christoph Hellwig
To: Claire Chang
Cc: Rob Herring, mpe@ellerman.id.au, Joerg Roedel, Will Deacon,
	Frank Rowand, Konrad Rzeszutek Wilk, boris.ostrovsky@oracle.com,
	jgross@suse.com, Christoph Hellwig, Marek Szyprowski,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS", sstabellini@kernel.org,
	Robin Murphy, grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding, mingo@kernel.org, bauerman@linux.ibm.com,
	peterz@infradead.org, Greg KH, Saravana Kannan,
	"Rafael J. Wysocki",
Wysocki" , heikki.krogerus@linux.intel.com, Andy Shevchenko , Randy Dunlap , Dan Williams , Bartosz Golaszewski , linux-devicetree , lkml , linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, Nicolas Boichat , Jim Quinlan , tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk, daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com Subject: Re: [PATCH v9 07/14] swiotlb: Bounce data from/to restricted DMA pool if available Message-ID: <20210614062530.GG28343@lst.de> References: <20210611152659.2142983-1-tientzu@chromium.org> <20210611152659.2142983-8-tientzu@chromium.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20210611152659.2142983-8-tientzu@chromium.org> User-Agent: Mutt/1.5.17 (2007-11-01) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Fri, Jun 11, 2021 at 11:26:52PM +0800, Claire Chang wrote: > Regardless of swiotlb setting, the restricted DMA pool is preferred if > available. > > The restricted DMA pools provide a basic level of protection against the > DMA overwriting buffer contents at unexpected times. However, to protect > against general data leakage and system memory corruption, the system > needs to provide a way to lock down the memory access, e.g., MPU. > > Note that is_dev_swiotlb_force doesn't check if > swiotlb_force == SWIOTLB_FORCE. Otherwise the memory allocation behavior > with default swiotlb will be changed by the following patche > ("dma-direct: Allocate memory from restricted DMA pool if available"). > > Signed-off-by: Claire Chang > --- > include/linux/swiotlb.h | 10 +++++++++- > kernel/dma/direct.c | 3 ++- > kernel/dma/direct.h | 3 ++- > kernel/dma/swiotlb.c | 1 + > 4 files changed, 14 insertions(+), 3 deletions(-) > > diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h > index 06cf17a80f5c..8200c100fe10 100644 > --- a/include/linux/swiotlb.h > +++ b/include/linux/swiotlb.h > @@ -85,6 +85,7 @@ extern enum swiotlb_force swiotlb_force; > * unmap calls. > * @debugfs: The dentry to debugfs. 
>   * @late_alloc:	%true if allocated using the page allocator
> + * @force_swiotlb:	%true if swiotlb is forced
>   */
>  struct io_tlb_mem {
>  	phys_addr_t start;
> @@ -95,6 +96,7 @@ struct io_tlb_mem {
>  	spinlock_t lock;
>  	struct dentry *debugfs;
>  	bool late_alloc;
> +	bool force_swiotlb;
>  	struct io_tlb_slot {
>  		phys_addr_t orig_addr;
>  		size_t alloc_size;
> @@ -115,6 +117,11 @@ static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
>  	dev->dma_io_tlb_mem = io_tlb_default_mem;
>  }
>  
> +static inline bool is_dev_swiotlb_force(struct device *dev)
> +{
> +	return dev->dma_io_tlb_mem->force_swiotlb;
> +}
> +
>  void __init swiotlb_exit(void);
>  unsigned int swiotlb_max_segment(void);
>  size_t swiotlb_max_mapping_size(struct device *dev);
> @@ -126,8 +133,9 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  {
>  	return false;
>  }
> -static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
> +static inline bool is_dev_swiotlb_force(struct device *dev)
>  {
> +	return false;
>  }
>  static inline void swiotlb_exit(void)
>  {
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 7a88c34d0867..078f7087e466 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -496,7 +496,8 @@ size_t dma_direct_max_mapping_size(struct device *dev)
>  {
>  	/* If SWIOTLB is active, use its maximum mapping size */
>  	if (is_swiotlb_active(dev) &&
> -	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
> +	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE ||
> +	     is_dev_swiotlb_force(dev)))

I think we can remove the extra swiotlb_force check here if the
swiotlb_force setting is propagated into io_tlb_default_mem->force_swiotlb
when that pool is initialized.  This avoids an extra check in the
fast path.

> -	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
> +	if (unlikely(swiotlb_force == SWIOTLB_FORCE) ||
> +	    is_dev_swiotlb_force(dev))

Same here.
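
To illustrate, here is an untested sketch of the propagation I mean.
It assumes the flag keeps the force_swiotlb name from this patch and
that the common pool setup helper has roughly the shape of the
swiotlb_init_io_tlb_mem() added earlier in the series -- both are
assumptions for the example, not code taken from the patch:

	static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem,
			phys_addr_t start, unsigned long nslabs,
			bool late_alloc)
	{
		mem->start = start;
		mem->nslabs = nslabs;
		mem->late_alloc = late_alloc;
		/*
		 * Fold the global swiotlb=force setting into the per-pool
		 * flag once at initialization time, so the mapping fast
		 * path only ever has to look at the pool itself.
		 */
		mem->force_swiotlb = (swiotlb_force == SWIOTLB_FORCE);
	}

A restricted pool would instead set mem->force_swiotlb to true in its
own init path, and both of the checks quoted above could then shrink
to a single test, e.g.:

	if (is_swiotlb_active(dev) &&
	    (dma_addressing_limited(dev) || is_dev_swiotlb_force(dev)))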