Date: Wed, 16 Jun 2021 17:47:07 -0700 (PDT)
From: Stefano Stabellini
To: Claire Chang
Cc: Rob Herring, mpe@ellerman.id.au, Joerg Roedel,
    Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk,
    boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig,
    Marek Szyprowski, benh@kernel.crashing.org, paulus@samba.org,
    "list@263.net:IOMMU DRIVERS", sstabellini@kernel.org, Robin Murphy,
    grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding,
    mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org,
    Greg KH, Saravana Kannan, "Rafael J. Wysocki",
    heikki.krogerus@linux.intel.com, Andy Shevchenko, Randy Dunlap,
    Dan Williams, Bartosz Golaszewski, linux-devicetree, lkml,
    linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org,
    Nicolas Boichat, Jim Quinlan, tfiga@chromium.org, bskeggs@redhat.com,
    bhelgaas@google.com, chris@chris-wilson.co.uk, daniel@ffwll.ch,
    airlied@linux.ie, dri-devel@lists.freedesktop.org,
    intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
    jxgao@google.com, joonas.lahtinen@linux.intel.com,
    linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
    matthew.auld@intel.com, rodrigo.vivi@intel.com,
    thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v12 06/12] swiotlb: Use is_swiotlb_force_bounce for swiotlb data bouncing
In-Reply-To: <20210616062157.953777-7-tientzu@chromium.org>
References: <20210616062157.953777-1-tientzu@chromium.org> <20210616062157.953777-7-tientzu@chromium.org>

On Wed, 16 Jun 2021, Claire Chang wrote:
> Propagate the swiotlb_force into io_tlb_default_mem->force_bounce and
> use it to determine whether to bounce the data or not. This will be
> useful later to allow for different pools.
> 
> Signed-off-by: Claire Chang
> ---
>  include/linux/swiotlb.h | 11 +++++++++++
>  kernel/dma/direct.c     |  2 +-
>  kernel/dma/direct.h     |  2 +-
>  kernel/dma/swiotlb.c    |  4 ++++
>  4 files changed, 17 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index dd1c30a83058..8d8855c77d9a 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -84,6 +84,7 @@ extern enum swiotlb_force swiotlb_force;
>   *		unmap calls.
>   * @debugfs:	The dentry to debugfs.
>   * @late_alloc:	%true if allocated using the page allocator
> + * @force_bounce: %true if swiotlb bouncing is forced
>   */
>  struct io_tlb_mem {
>  	phys_addr_t start;
> @@ -94,6 +95,7 @@ struct io_tlb_mem {
>  	spinlock_t lock;
>  	struct dentry *debugfs;
>  	bool late_alloc;
> +	bool force_bounce;
>  	struct io_tlb_slot {
>  		phys_addr_t orig_addr;
>  		size_t alloc_size;
> @@ -109,6 +111,11 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  	return mem && paddr >= mem->start && paddr < mem->end;
>  }
> 
> +static inline bool is_swiotlb_force_bounce(struct device *dev)
> +{
> +	return dev->dma_io_tlb_mem->force_bounce;
> +}
> +
>  void __init swiotlb_exit(void);
>  unsigned int swiotlb_max_segment(void);
>  size_t swiotlb_max_mapping_size(struct device *dev);
> @@ -120,6 +127,10 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  {
>  	return false;
>  }
> +static inline bool is_swiotlb_force_bounce(struct device *dev)
> +{
> +	return false;
> +}
>  static inline void swiotlb_exit(void)
>  {
>  }
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 7a88c34d0867..a92465b4eb12 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -496,7 +496,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
>  {
>  	/* If SWIOTLB is active, use its maximum mapping size */
>  	if (is_swiotlb_active(dev) &&
> -	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
> +	    (dma_addressing_limited(dev) || is_swiotlb_force_bounce(dev)))
>  		return swiotlb_max_mapping_size(dev);
>  	return SIZE_MAX;
>  }
> diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
> index 13e9e7158d94..4632b0f4f72e 100644
> --- a/kernel/dma/direct.h
> +++ b/kernel/dma/direct.h
> @@ -87,7 +87,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
>  	phys_addr_t phys = page_to_phys(page) + offset;
>  	dma_addr_t dma_addr = phys_to_dma(dev, phys);
> 
> -	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
> +	if (is_swiotlb_force_bounce(dev))
>  		return swiotlb_map(dev, phys, size, dir, attrs);
> 
>  	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {

Should we also make the same change in
drivers/xen/swiotlb-xen.c:xen_swiotlb_map_page? (A rough sketch of what I
have in mind is at the end of this mail.) If I make that change, I can see
that everything is working as expected for a restricted-dma device with
Linux running as dom0 on Xen.

However, is_swiotlb_force_bounce returns true even for normal,
non-restricted-dma devices. That shouldn't happen, right?

It looks like the io_tlb_mem allocation (including the trailing struct
io_tlb_slot array) is not zeroed, so force_bounce can start out set.
Adding a memset(mem, 0x0, struct_size) in swiotlb_late_init_with_tbl
solves the issue (also sketched below).

With those two changes, the series passes my tests and you can add my
tested-by.
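
To be concrete about the xen-swiotlb question, this is roughly the change I
tested; the context lines are from my reading of drivers/xen/swiotlb-xen.c
and might not match your tree exactly:

--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ ... @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 	if (dma_capable(dev, dev_addr, size, true) &&
 	    !range_straddles_page_boundary(phys, size) &&
 	    !xen_arch_need_swiotlb(dev, phys, dev_addr) &&
-	    swiotlb_force != SWIOTLB_FORCE)
+	    !is_swiotlb_force_bounce(dev))
 		goto done;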
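
And a sketch of the zeroing I mentioned; struct_size() covers struct
io_tlb_mem plus the trailing slots array, and the surrounding lines are
quoted from memory, so they may differ slightly from the series:

--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ ... @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	mem = (void *)__get_free_pages(GFP_KERNEL,
 		get_order(struct_size(mem, slots, nslabs)));
 	if (!mem)
 		return -ENOMEM;
 
+	/* clear io_tlb_mem and its slots so fields like force_bounce start at false */
+	memset(mem, 0, struct_size(mem, slots, nslabs));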