Date: Thu, 17 Jun 2021 16:31:10 -0700 (PDT)
From: Stefano Stabellini
To: Claire Chang
cc: Rob Herring, mpe@ellerman.id.au, Joerg Roedel,
    Will Deacon, Frank Rowand, Konrad Rzeszutek Wilk, boris.ostrovsky@oracle.com,
    jgross@suse.com, Christoph Hellwig, Marek Szyprowski, benh@kernel.crashing.org,
    paulus@samba.org, "list@263.net:IOMMU DRIVERS", sstabellini@kernel.org,
    Robin Murphy, grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding,
    mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, Greg KH,
    Saravana Kannan, "Rafael J . Wysocki", heikki.krogerus@linux.intel.com,
    Andy Shevchenko, Randy Dunlap, Dan Williams, Bartosz Golaszewski,
    linux-devicetree, lkml, linuxppc-dev@lists.ozlabs.org,
    xen-devel@lists.xenproject.org, Nicolas Boichat, Jim Quinlan,
    tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com,
    chris@chris-wilson.co.uk, daniel@ffwll.ch, airlied@linux.ie,
    dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
    jani.nikula@linux.intel.com, jxgao@google.com, joonas.lahtinen@linux.intel.com,
    linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
    matthew.auld@intel.com, rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v13 09/12] swiotlb: Add restricted DMA alloc/free support
In-Reply-To: <20210617062635.1660944-10-tientzu@chromium.org>
References: <20210617062635.1660944-1-tientzu@chromium.org> <20210617062635.1660944-10-tientzu@chromium.org>

On Thu, 17 Jun 2021, Claire Chang wrote:
> Add the functions, swiotlb_{alloc,free} and is_swiotlb_for_alloc to
> support the memory allocation from restricted DMA pool.
>
> The restricted DMA pool is preferred if available.
>
> Note that since coherent allocation needs remapping, one must set up
> another device coherent pool by shared-dma-pool and use
> dma_alloc_from_dev_coherent instead for atomic coherent allocation.
>
> Signed-off-by: Claire Chang
> Reviewed-by: Christoph Hellwig
> Tested-by: Stefano Stabellini
> Tested-by: Will Deacon

Acked-by: Stefano Stabellini

> ---
>  include/linux/swiotlb.h | 26 ++++++++++++++++++++++
>  kernel/dma/direct.c     | 49 +++++++++++++++++++++++++++++++----------
>  kernel/dma/swiotlb.c    | 38 ++++++++++++++++++++++++++++++--
>  3 files changed, 99 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 8d8855c77d9a..a73fad460162 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -85,6 +85,7 @@ extern enum swiotlb_force swiotlb_force;
>   * @debugfs: The dentry to debugfs.
>   * @late_alloc: %true if allocated using the page allocator
>   * @force_bounce: %true if swiotlb bouncing is forced
> + * @for_alloc: %true if the pool is used for memory allocation
>   */
>  struct io_tlb_mem {
>          phys_addr_t start;
> @@ -96,6 +97,7 @@ struct io_tlb_mem {
>          struct dentry *debugfs;
>          bool late_alloc;
>          bool force_bounce;
> +        bool for_alloc;
>          struct io_tlb_slot {
>                  phys_addr_t orig_addr;
>                  size_t alloc_size;
> @@ -156,4 +158,28 @@ static inline void swiotlb_adjust_size(unsigned long size)
>  extern void swiotlb_print_info(void);
>  extern void swiotlb_set_max_segment(unsigned int);
>
> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +struct page *swiotlb_alloc(struct device *dev, size_t size);
> +bool swiotlb_free(struct device *dev, struct page *page, size_t size);
> +
> +static inline bool is_swiotlb_for_alloc(struct device *dev)
> +{
> +        return dev->dma_io_tlb_mem->for_alloc;
> +}
> +#else
> +static inline struct page *swiotlb_alloc(struct device *dev, size_t size)
> +{
> +        return NULL;
> +}
> +static inline bool swiotlb_free(struct device *dev, struct page *page,
> +                size_t size)
> +{
> +        return false;
> +}
> +static inline bool is_swiotlb_for_alloc(struct device *dev)
> +{
> +        return false;
> +}
> +#endif /* CONFIG_DMA_RESTRICTED_POOL */
> +
>  #endif /* __LINUX_SWIOTLB_H */
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index a92465b4eb12..2de33e5d302b 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -75,6 +75,15 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
>                  min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
>  }
>
> +static void __dma_direct_free_pages(struct device *dev, struct page *page,
> +                                    size_t size)
> +{
> +        if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL) &&
> +            swiotlb_free(dev, page, size))
> +                return;
> +        dma_free_contiguous(dev, page, size);
> +}
> +
>  static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
>                  gfp_t gfp)
>  {
> @@ -86,6 +95,16 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
>
>          gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
>                                             &phys_limit);
> +        if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL) &&
> +            is_swiotlb_for_alloc(dev)) {
> +                page = swiotlb_alloc(dev, size);
> +                if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
> +                        __dma_direct_free_pages(dev, page, size);
> +                        return NULL;
> +                }
> +                return page;
> +        }
> +
>          page = dma_alloc_contiguous(dev, size, gfp);
>          if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
>                  dma_free_contiguous(dev, page, size);
> @@ -142,7 +161,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>                  gfp |= __GFP_NOWARN;
>
>          if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
> -            !force_dma_unencrypted(dev)) {
> +            !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
>                  page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
>                  if (!page)
>                          return NULL;
> @@ -155,18 +174,23 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>          }
>
>          if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
> -            !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> -            !dev_is_dma_coherent(dev))
> +            !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
> +            !is_swiotlb_for_alloc(dev))
>                  return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
>
>          /*
>           * Remapping or decrypting memory may block. If either is required and
>           * we can't block, allocate the memory from the atomic pools.
> +         * If restricted DMA (i.e., is_swiotlb_for_alloc) is required, one must
> +         * set up another device coherent pool by shared-dma-pool and use
> +         * dma_alloc_from_dev_coherent instead.
>           */
>          if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
>              !gfpflags_allow_blocking(gfp) &&
>              (force_dma_unencrypted(dev) ||
> -             (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
> +             (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> +              !dev_is_dma_coherent(dev))) &&
> +            !is_swiotlb_for_alloc(dev))
>                  return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
>
>          /* we always manually zero the memory once we are done */
> @@ -237,7 +261,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>                  return NULL;
>          }
>  out_free_pages:
> -        dma_free_contiguous(dev, page, size);
> +        __dma_direct_free_pages(dev, page, size);
>          return NULL;
>  }
>
> @@ -247,15 +271,15 @@ void dma_direct_free(struct device *dev, size_t size,
>          unsigned int page_order = get_order(size);
>
>          if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
> -            !force_dma_unencrypted(dev)) {
> +            !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
>                  /* cpu_addr is a struct page cookie, not a kernel address */
>                  dma_free_contiguous(dev, cpu_addr, size);
>                  return;
>          }
>
>          if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
> -            !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> -            !dev_is_dma_coherent(dev)) {
> +            !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
> +            !is_swiotlb_for_alloc(dev)) {
>                  arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
>                  return;
>          }
> @@ -273,7 +297,7 @@ void dma_direct_free(struct device *dev, size_t size,
>          else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
>                  arch_dma_clear_uncached(cpu_addr, size);
>
> -        dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
> +        __dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
>  }
>
>  struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
> @@ -283,7 +307,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
>          void *ret;
>
>          if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
> -            force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
> +            force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
> +            !is_swiotlb_for_alloc(dev))
>                  return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
>
>          page = __dma_direct_alloc_pages(dev, size, gfp);
> @@ -310,7 +335,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
>          *dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
>          return page;
>  out_free_pages:
> -        dma_free_contiguous(dev, page, size);
> +        __dma_direct_free_pages(dev, page, size);
>          return NULL;
>  }
>
> @@ -329,7 +354,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
>          if (force_dma_unencrypted(dev))
>                  set_memory_encrypted((unsigned long)vaddr, 1 << page_order);
>
> -        dma_free_contiguous(dev, page, size);
> +        __dma_direct_free_pages(dev, page, size);
>  }
>
>  #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index ff09341bb9f5..6499cfbfe95f 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -463,8 +463,9 @@ static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
>
>          index = wrap = wrap_index(mem, ALIGN(mem->index, stride));
>          do {
> -                if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
> -                    (orig_addr & iotlb_align_mask)) {
> +                if (orig_addr &&
> +                    (slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
> +                        (orig_addr & iotlb_align_mask)) {
>                          index = wrap_index(mem, index + 1);
>                          continue;
>                  }
> @@ -703,3 +704,36 @@ static int __init swiotlb_create_default_debugfs(void)
>  late_initcall(swiotlb_create_default_debugfs);
>
>  #endif
> +
> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +struct page *swiotlb_alloc(struct device *dev, size_t size)
> +{
> +        struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
> +        phys_addr_t tlb_addr;
> +        int index;
> +
> +        if (!mem)
> +                return NULL;
> +
> +        index = swiotlb_find_slots(dev, 0, size);
> +        if (index == -1)
> +                return NULL;
> +
> +        tlb_addr = slot_addr(mem->start, index);
> +
> +        return pfn_to_page(PFN_DOWN(tlb_addr));
> +}
> +
> +bool swiotlb_free(struct device *dev, struct page *page, size_t size)
> +{
> +        phys_addr_t tlb_addr = page_to_phys(page);
> +
> +        if (!is_swiotlb_buffer(dev, tlb_addr))
> +                return false;
> +
> +        swiotlb_release_slots(dev, tlb_addr);
> +
> +        return true;
> +}
> +
> +#endif /* CONFIG_DMA_RESTRICTED_POOL */
> --
> 2.32.0.288.g62a8d224e6-goog
>
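
One note for readers wiring this up in a driver: the sketch below is
illustrative only and is not part of this patch. It shows how these new
paths get exercised through the existing dma_alloc_pages()/dma_free_pages()
API for a device bound to a restricted DMA pool; the function names in the
example are made up.

/*
 * Illustrative consumer sketch (not part of this patch): for a device
 * attached to a restricted-dma-pool reserved-memory region,
 * dma_direct_alloc_pages() sees is_swiotlb_for_alloc(dev) == true, so
 * __dma_direct_alloc_pages() calls swiotlb_alloc() and the pages come
 * from the restricted pool instead of CMA or the page allocator.
 */
#include <linux/dma-mapping.h>

static struct page *example_get_dma_pages(struct device *dev, size_t size,
                                          dma_addr_t *dma_handle)
{
        /* Routed to swiotlb_alloc() when dev uses a restricted pool. */
        return dma_alloc_pages(dev, size, dma_handle, DMA_BIDIRECTIONAL,
                               GFP_KERNEL);
}

static void example_put_dma_pages(struct device *dev, size_t size,
                                  struct page *page, dma_addr_t dma_handle)
{
        /* __dma_direct_free_pages() returns the pages via swiotlb_free(). */
        dma_free_pages(dev, size, page, dma_handle, DMA_BIDIRECTIONAL);
}

For atomic coherent allocations the caveat from the commit message applies:
pair the restricted pool with a shared-dma-pool device coherent pool so that
dma_alloc_from_dev_coherent() can serve those requests.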