From: Christoph Hellwig
To: Robin Murphy
Cc: Joerg Roedel, Catalin Marinas, Will Deacon, Tom Lendacky,
    iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 17/26] iommu/dma: Merge the CMA and alloc_pages allocation paths
Date: Mon, 22 Apr 2019 19:59:33 +0200
Message-Id: <20190422175942.18788-18-hch@lst.de>
In-Reply-To: <20190422175942.18788-1-hch@lst.de>
References: <20190422175942.18788-1-hch@lst.de>

Instead of having separate code paths for the non-blocking alloc_pages and
CMA allocations, merge them into one. There is a slight behavior change
here in that we now try the page allocator if CMA fails. This matches what
dma-direct and other iommu drivers do, and will be needed to use the
dma-iommu code on architectures without DMA remapping later on.
Signed-off-by: Christoph Hellwig
---
 drivers/iommu/dma-iommu.c | 32 ++++++++++++--------------------
 1 file changed, 12 insertions(+), 20 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 6f4febf5e1de..a1b8c232ad42 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -957,7 +957,7 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
 	bool coherent = dev_is_dma_coherent(dev);
 	int ioprot = dma_info_to_prot(DMA_BIDIRECTIONAL, coherent, attrs);
 	size_t iosize = size;
-	struct page *page;
+	struct page *page = NULL;
 	void *addr;
 
 	size = PAGE_ALIGN(size);
@@ -967,35 +967,26 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
 	    !(attrs & DMA_ATTR_FORCE_CONTIGUOUS))
 		return iommu_dma_alloc_remap(dev, iosize, handle, gfp, attrs);
 
-	if (!gfpflags_allow_blocking(gfp)) {
-		/*
-		 * In atomic context we can't remap anything, so we'll only
-		 * get the virtually contiguous buffer we need by way of a
-		 * physically contiguous allocation.
-		 */
-		if (coherent) {
-			page = alloc_pages(gfp, get_order(size));
-			addr = page ? page_address(page) : NULL;
-		} else {
-			addr = dma_alloc_from_pool(size, &page, gfp);
-		}
+	if (!gfpflags_allow_blocking(gfp) && !coherent) {
+		addr = dma_alloc_from_pool(size, &page, gfp);
 		if (!addr)
 			return NULL;
 
 		*handle = __iommu_dma_map(dev, page_to_phys(page), iosize,
 					  ioprot);
 		if (*handle == DMA_MAPPING_ERROR) {
-			if (coherent)
-				__free_pages(page, get_order(size));
-			else
-				dma_free_from_pool(addr, size);
+			dma_free_from_pool(addr, size);
 			return NULL;
 		}
 		return addr;
 	}
 
-	page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
-					 get_order(size), gfp & __GFP_NOWARN);
+	if (gfpflags_allow_blocking(gfp))
+		page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
+						 get_order(size),
+						 gfp & __GFP_NOWARN);
+	if (!page)
+		page = alloc_pages(gfp, get_order(size));
 	if (!page)
 		return NULL;
 
@@ -1021,7 +1012,8 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
 out_unmap:
 	__iommu_dma_unmap(dev, *handle, iosize);
 out_free_pages:
-	dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT);
+	if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
+		__free_pages(page, get_order(size));
 	return NULL;
 }
 
-- 
2.20.1
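For readers following along, the merged control flow can be sketched in plain userspace C. This is only an illustration of the pattern the patch introduces; fake_cma_alloc(), fake_page_alloc() and fake_contig_release() are hypothetical stand-ins for dma_alloc_from_contiguous(), alloc_pages() and dma_release_from_contiguous(), which in the kernel operate on struct page, not heap memory:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Simulates CMA running out of space, to exercise the fallback. */
static bool cma_empty;

/* Stand-in for dma_alloc_from_contiguous(): may fail. */
static void *fake_cma_alloc(size_t size)
{
	return cma_empty ? NULL : malloc(size);
}

/* Stand-in for alloc_pages(): the generic page allocator. */
static void *fake_page_alloc(size_t size)
{
	return malloc(size);
}

/* Stand-in for dma_release_from_contiguous(): returns true only when it
 * owned (and released) the buffer, false otherwise. */
static bool fake_contig_release(void *buf, bool was_cma)
{
	if (!was_cma)
		return false;
	free(buf);
	return true;
}

/* The merged allocation path: try the contiguous (CMA) allocator first
 * when blocking is allowed, then fall back to the plain page allocator if
 * CMA is unavailable or fails -- the behavior change the patch describes. */
static void *alloc_buffer(size_t size, bool can_block, bool *from_cma)
{
	void *buf = NULL;

	*from_cma = false;
	if (can_block) {
		buf = fake_cma_alloc(size);
		*from_cma = (buf != NULL);
	}
	if (!buf)
		buf = fake_page_alloc(size);
	return buf;
}

/* Unified cleanup, as in the patched out_free_pages label: ask the
 * contiguous allocator first; if it did not own the buffer, free it
 * through the generic path instead. */
static void free_buffer(void *buf, bool was_cma)
{
	if (!fake_contig_release(buf, was_cma))
		free(buf);
}
```

Because the page may now come from either allocator, the error path has to check dma_release_from_contiguous()'s return value and fall back to __free_pages(), which is why the out_free_pages hunk changes along with the allocation site.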