Subject: Re: [RFC][PATCH 4/5 v4] dma-buf: heaps: Add CMA heap to dmabuf heaps
To: John Stultz, lkml
Cc: Laura Abbott, Benjamin Gaignard, Sumit Semwal, Liam Mark,
    Pratik Patel, Brian Starkey, Vincent Donnefort, Sudipto Paul,
    Andrew F. Davis, Xu YiPing, "Chenfeng (puck)", butao, Yudongbin,
    Christoph Hellwig, Chenbo Feng, Alistair Strachan,
    "Liuyi (Daniel)", Kongfei
References: <20190513183727.15755-1-john.stultz@linaro.org>
 <20190513183727.15755-5-john.stultz@linaro.org>
From: "Xiaqing (A)"
Message-ID: <0333e2b3-0e3d-360e-c8ac-62f3235d24be@hisilicon.com>
Date: Tue, 14 May 2019 18:39:36 +0800
In-Reply-To: <20190513183727.15755-5-john.stultz@linaro.org>

On 2019/5/14 2:37, John Stultz wrote:
> This adds a CMA heap, which allows userspace to allocate
> a dma-buf of contiguous memory out of a CMA region.
>
> This code is an evolution of the Android ION implementation, so
> thanks to its original author and maintainers:
> Benjamin Gaignard, Laura Abbott, and others!
>
> Cc: Laura Abbott
> Cc: Benjamin Gaignard
> Cc: Sumit Semwal
> Cc: Liam Mark
> Cc: Pratik Patel
> Cc: Brian Starkey
> Cc: Vincent Donnefort
> Cc: Sudipto Paul
> Cc: Andrew F. Davis
> Cc: Xu YiPing
> Cc: "Chenfeng (puck)"
> Cc: butao
> Cc: "Xiaqing (A)"
> Cc: Yudongbin
> Cc: Christoph Hellwig
> Cc: Chenbo Feng
> Cc: Alistair Strachan
> Cc: dri-devel@lists.freedesktop.org
> Signed-off-by: John Stultz
> ---
> v2:
> * Switch allocate to return dmabuf fd
> * Simplify init code
> * Checkpatch fixups
> v3:
> * Switch to inline function for to_cma_heap()
> * Minor cleanups suggested by Brian
> * Fold in new registration style from Andrew
> * Folded in changes from Andrew to use simplified page list
>   from the heap helpers
> * Use the fd_flags when creating dmabuf fd (Suggested by
>   Benjamin)
> * Use precalculated pagecount (Suggested by Andrew)
> ---
>  drivers/dma-buf/heaps/Kconfig    |   8 ++
>  drivers/dma-buf/heaps/Makefile   |   1 +
>  drivers/dma-buf/heaps/cma_heap.c | 169 +++++++++++++++++++++++++++++++
>  3 files changed, 178 insertions(+)
>  create mode 100644 drivers/dma-buf/heaps/cma_heap.c
>
> diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
> index 205052744169..a5eef06c4226 100644
> --- a/drivers/dma-buf/heaps/Kconfig
> +++ b/drivers/dma-buf/heaps/Kconfig
> @@ -4,3 +4,11 @@ config DMABUF_HEAPS_SYSTEM
>  	help
>  	  Choose this option to enable the system dmabuf heap. The system heap
>  	  is backed by pages from the buddy allocator. If in doubt, say Y.
> +
> +config DMABUF_HEAPS_CMA
> +	bool "DMA-BUF CMA Heap"
> +	depends on DMABUF_HEAPS && DMA_CMA
> +	help
> +	  Choose this option to enable dma-buf CMA heap. This heap is backed
> +	  by the Contiguous Memory Allocator (CMA). If your system has these
> +	  regions, you should say Y here.
> diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
> index d1808eca2581..6e54cdec3da0 100644
> --- a/drivers/dma-buf/heaps/Makefile
> +++ b/drivers/dma-buf/heaps/Makefile
> @@ -1,3 +1,4 @@
>  # SPDX-License-Identifier: GPL-2.0
>  obj-y += heap-helpers.o
>  obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
> +obj-$(CONFIG_DMABUF_HEAPS_CMA)    += cma_heap.o
> diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
> new file mode 100644
> index 000000000000..3d0ffbbd0a34
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/cma_heap.c
> @@ -0,0 +1,169 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * DMABUF CMA heap exporter
> + *
> + * Copyright (C) 2012, 2019 Linaro Ltd.
> + * Author: Benjamin Gaignard for ST-Ericsson.
> + */
> +
> +#include <linux/cma.h>
> +#include <linux/device.h>
> +#include <linux/dma-buf.h>
> +#include <linux/dma-contiguous.h>
> +#include <linux/dma-heap.h>
> +#include <linux/err.h>
> +#include <linux/highmem.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +
> +#include "heap-helpers.h"
> +
> +struct cma_heap {
> +	struct dma_heap *heap;
> +	struct cma *cma;
> +};
> +
> +static void cma_heap_free(struct heap_helper_buffer *buffer)
> +{
> +	struct cma_heap *cma_heap = dma_heap_get_data(buffer->heap_buffer.heap);
> +	unsigned long nr_pages = buffer->pagecount;
> +	struct page *pages = buffer->priv_virt;
> +
> +	/* free page list */
> +	kfree(buffer->pages);
> +	/* release memory */
> +	cma_release(cma_heap->cma, pages, nr_pages);
> +	kfree(buffer);
> +}
> +
> +/* dmabuf heap CMA operations functions */
> +static int cma_heap_allocate(struct dma_heap *heap,
> +			     unsigned long len,
> +			     unsigned long fd_flags,
> +			     unsigned long heap_flags)
> +{
> +	struct cma_heap *cma_heap = dma_heap_get_data(heap);
> +	struct heap_helper_buffer *helper_buffer;
> +	struct page *pages;
> +	size_t size = PAGE_ALIGN(len);
> +	unsigned long nr_pages = size >> PAGE_SHIFT;
> +	unsigned long align = get_order(size);
> +	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
> +	struct dma_buf *dmabuf;
> +	int ret = -ENOMEM;
> +	pgoff_t pg;
> +
> +	if (align > CONFIG_CMA_ALIGNMENT)
> +		align = CONFIG_CMA_ALIGNMENT;
> +
> +	helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
> +	if (!helper_buffer)
> +		return -ENOMEM;
> +
> +	INIT_HEAP_HELPER_BUFFER(helper_buffer, cma_heap_free);
> +	helper_buffer->heap_buffer.flags = heap_flags;
> +	helper_buffer->heap_buffer.heap = heap;
> +	helper_buffer->heap_buffer.size = len;
> +
> +	pages = cma_alloc(cma_heap->cma, nr_pages, align, false);
> +	if (!pages)
> +		goto free_buf;
> +
> +	if (PageHighMem(pages)) {
> +		unsigned long nr_clear_pages = nr_pages;
> +		struct page *page = pages;
> +
> +		while (nr_clear_pages > 0) {
> +			void *vaddr = kmap_atomic(page);
> +
> +			memset(vaddr, 0, PAGE_SIZE);
> +			kunmap_atomic(vaddr);
> +			page++;
> +			nr_clear_pages--;
> +		}
> +	} else {
> +		memset(page_address(pages), 0, size);
> +	}
> +
> +	helper_buffer->pagecount = nr_pages;
> +	helper_buffer->pages = kmalloc_array(helper_buffer->pagecount,
> +					     sizeof(*helper_buffer->pages),
> +					     GFP_KERNEL);
> +	if (!helper_buffer->pages) {
> +		ret = -ENOMEM;
> +		goto free_cma;
> +	}
> +
> +	for (pg = 0; pg < helper_buffer->pagecount; pg++) {
> +		helper_buffer->pages[pg] = &pages[pg];
> +		if (!helper_buffer->pages[pg])
> +			goto free_pages;
> +	}
> +
> +	/* create the dmabuf */
> +	exp_info.ops = &heap_helper_ops;
> +	exp_info.size = len;
> +	exp_info.flags = fd_flags;
> +	exp_info.priv = &helper_buffer->heap_buffer;
> +	dmabuf = dma_buf_export(&exp_info);
> +	if (IS_ERR(dmabuf)) {
> +		ret = PTR_ERR(dmabuf);
> +		goto free_pages;
> +	}

Can the dmabuf be created in the framework layer? Each heap needs to
add the same export code here, which is not very good.
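To make the suggestion concrete, below is a rough, untested sketch of
what such a core helper could look like, using only what this patch
already sets up (heap_helper_buffer, heap_helper_ops). Each heap would
fill in its helper_buffer and priv data first, then make one call.
dma_heap_bufferfd_create() is a made-up name, not an existing function:

int dma_heap_bufferfd_create(struct heap_helper_buffer *helper_buffer,
			     size_t len, unsigned long fd_flags)
{
	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
	struct dma_buf *dmabuf;
	int fd;

	exp_info.ops = &heap_helper_ops;
	exp_info.size = len;
	exp_info.flags = fd_flags;
	exp_info.priv = &helper_buffer->heap_buffer;

	dmabuf = dma_buf_export(&exp_info);
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	helper_buffer->heap_buffer.dmabuf = dmabuf;

	fd = dma_buf_fd(dmabuf, fd_flags);
	if (fd < 0) {
		/* put calls release, which frees the buffer */
		dma_buf_put(dmabuf);
	}

	return fd;
}

Then cma_heap_allocate() (and the system heap) would only keep their
allocator-specific setup and error paths.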
> +
> +	helper_buffer->heap_buffer.dmabuf = dmabuf;
> +	helper_buffer->priv_virt = pages;
> +
> +	ret = dma_buf_fd(dmabuf, fd_flags);
> +	if (ret < 0) {
> +		dma_buf_put(dmabuf);
> +		/* just return, as put will call release and that will free */
> +		return ret;
> +	}
> +
> +	return ret;
> +
> +free_pages:
> +	kfree(helper_buffer->pages);
> +free_cma:
> +	cma_release(cma_heap->cma, pages, nr_pages);
> +free_buf:
> +	kfree(helper_buffer);
> +	return ret;
> +}
> +
> +static struct dma_heap_ops cma_heap_ops = {
> +	.allocate = cma_heap_allocate,
> +};
> +
> +static int __add_cma_heap(struct cma *cma, void *data)
> +{
> +	struct cma_heap *cma_heap;
> +	struct dma_heap_export_info exp_info;
> +
> +	cma_heap = kzalloc(sizeof(*cma_heap), GFP_KERNEL);
> +	if (!cma_heap)
> +		return -ENOMEM;
> +	cma_heap->cma = cma;
> +
> +	exp_info.name = cma_get_name(cma);
> +	exp_info.ops = &cma_heap_ops;
> +	exp_info.priv = cma_heap;
> +
> +	cma_heap->heap = dma_heap_add(&exp_info);
> +	if (IS_ERR(cma_heap->heap)) {
> +		int ret = PTR_ERR(cma_heap->heap);
> +
> +		kfree(cma_heap);
> +		return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static int add_cma_heaps(void)
> +{
> +	cma_for_each_area(__add_cma_heap, NULL);
> +	return 0;
> +}
> +device_initcall(add_cma_heaps);
>
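One more note for anyone trying the series: a rough, untested sketch of
how userspace might allocate from this heap. The ioctl name, struct
layout, and the "reserved" heap node name below are my assumptions from
skimming the heap chardev interface earlier in the series, so they may
not match exactly:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-heap.h>

int alloc_from_cma_heap(unsigned long len)
{
	/* assumed uapi: struct dma_heap_allocation_data + DMA_HEAP_IOC_ALLOC */
	struct dma_heap_allocation_data data = {
		.len = len,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	int heap_fd, ret;

	/* one node per CMA area, named after cma_get_name() */
	heap_fd = open("/dev/dma_heap/reserved", O_RDONLY);
	if (heap_fd < 0)
		return -1;

	ret = ioctl(heap_fd, DMA_HEAP_IOC_ALLOC, &data);
	close(heap_fd);

	return ret < 0 ? -1 : (int)data.fd; /* the new dma-buf fd */
}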