From: John Stultz
Date: Thu, 1 Oct 2020 11:28:45 -0700
Subject: Re: [RFC][PATCH 5/6] dma-buf: system_heap: Add pagepool support to system heap
To: Chris Goldsworthy
Cc: lkml, Sumit Semwal, Liam Mark, Laura Abbott, Brian Starkey,
 Hridya Valsaraju, Suren Baghdasaryan, Sandeep Patil, Ørjan Eide,
 Robin Murphy, Ezequiel Garcia, Simon Ser, James Jones,
 linux-media, dri-devel
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Oct 1, 2020 at 7:49 AM Chris Goldsworthy wrote:
> On 2020-09-29 21:46, Chris Goldsworthy wrote:
> > On 2020-09-25 21:24, John Stultz wrote:
> >> Reuse/abuse the pagepool code from the network code to speed
> >> up allocation performance.
> >>
> >> This is similar to the ION pagepool usage, but tries to
> >> utilize generic code instead of a custom implementation.
> >>
> >> Cc: Sumit Semwal
> >> Cc: Liam Mark
> >> Cc: Laura Abbott
> >> Cc: Brian Starkey
> >> Cc: Hridya Valsaraju
> >> Cc: Suren Baghdasaryan
> >> Cc: Sandeep Patil
> >> Cc: Ørjan Eide
> >> Cc: Robin Murphy
> >> Cc: Ezequiel Garcia
> >> Cc: Simon Ser
> >> Cc: James Jones
> >> Cc: linux-media@vger.kernel.org
> >> Cc: dri-devel@lists.freedesktop.org
> >> Signed-off-by: John Stultz
> >> ---
> >>  drivers/dma-buf/heaps/Kconfig       |  1 +
> >>  drivers/dma-buf/heaps/system_heap.c | 32 +++++++++++++++++++++++++----
> >>  2 files changed, 29 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
> >> index a5eef06c4226..f13cde4321b1 100644
> >> --- a/drivers/dma-buf/heaps/Kconfig
> >> +++ b/drivers/dma-buf/heaps/Kconfig
> >> @@ -1,6 +1,7 @@
> >>  config DMABUF_HEAPS_SYSTEM
> >>          bool "DMA-BUF System Heap"
> >>          depends on DMABUF_HEAPS
> >> +        select PAGE_POOL
> >>          help
> >>            Choose this option to enable the system dmabuf heap. The system heap
> >>            is backed by pages from the buddy allocator. If in doubt, say Y.
> >> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> >> index 882a632e9bb7..9f57b4c8ae69 100644
> >> --- a/drivers/dma-buf/heaps/system_heap.c
> >> +++ b/drivers/dma-buf/heaps/system_heap.c
> >> @@ -20,6 +20,7 @@
> >>  #include
> >>  #include
> >>  #include
> >> +#include <net/page_pool.h>
> >>
> >>  struct dma_heap *sys_heap;
> >>
> >> @@ -46,6 +47,7 @@ struct dma_heap_attachment {
> >>  static gfp_t order_flags[] = {HIGH_ORDER_GFP, LOW_ORDER_GFP, LOW_ORDER_GFP};
> >>  static const unsigned int orders[] = {8, 4, 0};
> >>  #define NUM_ORDERS ARRAY_SIZE(orders)
> >> +struct page_pool *pools[NUM_ORDERS];
> >>
> >>  static struct sg_table *dup_sg_table(struct sg_table *table)
> >>  {
> >> @@ -264,13 +266,17 @@ static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
> >>          struct system_heap_buffer *buffer = dmabuf->priv;
> >>          struct sg_table *table;
> >>          struct scatterlist *sg;
> >> -        int i;
> >> +        int i, j;
> >>
> >>          table = &buffer->sg_table;
> >>          for_each_sg(table->sgl, sg, table->nents, i) {
> >>                  struct page *page = sg_page(sg);
> >>
> >> -                __free_pages(page, compound_order(page));
> >> +                for (j = 0; j < NUM_ORDERS; j++) {
> >> +                        if (compound_order(page) == orders[j])
> >> +                                break;
> >> +                }
> >> +                page_pool_put_full_page(pools[j], page, false);
> >>          }
> >>          sg_free_table(table);
> >>          kfree(buffer);
> >> @@ -300,8 +306,7 @@ static struct page *alloc_largest_available(unsigned long size,
> >>                          continue;
> >>                  if (max_order < orders[i])
> >>                          continue;
> >> -
> >> -                page = alloc_pages(order_flags[i], orders[i]);
> >> +                page = page_pool_alloc_pages(pools[i], order_flags[i]);
> >>                  if (!page)
> >>                          continue;
> >>                  return page;
> >> @@ -406,6 +411,25 @@ static const struct dma_heap_ops system_heap_ops = {
> >>  static int system_heap_create(void)
> >>  {
> >>          struct dma_heap_export_info exp_info;
> >> +        int i;
> >> +
> >> +        for (i = 0; i < NUM_ORDERS; i++) {
> >> +                struct page_pool_params pp;
> >> +
> >> +                memset(&pp, 0, sizeof(pp));
> >> +                pp.order = orders[i];
> >> +                pp.dma_dir = DMA_BIDIRECTIONAL;
>
> Hey John,
>
> Correct me if I'm wrong, but I think that in order for
> pp.dma_dir to be used in either page_pool_alloc_pages() or
> page_pool_put_full_page(), we need to at least have PP_FLAG_DMA_MAP set
> (to have page_pool_dma_sync_for_device() called; PP_FLAG_DMA_SYNC_DEV
> should also be set, I think). I think you'd also need to have pp->dev
> set. Are we setting dma_dir with the intention of doing the necessary
> CMOs before we start using the page?

Looking, I think my setting of dma_dir there on the pool is unnecessary
(and, as you point out, it doesn't have much effect as long as
PP_FLAG_DMA_MAP isn't set). I'm really only using the pagepool as a page
cache, and the dmabuf ops are still used for the mapping and syncing
operations.

thanks
-john
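P.S.: Concretely, with the dma_dir assignment dropped, the pool setup
should reduce to something like the sketch below. This is only an
illustration, not the posted patch: the helper name is made up, the error
handling is a guess, and it reuses orders[]/NUM_ORDERS/pools[] from the
patch above. page_pool_create() is the existing constructor from
net/core/page_pool.c; with flags left at zero the pool never touches the
DMA API, so it really does behave as just a page cache.

#include <linux/err.h>
#include <linux/string.h>
#include <net/page_pool.h>

/* Sketch: one pool per supported order, used purely as a page cache.
 * No PP_FLAG_DMA_MAP / PP_FLAG_DMA_SYNC_DEV, so pp.dev and pp.dma_dir
 * stay zeroed and the pool does no DMA mapping or syncing itself --
 * the dma-buf ops keep handling that.
 */
static int system_heap_create_pools(void)
{
        int i;

        for (i = 0; i < NUM_ORDERS; i++) {
                struct page_pool_params pp;

                memset(&pp, 0, sizeof(pp));
                pp.order = orders[i];
                pools[i] = page_pool_create(&pp);
                if (IS_ERR(pools[i]))
                        return PTR_ERR(pools[i]);
        }
        return 0;
}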