From: John Stultz
Date: Fri, 22 Jan 2021 17:28:32 -0800
Subject: Re: [RFC][PATCH 2/3] dma-buf: system_heap: Add pagepool support to system heap
To: John Stultz, lkml, Sandeep Patil, dri-devel, Ezequiel Garcia,
    Robin Murphy, James Jones, Liam Mark, Laura Abbott,
    Chris Goldsworthy, Hridya Valsaraju, Ørjan Eide, linux-media,
    Suren Baghdasaryan, Daniel Mentz
References: <20201217230612.32397-1-john.stultz@linaro.org>
    <20201217230612.32397-2-john.stultz@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Dec 21, 2020 at 2:09 PM Daniel Vetter wrote:
>
> On Fri, Dec 18, 2020 at 05:16:56PM -0800, John Stultz wrote:
> > On Fri, Dec 18, 2020 at 6:36 AM Daniel Vetter wrote:
> > > On Thu, Dec 17, 2020 at 11:06:11PM +0000, John Stultz wrote:
> > > > Reuse/abuse the pagepool code from the network code to speed
> > > > up allocation performance.
> > > >
> > > > This is similar to the ION pagepool usage, but tries to
> > > > utilize generic code instead of a custom implementation.
> > >
> > > We also have one of these in ttm. I think we should have at most
> > > one of these for the gpu ecosystem overall, maybe as a helper
> > > that can be plugged into all the places.
> > >
> > > Or I'm kinda missing something, which could be since I only
> > > glanced at yours for a bit. But it's also called page pool for
> > > buffer allocations, and I don't think there's that many ways to
> > > implement that really :-)
> >
> > Yea, when I was looking around the ttm one didn't seem quite as
> > generic as the networking one, which more easily fit in here.
>
> Oops, I didn't look that closely and didn't realize you're reusing
> the one from net/core/.
>
> > The main benefit for the system heap is not so much the pool itself
> > (the normal page allocator is pretty good), as it being able to defer
> > the free and zero the pages in a background thread, so the pool is
> > effectively filled with pre-zeroed pages.
> >
> > But I'll take another look at the ttm implementation and see if it can
> > be re-used or the shared code refactored and pulled out somehow.
>
> I think moving the page_pool from net into lib and using it in ttm might
> also be an option. Lack of shrinker in the networking one might be a bit a
> problem.

Yea. I've been looking at this, to see how to abstract out a generic
pool implementation, but each pool implementation has been tweaked for
its specific use cases, so a general abstraction is a bit tough right
off.
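(To make the deferred-free-and-zero idea quoted above a little more
concrete, here's a very rough sketch of the shape of it - this is not
the code from the series, and every identifier in it is made up:)

/*
 * Hypothetical sketch only: freed pages go onto a "dirty" list and a
 * worker zeroes them in the background, so later allocations can be
 * served with pre-zeroed pages.
 */
#include <linux/gfp.h>
#include <linux/highmem.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct zeroed_pool {
	struct list_head	free_pages;	/* pages already zeroed */
	struct list_head	dirty_pages;	/* pages waiting to be zeroed */
	spinlock_t		lock;
	struct work_struct	zero_work;
};

/* Worker: pull dirty pages off the list, zero them, make them allocatable. */
static void zeroed_pool_work(struct work_struct *work)
{
	struct zeroed_pool *pool = container_of(work, struct zeroed_pool,
						zero_work);
	struct page *page;

	spin_lock(&pool->lock);
	while (!list_empty(&pool->dirty_pages)) {
		page = list_first_entry(&pool->dirty_pages, struct page, lru);
		list_del(&page->lru);
		spin_unlock(&pool->lock);

		clear_highpage(page);		/* zero outside the lock */

		spin_lock(&pool->lock);
		list_add(&page->lru, &pool->free_pages);
	}
	spin_unlock(&pool->lock);
}

/* Called from the heap's free path instead of __free_page(). */
static void zeroed_pool_put(struct zeroed_pool *pool, struct page *page)
{
	spin_lock(&pool->lock);
	list_add_tail(&page->lru, &pool->dirty_pages);
	spin_unlock(&pool->lock);
	schedule_work(&pool->zero_work);
}

/* Allocation path: take a pre-zeroed page if one is ready, else fall back. */
static struct page *zeroed_pool_get(struct zeroed_pool *pool, gfp_t gfp)
{
	struct page *page = NULL;

	spin_lock(&pool->lock);
	if (!list_empty(&pool->free_pages)) {
		page = list_first_entry(&pool->free_pages, struct page, lru);
		list_del(&page->lru);
	}
	spin_unlock(&pool->lock);

	return page ?: alloc_page(gfp | __GFP_ZERO);
}

The point being that the zeroing cost moves off the allocation path,
so an allocation that hits the pool never has to touch the page
contents.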
To give a concrete example of why a shared abstraction is tough: the
ttm pool handles allocations from both alloc_pages and dma_alloc in a
pool, whereas the net page pool only uses alloc_pages (though it can
pre-map pages via dma_map_page_attrs()). And as you mentioned, the
networking page pool is statically sized, whereas the ttm pool is
dynamically sized and shrinker-controlled. Further, since the ttm pool
keeps pools of pages set to specific cache types, that is hard to
abstract out: we have to be able to reset the caching (set_pages_wb())
when shrinking, so that would also have to be pushed down into the
pool attributes.

So far, my attempts to share one abstraction between the net page_pool
and the ttm page pool have only made the code more complex on both
sides. So while I'm interested in continuing to look for a way to
share code here, I'm not sure it makes sense to hold up this series
(which already reuses an existing implementation and provides a
performance bump in microbenchmarks) for the grand-unified-page-pool.
Efforts to refactor the ttm pool and net page pool can continue
independently, and I'd be happy to move the system heap over to
whatever that ends up being.

thanks
-john
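ps: For anyone skimming the thread, the shape of the net page_pool
reuse being discussed is roughly the following - a hand-waved sketch,
not the exact code from the series; the names, sizes, and single-order
pool here are illustrative only:

#include <linux/err.h>
#include <linux/gfp.h>
#include <linux/numa.h>
#include <net/page_pool.h>

static struct page_pool *heap_pool;	/* hypothetical single order-0 pool */

static int heap_pool_init(void)
{
	struct page_pool_params pp = {
		.order		= 0,
		.pool_size	= 1024,		/* arbitrary for the sketch */
		.nid		= NUMA_NO_NODE,
		/* no .dev / DMA flags: we only want the recycling here */
	};

	heap_pool = page_pool_create(&pp);
	return IS_ERR(heap_pool) ? PTR_ERR(heap_pool) : 0;
}

/* Allocation path: recycled page if available, else fresh from alloc_pages. */
static struct page *heap_alloc_page(void)
{
	/*
	 * Note recycled pages come back with whatever contents they had,
	 * hence the separate deferred-zeroing discussion above.
	 */
	return page_pool_alloc_pages(heap_pool, GFP_KERNEL);
}

/* Free path: hand the page back for recycling instead of __free_page(). */
static void heap_free_page(struct page *page)
{
	page_pool_put_full_page(heap_pool, page, false);
}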