From: John Stultz
Date: Tue, 2 Feb 2021 21:56:14 -0800
Subject: Re: [RFC][PATCH 2/3] dma-buf: system_heap: Add pagepool support to system heap
To: John Stultz, lkml, Sandeep Patil, dri-devel, Ezequiel Garcia,
    Robin Murphy, James Jones, Liam Mark, Laura Abbott, Chris Goldsworthy,
    Hridya Valsaraju, Ørjan Eide, linux-media, Suren Baghdasaryan, Daniel Mentz
References: <20201217230612.32397-1-john.stultz@linaro.org>
    <20201217230612.32397-2-john.stultz@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Feb 2, 2021 at 6:04 AM Daniel Vetter wrote:
>
> On Fri, Jan 22, 2021 at 05:28:32PM -0800, John Stultz wrote:
> > On Mon, Dec 21, 2020 at 2:09 PM Daniel Vetter wrote:
> > >
> > > On Fri, Dec 18, 2020 at 05:16:56PM -0800, John Stultz wrote:
> > > > On Fri, Dec 18, 2020 at 6:36 AM Daniel Vetter wrote:
> > > > > On Thu, Dec 17, 2020 at 11:06:11PM +0000, John Stultz wrote:
> > > > > > Reuse/abuse the pagepool code from the network code to speed
> > > > > > up allocation performance.
> > > > > >
> > > > > > This is similar to the ION pagepool usage, but tries to
> > > > > > utilize generic code instead of a custom implementation.
> > > > >
> > > > > We also have one of these in ttm. I think we should have at most
> > > > > one of these for the gpu ecosystem overall, maybe as a helper that
> > > > > can be plugged into all the places.
> > > > >
> > > > > Or I'm kinda missing something, which could be since I only glanced
> > > > > at yours for a bit. But it's also called page pool for buffer
> > > > > allocations, and I don't think there's that many ways to implement
> > > > > that really :-)
> > > >
> > > > Yea, when I was looking around, the ttm one didn't seem quite as
> > > > generic as the networking one, which more easily fit in here.
> > >
> > > Oops, I didn't look that closely and didn't realize you're reusing
> > > the one from net/core/.
> > >
> > > > The main benefit for the system heap is not so much the pool itself
> > > > (the normal page allocator is pretty good), as it being able to defer
> > > > the free and zero the pages in a background thread, so the pool is
> > > > effectively filled with pre-zeroed pages.
> > > >
> > > > But I'll take another look at the ttm implementation and see if it
> > > > can be re-used or the shared code refactored and pulled out somehow.
> > >
> > > I think moving the page_pool from net into lib and using it in ttm
> > > might also be an option. The lack of a shrinker in the networking one
> > > might be a bit of a problem.
> >
> > Yea. I've been looking at this, to see how to abstract out a generic
> > pool implementation, but each pool implementation has been tweaked for
> > its specific use cases, so a general abstraction is a bit tough right
> > off.
> >
> > For example, the ttm pool handles allocations both from alloc_pages
> > and dma_alloc in a pool, where the net page pool only uses alloc_pages
> > (but can pre-map via dma_map_attr).
> >
> > And as you mentioned, the networking page pool is statically sized,
> > whereas the ttm pool is dynamic and shrinker controlled.
> >
> > Further, as the ttm pool is utilized for keeping pools of pages set
> > to specific cache types, it is difficult to abstract that out, as we
> > have to be able to reset the caching (set_pages_wb()) when shrinking,
> > so that would also have to be pushed down into the pool attributes.
> >
> > So far, in my attempts to share an abstraction for both the net
> > page_pool and the ttm page pool, it seems to make the code complexity
> > worse on both sides - so while I'm interested in continuing to try to
> > find a way to share code here, I'm not sure it makes sense to hold up
> > this series (which already re-uses an existing implementation and
> > provides a performance bump in microbenchmarks) for the
> > grand-unified-page-pool. Efforts to refactor the ttm pool and net page
> > pool can continue independently, and I'd be happy to move the system
> > heap to whatever that ends up being.
>
> The thing is, I'm not sure sharing code with net/core is a really good
> idea; at least it seems like we have some impedance mismatch with the
> ttm pool. And going forward I expect sooner or later we'll need alignment
> between the pools/caches under drm and the dma-buf heap pools a lot more
> than between dma-buf and net/core.

I mean... I don't think you're wrong here, but it was your suggestion.

> So this feels like a bit of code sharing for code sharing's sake, and
> not where it makes sense. Expecting net/core and the gpu stacks to have
> the exact same needs for a page pool allocator has good chances to bite
> us in the long run.

Again, I agree with you at the high level here (the dma-buf system heap
and ttm page pooling are conceptually more likely to align, and
duplication of buffer pools is non-optimal), but there's still the
practical aspect of the ttm pool being pretty tied to the ttm code
(utilizing ttm contexts, a fixed MAX_ORDER*TTM_NUM_CACHING_TYPES set of
subpools per pool, plus 4 global sub-pools on x86 only).

So... I guess I'll go for another pass at trying to pull something
generic out of the ttm_pool, but the cynic in me suspects folks will
just object to any inefficiencies added in order to do so (the
code-sharing for its own sake argument above) and I'll be back to
where I am now. But we'll see.

thanks
-john
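
For context on the API under discussion: the sketch below is a rough
illustration, under assumptions, of the shape of reuse described above --
a heap-style allocator sitting on top of net/core's page_pool, creating a
statically sized pool once, pulling pages from it on allocation, and
returning them on free instead of handing them back to the page allocator.
It is not the actual patch; the demo_heap_* names and the pool_size value
are invented for the example, and the deferred free-and-zero behavior John
mentions is not something page_pool provides on its own.

/*
 * Sketch only: a minimal heap-style allocator on top of net/core's
 * page_pool.  The demo_heap_* names are made up for illustration, and
 * note that page_pool does not re-zero recycled pages; the deferred
 * free-and-zero behaviour discussed above would be a separate layer.
 */
#include <linux/err.h>
#include <linux/gfp.h>
#include <linux/numa.h>
#include <net/page_pool.h>

struct demo_heap_pool {
	struct page_pool *pool;		/* backing net/core page pool */
};

static int demo_heap_pool_init(struct demo_heap_pool *hp)
{
	/* Statically sized cache of order-0 pages, no DMA mapping. */
	struct page_pool_params pp = {
		.order		= 0,
		.pool_size	= 1024,
		.nid		= NUMA_NO_NODE,
	};

	hp->pool = page_pool_create(&pp);
	return IS_ERR(hp->pool) ? PTR_ERR(hp->pool) : 0;
}

static struct page *demo_heap_alloc_page(struct demo_heap_pool *hp)
{
	/* Recycled page if one is cached, else falls back to alloc_pages(). */
	return page_pool_alloc_pages(hp->pool, GFP_KERNEL);
}

static void demo_heap_free_page(struct demo_heap_pool *hp, struct page *page)
{
	/* Return the page to the pool instead of freeing it outright. */
	page_pool_put_full_page(hp->pool, page, false);
}

static void demo_heap_pool_fini(struct demo_heap_pool *hp)
{
	page_pool_destroy(hp->pool);
}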