Date: Tue, 2 Feb 2021 15:04:05 +0100
From: Daniel Vetter
To: John Stultz
Cc: lkml, Sandeep Patil, dri-devel, Ezequiel Garcia, Robin Murphy,
 James Jones, Liam Mark, Laura Abbott, Chris Goldsworthy,
 Hridya Valsaraju, Ørjan Eide, linux-media, Suren Baghdasaryan,
 Daniel Mentz
Subject: Re: [RFC][PATCH 2/3] dma-buf: system_heap: Add pagepool support to system heap
References: <20201217230612.32397-1-john.stultz@linaro.org>
 <20201217230612.32397-2-john.stultz@linaro.org>

On Fri, Jan 22, 2021 at 05:28:32PM -0800, John Stultz wrote:
> On Mon, Dec 21, 2020 at 2:09 PM Daniel Vetter wrote:
> >
> > On Fri, Dec 18, 2020 at 05:16:56PM -0800, John Stultz wrote:
> > > On Fri, Dec 18, 2020 at 6:36 AM Daniel Vetter wrote:
> > > > On Thu, Dec 17, 2020 at 11:06:11PM +0000, John Stultz wrote:
> > > > > Reuse/abuse the pagepool code from the network code to speed
> > > > > up allocation performance.
> > > > >
> > > > > This is similar to the ION pagepool usage, but tries to
> > > > > utilize generic code instead of a custom implementation.
> > > >
> > > > We also have one of these in ttm. I think we should have at most one of
> > > > these for the gpu ecosystem overall, maybe as a helper that can be plugged
> > > > into all the places.
> > > >
> > > > Or I'm kinda missing something, which could be since I only glanced at
> > > > yours for a bit. But it's also called page pool for buffer allocations,
> > > > and I don't think there's that many ways to implement that really :-)
> > >
> > > Yea, when I was looking around the ttm one didn't seem quite as
> > > generic as the networking one, which more easily fit in here.
> >
> > Oops, I didn't look that closely and didn't realize you're reusing the one
> > from net/core/.
> >
> > > The main benefit for the system heap is not so much the pool itself
> > > (the normal page allocator is pretty good), as it being able to defer
> > > the free and zero the pages in a background thread, so the pool is
> > > effectively filled with pre-zeroed pages.
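
(Side note for readers of the archive: the deferred-zeroing idea John
describes boils down to roughly the sketch below. The names and structure
are made up purely for illustration; this is not the code in the series.)

#include <linux/highmem.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

/* Hypothetical pool: freed pages are parked on a "dirty" list and a
 * worker zeroes them off the allocation path, so allocations can be
 * served from an already-zeroed list. */
struct zeroed_pool {
	spinlock_t lock;
	struct list_head ready;		/* pre-zeroed, ready to hand out */
	struct list_head dirty;		/* returned pages awaiting zeroing */
	struct work_struct zero_work;
};

static void zero_worker(struct work_struct *work)
{
	struct zeroed_pool *pool = container_of(work, struct zeroed_pool, zero_work);
	struct page *page;

	for (;;) {
		spin_lock(&pool->lock);
		page = list_first_entry_or_null(&pool->dirty, struct page, lru);
		if (page)
			list_del(&page->lru);
		spin_unlock(&pool->lock);
		if (!page)
			break;

		clear_highpage(page);	/* zeroing happens here, not in alloc/free */

		spin_lock(&pool->lock);
		list_add(&page->lru, &pool->ready);
		spin_unlock(&pool->lock);
	}
}

/* Free path just parks the page and kicks the worker. */
static void zeroed_pool_free(struct zeroed_pool *pool, struct page *page)
{
	spin_lock(&pool->lock);
	list_add_tail(&page->lru, &pool->dirty);
	spin_unlock(&pool->lock);
	schedule_work(&pool->zero_work);
}

The point is that clear_highpage() runs from a worker rather than on the
dma-buf allocation or free path, so a later allocation can be handed a
page that is already zeroed.
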
> > >
> > > But I'll take another look at the ttm implementation and see if it can
> > > be re-used or the shared code refactored and pulled out somehow.
> >
> > I think moving the page_pool from net into lib and using it in ttm might
> > also be an option. Lack of a shrinker in the networking one might be a
> > bit of a problem.
>
> Yea. I've been looking at this, to see how to abstract out a generic
> pool implementation, but each pool implementation has been tweaked for
> its specific use case, so a general abstraction is a bit tough right
> off.
>
> For example, the ttm pool handles allocations from both alloc_pages
> and dma_alloc in a pool, where the net page pool only uses alloc_pages
> (but can pre-map via dma_map_attr).
>
> And as you mentioned, the networking page pool is statically sized,
> where the ttm pool is dynamic and shrinker controlled.
>
> Further, as the ttm pool is used for keeping pools of pages set for
> specific cache types, it is difficult to abstract that out, as we have
> to be able to reset the caching (set_pages_wb()) when shrinking, so
> that would also have to be pushed down into the pool attributes as
> well.
>
> So far, in my attempts to share an abstraction for both the net
> page_pool and the ttm page pool, it seems to make the code complexity
> worse on both sides - so while I'm interested in continuing to try to
> find a way to share code here, I'm not sure it makes sense to hold up
> this series (which is already re-using an existing implementation and
> provides a performance bump in microbenchmarks) for the
> grand-unified-page-pool. Efforts to refactor the ttm pool and net page
> pool can continue independently, and I'd be happy to move the system
> heap to whatever that ends up being.

The thing is, I'm not sure sharing code with net/core is a really good
idea; at least it seems like we have some impedance mismatch with the
ttm pool. And going forward I expect that sooner or later we'll need
alignment between the pools/caches under drm and the dma-buf heap pools
a lot more than between dma-buf and net/core.

So this feels like a bit of code sharing for code sharing's sake, and
not where it makes sense. Expecting net/core and the gpu stacks to have
the exact same needs for a page pool allocator has good chances to bite
us in the long run.
-Daniel

> thanks
> -john

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
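
P.S. For anyone reading this in the archives who isn't familiar with the
net/core API under discussion: usage looks roughly like the sketch below
in kernels of this vintage. This is written from memory, so treat the
exact struct fields, flags and header location as assumptions and check
include/net/page_pool.h rather than trusting it; the
example_use_page_pool() wrapper is made up for illustration.

#include <linux/dma-mapping.h>
#include <net/page_pool.h>

/* Create a pool of order-0 pages; with PP_FLAG_DMA_MAP the pool
 * dma-maps each page once when it enters the pool and remembers the
 * dma address, but the pages themselves still come from alloc_pages(). */
static int example_use_page_pool(struct device *dev)
{
	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP,
		.order		= 0,
		.pool_size	= 1024,		/* fixed at creation, no shrinker */
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_BIDIRECTIONAL,
	};
	struct page_pool *pool;
	struct page *page;

	pool = page_pool_create(&pp_params);
	if (IS_ERR(pool))
		return PTR_ERR(pool);

	page = page_pool_alloc_pages(pool, GFP_KERNEL);
	if (!page) {
		page_pool_destroy(pool);
		return -ENOMEM;
	}

	/* ... use the page ... */

	/* Recycle into the pool; allow_direct must be false outside softirq. */
	page_pool_put_full_page(pool, page, false);

	page_pool_destroy(pool);
	return 0;
}

The fixed pool_size and the per-device dma mapping are, as I understand
it, exactly the points where this diverges from what ttm wants out of a
pool.
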