From: John Stultz
Date: Tue, 6 Jul 2021 14:19:27 -0700
Subject: Re: [PATCH v9 1/5] drm: Add a sharable drm page-pool implementation
To: Daniel Vetter
Cc: Christian König, lkml, Sumit Semwal, Liam Mark, Chris Goldsworthy,
 Laura Abbott, Brian Starkey, Hridya Valsaraju, Suren Baghdasaryan,
 Sandeep Patil, Daniel Mentz, Ørjan Eide, Robin Murphy, Ezequiel Garcia,
 Simon Ser, James Jones, linux-media, dri-devel
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jul 6, 2021 at 2:15 PM Daniel Vetter wrote:
>
> On Tue, Jul 6, 2021 at 11:04 PM John Stultz wrote:
> > On Wed, Jun 30, 2021 at 11:52 PM Christian König wrote:
> > >
> > > On 01.07.21 at 00:24, John Stultz wrote:
> > > > On Wed, Jun 30, 2021 at 2:10 AM Christian König wrote:
> > > >> On 30.06.21 at 03:34, John Stultz wrote:
> > > >>> +static unsigned long page_pool_size;	/* max size of the pool */
> > > >>> +
> > > >>> +MODULE_PARM_DESC(page_pool_size, "Number of pages in the drm page pool");
> > > >>> +module_param(page_pool_size, ulong, 0644);
> > > >>> +
> > > >>> +static atomic_long_t nr_managed_pages;
> > > >>> +
> > > >>> +static struct mutex shrinker_lock;
> > > >>> +static struct list_head shrinker_list;
> > > >>> +static struct shrinker mm_shrinker;
> > > >>> +
> > > >>> +/**
> > > >>> + * drm_page_pool_set_max - Sets maximum size of all pools
> > > >>> + *
> > > >>> + * Sets the maximum number of pages allowed in all pools.
> > > >>> + * This can only be set once, and the first caller wins.
> > > >>> + */
> > > >>> +void drm_page_pool_set_max(unsigned long max)
> > > >>> +{
> > > >>> +	if (!page_pool_size)
> > > >>> +		page_pool_size = max;
> > > >>> +}
> > > >>> +
> > > >>> +/**
> > > >>> + * drm_page_pool_get_max - Maximum size of all pools
> > > >>> + *
> > > >>> + * Return the maximum number of pages allowed in all pools
> > > >>> + */
> > > >>> +unsigned long drm_page_pool_get_max(void)
> > > >>> +{
> > > >>> +	return page_pool_size;
> > > >>> +}
> > > >> Well, in general I don't think it is a good idea to have getters/setters
> > > >> for one-line functionality; similar applies to locking/unlocking the
> > > >> mutex below.
> > > >>
> > > >> Then, in this specific case, what those functions do is aid in
> > > >> initializing the general pool manager, and that in turn should absolutely
> > > >> not be exposed.
> > > >>
> > > >> The TTM pool manager exposes this as a function because initializing the
> > > >> pool manager is done in one part of the module and calculating the
> > > >> default value for the pages in another. But that is not something I
> > > >> would like to see here.
> > > > So, I guess I'm not quite clear on what you'd like to see...
> > > >
> > > > Part of what I'm balancing here is that the TTM subsystem normally sets a
> > > > global max size, whereas the old ION pool didn't have caps (instead
> > > > just relying on the shrinker when needed).
> > > > So I'm trying to come up with a solution that can serve both uses. So
> > > > I've got this drm_page_pool_set_max() function to optionally set the
> > > > maximum value, which is called in the TTM initialization path, or set
> > > > via the boot argument. But for systems that use the dmabuf system heap
> > > > but don't use TTM, no global limit is enforced.
> > >
> > > Yeah, exactly that's what I'm trying to prevent.
> > >
> > > See, if we have the same functionality used by different use cases, we
> > > should not have different behavior depending on what drivers are loaded.
> > >
> > > Is it a problem if we restrict the ION pool to 50% of system memory as
> > > well? If yes, then I would rather drop the limit from TTM and only rely
> > > on the shrinker there as well.
> >
> > Would having the default value as a config option (still overridable
> > via boot argument) be an acceptable solution?
>
> We're also trying to get ttm over to the shrinker model, and a first
> cut of that even landed, but didn't really work out yet. So maybe just
> aiming for the shrinker? I do agree this should be consistent across
> the board, otherwise we're just sharing code but not actually sharing
> functionality, which is a recipe for disaster because one side will
> end up breaking the other side's use-case.

Fair enough. Maybe it would be best to remove the default limit, but
leave the logic so it can still be set via the boot argument?

thanks
-john
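[Editorial note: the compromise John floats above (a build-time default that the boot argument can still override) might look like the following hypothetical Kconfig fragment. The symbol name DRM_PAGE_POOL_MAX_DEFAULT is invented for illustration and appears nowhere in the patch; the exact boot-argument spelling depends on the final module name.]

```kconfig
config DRM_PAGE_POOL_MAX_DEFAULT
	int "Default max size of the drm page pool (in pages, 0 = no limit)"
	default 0
	help
	  Upper bound on the total number of pages cached across all drm
	  page pools.  The value can still be overridden at boot time via
	  the pool's page_pool_size= module parameter.
```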