Date: Mon, 18 Sep 2023 13:59:48 +0100
From: Steven Price
Subject: Re: [PATCH v5 5/6] drm/panfrost: Implement generic DRM object RSS reporting function
To: Boris Brezillon
Cc: Adrián Larumbe, maarten.lankhorst@linux.intel.com, mripard@kernel.org,
 tzimmermann@suse.de, airlied@gmail.com, daniel@ffwll.ch, robdclark@gmail.com,
 quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org, sean@poorly.run,
 marijn.suijten@somainline.org, robh@kernel.org, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org,
 freedreno@lists.freedesktop.org, healych@amazon.com, kernel@collabora.com
References: <20230914223928.2374933-1-adrian.larumbe@collabora.com>
 <20230914223928.2374933-6-adrian.larumbe@collabora.com>
 <20230918123218.14ca9fde@collabora.com>
In-Reply-To: <20230918123218.14ca9fde@collabora.com>

On 18/09/2023 11:32, Boris Brezillon wrote:
> On Mon, 18 Sep 2023 11:01:43 +0100
> Steven Price wrote:
>
>> On 14/09/2023 23:38, Adrián Larumbe wrote:
>>> BO's RSS is updated every time new pages are allocated on demand and mapped
>>> for the object at GPU page fault's IRQ handler, but only for heap buffers.
>>> The reason this is unnecessary for non-heap buffers is that they are mapped
>>> onto the GPU's VA space and backed by physical memory in their entirety at
>>> BO creation time.
>>>
>>> This calculation is unnecessary for imported PRIME objects, since heap
>>> buffers cannot be exported by our driver, and the actual BO RSS size is the
>>> one reported in its attached dmabuf structure.
>>>
>>> Signed-off-by: Adrián Larumbe
>>> Reviewed-by: Boris Brezillon
>>
>> Am I missing something, or are we missing a way of resetting
>> heap_rss_size when the shrinker purges? It looks like after several
>> grow/purge cycles, heap_rss_size could actually grow to be larger than
>> the BO, which is clearly wrong.
>
> Didn't even consider this case since we don't flag heap BOs purgeable
> in mesa(panfrost), but let's assume we did. If the BO is purged, I'd
> expect the core to report 0MB of resident memory anyway. And purged BOs
> are not supposed to be re-used: if MADVISE(WILL_NEED) returns
> retained=false, they should be destroyed. Not 100% sure this is
> enforced everywhere though (we might actually miss tests to make sure
> users don't pass purged BOs to jobs, or to make sure the alloc-on-fault
> logic doesn't try to grow a purged GEM).
>
> If we want to implement transparent BO swap{out,in} (Dmitry's
> patchset), that'd be a different story, and we'll indeed have to set
> heap_rss_size back to zero on eviction.

Ah, ok. So we should be safe as things stand - but this is something to
bear in mind for the future.

Looking more closely at the code I can see an madvise(WILL_NEED) will
fail if retained=false (drm_gem_shmem_madvise() only updates the state
if shmem->madv >= 0).
In which case:

Reviewed-by: Steven Price

>>
>> Steve
>>
>>> ---
>>>  drivers/gpu/drm/panfrost/panfrost_gem.c | 15 +++++++++++++++
>>>  drivers/gpu/drm/panfrost/panfrost_gem.h |  5 +++++
>>>  drivers/gpu/drm/panfrost/panfrost_mmu.c |  1 +
>>>  3 files changed, 21 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
>>> index 7d8f83d20539..4365434b48db 100644
>>> --- a/drivers/gpu/drm/panfrost/panfrost_gem.c
>>> +++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
>>> @@ -208,6 +208,20 @@ static enum drm_gem_object_status panfrost_gem_status(struct drm_gem_object *obj
>>>  	return res;
>>>  }
>>>
>>> +static size_t panfrost_gem_rss(struct drm_gem_object *obj)
>>> +{
>>> +	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
>>> +
>>> +	if (bo->is_heap) {
>>> +		return bo->heap_rss_size;
>>> +	} else if (bo->base.pages) {
>>> +		WARN_ON(bo->heap_rss_size);
>>> +		return bo->base.base.size;
>>> +	} else {
>>> +		return 0;
>>> +	}
>>> +}
>>> +
>>>  static const struct drm_gem_object_funcs panfrost_gem_funcs = {
>>>  	.free = panfrost_gem_free_object,
>>>  	.open = panfrost_gem_open,
>>> @@ -220,6 +234,7 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = {
>>>  	.vunmap = drm_gem_shmem_object_vunmap,
>>>  	.mmap = drm_gem_shmem_object_mmap,
>>>  	.status = panfrost_gem_status,
>>> +	.rss = panfrost_gem_rss,
>>>  	.vm_ops = &drm_gem_shmem_vm_ops,
>>>  };
>>>
>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
>>> index ad2877eeeccd..13c0a8149c3a 100644
>>> --- a/drivers/gpu/drm/panfrost/panfrost_gem.h
>>> +++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
>>> @@ -36,6 +36,11 @@ struct panfrost_gem_object {
>>>  	 */
>>>  	atomic_t gpu_usecount;
>>>
>>> +	/*
>>> +	 * Object chunk size currently mapped onto physical memory
>>> +	 */
>>> +	size_t heap_rss_size;
>>> +
>>>  	bool noexec		:1;
>>>  	bool is_heap		:1;
>>>  };
>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
>>> index d54d4e7b2195..7b1490cdaa48 100644
>>> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
>>> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
>>> @@ -522,6 +522,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>>>  			 IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt);
>>>
>>>  	bomapping->active = true;
>>> +	bo->heap_rss_size += SZ_2M;
>>>
>>>  	dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);
>>>
>>
>