From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 18/30] drm/panfrost: Explicitly get and put drm-shmem pages
Date: Fri, 5 Jan 2024 21:46:12 +0300
Message-ID: <20240105184624.508603-19-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>
References: <20240105184624.508603-1-dmitry.osipenko@collabora.com>
X-Mailer: git-send-email 2.43.0

To simplify the drm-shmem refcnt handling, we're moving away from the
implicit get_pages() that is used by get_pages_sgt(). From now on, drivers
will have to pin pages while they use the sgt. Panfrost's shrinker doesn't
support swapping out BOs, hence pages are pinned and the sgt is valid as
long as the pages' use-count > 0.

In Panfrost, panfrost_gem_mapping, which is the object representing a GPU
mapping of a BO, owns a pages ref. This guarantees that any BO being mapped
GPU-side has its pages retained till the mapping is destroyed.

Since pages are no longer guaranteed to stay pinned for the BO lifetime,
and the MADVISE(DONT_NEED) flag remains set after the GEM handle has been
destroyed, we need an extra 'is_purgeable' check in panfrost_gem_purge()
to make sure we're not trying to purge a BO that has already had its pages
released.

Signed-off-by: Dmitry Osipenko
---
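A minimal driver-side sketch of the new contract, for illustration only:
the snippet below is not part of this patch, and example_driver_map_bo()
is a hypothetical function; only the drm_gem_shmem_* helpers are the ones
this series relies on. With the implicit get_pages() gone from
drm_gem_shmem_get_pages_sgt(), a driver is expected to hold its own pages
reference for as long as it uses the sgt, which is what panfrost_gem_open()
and panfrost_gem_mapping_release() do in the diff below.

#include <linux/err.h>
#include <drm/drm_gem_shmem_helper.h>

/* Hypothetical example, not part of this patch. */
static int example_driver_map_bo(struct drm_gem_shmem_object *shmem)
{
	struct sg_table *sgt;
	int ret;

	/* Take an explicit pages ref; the sgt stays backed while it is held. */
	ret = drm_gem_shmem_get_pages(shmem);
	if (ret)
		return ret;

	sgt = drm_gem_shmem_get_pages_sgt(shmem);
	if (IS_ERR(sgt)) {
		/* Drop the pages ref taken above on failure. */
		drm_gem_shmem_put_pages(shmem);
		return PTR_ERR(sgt);
	}

	/* ... map the sgt into the GPU MMU and use the BO ... */

	return 0;
}

The matching drm_gem_shmem_put_pages() call then happens once the driver is
done with the sgt, e.g. from its mapping teardown path, as the Panfrost
changes below do from panfrost_gem_mapping_release().
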
 drivers/gpu/drm/panfrost/panfrost_gem.c      | 63 ++++++++++++++-----
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c |  6 ++
 2 files changed, 52 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index f268bd5c2884..7edfc12f7c1f 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -35,20 +35,6 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
 	 */
 	WARN_ON_ONCE(!list_empty(&bo->mappings.list));
 
-	if (bo->sgts) {
-		int i;
-		int n_sgt = bo->base.base.size / SZ_2M;
-
-		for (i = 0; i < n_sgt; i++) {
-			if (bo->sgts[i].sgl) {
-				dma_unmap_sgtable(pfdev->dev, &bo->sgts[i],
-						  DMA_BIDIRECTIONAL, 0);
-				sg_free_table(&bo->sgts[i]);
-			}
-		}
-		kvfree(bo->sgts);
-	}
-
 	drm_gem_shmem_free(&bo->base);
 }
 
@@ -85,11 +71,40 @@ panfrost_gem_teardown_mapping(struct panfrost_gem_mapping *mapping)
 
 static void panfrost_gem_mapping_release(struct kref *kref)
 {
-	struct panfrost_gem_mapping *mapping;
-
-	mapping = container_of(kref, struct panfrost_gem_mapping, refcount);
+	struct panfrost_gem_mapping *mapping =
+		container_of(kref, struct panfrost_gem_mapping, refcount);
+	struct panfrost_gem_object *bo = mapping->obj;
+	struct panfrost_device *pfdev = bo->base.base.dev->dev_private;
 
 	panfrost_gem_teardown_mapping(mapping);
+
+	/* On heap BOs, release the sgts created in the fault handler path. */
+	if (bo->sgts) {
+		int i, n_sgt = bo->base.base.size / SZ_2M;
+
+		for (i = 0; i < n_sgt; i++) {
+			if (bo->sgts[i].sgl) {
+				dma_unmap_sgtable(pfdev->dev, &bo->sgts[i],
+						  DMA_BIDIRECTIONAL, 0);
+				sg_free_table(&bo->sgts[i]);
+			}
+		}
+		kvfree(bo->sgts);
+	}
+
+	/* Pages ref is owned by the panfrost_gem_mapping object. We must
+	 * release our pages ref (if any), before releasing the object
+	 * ref.
+	 * Non-heap BOs acquired the pages at panfrost_gem_mapping creation
+	 * time, and heap BOs may have acquired pages if the fault handler
+	 * was called, in which case bo->sgts should be non-NULL.
+	 */
+	if (!bo->base.base.import_attach && (!bo->is_heap || bo->sgts) &&
+	    bo->base.madv >= 0) {
+		drm_gem_shmem_put_pages(&bo->base);
+		bo->sgts = NULL;
+	}
+
 	drm_gem_object_put(&mapping->obj->base.base);
 	panfrost_mmu_ctx_put(mapping->mmu);
 	kfree(mapping);
@@ -125,6 +140,20 @@ int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv)
 	if (!mapping)
 		return -ENOMEM;
 
+	if (!bo->is_heap && !bo->base.base.import_attach) {
+		/* Pages ref is owned by the panfrost_gem_mapping object.
+		 * For non-heap BOs, we request pages at mapping creation
+		 * time, such that the panfrost_mmu_map() call, further down in
+		 * this function, is guaranteed to have pages_use_count > 0
+		 * when drm_gem_shmem_get_pages_sgt() is called.
+		 */
+		ret = drm_gem_shmem_get_pages(&bo->base);
+		if (ret) {
+			kfree(mapping);
+			return ret;
+		}
+	}
+
 	INIT_LIST_HEAD(&mapping->node);
 	kref_init(&mapping->refcount);
 	drm_gem_object_get(obj);
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
index 02b60ea1433a..d4fb0854cf2f 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
@@ -50,6 +50,12 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj)
 	if (!dma_resv_trylock(shmem->base.resv))
 		goto unlock_mappings;
 
+	/* BO might have become unpurgeable if the last pages_use_count ref
+	 * was dropped, but the BO hasn't been destroyed yet.
+	 */
+	if (!drm_gem_shmem_is_purgeable(shmem))
+		goto unlock_mappings;
+
 	panfrost_gem_teardown_mappings_locked(bo);
 	drm_gem_shmem_purge_locked(&bo->base);
 	ret = true;
-- 
2.43.0