Date: Tue, 25 Jul 2023 09:14:48 +0200
From: Boris Brezillon
To: Dmitry Osipenko
Cc: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
 Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
 Qiang Yu, Steven Price, Emma Anholt, Melissa Wen,
 dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
 kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: Re: [PATCH v14 01/12] drm/shmem-helper: Factor out pages alloc/release from drm_gem_shmem_get/put_pages()
Message-ID: <20230725091448.7ac0c4aa@collabora.com>
In-Reply-To: <20230722234746.205949-2-dmitry.osipenko@collabora.com>
References: <20230722234746.205949-1-dmitry.osipenko@collabora.com>
 <20230722234746.205949-2-dmitry.osipenko@collabora.com>
Organization: Collabora

On Sun, 23 Jul 2023 02:47:35 +0300
Dmitry Osipenko wrote:

> Factor out pages allocation from drm_gem_shmem_get_pages() into a
> drm_gem_shmem_acquire_pages() function, and do the same for put_pages(),
> in preparation for adding shrinker support to drm-shmem.
>
> Once the shrinker is added, pages_use_count > 0 will no longer imply
> that the pages are pinned, because the shrinker may swap pages out
> while pages_use_count stays greater than 0. A new pages_pin_count will
> be added in a later patch.
>
> The new common drm_gem_shmem_acquire/release_pages() will be used by
> the shrinker code to perform the page swapping.
>
> Signed-off-by: Dmitry Osipenko
> ---
>  drivers/gpu/drm/drm_gem_shmem_helper.c | 65 ++++++++++++++++++++------
>  1 file changed, 52 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index a783d2245599..267153853e2c 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -165,21 +165,26 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
>  
> -static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
> +static int
> +drm_gem_shmem_acquire_pages(struct drm_gem_shmem_object *shmem)
>  {
>  	struct drm_gem_object *obj = &shmem->base;
>  	struct page **pages;
>  
>  	dma_resv_assert_held(shmem->base.resv);

Not directly related to this patch, but can we start using _locked
suffixes for any function that's expecting the dma-resv lock to be
held?
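For illustration, here is a minimal userspace sketch of that naming
convention (everything here is made up for the example: the `object`
struct, the boolean "lock", and the assert stand in for a dma-resv
protected object and dma_resv_assert_held()):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy object guarded by a lock flag; a stand-in for an object
 * protected by a dma-resv lock. */
struct object {
	bool locked;
	int refcount;
};

/* The "_locked" suffix documents that the caller must already hold
 * the lock; the assert is the userspace analogue of
 * dma_resv_assert_held(). */
static int object_get_locked(struct object *obj)
{
	assert(obj->locked);
	return ++obj->refcount;
}

/* Unlocked wrapper: takes the lock, calls the _locked variant,
 * then releases the lock. */
static int object_get(struct object *obj)
{
	obj->locked = true;
	int ret = object_get_locked(obj);
	obj->locked = false;
	return ret;
}
```

The point of the suffix is purely documentary: a reader (or reviewer)
can tell from the call site alone whether the lock is expected to be
held.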
>
> -	if (shmem->pages_use_count++ > 0)
> -		return 0;
> +	if (shmem->madv < 0) {
> +		drm_WARN_ON(obj->dev, shmem->pages);
> +		return -ENOMEM;
> +	}
> +
> +	if (drm_WARN_ON(obj->dev, !shmem->pages_use_count))
> +		return -EINVAL;
>  
>  	pages = drm_gem_get_pages(obj);
>  	if (IS_ERR(pages)) {
>  		drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
>  			    PTR_ERR(pages));
> -		shmem->pages_use_count = 0;
>  		return PTR_ERR(pages);
>  	}
>  
> @@ -198,6 +203,48 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
>  	return 0;
>  }
>  
> +static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
> +{
> +	int err;
> +
> +	dma_resv_assert_held(shmem->base.resv);
> +
> +	if (shmem->madv < 0)
> +		return -ENOMEM;
> +
> +	if (shmem->pages_use_count++ > 0)
> +		return 0;
> +
> +	err = drm_gem_shmem_acquire_pages(shmem);
> +	if (err)
> +		goto err_zero_use;
> +
> +	return 0;
> +
> +err_zero_use:
> +	shmem->pages_use_count = 0;
> +
> +	return err;
> +}
> +
> +static void
> +drm_gem_shmem_release_pages(struct drm_gem_shmem_object *shmem)
> +{
> +	struct drm_gem_object *obj = &shmem->base;
> +
> +	dma_resv_assert_held(shmem->base.resv);
> +
> +#ifdef CONFIG_X86
> +	if (shmem->map_wc)
> +		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
> +#endif
> +
> +	drm_gem_put_pages(obj, shmem->pages,
> +			  shmem->pages_mark_dirty_on_put,
> +			  shmem->pages_mark_accessed_on_put);
> +	shmem->pages = NULL;
> +}
> +
>  /*
>   * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
>   * @shmem: shmem GEM object
> @@ -216,15 +263,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
>  	if (--shmem->pages_use_count > 0)
>  		return;
>  
> -#ifdef CONFIG_X86
> -	if (shmem->map_wc)
> -		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
> -#endif
> -
> -	drm_gem_put_pages(obj, shmem->pages,
> -			  shmem->pages_mark_dirty_on_put,
> -			  shmem->pages_mark_accessed_on_put);
> -	shmem->pages = NULL;
> +	drm_gem_shmem_release_pages(shmem);
>  }
>  
>  EXPORT_SYMBOL(drm_gem_shmem_put_pages);
>
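
FWIW, the get/put vs. acquire/release split the patch introduces can be
sketched in plain C like this (a simplified userspace model with made-up
names; malloc stands in for drm_gem_get_pages(), and all locking, madv,
and error details are elided):

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for drm_gem_shmem_object. */
struct shmem_obj {
	void *pages;
	unsigned int pages_use_count;
};

/* Mirrors drm_gem_shmem_acquire_pages(): does the actual allocation,
 * and is only valid once the use count has already been raised. */
static int acquire_pages(struct shmem_obj *shmem)
{
	if (!shmem->pages_use_count)
		return -1; /* caller bug: use count must be non-zero */
	shmem->pages = malloc(4096);
	return shmem->pages ? 0 : -1;
}

/* Mirrors drm_gem_shmem_release_pages(): frees unconditionally. */
static void release_pages(struct shmem_obj *shmem)
{
	free(shmem->pages);
	shmem->pages = NULL;
}

/* Mirrors drm_gem_shmem_get_pages(): only the 0 -> 1 transition
 * allocates; later callers just bump the count. On failure the
 * count is reset, as in the patch's err_zero_use path. */
static int get_pages(struct shmem_obj *shmem)
{
	int err;

	if (shmem->pages_use_count++ > 0)
		return 0;

	err = acquire_pages(shmem);
	if (err)
		shmem->pages_use_count = 0;
	return err;
}

/* Mirrors drm_gem_shmem_put_pages(): only the 1 -> 0 transition
 * actually releases the pages. */
static void put_pages(struct shmem_obj *shmem)
{
	if (--shmem->pages_use_count > 0)
		return;
	release_pages(shmem);
}
```

The useful property of the split is that a shrinker can call
acquire/release directly to swap pages in and out without touching the
use count, while get/put keep their refcounting semantics for normal
users.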