Date: Thu, 14 Sep 2023 09:36:26 +0200
From: Boris Brezillon
To: Dmitry Osipenko
Cc: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
 Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
 Qiang Yu, Steven Price, Emma Anholt, Melissa Wen,
 dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
 kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: Re: [PATCH v16 15/20] drm/shmem-helper: Add memory shrinker
Message-ID: <20230914093626.19692c24@collabora.com>
In-Reply-To:
References: <20230903170736.513347-1-dmitry.osipenko@collabora.com>
 <20230903170736.513347-16-dmitry.osipenko@collabora.com>
 <20230905100306.3564e729@collabora.com>
 <26f7ba6d-3520-0311-35e2-ef5706a98232@collabora.com>
 <20230913094832.3317c2df@collabora.com>
Organization: Collabora

On Thu, 14 Sep 2023 07:02:52 +0300
Dmitry Osipenko wrote:

> On 9/13/23 10:48, Boris Brezillon wrote:
> > On Wed, 13 Sep 2023 03:56:14 +0300
> > Dmitry Osipenko wrote:
> >
> >> On 9/5/23 11:03, Boris Brezillon wrote:
> >>>>	 * But
> >>>> +	 * acquiring the obj lock in drm_gem_shmem_release_pages_locked() can
> >>>> +	 * cause a locking order inversion between reservation_ww_class_mutex
> >>>> +	 * and fs_reclaim.
> >>>> +	 *
> >>>> +	 * This deadlock is not actually possible, because no one should
> >>>> +	 * be already holding the lock when drm_gem_shmem_free() is called.
> >>>> +	 * Unfortunately lockdep is not aware of this detail. So when the
> >>>> +	 * refcount drops to zero, don't touch the reservation lock.
> >>>> +	 */
> >>>> +	if (shmem->got_pages_sgt &&
> >>>> +	    refcount_dec_and_test(&shmem->pages_use_count)) {
> >>>> +		drm_gem_shmem_do_release_pages_locked(shmem);
> >>>> +		shmem->got_pages_sgt = false;
> >>>>	}
> >>> Leaking memory is the right thing to do if pages_use_count > 1 (it's
> >>> better to leak than having someone access memory it no longer owns), but
> >>> I think it's worth mentioning in the above comment.
> >>
> >> It's unlikely that it will be only a leak without a following up
> >> use-after-free. Neither is acceptable.
> >
> > Not necessarily, if you have a page leak, it could be that the GPU has
> > access to those pages, but doesn't need the GEM object anymore
> > (pages are mapped by the iommu, which doesn't need shmem->sgt or
> > shmem->pages after the mapping is created). Without a WARN_ON(), this
> > can go unnoticed and lead to memory corruptions/information leaks.
> >
> >> The drm_gem_shmem_free() could be changed such that kernel won't blow up
> >> on a refcnt bug, but that's not worthwhile doing because drivers
> >> shouldn't have silly bugs.
> >
> > We definitely don't want to fix that, but we want to complain loudly
> > (WARN_ON()), and make sure the risk is limited (preventing memory from
> > being re-assigned to someone else by not freeing it).
>
> That's what the code did and continues to do here. Not exactly sure what
> you're trying to say. I'm going to relocate the comment in v17 to
> put_pages(), we can continue discussing it there if I'm missing your
> point.

I'm just saying it would be worth mentioning that we're intentionally
leaking memory if shmem->pages_use_count > 1. Something like:

	/*
	 * shmem->pages_use_count should be 1 when ->sgt != NULL and
	 * zero otherwise. If some users still hold a pages reference
	 * that's a bug, and we intentionally leak the pages so they
	 * can't be re-allocated to someone else while the GPU/CPU
	 * still have access to it.
	 */
	drm_WARN_ON(drm,
		    refcount_read(&shmem->pages_use_count) !=
		    (shmem->sgt ? 1 : 0));
	if (shmem->sgt && refcount_dec_and_test(&shmem->pages_use_count))
		drm_gem_shmem_free_pages(shmem);