Date: Thu, 14 Sep 2023 15:27:03 +0200
From: Boris Brezillon
To: Dmitry Osipenko
Cc: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
    Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
    Thomas Zimmermann, Christian König, Qiang Yu, Steven Price,
    Emma Anholt, Melissa Wen, dri-devel@lists.freedesktop.org,
    linux-kernel@vger.kernel.org, kernel@collabora.com,
    virtualization@lists.linux-foundation.org
Subject: Re: [PATCH v16 15/20] drm/shmem-helper: Add memory shrinker
Message-ID: <20230914152703.78b1ac82@collabora.com>
In-Reply-To:
References: <20230903170736.513347-1-dmitry.osipenko@collabora.com>
    <20230903170736.513347-16-dmitry.osipenko@collabora.com>
    <20230905100306.3564e729@collabora.com>
    <26f7ba6d-3520-0311-35e2-ef5706a98232@collabora.com>
    <20230913094832.3317c2df@collabora.com>
    <20230914093626.19692c24@collabora.com>
    <21dda0bd-4264-b480-dbbc-29a7744bc96c@collabora.com>
    <20230914102737.08e61498@collabora.com>
    <20230914135840.5e0e11fe@collabora.com>
Organization: Collabora
X-Mailer: Claws Mail 4.1.1 (GTK 3.24.38; x86_64-redhat-linux-gnu)
X-Mailing-List: linux-kernel@vger.kernel.org
On Thu, 14 Sep 2023 16:01:37 +0300
Dmitry Osipenko wrote:

> On 9/14/23 14:58, Boris Brezillon wrote:
> > On Thu, 14 Sep 2023 14:36:23 +0300
> > Dmitry Osipenko wrote:
> >
> >> On 9/14/23 11:27, Boris Brezillon wrote:
> >>> On Thu, 14 Sep 2023 10:50:32 +0300
> >>> Dmitry Osipenko wrote:
> >>>
> >>>> On 9/14/23 10:36, Boris Brezillon wrote:
> >>>>> On Thu, 14 Sep 2023 07:02:52 +0300
> >>>>> Dmitry Osipenko wrote:
> >>>>>
> >>>>>> On 9/13/23 10:48, Boris Brezillon wrote:
> >>>>>>> On Wed, 13 Sep 2023 03:56:14 +0300
> >>>>>>> Dmitry Osipenko wrote:
> >>>>>>>
> >>>>>>>> On 9/5/23 11:03, Boris Brezillon wrote:
> >>>>>>>>>>    * But
> >>>>>>>>>> +  * acquiring the obj lock in drm_gem_shmem_release_pages_locked() can
> >>>>>>>>>> +  * cause a locking order inversion between reservation_ww_class_mutex
> >>>>>>>>>> +  * and fs_reclaim.
> >>>>>>>>>> +  *
> >>>>>>>>>> +  * This deadlock is not actually possible, because no one should
> >>>>>>>>>> +  * be already holding the lock when drm_gem_shmem_free() is called.
> >>>>>>>>>> +  * Unfortunately lockdep is not aware of this detail. So when the
> >>>>>>>>>> +  * refcount drops to zero, don't touch the reservation lock.
> >>>>>>>>>> +  */
> >>>>>>>>>> + if (shmem->got_pages_sgt &&
> >>>>>>>>>> +     refcount_dec_and_test(&shmem->pages_use_count)) {
> >>>>>>>>>> +	drm_gem_shmem_do_release_pages_locked(shmem);
> >>>>>>>>>> +	shmem->got_pages_sgt = false;
> >>>>>>>>>>   }
> >>>>>>>>>
> >>>>>>>>> Leaking memory is the right thing to do if pages_use_count > 1 (it's
> >>>>>>>>> better to leak than having someone access memory it no longer owns), but
> >>>>>>>>> I think it's worth mentioning in the above comment.
> >>>>>>>>
> >>>>>>>> It's unlikely that it will be only a leak without a follow-up
> >>>>>>>> use-after-free. Neither is acceptable.
> >>>>>>>
> >>>>>>> Not necessarily, if you have a page leak, it could be that the GPU has
> >>>>>>> access to those pages, but doesn't need the GEM object anymore
> >>>>>>> (pages are mapped by the iommu, which doesn't need shmem->sgt or
> >>>>>>> shmem->pages after the mapping is created). Without a WARN_ON(), this
> >>>>>>> can go unnoticed and lead to memory corruptions/information leaks.
> >>>>>>>
> >>>>>>>> drm_gem_shmem_free() could be changed such that the kernel won't
> >>>>>>>> blow up on a refcnt bug, but that's not worthwhile doing because
> >>>>>>>> drivers shouldn't have silly bugs.
> >>>>>>>
> >>>>>>> We definitely don't want to fix that, but we want to complain loudly
> >>>>>>> (WARN_ON()), and make sure the risk is limited (preventing memory from
> >>>>>>> being re-assigned to someone else by not freeing it).
> >>>>>>
> >>>>>> That's what the code did and continues to do here. Not exactly sure what
> >>>>>> you're trying to say. I'm going to relocate the comment in v17 to
> >>>>>> put_pages(), we can continue discussing it there if I'm missing your
> >>>>>> point.
> >>>>>
> >>>>> I'm just saying it would be worth mentioning that we're intentionally
> >>>>> leaking memory if shmem->pages_use_count > 1. Something like:
> >>>>>
> >>>>> /*
> >>>>>  * shmem->pages_use_count should be 1 when ->sgt != NULL and
> >>>>>  * zero otherwise. If some users still hold a pages reference,
> >>>>>  * that's a bug, and we intentionally leak the pages so they
> >>>>>  * can't be re-allocated to someone else while the GPU/CPU
> >>>>>  * still have access to it.
> >>>>>  */
> >>>>> drm_WARN_ON(drm,
> >>>>>	    refcount_read(&shmem->pages_use_count) != (shmem->sgt ? 1 : 0));
> >>>>> if (shmem->sgt && refcount_dec_and_test(&shmem->pages_use_count))
> >>>>>	drm_gem_shmem_free_pages(shmem);
> >>>>
> >>>> That may be acceptable, but only once there will be a driver using this
> >>>> feature.
> >>>
> >>> Which feature?
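[Editor's aside for archive readers: the invariant being negotiated above — pages_use_count must be exactly 1 when an sgt exists and 0 otherwise, and anything else is a driver bug that should warn loudly and leak on purpose — can be sketched in standalone userspace C. Everything below (`fake_shmem`, `fake_shmem_free()`) is a hypothetical mock, not the real drm_gem_shmem API; the kernel's refcount_t and drm_WARN_ON() are approximated with a plain counter and fprintf().]

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Userspace sketch (not kernel code) of the warn-and-leak pattern:
 * on final free, the pages refcount must be 1 when an sgt exists
 * (the sgt creation retained the last reference) and 0 otherwise.
 * Any other value is a bug: complain loudly and intentionally leak
 * the pages so they can't be re-allocated to someone else while a
 * device may still access them.
 */
struct fake_shmem {
	void *pages;			/* stands in for shmem->pages */
	bool has_sgt;			/* stands in for shmem->sgt != NULL */
	unsigned int pages_use_count;	/* stands in for refcount_t */
};

/* Returns true if the pages were actually freed, false if leaked or absent. */
static bool fake_shmem_free(struct fake_shmem *shmem, bool *warned)
{
	unsigned int expected = shmem->has_sgt ? 1 : 0;

	/* Equivalent of the suggested drm_WARN_ON(): fire on any mismatch. */
	*warned = (shmem->pages_use_count != expected);
	if (*warned)
		fprintf(stderr, "WARN: pages_use_count=%u, expected %u\n",
			shmem->pages_use_count, expected);

	/* refcount_dec_and_test(): free only when the last ref drops. */
	if (shmem->has_sgt && --shmem->pages_use_count == 0) {
		free(shmem->pages);
		shmem->pages = NULL;
		return true;
	}

	/* Extra references remain: leak on purpose, never free. */
	return false;
}
```

The point of the leak is exactly the one Boris makes: a use-after-free hands the memory to a new owner, while a deliberate leak merely wastes it, so the leak is the safer failure mode.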
That's not related to a specific feature, that's just
> >>> how drm_gem_shmem_get_pages_sgt() works: it takes a pages ref that can
> >>> only be released in drm_gem_shmem_free(), because sgt users are not
> >>> refcounted and the sgt stays around until the GEM object is freed or
> >>> its pages are evicted. The only valid cases we have at the moment are:
> >>>
> >>> - pages_use_count == 1 && sgt != NULL
> >>> - pages_use_count == 0
> >>>
> >>> Any other situation is buggy.
> >>
> >> The sgt may belong to a dma-buf, for which pages_use_count=0; this
> >> can't be done until the sgt mess is sorted out.
> >
> > No it can't, not in that path, because the code you're adding is in the
> > if (!obj->import_attach) branch:
> >
> >	if (obj->import_attach) {
> >		drm_prime_gem_destroy(obj, shmem->sgt);
> >	} else {
> >		...
> >		// Your changes are here.
> >		...

> This branch is taken for the dma-buf in the prime import error code
> path.

I suggested a fix for this error that didn't involve adding a new flag,
but that's orthogonal to the piece of code we're discussing anyway.

> But yes, pages_use_count=0 for the dma-buf, and then it can be
> written as:
>
>	if (obj->import_attach) {
>		drm_prime_gem_destroy(obj, shmem->sgt);
>	} else {
>		drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count));
>
>		if (shmem->sgt && refcount_read(&shmem->pages_use_count)) {

You should drop the '&& refcount_read(&shmem->pages_use_count)' part,
otherwise you'll never enter this branch (sgt allocation retained a
ref, so pages_use_count > 0 when ->sgt != NULL).

If you added this pages_use_count > 0 check to deal with the
'free-partially-imported-GEM' case, I keep thinking this is not the
right fix. You should just assume that obj->import_attach == NULL means
not-a-prime-buffer, and then make sure partially-initialized prime GEMs
have import_attach assigned (see the oneliner I suggested in my review
of `[PATCH v15 01/23] drm/shmem-helper: Fix UAF in error path when
freeing SGT of imported GEM`).
>			dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
>					  DMA_BIDIRECTIONAL, 0);
>			sg_free_table(shmem->sgt);
>			kfree(shmem->sgt);
>
>			__drm_gem_shmem_put_pages(shmem);

You need to decrement pages_use_count:

	/*
	 * shmem->pages_use_count should be 1 when ->sgt != NULL and
	 * zero otherwise. If some users still hold a pages reference,
	 * that's a bug, and we intentionally leak the pages so they
	 * can't be re-allocated to someone else while the GPU/CPU
	 * still have access to it.
	 */
	if (refcount_dec_and_test(&shmem->pages_use_count))
		__drm_gem_shmem_put_pages(shmem);

>		}
>
>		drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count));

And now this WARN_ON() ^ should catch unexpected page leaks.

> Alright, I'll check if it works as expected for fixing the error code
> path bug for v17.
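[Editor's aside for archive readers: the free-path shape agreed on above — imported GEMs hand their sgt back via drm_prime_gem_destroy(), locally-allocated GEMs tear down the sgt, drop the ref it held, and only then warn about any reference still outstanding — can be mocked in standalone userspace C. All names here (`fake_gem`, `fake_gem_free()`) are hypothetical stand-ins, not the real DRM helpers.]

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_gem {
	bool imported;			/* stands in for obj->import_attach != NULL */
	void *sgt;			/* stands in for shmem->sgt */
	void *pages;			/* stands in for shmem->pages */
	unsigned int pages_use_count;
	bool leaked;			/* set when the final WARN_ON would fire */
};

static void fake_gem_free(struct fake_gem *obj)
{
	if (obj->imported) {
		/* drm_prime_gem_destroy(): the exporter owns the pages. */
		free(obj->sgt);
		obj->sgt = NULL;
		return;
	}

	if (obj->sgt) {
		/* dma_unmap_sgtable() + sg_free_table() + kfree() stand-in. */
		free(obj->sgt);
		obj->sgt = NULL;

		/* Drop the ref the sgt creation took; free pages on zero. */
		if (--obj->pages_use_count == 0) {
			free(obj->pages);
			obj->pages = NULL;
		}
	}

	/* Final WARN_ON: any reference left here is a leaked pages ref. */
	if (obj->pages_use_count != 0) {
		fprintf(stderr, "WARN: leaking %u page ref(s)\n",
			obj->pages_use_count);
		obj->leaked = true;	/* pages deliberately never freed */
	}
}
```

Note the ordering: the decrement happens inside the sgt branch (mirroring the email's point that the sgt holds a pages reference), so the trailing WARN_ON only fires for genuinely unbalanced refcounts, not for the normal sgt-backed case.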