From: Dmitry Osipenko
Date: Fri, 6 May 2022 01:49:12 +0300
Message-ID: <83e68918-68de-c0c6-6f9b-e94d34b19383@collabora.com>
Subject: Re: [PATCH v4 10/15] drm/shmem-helper: Take reservation lock instead
 of drm_gem_shmem locks
To: Daniel Stone, Thomas Zimmermann, David Airlie, Gerd Hoffmann,
 Gurchetan Singh, Chia-I Wu, Daniel Almeida, Gert Wollny,
 Gustavo Padovan, Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
 Rob Herring, Steven Price, Alyssa Rosenzweig, Rob Clark, Emil Velikov,
 Robin Murphy, Dmitry Osipenko, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org, virtualization@lists.linux-foundation.org
References: <20220417223707.157113-1-dmitry.osipenko@collabora.com>
 <20220417223707.157113-11-dmitry.osipenko@collabora.com>
 <248083d2-b8f2-a4d7-099d-70a7e7859c11@suse.de>
 <8f932ab0-bb72-8fea-4078-dc59e9164bd4@collabora.com>
 <01506516-ab2f-cb6e-7507-f2a3295efb59@collabora.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 5/5/22 11:12, Daniel Vetter wrote:
> On Wed, May 04, 2022 at 06:56:09PM +0300, Dmitry Osipenko wrote:
>> On 5/4/22 11:21, Daniel Vetter wrote:
>> ...
>>>>> - Maybe also do what you suggest and keep a separate lock for this,
>>>>>   but the fundamental issue is that this doesn't really work - if
>>>>>   you share buffers both ways with two drivers using shmem helpers,
>>>>>   then the ordering of this vmap_count_mutex vs dma_resv_lock is
>>>>>   inconsistent and you can get some nice deadlocks. So not a great
>>>>>   approach (and also the reason why we really need to get everyone
>>>>>   to move towards dma_resv_lock as _the_ buffer object lock, since
>>>>>   otherwise we'll never get a consistent lock nesting hierarchy).
>>>>
>>>> The separate locks should work okay because it will always be the
>>>> exporter that takes the dma_resv_lock. But I agree that it's less
>>>> ideal than defining the new rules for dma-bufs, since sometimes you
>>>> will take the resv lock and sometimes not, potentially hiding
>>>> locking bugs.
>>>
>>> That's the issue, some importers need to take the dma_resv_lock for
>>> dma_buf_vmap too (e.g. to first nail the buffer in place when it's a
>>> dynamic memory manager). In practice it'll work as well as what we
>>> have currently, which is similarly inconsistent, except with
>>> per-driver locks instead of shared locks from shmem helpers or
>>> dma-buf, so it's less obvious that things are inconsistent.
>>>
>>> So yeah if it's too messy, maybe the approach is to have a separate
>>> lock for vmap for now, land things, and then fix up dma_buf_vmap in
>>> a follow-up series.
>>
>> The amdgpu driver was the first to introduce the concept of movable
>> memory for dma-bufs. Now we want to support it for DRM SHMEM too. For
>> both the amdgpu TTM and shmem drivers we will want to hold the
>> reservation lock when we're touching movable buffers. The current way
>> of denoting that a dma-buf is movable is to implement the pin/unpin
>> callbacks of the dma-buf ops; that should be doable for shmem.
>
> Hm, that sounds like a bridge too far? I don't think we want to start
> adding movable dma-bufs for shmem; thus far at least no one has asked
> for that. The goal here is just to streamline the locking a bit and
> align it across all the different ways of doing buffers in drm.
>
> Or do you mean something else and I'm just completely lost?

I'm talking about aligning the DRM locks with the dma-buf locks. The
problem is that the locking convention for dma-bufs isn't specified
yet. In particular, there is no convention for the mapping operations.

If we want to switch vmapping of shmem to use the reservation lock,
then somebody will have to hold this lock for dma_buf_vmap() and the
locking convention needs to be specified firmly.

In the case of dynamic buffers, we will also need to specify whether
dma_buf_vmap() implies implicit pinning by the exporter or whether the
buffer must be pinned explicitly by the importer before dma_buf_vmap()
is invoked.

Perhaps I indeed shouldn't care about this for this patchset. The
complete locking model of dma-bufs must be specified first.
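
To make the open question concrete, here is a rough sketch of what the
importer side could look like under each convention. This is
illustration only, not code from this series; the function names are
made up, and it assumes dma_buf_vmap() has been reworked to expect the
reservation lock to already be held by the caller:

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/iosys-map.h>

/*
 * Convention A: the importer pins the buffer explicitly, under the
 * reservation lock, before vmapping a dynamic dma-buf.
 */
static int importer_vmap_explicit_pin(struct dma_buf_attachment *attach,
				      struct iosys_map *map)
{
	struct dma_buf *dmabuf = attach->dmabuf;
	int ret;

	dma_resv_lock(dmabuf->resv, NULL);

	/* dma_buf_pin() already requires the resv lock to be held */
	ret = dma_buf_pin(attach);
	if (!ret) {
		ret = dma_buf_vmap(dmabuf, map);
		if (ret)
			dma_buf_unpin(attach);
	}

	dma_resv_unlock(dmabuf->resv);

	return ret;
}

/*
 * Convention B: dma_buf_vmap() itself implies pinning by the exporter,
 * so the importer only takes the reservation lock.
 */
static int importer_vmap_implicit_pin(struct dma_buf *dmabuf,
				      struct iosys_map *map)
{
	int ret;

	dma_resv_lock(dmabuf->resv, NULL);
	ret = dma_buf_vmap(dmabuf, map);  /* exporter pins internally */
	dma_resv_unlock(dmabuf->resv);

	return ret;
}

Either way the lock ordering becomes well defined; the difference is
only in which side is responsible for keeping the pages in place for
the lifetime of the vmap.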
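For completeness, the exporter-side marker for a movable buffer that I
mentioned above is just the presence of pin/unpin in the dma_buf_ops.
A bare-bones sketch of what that could look like for a shmem-based
driver (again hypothetical names, shmem page-pinning details omitted;
needs <drm/drm_gem.h> in addition to the headers above):

static int my_shmem_dmabuf_pin(struct dma_buf_attachment *attach)
{
	struct drm_gem_object *obj = attach->dmabuf->priv;

	dma_resv_assert_held(obj->resv);
	/* make the backing pages immovable here */
	return 0;
}

static void my_shmem_dmabuf_unpin(struct dma_buf_attachment *attach)
{
	struct drm_gem_object *obj = attach->dmabuf->priv;

	dma_resv_assert_held(obj->resv);
	/* allow the backing pages to be moved again */
}

static const struct dma_buf_ops my_shmem_dmabuf_ops = {
	.pin = my_shmem_dmabuf_pin,
	.unpin = my_shmem_dmabuf_unpin,
	/* ... the usual attach/map/mmap/vmap ops ... */
};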
>> A day ago I found that mapping of imported dma-bufs is broken, at
>> least for the Tegra DRM driver (and likely for others too), because
>> the driver doesn't expect that anyone will try to mmap an imported
>> buffer and just doesn't handle this case at all, so we're getting a
>> hard lockup on touching the mapped memory because we're mapping
>> something other than the dma-buf.
>
> Huh, that sounds bad, how does this happen? Pretty much all pieces of
> dma-buf (cpu vmap, userspace mmap, heck even dma_buf_attach) are
> optional or at least can fail for various reasons. So exporters not
> providing mmap support is fine, but importers then dying is not.

The drivers that die don't have userspace that uses dma-bufs
extensively. I noticed it only because I was looking at this code so
much over the last few days. Drivers that don't die either map imported
BOs properly or don't allow mapping at all.

>> My plan is to move the dma-buf management code to the level of the
>> DRM core and make it aware of the reservation locks for the dynamic
>> dma-bufs. This way we will get the proper locking for dma-bufs and
>> fix the mapping of imported dma-bufs for Tegra and other drivers.
>
> So maybe we're completely talking past each other, or coffee is not
> working here on my end, but I've no idea what you mean.
>
> We do have some helpers for taking care of the dma_resv_lock dance,
> and Christian König has an RFC patch set to maybe unify this further.
> But that should be fairly orthogonal to reworking shmem (it might help
> a bit with reworking shmem though).

The reservation lock itself doesn't help shmem much, IMO. It should
help only in the context of dynamic dma-bufs, and today we don't have a
need for dynamic shmem dma-bufs.

You were talking about making the DRM locks consistent with the
dma-buf locks, so I thought that your main point in making use of
reservation locks for shmem was to prepare for the new locking scheme.

I wanted to try to specify the dma-buf locking convention for the
mapping operations because it's missing right now and it should affect
how DRM takes the reservation locks, but as I now see, this is not
easy to do.

Could you please point me at Christian's RFC patch? He posted too many
patches, I can't find it :) I'm curious to take a look.

-- 
Best regards,
Dmitry