Date: Fri, 23 Jul 2021 15:16:42 +0800
From: Boqun Feng
To: Desmond Cheong Zhi Xi, LKML, Peter Zijlstra, VMware Graphics, Zack Rusin,
    Dave Airlie, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
    dri-devel, intel-gfx, Shuah Khan, Greg KH,
    linux-kernel-mentees@lists.linuxfoundation.org
Subject: Re: [PATCH 1/3] drm: use the lookup lock in drm_is_current_master
References: <20210722092929.244629-1-desmondcheongzx@gmail.com>
 <20210722092929.244629-2-desmondcheongzx@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jul 22, 2021 at 09:02:41PM +0200, Daniel Vetter wrote:
> On Thu, Jul 22, 2021 at 6:00 PM Boqun Feng wrote:
> >
> > On Thu, Jul 22, 2021 at 12:38:10PM +0200, Daniel Vetter wrote:
> > > On Thu, Jul 22, 2021 at 05:29:27PM +0800, Desmond Cheong Zhi Xi wrote:
> > > > Inside drm_is_current_master, using the outer drm_device.master_mutex
> > > > to protect reads of drm_file.master makes the function prone to creating
> > > > lock hierarchy inversions. Instead, we can use the
> > > > drm_file.master_lookup_lock that sits at the bottom of the lock
> > > > hierarchy.
> > > >
> > > > Reported-by: Daniel Vetter
> > > > Signed-off-by: Desmond Cheong Zhi Xi
> > > > ---
> > > >  drivers/gpu/drm/drm_auth.c | 9 +++++----
> > > >  1 file changed, 5 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/drm_auth.c b/drivers/gpu/drm/drm_auth.c
> > > > index f00354bec3fb..9c24b8cc8e36 100644
> > > > --- a/drivers/gpu/drm/drm_auth.c
> > > > +++ b/drivers/gpu/drm/drm_auth.c
> > > > @@ -63,8 +63,9 @@
> > > >
> > > >  static bool drm_is_current_master_locked(struct drm_file *fpriv)
> > > >  {
> > > > -	lockdep_assert_held_once(&fpriv->minor->dev->master_mutex);
> > > > -
> > > > +	/* Either drm_device.master_mutex or drm_file.master_lookup_lock
> > > > +	 * should be held here.
> > > > +	 */
> > >
> > > Disappointing that lockdep can't check "or" conditions for us; a
> > > lockdep_assert_held_either would be really neat in some cases.
> > >
> >
> > The implementation is not hard, but I don't understand the usage. For
> > example, say we have a global variable x, two locks L1 and L2, and
> > the function
> >
> > 	void do_something_to_x(void)
> > 	{
> > 		lockdep_assert_held_either(L1, L2);
> > 		x++;
> > 	}
> >
> > and two call sites:
> >
> > 	void f(void)
> > 	{
> > 		lock(L1);
> > 		do_something_to_x();
> > 		unlock(L1);
> > 	}
> >
> > 	void g(void)
> > 	{
> > 		lock(L2);
> > 		do_something_to_x();
> > 		unlock(L2);
> > 	}
> >
> > Wouldn't it be racy if f() and g() were called by two threads at the
> > same time? Usually I would expect there to exist a third synchronization
> > mechanism (say M) which serializes the calls to f() and g(), and we
> > would put M in the lockdep_assert_held() check inside do_something_to_x(),
> > like:
> >
> > 	void do_something_to_x(void)
> > 	{
> > 		lockdep_assert_held_once(M);
> > 		x++;
> > 	}
> >
> > But of course, M may not be a lock, so we cannot put the assert there.
> >
> > My cscope failed to find ->master_lookup_lock in -rc2, and it seems it's not
> > introduced in this patchset either. Could you point me to the branch this
> > patchset is based on, so that I can understand this better, and maybe
> > come up with a solution? Thanks ;-)

The use case is essentially 2 nesting locks, and only the innermost is
used to update a field. So when you only read this field, it's safe if
either of these two locks is held. Essentially this is a read/write lock
type of thing, except for various reasons the two locks might not be of
the same type (like here, where the write lock is a mutex but the read
lock is a spinlock).

It's a bit like the rcu_dereference() macro, where it's ok to either be in
an rcu_read_lock() section or to hold the relevant lock that's used to
update the value. We do _not_ have two different locks that allow writing
to the same X.

Does that make it clearer what the use case is here?

In an example:

	void *interesting_pointer;

	do_update_interesting_pointer()
	{
		mutex_lock(A);
		/* do more stuff to prepare things */
		spin_lock(B);
		interesting_pointer = new_value;
		spin_unlock(B);
		mutex_unlock(A);
	}

	read_interesting_thing_locked()
	{
		lockdep_assert_held_either(A, B);

		return interesting_pointer->thing;
	}

	read_interesting_thing()
	{
		int thing;

		spin_lock(B);
		thing = interesting_pointer->thing;
		spin_unlock(B);

		return thing;
	}

The spinlock might also need to be irqsafe here if this can be called
from irq context.

Makes sense. So we'd better also provide lockdep_assert_held_both(), I
think, to use at the update side, something as below:

/*
 * lockdep_assert_held_{both,either}().
 *
 * Sometimes users can use a combination of two locks to
 * implement a rwlock-like lock. For example, say we have
 * locks L1 and L2, and we only allow updates when both locks
 * are held:
 *
 *	update()
 *	{
 *		lockdep_assert_held_both(L1, L2);
 *		x++; // update x
 *	}
 *
 * while for read-only accesses, either lock suffices (since
 * holding either lock means others cannot hold both, so readers
 * are serialized with the updaters):
 *
 *	read()
 *	{
 *		lockdep_assert_held_either(L1, L2);
 *		r = x; // read x
 *	}
 */

#define lockdep_assert_held_both(l1, l2) do {			\
		WARN_ON_ONCE(debug_locks &&			\
			     (!lockdep_is_held(l1) ||		\
			      !lockdep_is_held(l2)));		\
	} while (0)

#define lockdep_assert_held_either(l1, l2) do {			\
		WARN_ON_ONCE(debug_locks &&			\
			     (!lockdep_is_held(l1) &&		\
			      !lockdep_is_held(l2)));		\
	} while (0)

I still need some time to think this through (e.g. whether this is the
best implementation).

Regards,
Boqun

> Cheers, Daniel
>
> > Regards,
> > Boqun
> >
> > > Adding lockdep folks, maybe they have ideas.
> > >
> > > On the patch:
> > >
> > > Reviewed-by: Daniel Vetter
> > >
> > > >  	return fpriv->is_master && drm_lease_owner(fpriv->master) == fpriv->minor->dev->master;
> > > >  }
> > > >
> > > > @@ -82,9 +83,9 @@ bool drm_is_current_master(struct drm_file *fpriv)
> > > >  {
> > > >  	bool ret;
> > > >
> > > > -	mutex_lock(&fpriv->minor->dev->master_mutex);
> > > > +	spin_lock(&fpriv->master_lookup_lock);
> > > >  	ret = drm_is_current_master_locked(fpriv);
> > > > -	mutex_unlock(&fpriv->minor->dev->master_mutex);
> > > > +	spin_unlock(&fpriv->master_lookup_lock);
> > > >
> > > >  	return ret;
> > > >  }
> > > > --
> > > > 2.25.1
> > > >
> > >
> > > --
> > > Daniel Vetter
> > > Software Engineer, Intel Corporation
> > > http://blog.ffwll.ch
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch