Date: Thu, 15 Apr 2021 17:03:58 +0200
From: Peter Zijlstra
To: Ali Saidi
Cc: linux-kernel@vger.kernel.org, catalin.marinas@arm.com,
	steve.capper@arm.com, benh@kernel.crashing.org, stable@vger.kernel.org,
	Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng
Subject: Re: [PATCH] locking/qrwlock: Fix ordering in queued_write_lock_slowpath
References: <20210415142552.30916-1-alisaidi@amazon.com>
In-Reply-To: <20210415142552.30916-1-alisaidi@amazon.com>

On Thu, Apr 15, 2021 at 02:25:52PM +0000, Ali Saidi wrote:
> While this code is executed with the wait_lock held, a reader can
> acquire the lock without holding wait_lock. The writer side loops
> checking the value with the atomic_cond_read_acquire(), but only truly
> acquires the lock when the compare-and-exchange is completed
> successfully, which isn't ordered. The other atomic operations from
> this point are release-ordered, and thus reads after the lock
> acquisition can be completed before the lock is truly acquired, which
> violates the guarantees the lock should be making.
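The scenario being described is roughly the following (an illustrative
sketch, not from the patch; x stands for any data a reader writes while
holding the lock, and the reader is assumed to run in interrupt context
so it can take the lock without holding wait_lock):

	CPU0 (writer slowpath)              CPU1 (reader, in IRQ)
	----------------------              ---------------------
	atomic_cond_read_acquire()
	  reads cnts == _QW_WAITING
	                                    read_lock();    /* cnts += _QR_BIAS */
	                                    x = 1;
	                                    read_unlock();  /* cnts == _QW_WAITING again */
	atomic_cmpxchg_relaxed() succeeds,
	cnts == _QW_LOCKED
	r = x;  /* nothing orders this load after the relaxed
	           cmpxchg, so r == 0 is not forbidden */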
Should be per who? We explicitly do not order the lock acquire store.
qspinlock has the exact same issue.

If you look in the git history surrounding spin_is_locked(), you'll find
'lovely' things.

Basically, if you're doing crazy things with spin_is_locked() you get to
keep the pieces.

> Fixes: b519b56e378ee ("locking/qrwlock: Use atomic_cond_read_acquire() when spinning in qrwlock")
> Signed-off-by: Ali Saidi
> Cc: stable@vger.kernel.org
> ---
>  kernel/locking/qrwlock.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
> index 4786dd271b45..10770f6ac4d9 100644
> --- a/kernel/locking/qrwlock.c
> +++ b/kernel/locking/qrwlock.c
> @@ -73,8 +73,8 @@ void queued_write_lock_slowpath(struct qrwlock *lock)
>
>  	/* When no more readers or writers, set the locked flag */
>  	do {
> -		atomic_cond_read_acquire(&lock->cnts, VAL == _QW_WAITING);
> -	} while (atomic_cmpxchg_relaxed(&lock->cnts, _QW_WAITING,
> +		atomic_cond_read_relaxed(&lock->cnts, VAL == _QW_WAITING);
> +	} while (atomic_cmpxchg_acquire(&lock->cnts, _QW_WAITING,
>  					_QW_LOCKED) != _QW_WAITING);
>  unlock:
>  	arch_spin_unlock(&lock->wait_lock);

This doesn't make sense; there is no such thing as a store-acquire. What
you're doing here is moving the acquire from one load to the next. A
load we know will load the exact same value.

Also see Documentation/atomic_t.txt:

  {}_acquire: the R of the RMW (or atomic_read) is an ACQUIRE

If anything, this code wants to be written like so.

---

diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
index 4786dd271b45..22aeccc363ca 100644
--- a/kernel/locking/qrwlock.c
+++ b/kernel/locking/qrwlock.c
@@ -60,6 +60,8 @@ EXPORT_SYMBOL(queued_read_lock_slowpath);
  */
 void queued_write_lock_slowpath(struct qrwlock *lock)
 {
+	u32 cnt;
+
 	/* Put the writer into the wait queue */
 	arch_spin_lock(&lock->wait_lock);
 
@@ -73,9 +75,8 @@ void queued_write_lock_slowpath(struct qrwlock *lock)
 
 	/* When no more readers or writers, set the locked flag */
 	do {
-		atomic_cond_read_acquire(&lock->cnts, VAL == _QW_WAITING);
-	} while (atomic_cmpxchg_relaxed(&lock->cnts, _QW_WAITING,
-					_QW_LOCKED) != _QW_WAITING);
+		cnt = atomic_cond_read_acquire(&lock->cnts, VAL == _QW_WAITING);
+	} while (!atomic_try_cmpxchg_relaxed(&lock->cnts, &cnt, _QW_LOCKED));
 unlock:
 	arch_spin_unlock(&lock->wait_lock);
 }
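For reference, the try_cmpxchg() family used above differs from plain
atomic_cmpxchg() in that it takes a pointer to the expected value and
updates that value on failure, so the retry loop can reuse what it
already observed instead of hard-coding _QW_WAITING. A minimal sketch of
those semantics, modelled on the kernel's generic fallback (simplified
and renamed here; not the actual kernel code):

	/*
	 * Sketch of try_cmpxchg() semantics: store @new and return true if
	 * *@v holds *@old; otherwise update *@old with the value actually
	 * found, so the caller can retry without a separate reload.
	 */
	static inline bool sketch_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
	{
		int r = atomic_cmpxchg_relaxed(v, *old, new);

		if (r != *old) {
			*old = r;
			return false;
		}
		return true;
	}

Note that this form keeps the ACQUIRE on the atomic_cond_read_acquire()
load and leaves the exchange itself relaxed.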