Date: Wed, 8 Sep 2021 11:34:49 -0700
From: Davidlohr Bueso
To: Peter Zijlstra
Cc: Boqun Feng, Thomas Gleixner, Ingo Molnar, Juri Lelli, Steven Rostedt,
    Will Deacon, Waiman Long, Sebastian Andrzej Siewior, Mike Galbraith,
    Daniel Bristot de Oliveira, LKML
Subject: Re: [RFC] locking: rwbase: Take care of
ordering guarantee for fastpath reader
Message-ID: <20210908183449.hidfjw4rm65eesww@offworld>
References: <20210901150627.620830-1-boqun.feng@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, 08 Sep 2021, Peter Zijlstra wrote:

>Subject: locking/rwbase: Take care of ordering guarantee for fastpath reader
>From: Boqun Feng
>Date: Wed, 1 Sep 2021 23:06:27 +0800
>
>From: Boqun Feng
>
>Readers of rwbase can lock and unlock without taking any inner lock. When
>that happens, we need the ordering provided by atomic operations to
>satisfy the ordering semantics of lock/unlock. Without that, consider
>the following case:
>
>    { X = 0 initially }
>
>    CPU 0                              CPU 1
>    =====                              =====
>                                       rt_write_lock();
>                                       X = 1
>                                       rt_write_unlock():
>                                         atomic_add(READER_BIAS - WRITER_BIAS, ->readers);
>                                         // ->readers is READER_BIAS.
>    rt_read_lock():
>      if ((r = atomic_read(->readers)) < 0) // True
>        atomic_try_cmpxchg(->readers, r, r + 1); // succeed.
>
>    r1 = X; // r1 may be 0, because nothing prevents the reordering
>            // of "X=1" and atomic_add() on CPU 1.
>
>Therefore audit every usage of atomic operations that may happen in a
>fast path, and add the necessary barriers.
>
>Signed-off-by: Boqun Feng
>Signed-off-by: Peter Zijlstra (Intel)
>Link: https://lkml.kernel.org/r/20210901150627.620830-1-boqun.feng@gmail.com

With a few comments below, feel free to add my:

Reviewed-by: Davidlohr Bueso

>---
> kernel/locking/rwbase_rt.c | 41 ++++++++++++++++++++++++++++++++++++-----
> 1 file changed, 36 insertions(+), 5 deletions(-)
>
>--- a/kernel/locking/rwbase_rt.c
>+++ b/kernel/locking/rwbase_rt.c
>@@ -41,6 +41,12 @@
> * The risk of writer starvation is there, but the pathological use cases
> * which trigger it are not necessarily the typical RT workloads.
> *
>+ * Fast-path orderings:
>+ * The lock/unlock of readers can run in fast paths: lock and unlock are only
>+ * atomic ops, and there is no inner lock to provide ACQUIRE and RELEASE
>+ * semantics of rwbase_rt. Atomic ops then should be stronger than _acquire()
>+ * and _release() to provide necessary ordering guarantee.

This last part reads funky. The guarantees must be acquire/release or
stronger, not necessarily stronger than acquire/release.

...

>@@ -210,14 +224,23 @@ static int __sched rwbase_write_lock(str
> 	atomic_sub(READER_BIAS, &rwb->readers);
>
> 	raw_spin_lock_irqsave(&rtm->wait_lock, flags);
>+
>+	/* The below set_*_state() thingy implies smp_mb() to provide ACQUIRE */
>+	readers = atomic_read(&rwb->readers);
> 	/*
> 	 * set_current_state() for rw_semaphore
> 	 * current_save_and_set_rtlock_wait_state() for rwlock
> 	 */
> 	rwbase_set_and_save_current_state(state);
>
>-	/* Block until all readers have left the critical section. */
>-	for (; atomic_read(&rwb->readers);) {
>+	/*
>+	 * Block until all readers have left the critical section.
>+	 *
>+	 * _acqurie() is needed in case that the reader side runs in the fast
 	   ^acquire

Thanks,
Davidlohr