Date: Wed, 28 Jul 2021 10:40:55 -0700
From: "Paul E. McKenney" <paulmck@kernel.org>
To: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, kernel-team@fb.com, mingo@kernel.org
Cc: stern@rowland.harvard.edu, parri.andrea@gmail.com, will@kernel.org, peterz@infradead.org, boqun.feng@gmail.com, npiggin@gmail.com, dhowells@redhat.com, j.alglave@ucl.ac.uk, luc.maranget@inria.fr, akiyks@gmail.com, Manfred Spraul
Subject: [PATCH v2 memory-model 2/4] tools/memory-model: Add example for heuristic lockless reads
Message-ID: <20210728174055.GA9718@paulmck-ThinkPad-P17-Gen-1>
Reply-To: paulmck@kernel.org
References: <20210721210726.GA828672@paulmck-ThinkPad-P17-Gen-1> <20210721211003.869892-2-paulmck@kernel.org>
In-Reply-To: <20210721211003.869892-2-paulmck@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

This commit adds example code for heuristic lockless reads, based
loosely on the sem_lock() and sem_unlock() functions.

[ paulmck: Apply Alan Stern and Manfred Spraul feedback. ]
Reported-by: Manfred Spraul
[ paulmck: Update per Manfred Spraul and Hillf Danton feedback. ]
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

diff --git a/tools/memory-model/Documentation/access-marking.txt b/tools/memory-model/Documentation/access-marking.txt
index 58bff26198767..d96fe20ed582a 100644
--- a/tools/memory-model/Documentation/access-marking.txt
+++ b/tools/memory-model/Documentation/access-marking.txt
@@ -319,6 +319,99 @@ of the ASSERT_EXCLUSIVE_WRITER() is to allow KCSAN to
 check for a buggy concurrent lockless write.
 
 
+Lock-Protected Writes With Heuristic Lockless Reads
+---------------------------------------------------
+
+For another example, suppose that the code can normally make use of
+a per-data-structure lock, but there are times when a global lock
+is required.  These times are indicated via a global flag.  The code
+might look as follows, and is based loosely on nf_conntrack_lock(),
+nf_conntrack_all_lock(), and nf_conntrack_all_unlock():
+
+	bool global_flag;
+	DEFINE_SPINLOCK(global_lock);
+	struct foo {
+		spinlock_t f_lock;
+		int f_data;
+	};
+
+	/* All foo structures are in the following array. */
+	int nfoo;
+	struct foo *foo_array;
+
+	void do_something_locked(struct foo *fp)
+	{
+		/* This works even if data_race() returns nonsense. */
+		if (!data_race(global_flag)) {
+			spin_lock(&fp->f_lock);
+			if (!smp_load_acquire(&global_flag)) {
+				do_something(fp);
+				spin_unlock(&fp->f_lock);
+				return;
+			}
+			spin_unlock(&fp->f_lock);
+		}
+		spin_lock(&global_lock);
+		/* global_lock held, thus global flag cannot be set. */
+		spin_lock(&fp->f_lock);
+		spin_unlock(&global_lock);
+		/*
+		 * global_flag might be set here, but begin_global()
+		 * will wait for ->f_lock to be released.
+		 */
+		do_something(fp);
+		spin_unlock(&fp->f_lock);
+	}
+
+	void begin_global(void)
+	{
+		int i;
+
+		spin_lock(&global_lock);
+		WRITE_ONCE(global_flag, true);
+		for (i = 0; i < nfoo; i++) {
+			/*
+			 * Wait for pre-existing local locks.  One at
+			 * a time to avoid lockdep limitations.
+			 */
+			spin_lock(&foo_array[i].f_lock);
+			spin_unlock(&foo_array[i].f_lock);
+		}
+	}
+
+	void end_global(void)
+	{
+		smp_store_release(&global_flag, false);
+		spin_unlock(&global_lock);
+	}
+
+All code paths leading from the do_something_locked() function's first
+read from global_flag acquire a lock, so endless load fusing cannot
+happen.
+
+If the value read from global_flag is false, then global_flag is
+rechecked while holding ->f_lock, which, if global_flag is still false,
+prevents begin_global() from completing.  It is therefore safe to invoke
+do_something().
+
+Otherwise, if either value read from global_flag is true, then after
+global_lock is acquired global_flag must be false.  The acquisition of
+->f_lock will prevent any call to begin_global() from returning, which
+means that it is safe to release global_lock and invoke do_something().
+
+For this to work, only those foo structures in foo_array[] may be passed
+to do_something_locked().  The reason for this is that the synchronization
+with begin_global() relies on momentarily holding the lock of each and
+every foo structure.
+
+The smp_load_acquire() and smp_store_release() are required because
+changes to a foo structure between calls to begin_global() and
+end_global() are carried out without holding that structure's ->f_lock.
+The smp_load_acquire() and smp_store_release() ensure that the next
+invocation of do_something() from do_something_locked() will see those
+changes.
+
+
 Lockless Reads and Writes
 -------------------------