Date: Tue, 9 Apr 2024 17:34:35 +0100
From: Mark Rutland
To: Uros Bizjak
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin",
	Peter Zijlstra
Subject: Re: [PATCH 2/6] locking/atomic/x86: Rewrite x86_32
 arch_atomic64_{,fetch}_{and,or,xor}() functions
References: <20240409100503.274629-1-ubizjak@gmail.com>
 <20240409100503.274629-3-ubizjak@gmail.com>

On Tue, Apr 09, 2024 at 02:50:19PM +0200, Uros Bizjak wrote:
> On Tue, Apr 9, 2024 at 2:03 PM Uros Bizjak wrote:
> >
> > On Tue, Apr 9, 2024 at 1:13 PM Mark Rutland wrote:
> >
> > > > 	static __always_inline void arch_atomic64_and(s64 i, atomic64_t *v)
> > > > 	{
> > > > -	s64 old, c = 0;
> > > > +	s64 val = __READ_ONCE(v->counter);
> > >
> > > I reckon it's worth placing this in a helper with a big comment, e.g.
> > >
> > > static __always_inline s64 arch_atomic64_read_tearable(atomic64_t *v)
> > > {
> > > 	/*
> > > 	 * TODO: explain that this might be torn, but it occurs *once*, and can
> > > 	 * safely be consumed by atomic64_try_cmpxchg().
> > > 	 *
> > > 	 * TODO: point to the existing commentary regarding why we use
> > > 	 * __READ_ONCE() for KASAN reasons.
> > > 	 */
> > > 	return __READ_ONCE(v->counter);
> > > }
> > >
> > > ... and then use that in each of the instances below.
> > >
> > > That way the subtlety is clearly documented, and it'd more clearly align
> > > with the x86_64 versions.
> >
> > This is an excellent idea. The separate definitions need to be placed
> > in atomic64_32.h and atomic64_64.h (due to the use of the atomic64_t
> > typedef), but it will allow the same unification of functions between
> > x86_32 and x86_64 as the approach with __READ_ONCE().
>
> Something like this:
>
> --cut here--
> /*
>  * This function is intended to preload the value from the atomic64_t
>  * location in a non-atomic way. The read might be torn, but can
>  * safely be consumed by the compare-and-swap loop.
>  */
> static __always_inline s64 arch_atomic64_read_tearable(atomic64_t *v)
> {
> 	/*
> 	 * See the comment in arch_atomic_read() on why we use
> 	 * __READ_ONCE() instead of READ_ONCE_NOCHECK() here.
> 	 */
> 	return __READ_ONCE(v->counter);
> }
> --cut here--
>
> Thanks,
> Uros.

Yeah, something of that shape.

Having thought for a bit longer, it's probably better to use '_torn' rather
than '_tearable' (i.e. name this arch_atomic64_read_torn()).

It'd be nice if we could specify the usage restrictions a bit more clearly,
since this can only be used for compare-and-swap loops that implement
unconditional atomics (e.g. arch_atomic64_and(), but not
arch_atomic64_add_unless()).

So I'd suggest:

/*
 * Read an atomic64_t non-atomically.
 *
 * This is intended to be used in cases where a subsequent atomic operation
 * will handle the torn value, and can be used to prime the first iteration of
 * unconditional try_cmpxchg() loops, e.g.
 *
 *	s64 val = arch_atomic64_read_torn(v);
 *	do { } while (!arch_atomic64_try_cmpxchg(v, &val, val OP i));
 *
 * This is NOT safe to use where the value is not always checked by a
 * subsequent atomic operation, such as in conditional try_cmpxchg() loops that
 * can break before the atomic, e.g.
 *
 *	s64 val = arch_atomic64_read_torn(v);
 *	do {
 *		if (condition(val))
 *			break;
 *	} while (!arch_atomic64_try_cmpxchg(v, &val, val OP i));
 */
static __always_inline s64 arch_atomic64_read_torn(atomic64_t *v)
{
	/* See comment in arch_atomic_read() */
	return __READ_ONCE(v->counter);
}

Mark.