Subject: [PATCH v4 1/7] asm-generic: ticket-lock: New generic ticket-based spinlock
From: Palmer Dabbelt <palmer@rivosinc.com>
To: Arnd Bergmann
Cc: guoren@kernel.org, peterz@infradead.org, mingo@redhat.com,
    Will Deacon, longman@redhat.com, boqun.feng@gmail.com,
    jonas@southpole.se, stefan.kristiansson@saunalahti.fi, shorne@gmail.com,
    Paul Walmsley, Palmer Dabbelt, aou@eecs.berkeley.edu, Arnd Bergmann,
    Greg KH, sudipm.mukherjee@gmail.com, macro@orcam.me.uk, jszhang@kernel.org,
    linux-csky@vger.kernel.org, linux-kernel@vger.kernel.org,
    openrisc@lists.librecores.org, linux-riscv@lists.infradead.org,
    linux-arch@vger.kernel.org
Date: Sat, 30 Apr 2022 08:36:20 -0700
Message-Id: <20220430153626.30660-2-palmer@rivosinc.com>
In-Reply-To: <20220430153626.30660-1-palmer@rivosinc.com>
References: <20220430153626.30660-1-palmer@rivosinc.com>

From: Peter Zijlstra <peterz@infradead.org>

This is a simple, fair spinlock.  Specifically, it doesn't have all the
subtle memory model dependencies that qspinlock has, which makes it more
suitable for simple systems as it is more likely to be correct.  It is
implemented entirely in terms of standard atomics and thus works fine
without any arch-specific code.
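As a user-space illustration of the ticket scheme described above (not part of this patch: the names `ticketlock_t`, `tl_lock`, and `tl_unlock` are made up, C11 atomics stand in for the kernel's, and unlock uses a plain fetch-add where the kernel uses a sub-word `smp_store_release()`):

```c
#include <stdatomic.h>
#include <stdint.h>

/* The whole lock is one 32-bit atomic word: the high 16 bits hold the
 * next ticket to hand out, the low 16 bits the ticket now being served. */
typedef _Atomic uint32_t ticketlock_t;

static void tl_lock(ticketlock_t *lock)
{
	/* Take a ticket with a single RMW; arrivals are therefore served
	 * strictly FIFO, which is the fairness property. */
	uint32_t val = atomic_fetch_add_explicit(lock, 1u << 16,
						 memory_order_acquire);
	uint16_t ticket = (uint16_t)(val >> 16);

	/* Spin until the "now serving" half reaches our ticket. */
	while ((uint16_t)atomic_load_explicit(lock, memory_order_acquire) != ticket)
		;
}

static void tl_unlock(ticketlock_t *lock)
{
	/* Bump "now serving" to admit the next waiter.  The kernel patch
	 * instead does a 16-bit smp_store_release() on the low half, which
	 * also truncates cleanly when the served ticket wraps at 0xffff. */
	atomic_fetch_add_explicit(lock, 1, memory_order_release);
}
```

The single fetch-add in `tl_lock()` is the crux: every arriving CPU gets a distinct ticket in one atomic step, so no CPU can be starved the way it can with a test-and-set lock.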

This replaces the existing asm-generic/spinlock.h, which just errored
out on SMP systems.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
---
 include/asm-generic/spinlock.h       | 94 +++++++++++++++++++++++++---
 include/asm-generic/spinlock_types.h | 17 +++++
 2 files changed, 104 insertions(+), 7 deletions(-)
 create mode 100644 include/asm-generic/spinlock_types.h

diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
index adaf6acab172..fdfebcb050f4 100644
--- a/include/asm-generic/spinlock.h
+++ b/include/asm-generic/spinlock.h
@@ -1,12 +1,92 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __ASM_GENERIC_SPINLOCK_H
-#define __ASM_GENERIC_SPINLOCK_H
+
 /*
- * You need to implement asm/spinlock.h for SMP support. The generic
- * version does not handle SMP.
+ * 'Generic' ticket-lock implementation.
+ *
+ * It relies on atomic_fetch_add() having well defined forward progress
+ * guarantees under contention. If your architecture cannot provide this, stick
+ * to a test-and-set lock.
+ *
+ * It also relies on atomic_fetch_add() being safe vs smp_store_release() on a
+ * sub-word of the value. This is generally true for anything LL/SC although
+ * you'd be hard pressed to find anything useful in architecture specifications
+ * about this. If your architecture cannot do this you might be better off with
+ * a test-and-set.
+ *
+ * It further assumes atomic_*_release() + atomic_*_acquire() is RCpc and hence
+ * uses atomic_fetch_add() which is RCsc to create an RCsc hot path, along with
+ * a full fence after the spin to upgrade the otherwise-RCpc
+ * atomic_cond_read_acquire().
+ *
+ * The implementation uses smp_cond_load_acquire() to spin, so if the
+ * architecture has WFE like instructions to sleep instead of poll for word
+ * modifications be sure to implement that (see ARM64 for example).
+ *
  */
-#ifdef CONFIG_SMP
-#error need an architecture specific asm/spinlock.h
-#endif
+
+#ifndef __ASM_GENERIC_SPINLOCK_H
+#define __ASM_GENERIC_SPINLOCK_H
+
+#include <linux/atomic.h>
+#include <asm-generic/spinlock_types.h>
+
+static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
+{
+	u32 val = atomic_fetch_add(1<<16, lock);
+	u16 ticket = val >> 16;
+
+	if (ticket == (u16)val)
+		return;
+
+	/*
+	 * atomic_cond_read_acquire() is RCpc, but rather than defining a
+	 * custom cond_read_rcsc() here we just emit a full fence.  We only
+	 * need the prior reads before subsequent writes ordering from
+	 * smp_mb(), but as atomic_cond_read_acquire() just emits reads and we
+	 * have no outstanding writes due to the atomic_fetch_add() the extra
+	 * orderings are free.
+	 */
+	atomic_cond_read_acquire(lock, ticket == (u16)VAL);
+	smp_mb();
+}
+
+static __always_inline bool arch_spin_trylock(arch_spinlock_t *lock)
+{
+	u32 old = atomic_read(lock);
+
+	if ((old >> 16) != (old & 0xffff))
+		return false;
+
+	return atomic_try_cmpxchg(lock, &old, old + (1<<16)); /* SC, for RCsc */
+}
+
+static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
+{
+	u16 *ptr = (u16 *)lock + IS_ENABLED(CONFIG_CPU_BIG_ENDIAN);
+	u32 val = atomic_read(lock);
+
+	smp_store_release(ptr, (u16)val + 1);
+}
+
+static __always_inline int arch_spin_is_locked(arch_spinlock_t *lock)
+{
+	u32 val = atomic_read(lock);
+
+	return ((val >> 16) != (val & 0xffff));
+}
+
+static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
+{
+	u32 val = atomic_read(lock);
+
+	return (s16)((val >> 16) - (val & 0xffff)) > 1;
+}
+
+static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
+{
+	return !arch_spin_is_locked(&lock);
+}
+
+#include <asm/qrwlock.h>
 #endif /* __ASM_GENERIC_SPINLOCK_H */

diff --git a/include/asm-generic/spinlock_types.h b/include/asm-generic/spinlock_types.h
new file mode 100644
index 000000000000..8962bb730945
--- /dev/null
+++ b/include/asm-generic/spinlock_types.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_GENERIC_SPINLOCK_TYPES_H
+#define __ASM_GENERIC_SPINLOCK_TYPES_H
+
+#include <linux/types.h>
+typedef atomic_t arch_spinlock_t;
+
+/*
+ * qrwlock_types depends on arch_spinlock_t, so we must typedef that before the
+ * include.
+ */
+#include <asm/qrwlock_types.h>
+
+#define __ARCH_SPIN_LOCK_UNLOCKED	ATOMIC_INIT(0)
+
+#endif /* __ASM_GENERIC_SPINLOCK_TYPES_H */
-- 
2.34.1
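
As a quick sanity check on the lock-word encoding the patch uses, its `arch_spin_is_locked()` and `arch_spin_is_contended()` expressions can be exercised stand-alone on sample values (user-space copies with hypothetical names `is_locked`/`is_contended`, with `u32`/`s16` replaced by stdint types):

```c
#include <stdint.h>

/* Stand-alone copies of the patch's lock-word predicates: the high half
 * of the word is the next ticket to hand out, the low half the ticket
 * now being served.  The lock is held whenever the halves differ. */
static int is_locked(uint32_t val)
{
	return (val >> 16) != (val & 0xffff);
}

static int is_contended(uint32_t val)
{
	/* Signed 16-bit difference, so the comparison still works when
	 * the ticket counters wrap past 0xffff. */
	return (int16_t)((val >> 16) - (val & 0xffff)) > 1;
}
```

A difference of exactly 1 means one holder and no waiters, so contended requires a difference greater than 1; the signed cast is what keeps that test correct across wrap-around.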