From: Guo Ren
Date: Tue, 6 Apr 2021 00:40:26 +0800
Subject: Re: [PATCH v4 3/4] locking/qspinlock: Add ARCH_USE_QUEUED_SPINLOCKS_XCHG32
To: Peter Zijlstra
Cc: linux-riscv, Linux Kernel Mailing List, linux-csky@vger.kernel.org,
	linux-arch, Guo Ren, Will Deacon, Ingo Molnar, Waiman Long,
	Arnd Bergmann, Anup Patel
References: <1616868399-82848-1-git-send-email-guoren@kernel.org>
	<1616868399-82848-4-git-send-email-guoren@kernel.org>
List: linux-kernel@vger.kernel.org

On Wed, Mar 31, 2021 at 12:08 AM Peter Zijlstra wrote:
>
> On Tue, Mar 30, 2021 at 11:13:55AM +0800, Guo Ren wrote:
> > On Mon, Mar 29, 2021 at 8:50 PM Peter Zijlstra wrote:
> > >
> > > On Mon, Mar 29, 2021 at 08:01:41PM +0800, Guo Ren wrote:
> > > > u32 a = 0x55aa66bb;
> > > > u16 *ptr = &a;
> > > >
> > > > CPU0                      CPU1
> > > > =========                 =========
> > > > xchg16(ptr, new)          while(1)
> > > >                               WRITE_ONCE(*(ptr + 1), x);
> > > >
> > > > When we use lr.w/sc.w to implement xchg16, it'll cause CPU0 to
> > > > deadlock.
> > >
> > > Then I think your LL/SC is broken.
> > >
> > > That also means you really don't want to build super complex locking
> > > primitives on top, because that live-lock will percolate through.
> >
> > Do you mean the below implementation has a live-lock risk?
> >
> > +static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
> > +{
> > +	u32 old, new, val = atomic_read(&lock->val);
> > +
> > +	for (;;) {
> > +		new = (val & _Q_LOCKED_PENDING_MASK) | tail;
> > +		old = atomic_cmpxchg(&lock->val, val, new);
> > +		if (old == val)
> > +			break;
> > +
> > +		val = old;
> > +	}
> > +	return old;
> > +}
>
> That entirely depends on the architecture (and the cmpxchg() implementation).
>
> There are a number of cases:
>
>  * architecture has a cmpxchg() instruction (x86, s390, sparc, etc.).
>
>    - architecture provides fwd progress (x86)
>    - architecture requires backoff for progress (sparc)
>
>  * architecture does not have cmpxchg, and implements it using LL/SC.
>
>    and here things get *really* interesting, because while an
>    architecture can have LL/SC fwd progress, that does not translate
>    into cmpxchg() also having the same guarantees, and all bets are off.

It seems the RISC-V spec doesn't mandate an LR/SC forward-progress
guarantee in that situation; ref:

riscv-spec, section 8.3 "Eventual Success of Store-Conditional
Instructions":

"As a consequence of the eventuality guarantee, if some harts in an
execution environment are executing constrained LR/SC loops, and no
other harts or devices in the execution environment execute an
unconditional store or AMO to that reservation set, then at least one
hart will eventually exit its constrained LR/SC loop. *** By contrast,
if other harts or devices continue to write to that reservation set,
it ***is not guaranteed*** that any hart will exit its LR/SC loop. ***"

> The real bummer is that C can do cmpxchg(), but there is no way it can
> do LL/SC. And even if we'd teach C how to do LL/SC, it couldn't be
> generic because architectures lacking it can't emulate it using
> cmpxchg() (there's a fun class of bugs there).
>
> So while the above code might be the best we can do in generic code,
> it's really up to the architecture to make it work.

--
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/
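
For context, a minimal, self-contained sketch of the pattern the thread is
discussing: a 16-bit exchange emulated with a 32-bit compare-and-swap on the
containing word. It uses C11 atomics rather than the kernel's cmpxchg()
helpers so it builds standalone, and the xchg16_emulated() name is purely
illustrative, not the kernel's implementation. The CAS retry loop is exactly
where the live-lock window opens if another hart keeps storing to the
adjacent halfword.

#include <stdatomic.h>
#include <stdint.h>

/*
 * Illustrative only: emulate a 16-bit exchange with a 32-bit CAS on the
 * word that contains the halfword.  'shift' is 0 or 16 depending on which
 * halfword is targeted.
 */
static uint16_t xchg16_emulated(_Atomic uint32_t *word, unsigned int shift,
				uint16_t newval)
{
	uint32_t mask = (uint32_t)0xffff << shift;
	uint32_t old = atomic_load_explicit(word, memory_order_relaxed);

	for (;;) {
		uint32_t next = (old & ~mask) | ((uint32_t)newval << shift);

		/*
		 * The CAS only succeeds if the whole 32-bit word is
		 * unchanged.  Stores to the *other* halfword by another CPU
		 * also make it fail, so under a constant stream of such
		 * stores this loop may never exit: the live-lock described
		 * above.
		 */
		if (atomic_compare_exchange_weak_explicit(word, &old, next,
							  memory_order_acq_rel,
							  memory_order_relaxed))
			return (uint16_t)((old & mask) >> shift);
		/* on failure, 'old' has been reloaded with the current value */
	}
}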