From: Christoph Müllner
Date: Mon, 12 Apr 2021 15:32:27 +0200
Subject: Re: [PATCH] riscv: locks: introduce ticket-based spinlock implementation
To: Palmer Dabbelt
Cc: Anup Patel, Peter Zijlstra, Guo Ren, linux-riscv, Linux Kernel Mailing List, Guo Ren, catalin.marinas@arm.com, will.deacon@arm.com, Arnd Bergmann
X-Mailing-List: linux-kernel@vger.kernel.org

On Sun, Apr 11, 2021 at 11:11 PM Palmer Dabbelt wrote:
>
> On Wed, 24 Mar 2021 05:53:51 PDT (-0700), anup@brainfault.org wrote:
> > On Wed, Mar 24, 2021 at 6:08 PM Peter Zijlstra wrote:
> >>
> >> On Wed, Mar 24, 2021 at 05:58:58PM +0530, Anup Patel wrote:
> >> > On Wed, Mar 24, 2021 at 3:45 PM wrote:
> >> > >
> >> > > From: Guo Ren
> >> > >
> >> > > This patch introduces a ticket lock implementation for riscv, along the
> >> > > same lines as the implementation for arch/arm & arch/csky.
> >> > >
> >> > > Signed-off-by: Guo Ren
> >> > > Cc: Catalin Marinas
> >> > > Cc: Will Deacon
> >> > > Cc: Peter Zijlstra
> >> > > Cc: Palmer Dabbelt
> >> > > Cc: Anup Patel
> >> > > Cc: Arnd Bergmann
> >> > > ---
> >> > >  arch/riscv/Kconfig                      |   1 +
> >> > >  arch/riscv/include/asm/Kbuild           |   1 +
> >> > >  arch/riscv/include/asm/spinlock.h       | 158 ++++++++++++--------------
> >> > >  arch/riscv/include/asm/spinlock_types.h |  19 ++--
> >> >
> >> > NACK from my side.
> >> >
> >> > Linux ARM64 has moved away from ticket spinlocks to qspinlock.
> >> >
> >> > We should go directly for qspinlock.
> >>
> >> I think it is a sensible intermediate step, even if you want to go
> >> qspinlock. Ticket locks are more or less trivial and get you fairness
> >> and all that goodness without the mind-bending complexity of qspinlock.
> >>
> >> Once you have the ticket lock implementation solid (and qrwlock) and
> >> everything, *then* start to carefully look at qspinlock.
> >
> > I do understand qspinlocks are relatively complex, but the best thing
> > about qspinlock is that it tries to ensure each CPU spins on its own
> > location.
> >
> > Instead of adding a ticket spinlock now and later replacing it with
> > qspinlock, it is better to explore qspinlock straight away, hence my
> > NACK.
> >
> >>
> >> Now, arguably arm64 did the heavy lifting of making qspinlock good on
> >> weak architectures, but if you want to do it right, you still have to
> >> analyze the whole thing for your own architecture.
> >
> > Most of the RISC-V implementations are weakly memory ordered, so it
> > makes more sense to explore qspinlock first.
>
> I know I'm somewhat late to the party here. I talked with Will (and
> to a lesser extent Peter) about this a week or two ago and it seems the
> best way to go here is to start with ticket locks. They're simpler, and
> in Arm land they performed better until we got to the larger systems.
> Given that we don't have any high-performance implementations of the
> RISC-V memory model (and likely won't any time soon) it's hard to reason
> about the performance of anything like this, but at a bare minimum
> having fair locks is a pretty big positive, and ticket locks should have
> very little overhead while providing fairness.
>
> IMO the decision between ticket and queueing locks is really more of a
> property of the hardware/workload than the ISA, though there are of
> course some pretty deep ISA dependencies that can make one saner than
> the other. It seems best to me to just allow users to pick their own
> flavor of locks, and at least PPC is already doing that. I threw
> together a quick asm-generic ticket lock that can be selected at compile
> time, but I want to spend some more time playing with the other
> architectures before sending anything out.

This discussion came up again a few weeks ago because I had been stumbling
over the test-and-set implementation and was wondering why nobody had cared
to improve it yet. Then I saw that there have been a few attempts so far,
but they did not land. So I brought this up in RVI's platform group meeting,
and the attendees showed strong interest in getting at least fairness. I
assume Guo sent out his new patchset as a reaction to this call (1 or 2
days later).

We have the same situation in OpenSBI, where we've agreed (with Anup) to go
for a ticket lock implementation. A series for that can be found here (the
implementation was tested in the kernel):
http://lists.infradead.org/pipermail/opensbi/2021-April/000789.html

In the mentioned RVI call, I asked whether ticket locks or MCF locks are
preferred, and the feedback was to go for qspinlock/qrwlock. One good
argument for doing so is that we would not have to maintain an RV-specific
implementation, but could use a well-tested in-kernel mechanism. The
feedback in the call is also aligned with the previous attempts to enable
MCF locks on RV.
However, the kernel's implementation requires sub-word atomics, and that is
where we are stuck. The discussion last week was about changing the generic
kernel code to loosen its requirements (not accepted because of performance
regressions on e.g. x86) and about whether RV can actually provide sub-word
atomics in the form of LL/SC loops (yes, that's possible). Providing a
sub-word xchg() can be done within a couple of hours (some solutions are
already on the list), but that was not enough, going back to Michael's
initial patchset (e.g. Christoph Hellwig asked back then for moving the
arch-independent parts into lib, which is a good idea given that other
archs do the same). So I expect this might require some more time until
there is a solution that's accepted by a broad range of maintainers.

I've been working on a new MCF-lock series last week. It is working, but I
did not publish it yet, because I wanted to go through the 130+ emails on
the linux-riscv list to check for overlooked review comments and to
validate the patch authors. You can find the current state here:
https://github.com/cmuellner/linux/pull/new/riscv-spinlocks

So, if you insist on ticket locks, then let's coordinate who will do that
and how it will be tested (RV32/RV64, QEMU vs. real hw).