Date: Tue, 2 Oct 2018 14:22:09 +0100
From: Will Deacon
To: Andrea Parri
Cc: Peter Zijlstra, mingo@kernel.org, linux-kernel@vger.kernel.org,
    longman@redhat.com, tglx@linutronix.de
Subject: Re: [RFC][PATCH 3/3] locking/qspinlock: Optimize for x86
Message-ID: <20181002132208.GF16422@arm.com>
References: <20180926110117.405325143@infradead.org>
 <20180926111307.513429499@infradead.org>
 <20181001171700.GC13918@arm.com>
 <20181002123152.GA10055@andrea>
In-Reply-To: <20181002123152.GA10055@andrea>

On Tue, Oct 02, 2018 at 02:31:52PM +0200, Andrea Parri wrote:
> > consider this scenario with your patch:
> >
> > 1. CPU0 sees a locked val, and is about to do your xchg_relaxed() to
> >    set pending.
> >
> > 2. CPU1 comes in and sets pending, spins on locked.
> >
> > 3. CPU2 sees a pending and locked val, and is about to enter the head
> >    of the waitqueue (i.e. it's right before xchg_tail()).
> >
> > 4. The lock holder unlock()s, CPU1 takes the lock() and then unlock()s
> >    it, so pending and locked are now 0.
> >
> > 5. CPU0 sets pending and reads back zeroes for the other fields.
> >
> > 6. CPU0 clears pending and sets locked -- it now has the lock.
> >
> > 7. CPU2 updates tail, sees it's at the head of the waitqueue and spins
> >    for locked and pending to go clear. However, it reads a stale value
> >    from step (4) and attempts the atomic_try_cmpxchg() to take the lock.
> >
> > 8. CPU2 will fail the cmpxchg(), but then go ahead and set locked. At
> >    this point we're hosed, because both CPU2 and CPU0 have the lock.
>
> Thanks for pointing this out. I am wondering: can't we have a similar
> scenario with the current code (i.e. without these patches)? What
> prevents the scenario reported below, following Peter's diagram, from
> happening?

The xchg_tail() in step (7) reads from the fetch_or_acquire() in step
(5), so I don't think we can see a stale value in the subsequent
(overlapping) acquire load. (A sketch contrasting the two pending-set
variants follows the quoted diagram below.)

Will

> CPU0                           CPU1                    CPU2                    CPU3
>
> 0)                                                                             lock
>                                                                                  trylock -> (0,0,1)
> 1) lock
>      trylock /* fail */
>
> 2)                             lock
>                                  trylock /* fail */
>                                  fetch_or_acquire -> (0,1,1)
>                                  wait-locked
>
> 3)                                                     lock
>                                                          trylock /* fail */
>                                                          goto queue
>
> 4)                                                                             unlock -> (0,1,0)
>                                clr_pnd_set_lck -> (0,0,1)
>                                unlock -> (0,0,0)
>
> 5) fetch_or_acquire -> (0,1,0)
> 6) clr_pnd_set_lck -> (0,0,1)
> 7)                                                     xchg_tail -> (n,0,1)
>                                                          load_acquire <- (n,0,0) (from-4)
> 8)                                                     cmpxchg /* fail */
>                                                          set_locked()
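For readers following along, the difference between the two pending-set
paths can be made concrete with a small user-space sketch. This is NOT
the kernel's qspinlock code: the struct layout, the C11 atomics, and the
helper names (pending_fetch_or_acquire(), pending_xchg_relaxed()) are
illustrative assumptions, and the byte packing shown assumes a
little-endian (tail, pending, locked) layout.

/*
 * Minimal sketch of the two pending-set variants under discussion.
 * Field packing mirrors qspinlock's (tail, pending, locked) word on
 * little-endian; overlapping atomics of different widths are not
 * strictly defined by C11 and stand in for the kernel's arch-level
 * guarantees here.
 */
#include <stdatomic.h>
#include <stdint.h>

#define PENDING_BIT	(1u << 8)

struct qspinlock {
	union {
		_Atomic uint32_t val;		  /* whole lock word */
		struct {
			_Atomic uint8_t  locked;  /* bits  0-7       */
			_Atomic uint8_t  pending; /* bits  8-15      */
			_Atomic uint16_t tail;	  /* bits 16-31      */
		};
	};
};

/*
 * Current-code variant: one full-word RMW.  Because it writes the
 * whole word, a later RMW on overlapping bytes (e.g. xchg_tail())
 * must read from this store or from something coherence-after it --
 * the reads-from edge relied on in the reply above.
 */
static inline uint32_t pending_fetch_or_acquire(struct qspinlock *lock)
{
	return atomic_fetch_or_explicit(&lock->val, PENDING_BIT,
					memory_order_acquire);
}

/*
 * Patched variant (sketch): a byte-sized relaxed xchg on pending only,
 * followed by a separate acquire load of the rest of the word.  The
 * RMW no longer covers the tail/locked bytes, so the load can return
 * stale values for them -- the window exploited in steps (5)-(8).
 */
static inline uint32_t pending_xchg_relaxed(struct qspinlock *lock)
{
	uint32_t old;

	old  = (uint32_t)atomic_exchange_explicit(&lock->pending, 1,
						  memory_order_relaxed) << 8;
	old |= atomic_load_explicit(&lock->val, memory_order_acquire)
	       & ~PENDING_BIT;
	return old;
}

The point of the comparison is that the full-word fetch_or keeps the
pending update and the tail bytes inside a single atomic RMW, which is
what forbids the stale load in step (7); the byte-wide xchg gives that
single-RMW coverage up.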