Date: Wed, 27 Jan 2021 23:43:52 +0100
From: Peter Zijlstra
To: Alexander A Sverdlin
Cc: Ingo Molnar, Will Deacon, linux-kernel@vger.kernel.org,
	Russell King, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH 1/2] qspinlock: Ensure writes are pushed out of core write buffer
In-Reply-To: <20210127200109.16412-1-alexander.sverdlin@nokia.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Jan 27, 2021 at 09:01:08PM +0100, Alexander A Sverdlin wrote:
> From: Alexander Sverdlin
>
> Ensure writes are pushed out of core write buffer to prevent waiting code
> on another cores from spinning longer than necessary.

Our smp_wmb() as defined does not have that property. You're relying on
some arch specific details which do not belong in common code.

> diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
> index 5e10153..10e497a 100644
> --- a/kernel/locking/mcs_spinlock.h
> +++ b/kernel/locking/mcs_spinlock.h
> @@ -89,6 +89,11 @@ void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
>  		return;
>  	}
>  	WRITE_ONCE(prev->next, node);
> +	/*
> +	 * This is necessary to make sure that the corresponding "while" in the
> +	 * mcs_spin_unlock() doesn't loop forever
> +	 */

This comment is utterly inadequate, since it does not describe an
explicit ordering between two (or more) stores.
> +	smp_wmb();
>
>  	/* Wait until the lock holder passes the lock down. */
>  	arch_mcs_spin_lock_contended(&node->locked);
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index cbff6ba..577fe01 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -469,6 +469,12 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>
>  	/* Link @node into the waitqueue. */
>  	WRITE_ONCE(prev->next, node);
> +	/*
> +	 * This is necessary to make sure that the corresponding
> +	 * smp_cond_load_relaxed() below (running on another core)
> +	 * doesn't spin forever.
> +	 */
> +	smp_wmb();

That's insane, cache coherency should not allow that to happen in the
first place. Our smp_wmb() cannot help with that.

>  	pv_wait_node(node, prev);
>  	arch_mcs_spin_lock_contended(&node->locked);