From: Nicholas Piggin <npiggin@gmail.com>
To: Peter Zijlstra
Cc: Nicholas Piggin, Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng,
	linux-kernel@vger.kernel.org
org" Subject: [PATCH v2 11/12] locking/qspinlock: separate pv_wait_node from the non-paravirt path Date: Wed, 13 Jul 2022 17:07:03 +1000 Message-Id: <20220713070704.308394-12-npiggin@gmail.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220713070704.308394-1-npiggin@gmail.com> References: <20220713070704.308394-1-npiggin@gmail.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM, RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org pv_wait_node waits until node->locked is non-zero, no need for the pv case to wait again by also executing the !pv code path. Signed-off-by: Nicholas Piggin --- kernel/locking/qspinlock.c | 34 ++++++++++++++++------------------ 1 file changed, 16 insertions(+), 18 deletions(-) diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c index 2ebb946a6b80..3255e7804842 100644 --- a/kernel/locking/qspinlock.c +++ b/kernel/locking/qspinlock.c @@ -506,15 +506,18 @@ static void pv_init_node(struct qnode *node) * pv_kick_node() is used to set _Q_SLOW_VAL and fill in hash table on its * behalf. */ -static void pv_wait_node(struct qnode *node, struct qnode *prev) +static void pv_wait_node_acquire(struct qnode *node, struct qnode *prev) { int loop; bool wait_early; for (;;) { for (wait_early = false, loop = SPIN_THRESHOLD; loop; loop--) { - if (READ_ONCE(node->locked)) + if (READ_ONCE(node->locked)) { + /* Provide the acquire ordering. */ + smp_load_acquire(&node->locked); return; + } if (pv_wait_early(prev, loop)) { wait_early = true; break; @@ -556,29 +559,23 @@ static void pv_wait_node(struct qnode *node, struct qnode *prev) lockevent_cond_inc(pv_spurious_wakeup, !READ_ONCE(node->locked)); } - - /* - * By now our node->locked should be 1 and our caller will not actually - * spin-wait for it. We do however rely on our caller to do a - * load-acquire for us. - */ } /* * Called after setting next->locked = 1 when we're the lock owner. * - * Instead of waking the waiters stuck in pv_wait_node() advance their state - * such that they're waiting in pv_wait_head_or_lock(), this avoids a + * Instead of waking the waiters stuck in pv_wait_node_acquire() advance their + * state such that they're waiting in pv_wait_head_or_lock(), this avoids a * wake/sleep cycle. */ static void pv_kick_node(struct qspinlock *lock, struct qnode *node) { /* * If the vCPU is indeed halted, advance its state to match that of - * pv_wait_node(). If OTOH this fails, the vCPU was running and will - * observe its next->locked value and advance itself. + * pv_wait_node_acquire(). If OTOH this fails, the vCPU was running and + * will observe its next->locked value and advance itself. 
 	 *
-	 * Matches with smp_store_mb() and cmpxchg() in pv_wait_node()
+	 * Matches with smp_store_mb() and cmpxchg() in pv_wait_node_acquire()
 	 *
 	 * The write to next->locked in arch_mcs_spin_unlock_contended()
 	 * must be ordered before the read of node->state in the cmpxchg()
@@ -765,8 +762,8 @@ EXPORT_SYMBOL(__pv_queued_spin_unlock);
 #else /* CONFIG_PARAVIRT_SPINLOCKS */
 
 static __always_inline void pv_init_node(struct qnode *node) { }
-static __always_inline void pv_wait_node(struct qnode *node,
-					 struct qnode *prev) { }
+static __always_inline void pv_wait_node_acquire(struct qnode *node,
+						 struct qnode *prev) { }
 static __always_inline void pv_kick_node(struct qspinlock *lock,
 					 struct qnode *node) { }
 static __always_inline u32 pv_wait_head_or_lock(struct qspinlock *lock,
@@ -864,10 +861,11 @@ static __always_inline void queued_spin_lock_mcs_queue(struct qspinlock *lock, b
 		/* Link @node into the waitqueue. */
 		WRITE_ONCE(prev->next, node);
 
-		if (paravirt)
-			pv_wait_node(node, prev);
 		/* Wait for mcs node lock to be released */
-		smp_cond_load_acquire(&node->locked, VAL);
+		if (paravirt)
+			pv_wait_node_acquire(node, prev);
+		else
+			smp_cond_load_acquire(&node->locked, VAL);
 
 		/*
 		 * While waiting for the MCS lock, the next pointer may have
-- 
2.35.1
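
For readers outside the kernel tree, here is a minimal user-space sketch of the
wait shape the last hunk produces. It is not kernel code: names such as
qnode_model, wait_node_acquire_pv and wait_for_mcs_lock are made up for
illustration, and C11 atomics stand in for smp_cond_load_acquire() and
smp_load_acquire(). The point it models is that after this patch the pv wait
provides its own acquire ordering, so the generic acquire spin runs only in the
!pv case instead of both waits executing back to back.

/* Illustrative user-space model only -- NOT kernel code. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <pthread.h>

struct qnode_model {
	atomic_int locked;		/* 0 = wait, 1 = MCS lock handed over */
};

/* Model of the !pv path: spin until locked != 0, with acquire ordering. */
static void wait_node_acquire_generic(struct qnode_model *node)
{
	while (!atomic_load_explicit(&node->locked, memory_order_acquire))
		;	/* cpu_relax() in the kernel */
}

/*
 * Model of pv_wait_node_acquire(): the pv code already observes when
 * node->locked becomes non-zero, so it issues the acquire load itself
 * rather than falling through to the generic spin above.
 */
static void wait_node_acquire_pv(struct qnode_model *node)
{
	while (!atomic_load_explicit(&node->locked, memory_order_relaxed))
		;	/* the real code may pv_wait() (sleep) here */
	/* Provide the acquire ordering, as in the patch. */
	(void)atomic_load_explicit(&node->locked, memory_order_acquire);
}

/* Only one of the two waits runs, mirroring the new if/else in the hunk. */
static void wait_for_mcs_lock(struct qnode_model *node, bool paravirt)
{
	if (paravirt)
		wait_node_acquire_pv(node);
	else
		wait_node_acquire_generic(node);
}

static struct qnode_model node = { 0 };

static void *releaser(void *arg)
{
	(void)arg;
	/* Hand over the MCS node lock with release ordering. */
	atomic_store_explicit(&node.locked, 1, memory_order_release);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, releaser, NULL);
	wait_for_mcs_lock(&node, true);		/* pass false for the !pv path */
	pthread_join(t, NULL);
	printf("MCS node lock handed over\n");
	return 0;
}

Built with "cc -pthread sketch.c", either value of the paravirt flag completes
the handoff with exactly one wait loop, which is the behaviour the patch is
after.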