From: Alex Kogan <alex.kogan@oracle.com>
To: linux@armlinux.org.uk, peterz@infradead.org, mingo@redhat.com,
    will.deacon@arm.com, arnd@arndb.de, longman@redhat.com,
    linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, tglx@linutronix.de, bp@alien8.de,
    hpa@zytor.com, x86@kernel.org, guohanjun@huawei.com, jglauber@marvell.com
Cc: steven.sistare@oracle.com, daniel.m.jordan@oracle.com,
    alex.kogan@oracle.com, dave.dice@oracle.com
Subject: [PATCH v11 1/5] locking/qspinlock: Rename mcs lock/unlock macros and make them more generic
Date: Tue, 15 Sep 2020 14:05:31 -0400
Message-Id: <20200915180535.2975060-2-alex.kogan@oracle.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200915180535.2975060-1-alex.kogan@oracle.com>
References: <20200915180535.2975060-1-alex.kogan@oracle.com>

The mcs unlock macro (arch_mcs_lock_handoff) now accepts the value to be
stored into the lock as an additional argument. This allows the same
macro to be used in cases where the value stored when handing off the
lock differs from 1.

Signed-off-by: Alex Kogan <alex.kogan@oracle.com>
Reviewed-by: Steve Sistare <steven.sistare@oracle.com>
Reviewed-by: Waiman Long <longman@redhat.com>
---
 arch/arm/include/asm/mcs_spinlock.h |  6 +++---
 include/asm-generic/mcs_spinlock.h  |  4 ++--
 kernel/locking/mcs_spinlock.h       | 18 +++++++++---------
 kernel/locking/qspinlock.c          |  4 ++--
 kernel/locking/qspinlock_paravirt.h |  2 +-
 5 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/arch/arm/include/asm/mcs_spinlock.h b/arch/arm/include/asm/mcs_spinlock.h
index 529d2cf4d06f..1eb4d733459c 100644
--- a/arch/arm/include/asm/mcs_spinlock.h
+++ b/arch/arm/include/asm/mcs_spinlock.h
@@ -6,7 +6,7 @@
 #include <asm/spinlock.h>
 
 /* MCS spin-locking. */
-#define arch_mcs_spin_lock_contended(lock)			\
+#define arch_mcs_spin_wait(lock)				\
 do {								\
 	/* Ensure prior stores are observed before we enter wfe. */ \
 	smp_mb();						\
@@ -14,9 +14,9 @@ do {								\
 		wfe();						\
 } while (0)							\
 
-#define arch_mcs_spin_unlock_contended(lock)			\
+#define arch_mcs_lock_handoff(lock, val)			\
 do {								\
-	smp_store_release(lock, 1);				\
+	smp_store_release((lock), (val));			\
 	dsb_sev();						\
 } while (0)
 
diff --git a/include/asm-generic/mcs_spinlock.h b/include/asm-generic/mcs_spinlock.h
index 10cd4ffc6ba2..f933d99c63e0 100644
--- a/include/asm-generic/mcs_spinlock.h
+++ b/include/asm-generic/mcs_spinlock.h
@@ -4,8 +4,8 @@
 /*
  * Architectures can define their own:
  *
- *   arch_mcs_spin_lock_contended(l)
- *   arch_mcs_spin_unlock_contended(l)
+ *   arch_mcs_spin_wait(l)
+ *   arch_mcs_lock_handoff(l, val)
  *
  * See kernel/locking/mcs_spinlock.c.
  */
diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
index 5e10153b4d3c..904ba5d0f3f4 100644
--- a/kernel/locking/mcs_spinlock.h
+++ b/kernel/locking/mcs_spinlock.h
@@ -21,7 +21,7 @@ struct mcs_spinlock {
 	int count;  /* nesting count, see qspinlock.c */
 };
 
-#ifndef arch_mcs_spin_lock_contended
+#ifndef arch_mcs_spin_wait
 /*
  * Using smp_cond_load_acquire() provides the acquire semantics
  * required so that subsequent operations happen after the
@@ -29,20 +29,20 @@ struct mcs_spinlock {
  * ARM64 would like to do spin-waiting instead of purely
  * spinning, and smp_cond_load_acquire() provides that behavior.
  */
-#define arch_mcs_spin_lock_contended(l)				\
-do {								\
-	smp_cond_load_acquire(l, VAL);				\
+#define arch_mcs_spin_wait(l)					\
+do {								\
+	smp_cond_load_acquire(l, VAL);				\
 } while (0)
 #endif
 
-#ifndef arch_mcs_spin_unlock_contended
+#ifndef arch_mcs_lock_handoff
 /*
  * smp_store_release() provides a memory barrier to ensure all
  * operations in the critical section has been completed before
  * unlocking.
  */
-#define arch_mcs_spin_unlock_contended(l)			\
-	smp_store_release((l), 1)
+#define arch_mcs_lock_handoff(l, val)				\
+	smp_store_release((l), (val))
 #endif
 
 /*
@@ -91,7 +91,7 @@ void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
 	WRITE_ONCE(prev->next, node);
 
 	/* Wait until the lock holder passes the lock down. */
-	arch_mcs_spin_lock_contended(&node->locked);
+	arch_mcs_spin_wait(&node->locked);
 }
 
 /*
@@ -115,7 +115,7 @@ void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
 	}
 
 	/* Pass lock to next waiter. */
-	arch_mcs_spin_unlock_contended(&next->locked);
+	arch_mcs_lock_handoff(&next->locked, 1);
 }
 
 #endif /* __LINUX_MCS_SPINLOCK_H */
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index cbff6ba53d56..435d696f9250 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -471,7 +471,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 		WRITE_ONCE(prev->next, node);
 
 		pv_wait_node(node, prev);
-		arch_mcs_spin_lock_contended(&node->locked);
+		arch_mcs_spin_wait(&node->locked);
 
 		/*
 		 * While waiting for the MCS lock, the next pointer may have
@@ -550,7 +550,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	if (!next)
 		next = smp_cond_load_relaxed(&node->next, (VAL));
 
-	arch_mcs_spin_unlock_contended(&next->locked);
+	arch_mcs_lock_handoff(&next->locked, 1);
 	pv_kick_node(lock, next);
 
 release:
diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
index e84d21aa0722..619d80fd5ea8 100644
--- a/kernel/locking/qspinlock_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -368,7 +368,7 @@ static void pv_kick_node(struct qspinlock *lock, struct mcs_spinlock *node)
 	 *
 	 * Matches with smp_store_mb() and cmpxchg() in pv_wait_node()
 	 *
-	 * The write to next->locked in arch_mcs_spin_unlock_contended()
+	 * The write to next->locked in arch_mcs_lock_handoff()
 	 * must be ordered before the read of pn->state in the cmpxchg()
 	 * below for the code to work correctly. To guarantee full ordering
 	 * irrespective of the success or failure of the cmpxchg(),
-- 
2.21.1 (Apple Git-122.3)
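
For readers outside the kernel tree, the functional change is easy to miss
in the rename: the hand-off store now takes the value to be stored as a
parameter instead of hard-wiring 1, so a lock implementation can encode
extra information in the locked word without touching the waiters' spin
loop (which is what the later patches in this series rely on). Below is a
minimal user-space sketch of that idea, with C11 atomics standing in for
the kernel's smp_cond_load_acquire() and smp_store_release(); the helper
names and the value 42 are illustrative only and are not part of the patch.

/* Illustrative analogue of the renamed macros; not kernel code. */
#include <stdatomic.h>
#include <stdio.h>

struct mcs_node {
	_Atomic int locked;	/* 0 = keep waiting, non-zero = handed off */
};

/* Analogue of arch_mcs_spin_wait(): spin until a non-zero value arrives. */
static int mcs_spin_wait(struct mcs_node *node)
{
	int val;

	while ((val = atomic_load_explicit(&node->locked,
					   memory_order_acquire)) == 0)
		;	/* the kernel would cpu_relax()/wfe() here */
	return val;
}

/*
 * Analogue of arch_mcs_lock_handoff(): the stored value is a parameter,
 * so callers are no longer limited to storing 1.
 */
static void mcs_lock_handoff(struct mcs_node *next, int val)
{
	atomic_store_explicit(&next->locked, val, memory_order_release);
}

int main(void)
{
	struct mcs_node next = { .locked = 0 };

	mcs_lock_handoff(&next, 1);	/* the classic hand-off value */
	printf("woke with %d\n", mcs_spin_wait(&next));

	atomic_store_explicit(&next.locked, 0, memory_order_relaxed);
	mcs_lock_handoff(&next, 42);	/* any non-zero encoding also works */
	printf("woke with %d\n", mcs_spin_wait(&next));
	return 0;
}

Since the waiters only test for non-zero, passing an encoded value through
the hand-off costs nothing on the wait side; that is the whole point of
making the stored value a macro argument.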