From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
    mingo@kernel.org, boqun.feng@gmail.com, paulmck@linux.vnet.ibm.com,
    catalin.marinas@arm.com, Will Deacon
Subject: [PATCH 00/10] kernel/locking: qspinlock improvements
Date: Thu, 5 Apr 2018 17:58:57 +0100
Message-Id: <1522947547-24081-1-git-send-email-will.deacon@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi all,

I've been kicking the tyres further on qspinlock and with this set of patches I'm happy
with the performance and fairness properties. In particular, the locking
algorithm now guarantees forward progress, whereas the implementation in
mainline can starve threads indefinitely in cmpxchg loops.

Catalin has also implemented a model of this using TLA to prove that the
lock is fair, although this doesn't take the memory model into account:

  https://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/kernel-tla.git/commit/

I'd still like to get more benchmark numbers and wider exposure before
enabling this for arm64, but my current testing is looking very promising.
This series, along with the arm64-specific patches, is available at:

  https://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git/log/?h=qspinlock

Cheers,

Will

--->8

Jason Low (1):
  locking/mcs: Use smp_cond_load_acquire() in mcs spin loop

Will Deacon (9):
  locking/qspinlock: Don't spin on pending->locked transition in slowpath
  locking/qspinlock: Remove unbounded cmpxchg loop from locking slowpath
  locking/qspinlock: Kill cmpxchg loop when claiming lock from head of queue
  locking/qspinlock: Use atomic_cond_read_acquire
  barriers: Introduce smp_cond_load_relaxed and atomic_cond_read_relaxed
  locking/qspinlock: Use smp_cond_load_relaxed to wait for next node
  locking/qspinlock: Merge struct __qspinlock into struct qspinlock
  locking/qspinlock: Make queued_spin_unlock use smp_store_release
  locking/qspinlock: Elide back-to-back RELEASE operations with smp_wmb()

 arch/x86/include/asm/qspinlock.h          |  19 ++-
 arch/x86/include/asm/qspinlock_paravirt.h |   3 +-
 include/asm-generic/barrier.h             |  27 ++++-
 include/asm-generic/qspinlock.h           |   2 +-
 include/asm-generic/qspinlock_types.h     |  32 ++++-
 include/linux/atomic.h                    |   2 +
 kernel/locking/mcs_spinlock.h             |  10 +-
 kernel/locking/qspinlock.c                | 191 ++++++++++-------------------
 kernel/locking/qspinlock_paravirt.h       |  34 ++----
 9 files changed, 141 insertions(+), 179 deletions(-)

-- 
2.1.4