Subject: Re: [PATCH v2 0/5] Add NUMA-awareness to qspinlock
To: Jan Glauber, Alex Kogan
CC: "linux@armlinux.org.uk", Peter Zijlstra, Ingo Molnar, Will Deacon,
    Arnd Bergmann, "longman@redhat.com", "linux-arch@vger.kernel.org",
    linux-arm-kernel, "linux-kernel@vger.kernel.org", "tglx@linutronix.de",
    Borislav Petkov, "hpa@zytor.com", "x86@kernel.org",
    "steven.sistare@oracle.com", "daniel.m.jordan@oracle.com",
    "dave.dice@oracle.com", "rahul.x.yadav@oracle.com"
References: <20190329152006.110370-1-alex.kogan@oracle.com>
From: Hanjun Guo
Message-ID: <95683b80-f694-cf34-73fc-e6ec05462ee0@huawei.com>
Date: Fri, 12 Jul 2019 16:12:05 +0800

On 2019/7/3 19:58, Jan Glauber wrote:
> Hi Alex,
> I've tried this series on arm64 (ThunderX2 with up to SMT=4 and 224 CPUs)
> with the borderline testcase of accessing a single file from all
> threads. With that testcase the qspinlock slowpath is the top spot in
> the kernel.
>
> The results look really promising:
>
> CPUs    normal    numa-qspinlocks
> ---------------------------------------------
>  56     149.41     73.90
> 224     576.95    290.31
>
> Also frontend-stalls are reduced to 50% and interconnect traffic is
> greatly reduced.
> Tested-by: Jan Glauber

I tested this patchset on a Kunpeng920 ARM64 server (96 cores, 4 NUMA
nodes), and with the same test case from Jan I can see a 150%+ boost!
(The patch below [1] needs to be added on top.)

For a real workload such as Nginx I can see about a 10% performance
improvement as well.

Tested-by: Hanjun Guo

Please cc me on new versions; I'm willing to test them.

Thanks
Hanjun

[1]

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 657bbc5..72c1346 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -792,6 +792,20 @@ config NODES_SHIFT
           Specify the maximum number of NUMA Nodes available on the target
           system. Increases memory reserved to accommodate various tables.
 
+config NUMA_AWARE_SPINLOCKS
+       bool "Numa-aware spinlocks"
+       depends on NUMA
+       default y
+       help
+         Introduce NUMA (Non Uniform Memory Access) awareness into
+         the slow path of spinlocks.
+
+         The kernel will try to keep the lock on the same node,
+         thus reducing the number of remote cache misses, while
+         trading some of the short term fairness for better performance.
+
+         Say N if you want absolute first come first serve fairness.
+
 config USE_PERCPU_NUMA_NODE_ID
        def_bool y
        depends on NUMA
diff --git a/kernel/locking/qspinlock_cna.h b/kernel/locking/qspinlock_cna.h
index 2994167..be5dd44 100644
--- a/kernel/locking/qspinlock_cna.h
+++ b/kernel/locking/qspinlock_cna.h
@@ -4,7 +4,7 @@
 #endif
 
 #include
-
+#include <linux/topology.h>
 /*
  * Implement a NUMA-aware version of MCS (aka CNA, or compact NUMA-aware lock).
  *
@@ -170,7 +170,7 @@ static __always_inline void cna_init_node(struct mcs_spinlock *node, int cpuid,
                                           u32 tail)
 {
        if (decode_numa_node(node->node_and_count) == -1)
-               store_numa_node(node, numa_cpu_node(cpuid));
+               store_numa_node(node, cpu_to_node(cpuid));
 
        node->encoded_tail = tail;
 }
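
A side note on the qspinlock_cna.h change above, for anyone trying this
series on other architectures: numa_cpu_node() is an x86-specific helper,
while cpu_to_node() is the generic CPU-to-NUMA-node mapping declared in
<linux/topology.h>, which is why it also works from common code on arm64.
A minimal sketch of the lookup the fixed cna_init_node() relies on (for
illustration only, not code from the patchset; the function name below is
made up):

#include <linux/topology.h>

/* Resolve the NUMA node a given CPU belongs to via the generic API. */
static inline int cna_example_cpu_node(int cpuid)
{
        return cpu_to_node(cpuid);
}

cna_init_node() only has to do this once per MCS node: it stores the node
id the first time (while decode_numa_node() still returns -1), so that
later lock handovers can prefer waiters on the same node, which is where
the reduction in remote cache misses comes from.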