Subject: Re: [PATCH v2 3/5] locking/qspinlock: Introduce CNA into the slow path of qspinlock
To: Peter Zijlstra
Cc: Alex Kogan , linux@armlinux.org.uk, mingo@redhat.com,
 will.deacon@arm.com, arnd@arndb.de, linux-arch@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 tglx@linutronix.de, bp@alien8.de, hpa@zytor.com, x86@kernel.org,
 steven.sistare@oracle.com, daniel.m.jordan@oracle.com,
 dave.dice@oracle.com, rahul.x.yadav@oracle.com
References: <20190329152006.110370-1-alex.kogan@oracle.com>
 <20190329152006.110370-4-alex.kogan@oracle.com>
 <60a3a2d8-d222-73aa-2df1-64c9d3fa3241@redhat.com>
 <20190402094320.GM11158@hirez.programming.kicks-ass.net>
From: Waiman Long
Organization: Red Hat
Message-ID: <9ec3d8dc-d1e0-1b8a-5e00-ba92b9756c58@redhat.com>
Date: Wed, 3 Apr 2019 12:33:20 -0400
In-Reply-To: <20190402094320.GM11158@hirez.programming.kicks-ass.net>

On 04/02/2019 05:43 AM, Peter Zijlstra wrote:
> On Mon, Apr 01, 2019 at 10:36:19AM -0400, Waiman Long wrote:
>> On 03/29/2019 11:20 AM, Alex Kogan wrote:
>>> +config NUMA_AWARE_SPINLOCKS
>>> +	bool "Numa-aware spinlocks"
>>> +	depends on NUMA
>>> +	default y
>>> +	help
>>> +	  Introduce NUMA (Non Uniform Memory Access) awareness into
>>> +	  the slow path of spinlocks.
>>> +
>>> +	  The kernel will try to keep the lock on the same node,
>>> +	  thus reducing the number of remote cache misses, while
>>> +	  trading some of the short term fairness for better performance.
>>> +
>>> +	  Say N if you want absolute first come first serve fairness.
>>> +
>> The patch that I am looking for is to have a separate
>> numa_queued_spinlock_slowpath() that coexists with
>> native_queued_spinlock_slowpath() and
>> paravirt_queued_spinlock_slowpath(). At boot time, we select the most
>> appropriate one for the system at hand.
> Agreed; and until we have static_call, I think we can abuse the paravirt
> stuff for this.

I haven't checked Josh's patch to see what it is doing. The availability
of static_call will certainly make things easier for this case.
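Assuming the interface from Josh's static_call series
(DEFINE_STATIC_CALL()/static_call()/static_call_update() -- I am going
from the posting, so the exact macro spelling may differ), the boot-time
selection could look something like the sketch below.
numa_spinlock_requested() is a made-up placeholder for whatever command
line or policy check we end up with:

DEFINE_STATIC_CALL(qspinlock_slowpath, native_queued_spin_lock_slowpath);

static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
{
	/* A patched direct call; no conditional branch at the call site. */
	static_call(qspinlock_slowpath)(lock, val);
}

static int __init numa_spinlock_init(void)
{
	/* Placeholder policy: only switch over on real multi-node boxes. */
	if (numa_spinlock_requested() && nr_node_ids > 1)
		static_call_update(qspinlock_slowpath,
				   numa_queued_spin_lock_slowpath);
	return 0;
}
early_initcall(numa_spinlock_init);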
> By the time we patch the paravirt stuff:
>
>   check_bugs()
>     alternative_instructions()
>       apply_paravirt()
>
> we should already have enumerated the NODE topology and so nr_node_ids()
> should be set.
>
> So if we frob pv_ops.lock.queued_spin_lock_slowpath to
> numa_queued_spin_lock_slowpath before that, it should all get patched
> just right.
>
> That of course means the whole NUMA_AWARE_SPINLOCKS thing depends on
> PARAVIRT_SPINLOCK, which is a bit awkward...

Yes, this is one way of doing it. Another way is to use a static key to
switch between the native and NUMA versions. So if PARAVIRT_SPINLOCK is
defined, we use the paravirt patching to point to the right function. If
PARAVIRT_SPINLOCK isn't enabled, we can do something like

static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
{
	if (static_branch_unlikely(&use_numa_spinlock))
		numa_queued_spin_lock_slowpath(lock, val);
	else
		native_queued_spin_lock_slowpath(lock, val);
}

Alternatively, we can also call numa_queued_spin_lock_slowpath() from
native_queued_spin_lock_slowpath() if we don't want to increase the code
size of spinlock call sites. (A sketch of how the static key itself
could be set up is in the P.S. below.)

Cheers,
Longman
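P.S. For completeness, here is a rough sketch of how the static key used
above might be defined and flipped at boot. The key name matches the
snippet above; the init hook and the nr_node_ids policy are made-up
placeholders for illustration only:

DEFINE_STATIC_KEY_FALSE(use_numa_spinlock);

static int __init numa_spinlock_setup(void)
{
	/*
	 * Hypothetical policy: take the NUMA-aware slow path only on
	 * machines that actually span more than one node.
	 */
	if (nr_node_ids > 1)
		static_branch_enable(&use_numa_spinlock);
	return 0;
}
early_initcall(numa_spinlock_setup);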