From: Lihao Liang
Date: Wed, 29 Jan 2020 01:39:38 +0000
Subject: Re: [PATCH v9 0/5] Add NUMA-awareness to qspinlock
To: Alex Kogan, longman@redhat.com
Cc: linux@armlinux.org.uk, Peter Zijlstra, mingo@redhat.com,
 will.deacon@arm.com, arnd@arndb.de, linux-arch@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 tglx@linutronix.de, bp@alien8.de, hpa@zytor.com, x86@kernel.org,
 guohanjun@huawei.com, jglauber@marvell.com, dave.dice@oracle.com,
 steven.sistare@oracle.com, daniel.m.jordan@oracle.com, Will Deacon,
 Lihao Liang

Hi Alex and Waiman,

On Mon, Jan 27, 2020 at 4:02 PM Alex Kogan wrote:
>
> Hi, Lihao.
>
> >>>> This is particularly relevant
> >>>> in high contention situations when new threads keep arriving on the
> >>>> same socket as the lock holder.
> >>>
> >>> In this case, the lock will stay on the same NUMA node/socket for
> >>> 2^numa_spinlock_threshold times, which is the worst-case scenario if we
> >>> consider long-term fairness. And if we have multiple nodes, it will take
> >>> up to 2^numa_spinlock_threshold * (nr_nodes - 1) + nr_cpus_per_node
> >>> lock transitions until any given thread acquires the lock
> >>> (assuming 2^numa_spinlock_threshold > nr_cpus_per_node).
> >>
> >> You're right that the latest version of the patch handles long-term
> >> fairness deterministically.
> >>
> >> As I understand it, the n-th thread in the main queue is guaranteed to
> >> acquire the lock after N lock handovers, where N is bounded by
> >>
> >>   n - 1 + 2^numa_spinlock_threshold * (nr_nodes - 1)
> >>
> >> I'm not sure what role the variable nr_cpus_per_node plays in your
> >> analysis. Did I miss anything?
> >
> > If I understand correctly, there are two phases in the algorithm:
> >
> > MCS phase: when the secondary queue is empty, as explained in your emails,
> > the algorithm hands the lock to threads in the main queue in FIFO order.
> > When probably(SHUFFLE_REDUCTION_PROB_ARG) returns false (with default
> > probability 1%), if the algorithm finds the first thread running on the
> > same socket as the lock holder in cna_scan_main_queue(), it enters the
> > following CNA phase.
>
> Yep. When probably() returns false, we scan the main queue. If, as a result
> of this scan, the secondary queue becomes non-empty, we enter what you call
> the CNA phase.

As I understand it, the probability of making a transition from the MCS to
the CNA phase within N lock handovers is 1 - p^N, where p is the probability
that probably() returns true (by default 99%). So in high contention
situations, where N can become quite large in a relatively short period of
time, the probability of entering the CNA phase is high, e.g. about 95% when
N = 300, since 1 - 0.99^300 ~= 0.95.

I was wondering whether it would be possible to detect contention and make
the phase transition deterministic instead, maybe by reusing the intra_count
variable to keep track of the processing rate in the MCS phase? As Will
pointed out earlier, this would make formal analysis and verification of the
CNA qspinlock much more feasible.
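To make the suggestion concrete, below is a rough user-space sketch of the
kind of deterministic trigger I have in mind. To be clear, this is not the
actual v9 code: struct cna_state, should_scan_main_queue(), the reuse of
intra_count as a plain handover counter, and the MCS_HANDOVER_THRESHOLD
tunable are all made up for illustration.

#include <stdbool.h>
#include <stdio.h>

#define MCS_HANDOVER_THRESHOLD	256	/* hypothetical tunable */

struct cna_state {
	unsigned int intra_count;	/* reused as an MCS-phase handover counter */
};

/* Called on each lock handover while the secondary queue is empty. */
static bool should_scan_main_queue(struct cna_state *s)
{
	if (++s->intra_count >= MCS_HANDOVER_THRESHOLD) {
		s->intra_count = 0;	/* reset for the next MCS phase */
		return true;		/* scan the main queue deterministically */
	}
	return false;
}

int main(void)
{
	struct cna_state s = { 0 };
	unsigned int handovers = 1;

	/* The scan now fires after exactly MCS_HANDOVER_THRESHOLD handovers. */
	while (!should_scan_main_queue(&s))
		handovers++;
	printf("main-queue scan after %u handovers\n", handovers);
	return 0;
}

With something along these lines, the transition point becomes a fixed
function of the handover count, which should be far easier to reason about
in a model checker than the pseudo-random probably() path.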
> > CNA phase: when the secondary queue is not empty, the algorithm keeps
> > handing the lock to threads in the main queue that run on the same socket
> > as the lock holder. When 2^numa_spinlock_threshold is reached, it splices
> > the secondary queue to the front of the main queue, and we are back to
> > the MCS phase above.
>
> Correct.
>
> > For the n-th thread T in the main queue, the MCS phase handles threads
> > that arrived in the main queue before T. In high contention situations,
> > the CNA phase handles two kinds of threads:
> >
> > 1. Threads ahead of T that run on the same socket as the lock holder when
> > the transition from the MCS to the CNA phase was made. Assume there are
> > m such threads.
> >
> > 2. Threads that keep arriving on the same socket as the lock holder.
> > There are at most 2^numa_spinlock_threshold of them.
> >
> > Then the number of lock handovers in the CNA phase is max(m,
> > 2^numa_spinlock_threshold). So the total number of lock handovers before
> > T acquires the lock is at most
> >
> >   n - 1 + 2^numa_spinlock_threshold * (nr_nodes - 1)
> >
> > Please let me know if I misunderstand anything.
>
> I think you got it right (modulo nr_cpus_per_node instead of n, as mentioned
> in my other response).

Makes sense. Thanks a lot for the clarification :)

Best,
Lihao
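P.S. For completeness, here is a quick standalone check of the 1 - p^N
numbers mentioned above (plain user-space C; build with cc ps.c -lm):

#include <math.h>
#include <stdio.h>

int main(void)
{
	/* p: probability that probably() returns true (default ~99%). */
	const double p = 0.99;
	const int ns[] = { 50, 100, 300, 1000 };

	for (unsigned int i = 0; i < sizeof(ns) / sizeof(ns[0]); i++)
		printf("N = %4d: P(MCS -> CNA within N handovers) = %.3f\n",
		       ns[i], 1.0 - pow(p, ns[i]));
	return 0;
}

This prints roughly 0.395, 0.634, 0.951, and 1.000 respectively, matching
the ~95% figure for N = 300.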