Date: Tue, 21 Jan 2020 14:29:49 +0100
From: Peter Zijlstra
To: Alex Kogan
Cc: linux@armlinux.org.uk, mingo@redhat.com, will.deacon@arm.com,
	arnd@arndb.de, longman@redhat.com, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	tglx@linutronix.de, bp@alien8.de, hpa@zytor.com, x86@kernel.org,
	guohanjun@huawei.com, jglauber@marvell.com, steven.sistare@oracle.com,
	daniel.m.jordan@oracle.com, dave.dice@oracle.com
Subject: Re: [PATCH v8 4/5] locking/qspinlock: Introduce starvation avoidance into CNA
Message-ID: <20200121132949.GL14914@hirez.programming.kicks-ass.net>
References: <20191230194042.67789-1-alex.kogan@oracle.com>
	<20191230194042.67789-5-alex.kogan@oracle.com>
In-Reply-To: <20191230194042.67789-5-alex.kogan@oracle.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Mon, Dec 30, 2019 at 02:40:41PM -0500, Alex Kogan wrote:
> +/*
> + * Controls the threshold for the number of intra-node lock hand-offs before
> + * the NUMA-aware variant of spinlock is forced to be passed to a thread on
> + * another NUMA node. By default, the chosen value provides reasonable
> + * long-term fairness without sacrificing performance compared to a lock
> + * that does not have any fairness guarantees. The default setting can
> + * be changed with the "numa_spinlock_threshold" boot option.
> + */
> +int intra_node_handoff_threshold __ro_after_init = 1 << 16;

There is a distinct lack of quantitative data to back up that
'reasonable' claim there.

Where is the table of inter-node latencies observed for the various
values tested, and on what criteria is this number deemed reasonable?

To me, 64k lock hold times seems like a giant number, entirely outside
of reasonable.

> +
>  static void __init cna_init_nodes_per_cpu(unsigned int cpu)
>  {
>  	struct mcs_spinlock *base = per_cpu_ptr(&qnodes[0].mcs, cpu);
> @@ -97,6 +109,11 @@ static int __init cna_init_nodes(void)
>  }
>  early_initcall(cna_init_nodes);
>  
> +static __always_inline void cna_init_node(struct mcs_spinlock *node)
> +{
> +	((struct cna_node *)node)->intra_count = 0;
> +}
> +
>  /* this function is called only when the primary queue is empty */
>  static inline bool cna_try_change_tail(struct qspinlock *lock, u32 val,
>  					struct mcs_spinlock *node)
> @@ -233,7 +250,9 @@ __always_inline u32 cna_pre_scan(struct qspinlock *lock,
>  {
>  	struct cna_node *cn = (struct cna_node *)node;
>  
> -	cn->pre_scan_result = cna_scan_main_queue(node, node);
> +	cn->pre_scan_result =
> +		cn->intra_count == intra_node_handoff_threshold ?
> +			FLUSH_SECONDARY_QUEUE : cna_scan_main_queue(node, node);

Because:

	if (cn->intra_count < intra_node_handoff_threshold)
		cn->pre_scan_result = cna_scan_main_queue(node, node);
	else
		cn->pre_scan_result = FLUSH_SECONDARY_QUEUE;

was too readable?
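
For reference on the "numa_spinlock_threshold" boot option mentioned in
the quoted comment above, a minimal, purely illustrative sketch of how
such an override could be wired up is shown below. The handler name and
parsing details are assumptions for illustration, not code from this
patch series.

	#include <linux/init.h>
	#include <linux/kernel.h>

	/*
	 * Hypothetical sketch: parse "numa_spinlock_threshold=<N>" from the
	 * kernel command line and override the 1 << 16 default declared in
	 * the quoted patch. Writing to the __ro_after_init variable is fine
	 * here because early_param handlers run long before that section is
	 * sealed read-only at the end of init.
	 */
	static int __init numa_spinlock_threshold_setup(char *str)
	{
		int new_threshold;

		if (!str)
			return -EINVAL;

		if (kstrtoint(str, 0, &new_threshold) || new_threshold <= 0)
			return -EINVAL;

		intra_node_handoff_threshold = new_threshold;
		return 0;
	}
	early_param("numa_spinlock_threshold", numa_spinlock_threshold_setup);

Booting with e.g. "numa_spinlock_threshold=1024" would then lower the
number of intra-node hand-offs allowed before the lock must be passed on
to a waiter on another NUMA node.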