Date: Tue, 4 May 2021 09:38:18 +0200
From: Peter Zijlstra
To: Josh Don
Cc: Aubrey Li, Joel Fernandes, "Hyser,Chris", Ingo Molnar,
    Vincent Guittot, Valentin Schneider, Mel Gorman,
    Linux List Kernel Mailing, Thomas Gleixner, Don Hiatt
Subject: Re: [PATCH 04/19] sched: Prepare for Core-wide rq->lock
References: <20210422120459.447350175@infradead.org>
    <20210422123308.196692074@infradead.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Apr 29, 2021 at 01:39:54PM -0700, Josh Don wrote:
> > > +void double_rq_lock(struct rq *rq1, struct rq *rq2)
> > > +{
> > > +	lockdep_assert_irqs_disabled();
> > > +
> > > +	if (rq1->cpu > rq2->cpu)
> >
> > It's still a bit hard for me to digest this function, I guess using (rq->cpu)
> > can't guarantee the sequence of locking when coresched is enabled.
> >
> > - cpu1 and cpu7 shares lockA
> > - cpu2 and cpu8 shares lockB
> >
> > double_rq_lock(1,8) leads to lock(A) and lock(B)
> > double_rq_lock(7,2) leads to lock(B) and lock(A)

Good one!

> > change to below to avoid ABBA?
> > +	if (__rq_lockp(rq1) > __rq_lockp(rq2))

This, however, is badly broken: not only does it suffer from the problem
Josh pointed out, it also breaks the rq->__lock ordering vs
__sched_core_flip(), which was the whole reason the ordering needed
changing in the first place.
> I'd propose an alternative but similar idea: order by core, then break
> ties by ordering on cpu.
>
> +#ifdef CONFIG_SCHED_CORE
> +	if (rq1->core->cpu > rq2->core->cpu)
> +		swap(rq1, rq2);
> +	else if (rq1->core->cpu == rq2->core->cpu && rq1->cpu > rq2->cpu)
> +		swap(rq1, rq2);
> +#else
> 	if (rq1->cpu > rq2->cpu)
> 		swap(rq1, rq2);
> +#endif

I've written it like so:

static inline bool rq_order_less(struct rq *rq1, struct rq *rq2)
{
#ifdef CONFIG_SCHED_CORE
	if (rq1->core->cpu < rq2->core->cpu)
		return true;
	if (rq1->core->cpu > rq2->core->cpu)
		return false;
#endif
	return rq1->cpu < rq2->cpu;
}

/*
 * double_rq_lock - safely lock two runqueues
 */
void double_rq_lock(struct rq *rq1, struct rq *rq2)
{
	lockdep_assert_irqs_disabled();

	if (rq_order_less(rq2, rq1))
		swap(rq1, rq2);

	raw_spin_rq_lock(rq1);
	if (rq_lockp(rq1) == rq_lockp(rq2))
		return;

	raw_spin_rq_lock_nested(rq2, SINGLE_DEPTH_NESTING);
}