Date: Fri, 22 May 2020 16:38:57 +0200
From: Peter Zijlstra
To: Mel Gorman
Cc: Jirka Hladky, Phil Auld, Ingo Molnar, Vincent Guittot, Juri Lelli,
    Dietmar Eggemann, Steven Rostedt, Ben Segall, Valentin Schneider,
    Hillf Danton, LKML, Douglas Shakshober, Waiman Long, Joe Mario,
    Bill Gray, riel@surriel.com
Subject: Re: [PATCH 00/13] Reconcile NUMA balancing decisions with the load balancer v6
Message-ID: <20200522143857.GU317569@hirez.programming.kicks-ass.net>
References: <20200513153023.GF3758@techsingularity.net>
 <20200514153122.GE2978@hirez.programming.kicks-ass.net>
 <20200515084740.GJ3758@techsingularity.net>
 <20200515111732.GS2957@hirez.programming.kicks-ass.net>
 <20200515142444.GK3001@hirez.programming.kicks-ass.net>
 <20200521103816.GA7167@techsingularity.net>
 <20200521114132.GI325280@hirez.programming.kicks-ass.net>
 <20200522132854.GF7167@techsingularity.net>
In-Reply-To: <20200522132854.GF7167@techsingularity.net>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, May 22, 2020 at 02:28:54PM +0100, Mel Gorman wrote:

> Is something like this on top of your patch what you had in mind?

All under the assumption that it makes it go faster of course ;-)

> ---8<---

static inline bool ttwu_queue_cond(int cpu, int wake_flags)
{
	/*
	 * If the CPU does not share cache, then queue the task on the
	 * remote rq's wakelist to avoid accessing remote data.
	 */
	if (!cpus_share_cache(smp_processor_id(), cpu))
		return true;

	/*
	 * If the task is descheduling and the only running task on the
	 * CPU, ....
	 */
	if ((wake_flags & WF_ON_RQ) && cpu_rq(cpu)->nr_running <= 1)
		return true;

	return false;
}

> -static bool ttwu_queue_remote(struct task_struct *p, int cpu, int wake_flags)
> +static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
> {
> -	if (sched_feat(TTWU_QUEUE) && !cpus_share_cache(smp_processor_id(), cpu)) {
> -		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
> -		__ttwu_queue_remote(p, cpu, wake_flags);
> -		return true;
> +	if (sched_feat(TTWU_QUEUE)) {
> +		/*
> +		 * If CPU does not share cache then queue the task on the remote
> +		 * rqs wakelist to avoid accessing remote data. Alternatively,
> +		 * if the task is descheduling and the only running task on the
> +		 * CPU then use the wakelist to offload the task activation to
> +		 * the CPU that will soon be idle so the waker can continue.
> +		 * nr_running is checked to avoid unnecessary task stacking.
> +		 */
> +		if (!cpus_share_cache(smp_processor_id(), cpu) ||
> +		    ((wake_flags & WF_ON_RQ) && cpu_rq(cpu)->nr_running <= 1)) {
> +			sched_clock_cpu(cpu); /* Sync clocks across CPUs */
> +			__ttwu_queue_wakelist(p, cpu, wake_flags);
> +			return true;
> +		}

	if (sched_feat(TTWU_QUEUE) && ttwu_queue_cond(cpu, wake_flags)) {
		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
		__ttwu_queue_remote(p, cpu, wake_flags);
		return true;

> 	}
> 
> 	return false;

might be easier to read...