From: Mel Gorman
To: LKML
Cc: Ingo Molnar, Peter Zijlstra, Vincent Guittot, Valentin Schneider, Aubrey Li, Mel Gorman
Subject: [PATCH 4/9] sched/fair: Use prev instead of new target as recent_used_cpu
Date: Mon, 26 Jul 2021 11:22:42 +0100
Message-Id: <20210726102247.21437-5-mgorman@techsingularity.net>
In-Reply-To: <20210726102247.21437-1-mgorman@techsingularity.net>
References: <20210726102247.21437-1-mgorman@techsingularity.net>
X-Mailing-List: linux-kernel@vger.kernel.org

After select_idle_sibling, p->recent_used_cpu is set to the new target.
However, on the next wakeup, prev will be the same as recent_used_cpu
unless the load balancer has moved the task since the last wakeup. It
still works, but is less efficient than it could be. This patch preserves
recent_used_cpu for longer.

The impact on SIS efficiency is tiny, so the SIS statistics patches were
used to track the hit rate for using recent_used_cpu. With perf bench pipe
on a 2-socket Cascadelake machine, the hit rate went from 57.14% to
85.32%. For more intensive wakeup loads like hackbench, the hit rate is
almost negligible but rose from 0.21% to 6.64%. For scaling loads like
tbench, the hit rate goes from almost 0% to 25.42% overall. Broadly
speaking, on tbench, the success rate is much higher for lower thread
counts and drops to almost 0 as the workload scales towards saturation.

Signed-off-by: Mel Gorman
---
 kernel/sched/fair.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4e2979b73cec..75ff991a460a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6390,6 +6390,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 
 	/* Check a recently used CPU as a potential idle candidate: */
 	recent_used_cpu = p->recent_used_cpu;
+	p->recent_used_cpu = prev;
 	if (recent_used_cpu != prev &&
 	    recent_used_cpu != target) {
 		if (cpus_share_cache(recent_used_cpu, target) &&
@@ -6922,9 +6923,6 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
 	} else if (wake_flags & WF_TTWU) { /* XXX always ? */
 		/* Fast path */
 		new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
-
-		if (want_affine)
-			current->recent_used_cpu = cpu;
 	}
 
 	rcu_read_unlock();
-- 
2.26.2