From: Mel Gorman
To: Ingo Molnar
Cc: Peter Zijlstra, Vincent Guittot, Valentin Schneider, Song Bao Hua, LKML, Mel Gorman
Subject: [PATCH 1/2] sched/fair: Use prev instead of new target as recent_used_cpu
Date: Wed, 4 Aug 2021 12:58:56 +0100
Message-Id: <20210804115857.6253-2-mgorman@techsingularity.net>
In-Reply-To: <20210804115857.6253-1-mgorman@techsingularity.net>
References: <20210804115857.6253-1-mgorman@techsingularity.net>

After select_idle_sibling, p->recent_used_cpu is set to the new target.
However, on the next wakeup, prev will be the same as recent_used_cpu
unless the load balancer has moved the task since the last wakeup. It
still works, but is less efficient than it could be. This patch
preserves recent_used_cpu for longer.

The impact on SIS efficiency is tiny, so the SIS statistics patches
were used to track the hit rate for using recent_used_cpu. With perf
bench pipe on a 2-socket Cascadelake machine, the hit rate went from
57.14% to 85.32%. For more intensive wakeup loads like hackbench, the
hit rate is almost negligible but rose from 0.21% to 6.64%. For scaling
loads like tbench, the hit rate goes from almost 0% to 25.42% overall.
Broadly speaking, on tbench, the success rate is much higher for lower
thread counts and drops to almost 0 as the workload scales towards
saturation.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 kernel/sched/fair.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 44c452072a1b..8ad7666f387c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6376,6 +6376,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 
 	/* Check a recently used CPU as a potential idle candidate: */
 	recent_used_cpu = p->recent_used_cpu;
+	p->recent_used_cpu = prev;
 	if (recent_used_cpu != prev &&
 	    recent_used_cpu != target &&
 	    cpus_share_cache(recent_used_cpu, target) &&
@@ -6902,9 +6903,6 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
 	} else if (wake_flags & WF_TTWU) { /* XXX always ? */
 		/* Fast path */
 		new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
-
-		if (want_affine)
-			current->recent_used_cpu = cpu;
 	}
 
 	rcu_read_unlock();
-- 
2.31.1
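
For illustration, the behaviour change described in the changelog can be
sketched in plain userspace C. This is not kernel code: cpus_share_cache()
below is a stand-in that assumes 4 CPUs per LLC, and the idle, affinity and
capacity checks done by the real select_idle_sibling() are omitted.

/*
 * Minimal userspace sketch of the recent_used_cpu filter, not kernel
 * code. cpus_share_cache() is a hypothetical stand-in and the idle,
 * affinity and capacity tests are omitted.
 */
#include <stdbool.h>
#include <stdio.h>

struct task { int recent_used_cpu; };

static bool cpus_share_cache(int a, int b)
{
	return a / 4 == b / 4;	/* pretend: 4 CPUs per LLC */
}

/* The filter applied to the cached CPU on each wakeup. */
static bool recent_cpu_usable(struct task *p, int prev, int target)
{
	int recent = p->recent_used_cpu;

	return recent != prev && recent != target &&
	       cpus_share_cache(recent, target);
}

int main(void)
{
	struct task p;
	int first_prev = 2, first_target = 3;

	/*
	 * Old scheme: after the first wakeup, the CPU the task will
	 * actually run on (the new target) is cached. On the next
	 * wakeup the task last ran there, so prev == recent_used_cpu
	 * and the cached CPU is rejected unless the load balancer
	 * moved the task in between.
	 */
	p.recent_used_cpu = first_target;
	printf("old: usable=%d\n", recent_cpu_usable(&p, first_target, 1));

	/*
	 * New scheme: prev is cached instead. It stays distinct from
	 * the next wakeup's prev, so it survives as a candidate.
	 */
	p.recent_used_cpu = first_prev;
	printf("new: usable=%d\n", recent_cpu_usable(&p, first_target, 1));
	return 0;
}

Run as-is, this prints "old: usable=0" and "new: usable=1", illustrating
why the old assignment rarely left a usable candidate for the next wakeup
while the task stayed on the same CPU.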