From: Chris Redpath
To: linux-kernel@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, morten.rasmussen@arm.com, dietmar.eggemann@arm.com, Chris Redpath
Subject: [PATCH] sched/fair: Update rq_clock, cfs_rq before migrating for asym cpu capacity
Date: Tue, 9 Jul 2019 12:57:59 +0100
Message-Id: <20190709115759.10451-1-chris.redpath@arm.com>

The ancient workaround to avoid the cost of updating rq clocks in the
middle of a migration causes some issues on asymmetric CPU capacity
systems, where we use task utilization to determine which CPUs fit a
task.
On quiet systems we can inflate task util after a migration, which
causes misfit to fire and force-migrate the task. This occurs when:

 (a) a task has util close to the non-overutilized capacity limit of a
     particular CPU (cpu0 here); and
 (b) the prev_cpu was otherwise quiet, such that its rq clock is
     sufficiently out of date (cpu1 here).

e.g.
                               _____
cpu0: ________________________|     |______________
                              |<- misfit happens

           ______              ___     ___
cpu1: ____|      |____________|   |___|   |_________
        ->|                           |<- wakeup migration time
     last rq clock update

When the task util is in just the right range for the system, we can
end up migrating an unlucky task back and forth many times, until we
get lucky and the source rq happens to have been updated close to the
migration time.

In order to address this, let's update both rq_clock and cfs_rq where
this could be an issue.

Signed-off-by: Chris Redpath
---
 kernel/sched/fair.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b798fe7ff7cd..51791db26a2a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6545,6 +6545,21 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
 	 * wakee task is less decayed, but giving the wakee more load
 	 * sounds not bad.
 	 */
+	if (static_branch_unlikely(&sched_asym_cpucapacity) &&
+	    p->state == TASK_WAKING) {
+		/*
+		 * On asymmetric capacity systems task util guides
+		 * wake placement so update rq_clock and cfs_rq
+		 */
+		struct cfs_rq *cfs_rq = task_cfs_rq(p);
+		struct rq *rq = task_rq(p);
+		struct rq_flags rf;
+
+		rq_lock_irqsave(rq, &rf);
+		update_rq_clock(rq);
+		update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq);
+		rq_unlock_irqrestore(rq, &rf);
+	}
 	remove_entity_load_avg(&p->se);
 }
-- 
2.17.1