Date: Mon, 3 Feb 2020 13:12:03 -0500
From: Steven Rostedt
To: Qais Yousef
Cc: Pavan Kondeti, Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
    Dietmar Eggemann, Ben Segall, Mel Gorman, LKML
Subject: Re: [PATCH v2] sched: rt: Make RT capacity aware
Message-ID: <20200203131203.20bf3fc3@oasis.local.home>
In-Reply-To: <20200203171745.alba7aswajhnsocj@e107158-lin>
References: <20191009104611.15363-1-qais.yousef@arm.com>
 <20200131100629.GC27398@codeaurora.org>
 <20200131153405.2ejp7fggqtg5dodx@e107158-lin.cambridge.arm.com>
 <20200203142712.a7yvlyo2y3le5cpn@e107158-lin>
 <20200203111451.0d1da58f@oasis.local.home>
 <20200203171745.alba7aswajhnsocj@e107158-lin>
On Mon, 3 Feb 2020 17:17:46 +0000
Qais Yousef wrote:

> I'm torn about pushing a task already on a big core to a little core
> if it says it wants it (down migration).

If the "down migration" happens to a process that is lower in priority,
then that stays in line with the policy decisions of scheduling RT
tasks. That is, higher priority tasks take precedence over lower
priority tasks, even if that means "degrading" the lower priority task.
For example, if a high priority task wakes up on a CPU that is running
a lower priority task, then, unless that lower priority task is pinned,
it will boot the lower priority task off the CPU. Even if the lower
priority task is pinned, the higher priority task may still take over
the CPU if it can't find another one.

> >
> > 4. If a little core is returned, and we schedule an RT task that
> >    prefers big cores on it, we mark it overloaded.
> >
> > 5. An RT task on a big core schedules out. Start looking at the RT
> >    overloaded run queues.
> >
> > 6. See that there's an RT task on the little core, and migrate it over.
>
> I think the above should depend on the fitness of the cpu we currently
> run on. I think we shouldn't down migrate, or at least we should
> investigate whether better down migration makes more sense than keeping
> tasks running on the correct CPU where they already are.

Note, this only happens when a big core CPU schedules. And if you do
not have HAVE_RT_PUSH_IPI (which sends IPIs to overloaded CPUs and just
schedules), then that "down migration" happens to an RT task that isn't
even running.

You can add to the logic that you do not take over an RT task that is
pinned and can't move itself. Perhaps the only change needed in
cpu_find() is that, when little CPUs are available, it only picks a big
CPU if that big CPU doesn't have a pinned RT task on it.

Like you said, this is best effort, and I believe this is the best
approach. The policy has always been that the higher the priority of a
task, the more likely it is to push other tasks away. We don't change
that. If the system administrator is overloading the big cores with RT
tasks, then this is what they get.

>
> > Note, this will require a bit more logic, as the overloaded code
> > wasn't designed for migration of running tasks, but that could be
> > added.
>
> I'm wary of overloading the meaning of rt.overloaded. Maybe I can
> convert it to a bitmap so that we can encode the reason.

We can change the name to something like rt.needs_pull or whatever.

--
Steve
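
As a rough illustration of the cpu_find() tweak discussed above, here is
a minimal, self-contained C sketch. The cpu_state struct, its fields and
pick_cpu_for_big_task() are invented names standing in for the real
runqueue state; this is not the kernel's actual cpu_find(). The rule it
encodes: prefer a big CPU whose current RT task is not pinned, fall back
to a little CPU, and only degrade a pinned RT task on a big CPU when no
little CPU exists at all.

    /*
     * Hypothetical sketch of the selection rule; not kernel code.
     * A "big" CPU fits the task's capacity request; has_pinned_rt means
     * its current RT task is pinned there and cannot be pushed away.
     */
    #include <stdbool.h>
    #include <stdio.h>

    struct cpu_state {
    	int id;
    	bool is_big;
    	bool has_pinned_rt;
    };

    static int pick_cpu_for_big_task(const struct cpu_state *cpus, int nr_cpus)
    {
    	int little = -1, big_pinned = -1;

    	for (int i = 0; i < nr_cpus; i++) {
    		if (cpus[i].is_big) {
    			if (!cpus[i].has_pinned_rt)
    				return cpus[i].id;	/* best case */
    			if (big_pinned < 0)
    				big_pinned = cpus[i].id;
    		} else if (little < 0) {
    			little = cpus[i].id;		/* best-effort fallback */
    		}
    	}
    	/* Only take a big CPU carrying a pinned RT task as a last resort. */
    	return little >= 0 ? little : big_pinned;
    }

    int main(void)
    {
    	struct cpu_state cpus[] = {
    		{ .id = 0, .is_big = false },
    		{ .id = 1, .is_big = true,  .has_pinned_rt = true  },
    		{ .id = 2, .is_big = true,  .has_pinned_rt = false },
    	};

    	/* Prints "selected CPU 2": the big CPU without a pinned RT task. */
    	printf("selected CPU %d\n",
    	       pick_cpu_for_big_task(cpus, (int)(sizeof(cpus) / sizeof(cpus[0]))));
    	return 0;
    }

The only thing the pinned-task check changes is the ordering of
preferences; the rest of the best-effort policy stays as described in the
thread.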
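Likewise, turning rt.overloaded into a bitmap that records why a pull is
needed (the rt.needs_pull idea) could look something like the sketch
below. The RT_PULL_* flags and the needs_pull field are made up for
illustration and do not reflect the kernel's real rt_rq layout.

    #include <stdio.h>

    #define RT_PULL_OVERLOADED	(1U << 0)	/* more than one runnable RT task queued */
    #define RT_PULL_MISFIT	(1U << 1)	/* an RT task sits on a CPU that can't fit it */

    struct rt_rq_sketch {
    	unsigned int needs_pull;	/* bitmap of RT_PULL_* reasons */
    };

    int main(void)
    {
    	struct rt_rq_sketch rq = { 0 };

    	/* A big-core RT task was queued on a little core: record why. */
    	rq.needs_pull |= RT_PULL_MISFIT;

    	if (rq.needs_pull & RT_PULL_MISFIT)
    		printf("pull candidate: capacity misfit\n");
    	return 0;
    }

Encoding the reason lets the pull side decide whether it is looking at a
classic overload or at a capacity misfit, which is the distinction the
discussion above turns on.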