From: Vincent Guittot
Date: Mon, 7 Oct 2019 14:19:57 +0200
Subject: Re: [RFC v5 4/6] sched/fair: Tune task wake-up logic to pack small background tasks on fewer cores
To: Parth Shah <parth@linux.ibm.com>
Cc: linux-kernel, "open list:THERMAL", Peter Zijlstra, Ingo Molnar,
	Dietmar Eggemann, Patrick Bellasi, Valentin Schneider, Pavel Machek,
	Doug Smythies, Quentin Perret, "Rafael J. Wysocki", Tim Chen,
	Daniel Lezcano
In-Reply-To: <20191007083051.4820-5-parth@linux.ibm.com>
References: <20191007083051.4820-1-parth@linux.ibm.com> <20191007083051.4820-5-parth@linux.ibm.com>

On Mon, 7 Oct 2019 at 10:31, Parth Shah <parth@linux.ibm.com> wrote:
>
> The algorithm finds the first non-idle core in the system and tries to
> place a task on an idle CPU of the chosen core.
> To maintain cache hotness, the search for a non-idle core starts from
> prev_cpu, which also reduces task ping-pong behaviour inside the core.
>
> Define a new method, select_non_idle_core(), which keeps track of the
> idle and non-idle CPUs in the core and, based on heuristics, determines
> if the core is sufficiently busy to place the incoming background task.
> The heuristic further classifies a non-idle CPU as either busy (>12.5%
> util) or overutilized (>80% util):
> - A core containing more idle CPUs and no busy CPUs is not selected
>   for packing
> - A core containing more than one overutilized CPU is exempted from
>   task packing
> - Pack if there is at least one busy CPU and the overutilized CPU
>   count is < 2
>
> A 12.5% utilization threshold for a busy CPU is a sufficient indication
> that the CPU is doing enough work and is unlikely to become idle in the
> near future.
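
To make the three conditions concrete, here is my reading of the
heuristic on a hypothetical SMT-4 core (the utilization numbers are
made up for illustration):

  CPU0 50% (busy), CPU1-CPU3 idle:
    3 idle > 1 non-idle and busy_cpu_count == 1, so pack and return an
    idle sibling.
  CPU0 90%, CPU1 85% (both overutilized), CPU2 5%, CPU3 idle:
    1 idle < 3 non-idle and overutil_cpu_count == 2, so skip this core.
  CPU0-CPU3 all idle:
    no busy CPU, so skip this core and keep searching.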
>
> Signed-off-by: Parth Shah <parth@linux.ibm.com>
> ---
>  kernel/sched/core.c |  3 ++
>  kernel/sched/fair.c | 95 ++++++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 97 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 6e1ae8046fe0..7e3aff59540a 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6402,6 +6402,7 @@ static struct kmem_cache *task_group_cache __read_mostly;
>
>  DECLARE_PER_CPU(cpumask_var_t, load_balance_mask);
>  DECLARE_PER_CPU(cpumask_var_t, select_idle_mask);
> +DECLARE_PER_CPU(cpumask_var_t, turbo_sched_mask);
>
>  void __init sched_init(void)
>  {
> @@ -6442,6 +6443,8 @@ void __init sched_init(void)
>  			cpumask_size(), GFP_KERNEL, cpu_to_node(i));
>  		per_cpu(select_idle_mask, i) = (cpumask_var_t)kzalloc_node(
>  			cpumask_size(), GFP_KERNEL, cpu_to_node(i));
> +		per_cpu(turbo_sched_mask, i) = (cpumask_var_t)kzalloc_node(
> +			cpumask_size(), GFP_KERNEL, cpu_to_node(i));
>  	}
>  #endif /* CONFIG_CPUMASK_OFFSTACK */
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index b798fe7ff7cd..d4a1b6474338 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5353,6 +5353,8 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>  /* Working cpumask for: load_balance, load_balance_newidle. */
>  DEFINE_PER_CPU(cpumask_var_t, load_balance_mask);
>  DEFINE_PER_CPU(cpumask_var_t, select_idle_mask);
> +/* A cpumask to find active cores in the system. */
> +DEFINE_PER_CPU(cpumask_var_t, turbo_sched_mask);
>
>  #ifdef CONFIG_NO_HZ_COMMON
>
> @@ -5964,6 +5966,76 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
>  	return cpu;
>  }
>
> +#ifdef CONFIG_SCHED_SMT
> +static inline bool is_background_task(struct task_struct *p)
> +{
> +	if (p->flags & PF_CAN_BE_PACKED)
> +		return true;
> +
> +	return false;
> +}
> +
> +#define busyness_threshold	(100 >> 3)
> +#define is_cpu_busy(util)	((util) > busyness_threshold)
> +
> +/*
> + * Try to find a non-idle core in the system based on a few heuristics:
> + * - Keep track of overutilized (>80% util) and busy (>12.5% util) CPUs
> + * - If no CPU is busy then do not select the core for task packing
> + * - If at least one CPU is busy then do task packing, unless two or
> + *   more CPUs are overutilized
> + * - Always select an idle CPU for task packing
> + */
> +static int select_non_idle_core(struct task_struct *p, int prev_cpu, int target)
> +{
> +	struct cpumask *cpus = this_cpu_cpumask_var_ptr(turbo_sched_mask);
> +	int iter_cpu, sibling;
> +
> +	cpumask_and(cpus, cpu_online_mask, p->cpus_ptr);
> +
> +	for_each_cpu_wrap(iter_cpu, cpus, prev_cpu) {
> +		int idle_cpu_count = 0, non_idle_cpu_count = 0;
> +		int overutil_cpu_count = 0;
> +		int busy_cpu_count = 0;
> +		int best_cpu = iter_cpu;
> +
> +		for_each_cpu(sibling, cpu_smt_mask(iter_cpu)) {
> +			__cpumask_clear_cpu(sibling, cpus);
> +			if (idle_cpu(sibling)) {
> +				idle_cpu_count++;
> +				best_cpu = sibling;
> +			} else {
> +				non_idle_cpu_count++;
> +				if (cpu_overutilized(sibling))
> +					overutil_cpu_count++;
> +				if (is_cpu_busy(cpu_util(sibling)))
> +					busy_cpu_count++;
> +			}
> +		}
> +
> +		/*
> +		 * Pack tasks to this core if
> +		 * 1. Idle CPU count is higher and at least one is busy
> +		 * 2. If idle_cpu_count < non_idle_cpu_count then ideally do
> +		 *    packing, but if there are more CPUs overutilized then
> +		 *    don't overload it.
> +		 */

Could you give details about the rationale behind these conditions?

> +		if (idle_cpu_count > non_idle_cpu_count) {
> +			if (busy_cpu_count)
> +				return best_cpu;
> +		} else {
> +			/*
> +			 * Pack tasks if at most 1 CPU is overutilized
> +			 */
> +			if (overutil_cpu_count < 2)
> +				return best_cpu;
> +		}
> +	}
> +
> +	return select_idle_sibling(p, prev_cpu, target);
> +}
> +#endif /* CONFIG_SCHED_SMT */
> +
>  /*
>   * Try and locate an idle core/thread in the LLC cache domain.
>   */
> @@ -6418,6 +6490,23 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
>  	return -1;
>  }
>
> +#ifdef CONFIG_SCHED_SMT
> +/*
> + * Try task packing for all classified background tasks
> + */
> +static inline int turbosched_select_non_idle_core(struct task_struct *p,
> +						  int prev_cpu, int target)
> +{
> +	return select_non_idle_core(p, prev_cpu, target);
> +}
> +#else
> +static inline int turbosched_select_non_idle_core(struct task_struct *p,
> +						  int prev_cpu, int target)
> +{
> +	return select_idle_sibling(p, prev_cpu, target);

It would be better to make turbosched_select_non_idle_core() empty here
and to make sure that __turbo_sched_enabled can never be set when
CONFIG_SCHED_SMT is disabled.
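
Something like the below (completely untested sketch;
turbo_sched_enable() is a made-up name for wherever the earlier patches
in this series flip the static key):

	#else /* !CONFIG_SCHED_SMT */
	static inline int turbosched_select_non_idle_core(struct task_struct *p,
							  int prev_cpu, int target)
	{
		/* Never reached: packing cannot be enabled without SMT. */
		return target;
	}
	#endif

	/* And on the enable side, reject task packing on !SMT kernels: */
	int turbo_sched_enable(void)
	{
		if (!IS_ENABLED(CONFIG_SCHED_SMT))
			return -EINVAL;
		static_branch_enable(&__turbo_sched_enabled);
		return 0;
	}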
> +}
> +#endif
> +
>  /*
>   * select_task_rq_fair: Select target runqueue for the waking task in domains
>   * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE,
> @@ -6483,7 +6572,11 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_flags)
>  	} else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
>  		/* Fast path */
>
> -		new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
> +		if (is_turbosched_enabled() && unlikely(is_background_task(p)))
> +			new_cpu = turbosched_select_non_idle_core(p, prev_cpu,
> +								  new_cpu);

Could you add turbosched_select_non_idle_core() similarly to
find_energy_efficient_cpu()? Add it at the beginning of
select_task_rq_fair(), return immediately with the CPU if you have
found one, or let the normal path select a CPU if
turbosched_select_non_idle_core() has not been able to find a suitable
CPU for packing.

> +		else
> +			new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
>
>  		if (want_affine)
>  			current->recent_used_cpu = cpu;
> --
> 2.17.1
>
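I mean something along these lines, mirroring the existing
find_energy_efficient_cpu() call site (untested, just to illustrate the
flow; select_non_idle_core() would then need to return -1 when no
suitable core is found, instead of falling back to
select_idle_sibling() itself):

	/* Early in select_task_rq_fair(), next to the EAS path: */
	if ((sd_flag & SD_BALANCE_WAKE) && is_turbosched_enabled() &&
	    unlikely(is_background_task(p))) {
		new_cpu = turbosched_select_non_idle_core(p, prev_cpu, prev_cpu);
		/* Found a sufficiently busy core to pack the task onto. */
		if (new_cpu >= 0)
			return new_cpu;
		/* Otherwise let the normal path below pick a CPU. */
		new_cpu = prev_cpu;
	}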