Date: Fri, 18 Sep 2020 00:39:00 +0800
From: Peng Liu <iwtbavbm@gmail.com>
To: linux-kernel@vger.kernel.org
Cc: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
	vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
	rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, valentin.schneider@arm.com, raistlin@linux.it,
	iwtbavbm@gmail.com
Subject: [PATCH v2] sched/deadline: Fix sched_dl_global_validate()
Message-ID: <20200917163900.GA29339@iZj6chx1xj0e0buvshuecpZ>

When a user changes sched_rt_{runtime, period}_us, the validation path is:

  sched_rt_handler()
    --> sched_dl_global_validate()
	{
		new_bw = global_rt_runtime()/global_rt_period();

		for_each_possible_cpu(cpu) {
			dl_b = dl_bw_of(cpu);
			if (new_bw < dl_b->total_bw)
				ret = -EBUSY;
		}
	}

Under CONFIG_SMP, dl_bw is per root domain, not per CPU, so
dl_b->total_bw is the allocated bandwidth of the whole root domain.
The check above compares a per-CPU limit against a per-root-domain sum;
instead, dl_b->total_bw should be compared against cpus * new_bw, where
'cpus' is the number of CPUs of the root domain.

Also, the annotation below (in kernel/sched/sched.h) describes an
implementation that only existed in SCHED_DEADLINE v2 [1]. The deadline
scheduler kept evolving until it was merged (v9), but the annotation was
never updated, so it is now meaningless and misleading. Correct it.

 * With respect to SMP, the bandwidth is given on a per-CPU basis,
 * meaning that:
 *  - dl_bw (< 100%) is the bandwidth of the system (group) on each CPU;
 *  - dl_total_bw array contains, in the i-eth element, the currently
 *    allocated bandwidth on the i-eth CPU.

[1] https://lkml.org/lkml/2010/2/28/119

Signed-off-by: Peng Liu <iwtbavbm@gmail.com>
---
v2 <-- v1:
  Replace cpumask_weight(cpu_rq(cpu)->rd->span) with dl_bw_cpus(cpu),
  suggested by Juri.
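Not part of the patch, just an illustration for reviewers: a standalone
userspace sketch of the check above, with made-up numbers. BW_SHIFT and
to_ratio() mirror the kernel's fixed-point helpers; the root-domain size,
task bandwidths and new limit are hypothetical.

#include <stdio.h>
#include <stdint.h>

#define BW_SHIFT	20	/* same fixed-point shift the kernel uses */

static uint64_t to_ratio(uint64_t period, uint64_t runtime)
{
	return (runtime << BW_SHIFT) / period;
}

int main(void)
{
	/* Hypothetical root domain: 4 CPUs, three 30% -deadline tasks admitted. */
	int cpus = 4;
	uint64_t total_bw = 3 * to_ratio(100000, 30000);	/* 90% summed */

	/*
	 * New global limit of 50% per CPU, e.g. via
	 *   echo 500000 > /proc/sys/kernel/sched_rt_runtime_us
	 * with sched_rt_period_us left at its default of 1000000.
	 */
	uint64_t new_bw = to_ratio(1000000, 500000);

	/* Current check: per-CPU limit vs. per-root-domain sum. */
	printf("old check rejects: %d\n", new_bw < total_bw);

	/* Fixed check: the root domain as a whole may use cpus * new_bw. */
	printf("new check rejects: %d\n", (uint64_t)cpus * new_bw < total_bw);

	return 0;
}

With these numbers the current comparison wrongly returns -EBUSY
(0.5 < 0.9) even though the 4-CPU root domain only has 90% of 400%
allocated; the fixed comparison (2.0 vs 0.9) admits the new limit.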
 kernel/sched/deadline.c | 45 ++++++++++++++++++++++++++++++++-------------
 kernel/sched/sched.h    | 17 +++++------------
 2 files changed, 37 insertions(+), 25 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 3862a28cd05d..17526ecae272 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2511,33 +2511,45 @@ const struct sched_class dl_sched_class
 	.update_curr		= update_curr_dl,
 };
 
+#ifdef CONFIG_SMP
+static struct cpumask dl_local_possible_mask;
+#endif /* CONFIG_SMP */
+
 int sched_dl_global_validate(void)
 {
 	u64 runtime = global_rt_runtime();
 	u64 period = global_rt_period();
 	u64 new_bw = to_ratio(period, runtime);
 	struct dl_bw *dl_b;
-	int cpu, ret = 0;
+	int cpu, cpus, ret = 0;
 	unsigned long flags;
 
 	/*
 	 * Here we want to check the bandwidth not being set to some
 	 * value smaller than the currently allocated bandwidth in
 	 * any of the root_domains.
-	 *
-	 * FIXME: Cycling on all the CPUs is overdoing, but simpler than
-	 * cycling on root_domains... Discussion on different/better
-	 * solutions is welcome!
 	 */
+#ifdef CONFIG_SMP
+	cpumask_t *possible_mask = &dl_local_possible_mask;
+
+	cpumask_copy(possible_mask, cpu_possible_mask);
+	for_each_cpu(cpu, possible_mask) {
+#else
 	for_each_possible_cpu(cpu) {
+#endif /* CONFIG_SMP */
 		rcu_read_lock_sched();
 		dl_b = dl_bw_of(cpu);
-
+		cpus = dl_bw_cpus(cpu);
+#ifdef CONFIG_SMP
+		/* Do the "andnot" operation iff it's necessary. */
+		if (cpus > 1)
+			cpumask_andnot(possible_mask, possible_mask,
+				       cpu_rq(cpu)->rd->span);
+#endif /* CONFIG_SMP */
 		raw_spin_lock_irqsave(&dl_b->lock, flags);
-		if (new_bw < dl_b->total_bw)
+		if (new_bw * cpus < dl_b->total_bw)
 			ret = -EBUSY;
 		raw_spin_unlock_irqrestore(&dl_b->lock, flags);
-
 		rcu_read_unlock_sched();
 
 		if (ret)
@@ -2566,6 +2578,9 @@ void sched_dl_do_global(void)
 	struct dl_bw *dl_b;
 	int cpu;
 	unsigned long flags;
+#ifdef CONFIG_SMP
+	cpumask_t *possible_mask = NULL;
+#endif /* CONFIG_SMP */
 
 	def_dl_bandwidth.dl_period = global_rt_period();
 	def_dl_bandwidth.dl_runtime = global_rt_runtime();
@@ -2573,17 +2588,21 @@ void sched_dl_do_global(void)
 	if (global_rt_runtime() != RUNTIME_INF)
 		new_bw = to_ratio(global_rt_period(), global_rt_runtime());
 
-	/*
-	 * FIXME: As above...
-	 */
-	for_each_possible_cpu(cpu) {
+#ifdef CONFIG_SMP
+	possible_mask = &dl_local_possible_mask;
+	cpumask_copy(possible_mask, cpu_possible_mask);
+#endif /* CONFIG_SMP */
+	for_each_cpu(cpu, possible_mask) {
 		rcu_read_lock_sched();
 		dl_b = dl_bw_of(cpu);
 
 		raw_spin_lock_irqsave(&dl_b->lock, flags);
 		dl_b->bw = new_bw;
 		raw_spin_unlock_irqrestore(&dl_b->lock, flags);
-
+#ifdef CONFIG_SMP
+		cpumask_andnot(possible_mask, possible_mask,
+			       cpu_rq(cpu)->rd->span);
+#endif /* CONFIG_SMP */
 		rcu_read_unlock_sched();
 		init_dl_rq_bw_ratio(&cpu_rq(cpu)->dl);
 	}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 28709f6b0975..2602544e06ff 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -258,9 +258,9 @@ struct rt_bandwidth {
 void __dl_clear_params(struct task_struct *p);
 
 /*
- * To keep the bandwidth of -deadline tasks and groups under control
+ * To keep the bandwidth of -deadline tasks under control
  * we need some place where:
- *  - store the maximum -deadline bandwidth of the system (the group);
+ *  - store the maximum -deadline bandwidth of each root domain;
  *  - cache the fraction of that bandwidth that is currently allocated.
  *
  * This is all done in the data structure below. It is similar to the
@@ -269,17 +269,10 @@ void __dl_clear_params(struct task_struct *p);
  * do not decrease any runtime while the group "executes", neither we
  * need a timer to replenish it.
  *
- * With respect to SMP, the bandwidth is given on a per-CPU basis,
+ * With respect to SMP, the bandwidth is given on a per root domain basis,
  * meaning that:
- *  - dl_bw (< 100%) is the bandwidth of the system (group) on each CPU;
- *  - dl_total_bw array contains, in the i-eth element, the currently
- *    allocated bandwidth on the i-eth CPU.
- * Moreover, groups consume bandwidth on each CPU, while tasks only
- * consume bandwidth on the CPU they're running on.
- * Finally, dl_total_bw_cpu is used to cache the index of dl_total_bw
- * that will be shown the next time the proc or cgroup controls will
- * be red. It on its turn can be changed by writing on its own
- * control.
+ *  - bw (< 100%) is the bandwidth of the system on each CPU;
+ *  - total_bw is the currently allocated bandwidth on each root domain.
  */
 struct dl_bandwidth {
 	raw_spinlock_t		dl_runtime_lock;
-- 
2.20.1