References: <20200408095012.3819-1-dietmar.eggemann@arm.com> <20200408095012.3819-3-dietmar.eggemann@arm.com> <20200408153032.447e098d@nowhere> <31620965-e1e7-6854-ad46-8192ee4b41af@arm.com>
User-agent: mu4e 0.9.17; emacs 26.3
From: Valentin Schneider
To: Dietmar Eggemann
Cc: luca abeni, Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Steven Rostedt, Daniel Bristot de Oliveira, Wei Wang, Quentin Perret, Alessio Balsini, Pavan Kondeti, Patrick Bellasi, Morten Rasmussen, Qais Yousef, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/4] sched/deadline: Improve admission control for asymmetric CPU capacities
In-reply-to: <31620965-e1e7-6854-ad46-8192ee4b41af@arm.com>
Date: Tue, 14 Apr 2020 15:28:08 +0100
MIME-Version: 1.0
Content-Type: text/plain
On 09/04/20 18:29, Dietmar Eggemann wrote:
>> Well it is indeed the case, but sadly it's not an atomic step - AFAICT with
>> cpusets we do hold some cpuset lock when calling __dl_overflow() and when
>> rebuilding the domains, but not when fiddling with the active mask.
>>
>> I just realized it's even more obvious for dl_cpu_busy(): IIUC it is meant
>> to prevent the removal of a CPU if it would lead to a DL overflow - it
>> works now because the active mask is modified before it gets called, but
>> here it breaks because it's called before the sched_domain rebuild.
>>
>> Perhaps re-computing the root domain capacity sum at every dl_bw_cpus()
>> call would be simpler. It's a bit more work, but then we already have a
>> for_each_cpu_*() loop, and we only rely on the masks being correct.
>
> Maybe we can do a hybrid. We have rd->span and rd->sum_cpu_capacity and
> with the help of an extra per-cpu cpumask we could just
>
> DEFINE_PER_CPU(cpumask_var_t, dl_bw_mask);
>
> dl_bw_cpus(int i) {
>
>         struct cpumask *cpus = this_cpu_cpumask_var_ptr(dl_bw_mask);
>         ...
>         cpumask_and(cpus, rd->span, cpu_active_mask);
>
>         return cpumask_weight(cpus);

+1 on making this use cpumask_weight() :)

> }
>
> and
>
> dl_bw_capacity(int i) {
>
>         struct cpumask *cpus = this_cpu_cpumask_var_ptr(dl_bw_mask);
>         ...
>         cpumask_and(cpus, rd->span, cpu_active_mask);
>         if (cpumask_equal(cpus, rd->span))
>                 return rd->sum_cpu_capacity;
>
>         for_each_cpu(i, cpus)
>                 cap += capacity_orig_of(i);
>
>         return cap;
> }
>
> So only in cases in which rd->span and cpu_active_mask differ we would
> have to sum up again.

I think this might just work. In the "stable" case (i.e. not racing with
hotplug), we can use the value cached in the root_domain. Otherwise we'll
detect the mismatch between the cpumask and the root_domain (i.e. CPU
active but not yet included in root_domain, or CPU !active but still
included in root_domain).
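For readers following along, the cached-vs-recomputed logic above can be
sketched as a self-contained userspace toy model. This is NOT the kernel
code: `struct toy_rd`, `toy_dl_bw_capacity()` and the uint64_t bitmask
standing in for a cpumask are all made up for illustration; the real
implementation would use cpumask_t, per-CPU storage and
capacity_orig_of() as in the quoted sketch.

```c
#include <stdint.h>

/*
 * Toy model of the proposed dl_bw_capacity(): CPUs are bits in a
 * uint64_t, 'span' models rd->span, 'active' models cpu_active_mask,
 * and 'sum_cpu_capacity' is the value cached at domain-build time.
 * All names here are hypothetical.
 */
struct toy_rd {
	uint64_t span;                  /* CPUs in the root domain */
	unsigned long sum_cpu_capacity; /* cached capacity sum */
	unsigned long cap[64];          /* per-CPU capacities */
};

static unsigned long toy_dl_bw_capacity(const struct toy_rd *rd,
					uint64_t active)
{
	uint64_t cpus = rd->span & active;
	unsigned long sum = 0;
	int i;

	/* Stable case: every CPU in the span is active, use the cache. */
	if (cpus == rd->span)
		return rd->sum_cpu_capacity;

	/* Racing with hotplug: re-sum over the still-active CPUs only. */
	for (i = 0; i < 64; i++) {
		if (cpus & (1ULL << i))
			sum += rd->cap[i];
	}
	return sum;
}
```

The point of the model is the branch structure: the summation loop only
runs when the span and the active mask disagree, which is exactly the
hotplug race window being discussed.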