Date: Tue, 14 Apr 2020 12:40:32 +0100
From: Qais Yousef
To: Dietmar Eggemann
Cc: Valentin Schneider, luca abeni, Ingo Molnar, Peter Zijlstra,
	Juri Lelli, Vincent Guittot, Steven Rostedt,
	Daniel Bristot de Oliveira, Wei Wang, Quentin Perret,
	Alessio Balsini, Pavan Kondeti, Patrick Bellasi,
	Morten Rasmussen, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/4] sched/deadline: Improve admission control for asymmetric CPU capacities
Message-ID: <20200414114032.wigdlnegism6qqns@e107158-lin.cambridge.arm.com>
References: <20200408095012.3819-1-dietmar.eggemann@arm.com>
	<20200408095012.3819-3-dietmar.eggemann@arm.com>
	<20200408153032.447e098d@nowhere>
	<31620965-e1e7-6854-ad46-8192ee4b41af@arm.com>
In-Reply-To: <31620965-e1e7-6854-ad46-8192ee4b41af@arm.com>
User-Agent: NeoMutt/20171215

On 04/09/20 19:29, Dietmar Eggemann wrote:

[...]

> Maybe we can do a hybrid. We have rd->span and rd->sum_cpu_capacity and
> with the help of an extra per-cpu cpumask we could just
>
> DEFINE_PER_CPU(cpumask_var_t, dl_bw_mask);
>
> dl_bw_cpus(int i) {
>
>     struct cpumask *cpus = this_cpu_cpumask_var_ptr(dl_bw_mask);
>     ...
>     cpumask_and(cpus, rd->span, cpu_active_mask);
>
>     return cpumask_weight(cpus);
> }
>
> and
>
> dl_bw_capacity(int i) {
>
>     struct cpumask *cpus = this_cpu_cpumask_var_ptr(dl_bw_mask);
>     ...
>     cpumask_and(cpus, rd->span, cpu_active_mask);
>     if (cpumask_equal(cpus, rd->span))
>         return rd->sum_cpu_capacity;
>
>     for_each_cpu(i, cpus)
>         cap += capacity_orig_of(i);
>
>     return cap;
> }
>
> So only in cases in which rd->span and cpu_active_mask differ would we
> have to sum up again.

I haven't followed this discussion closely, so I could be missing
something here.

In sched_cpu_dying() we call set_rq_offline(), which clears the cpu in
rq->rd->online. So the way I read the code:

	rd->online = cpumask_and(rd->span, cpu_active_mask)

But I could have easily missed some detail.

Regardless, it seems to me that DL is either working around something
that isn't right in the definition of rd->span, or using the wrong
variable.

My 2p :-). I have to go back and read the discussion in more detail.

Thanks

--
Qais Yousef