Date: Fri, 22 Jun 2018 15:36:24 +0100
From: Morten Rasmussen
To: Quentin Perret
Cc: peterz@infradead.org, mingo@redhat.com, valentin.schneider@arm.com,
 dietmar.eggemann@arm.com, vincent.guittot@linaro.org,
 gaku.inami.xh@renesas.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCHv3 1/9] sched: Add static_key for asymmetric cpu capacity optimizations
Message-ID: <20180622143624.GD8461@e105550-lin.cambridge.arm.com>
References: <1529485549-5191-1-git-send-email-morten.rasmussen@arm.com>
 <1529485549-5191-2-git-send-email-morten.rasmussen@arm.com>
 <20180622082222.GD23168@e108498-lin.cambridge.arm.com>
In-Reply-To: <20180622082222.GD23168@e108498-lin.cambridge.arm.com>

On Fri, Jun 22, 2018 at 09:22:22AM +0100, Quentin Perret wrote:
> Hi Morten,
>
> On Wednesday 20 Jun 2018 at 10:05:41 (+0100), Morten Rasmussen wrote:
> > +static void update_asym_cpucapacity(int cpu)
> > +{
> > +	int enable = false;
> > +
> > +	rcu_read_lock();
> > +	if (lowest_flag_domain(cpu, SD_ASYM_CPUCAPACITY))
> > +		enable = true;
> > +	rcu_read_unlock();
> > +
> > +	if (enable) {
> > +		/* This expects to be hotplug-safe */
> > +		static_branch_enable_cpuslocked(&sched_asym_cpucapacity);
> > +	}
> > +}
>
> What would happen if you hotplugged an entire cluster? You'd lose the
> SD_ASYM_CPUCAPACITY flag but keep the static key, is that right? Should
> we care?

I don't think we should care. The static key enables additional checks
and tweaks, but AFAICT none of them requires SD_ASYM_CPUCAPACITY to be
set, and they should all have no effect when it isn't. I added the
static key just to avoid the overhead on systems where those checks
would have no effect. At least that is the intention; I could of course
have broken things by mistake. The pattern I have in mind for the users
of the key is sketched at the end of this mail.

> And also, Peter mentioned an issue with the EAS patches with multiple
> root_domains. Does that apply here as well? What if you had a
> configuration with big and little CPUs in different root_domains, for
> example?
>
> Should we disable the static key in the above cases?

Exclusive cpusets are more tricky, as the flags will be the same for
sched_domains at the same level. So we can't set the flag correctly if
someone configures the exclusive cpusets such that one root_domain
spans big and a subset of little, and another spans the remaining
little cpus, with all topology levels preserved. Imagine a
three-cluster system where cpus 0-3 and 4-7 form little clusters and
8-11 is a big cluster, with exclusive cpusets configured as 0-5 and
6-11: the first set contains only little cpus and should _not_ have
SD_ASYM_CPUCAPACITY set, while the second mixes little and big cpus
and should.

I'm tempted to say we shouldn't care in this situation. Setting the
flags correctly in the three-cluster example would require knowledge
about the cpuset configuration, which we don't have in the arch code,
so SD_ASYM_CPUCAPACITY flag detection would have to be done by the
sched_domain build code. However, not setting the flag according to
the actual members of the exclusive cpuset means that homogeneous
sched_domains might have SD_ASYM_CPUCAPACITY set, enabling potentially
wrong scheduling decisions.

We can actually end up with this problem just by hotplugging too. If
you unplug the entire big cluster in the three-cluster example above,
you preserve the DIE level, which would have SD_ASYM_CPUCAPACITY set
even though we only have little cpus left.

As I see it, we have two choices: 1) set the flags correctly for
exclusive cpusets, which means some additional "fun" in the
sched_domain hierarchy set-up, or 2) ignore it and make sure that
setting SD_ASYM_CPUCAPACITY on homogeneous sched_domains works fairly
okay. The latter seems easier; a rough sketch of what the detection
for 1) could look like follows below.
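If we went for 1), the detection would have to look at the cpus
actually spanned rather than at the topology levels. Roughly something
like the helper below is what I imagine (a sketch only; the name is
made up, and I'm glossing over where in the sched_domain build code it
would be called from):

	/*
	 * Return true if the cpus in @span do not all have the same
	 * capacity. The NULL sd argument matches the current
	 * arch_scale_cpu_capacity() signature.
	 */
	static bool cpumask_asym_capacity(const struct cpumask *span)
	{
		unsigned long cap;
		int cpu;

		cap = arch_scale_cpu_capacity(NULL, cpumask_first(span));

		for_each_cpu(cpu, span) {
			if (arch_scale_cpu_capacity(NULL, cpu) != cap)
				return true;
		}

		return false;
	}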
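For completeness, this is the kind of check site the static key is
meant to make cheap on symmetric systems (again just a sketch, not the
actual patch code):

	DEFINE_STATIC_KEY_FALSE(sched_asym_cpucapacity);

	static void asym_capacity_tweak(void)
	{
		/*
		 * The key defaults to false, so on symmetric systems
		 * this is a patched-out jump and the function costs
		 * (almost) nothing.
		 */
		if (!static_branch_unlikely(&sched_asym_cpucapacity))
			return;

		/* ... checks/tweaks that only matter when capacities differ ... */
	}

Morten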