Subject: Re: [PATCH v5 2/3] sched/topology: Rework CPU capacity asymmetry detection
From: Dietmar Eggemann
To: Beata Michalska, linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, mingo@redhat.com, juri.lelli@redhat.com,
    vincent.guittot@linaro.org, valentin.schneider@arm.com, corbet@lwn.net,
    rdunlap@infradead.org, linux-doc@vger.kernel.org
Date: Tue, 25 May 2021 10:25:36 +0200
References: <20210524101617.8965-1-beata.michalska@arm.com>
 <20210524101617.8965-3-beata.michalska@arm.com>
In-Reply-To: <20210524101617.8965-3-beata.michalska@arm.com>

On 24/05/2021 12:16, Beata Michalska wrote:

[...]

> Rework the way the capacity asymmetry levels are being detected,
> allowing to point to the lowest topology level (for a given CPU), where
> full set of available CPU capacities is visible to all CPUs within given
> domain. As a result, the per-cpu sd_asym_cpucapacity might differ across
> the domains. This will have an impact on EAS wake-up placement in a way
> that it might see different rage of CPUs to be considered, depending on

s/rage/range ;-)

[...]

> @@ -1266,6 +1266,112 @@ static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
>  	update_group_capacity(sd, cpu);
>  }
>
> +/**
> + * Asymmetric CPU capacity bits
> + */
> +struct asym_cap_data {
> +	struct list_head link;
> +	unsigned long capacity;
> +	struct cpumask *cpu_mask;

Not sure if this has been discussed already, but shouldn't the
flexible-array-member approach known from struct sched_group, struct
sched_domain or struct em_perf_domain be used here?

IIRC the last time this was discussed was in this thread:

https://lkml.kernel.org/r/20200910054203.525420-2-aubrey.li@intel.com

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 0de6eef91bc8..03e492e91bd7 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1271,8 +1271,8 @@ static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
  */
 struct asym_cap_data {
 	struct list_head link;
-	unsigned long capacity;
-	struct cpumask *cpu_mask;
+	unsigned long capacity;
+	unsigned long cpumask[];
 };
 
 /*
@@ -1299,14 +1299,14 @@ asym_cpu_capacity_classify(struct sched_domain *sd,
 		goto leave;
 
 	list_for_each_entry(entry, &asym_cap_list, link) {
-		if (cpumask_intersects(sched_domain_span(sd), entry->cpu_mask)) {
+		if (cpumask_intersects(sched_domain_span(sd), to_cpumask(entry->cpumask))) {
 			++asym_cap_count;
 		} else {
 			/*
 			 * CPUs with given capacity might be offline
 			 * so make sure this is not the case
 			 */
-			if (cpumask_intersects(entry->cpu_mask, cpu_map)) {
+			if (cpumask_intersects(to_cpumask(entry->cpumask), cpu_map)) {
 				sd_asym_flags &= ~SD_ASYM_CPUCAPACITY_FULL;
 				if (asym_cap_count > 1)
 					break;
@@ -1332,7 +1332,6 @@ asym_cpu_capacity_get_data(unsigned long capacity)
 	if (WARN_ONCE(!entry, "Failed to allocate memory for asymmetry data\n"))
 		goto done;
 	entry->capacity = capacity;
-	entry->cpu_mask = (struct cpumask *)((char *)entry + sizeof(*entry));
 	list_add(&entry->link, &asym_cap_list);
 done:
 	return entry;
@@ -1349,7 +1348,7 @@ static void asym_cpu_capacity_scan(void)
 	int cpu;
 
 	list_for_each_entry(entry, &asym_cap_list, link)
-		cpumask_clear(entry->cpu_mask);
+		cpumask_clear(to_cpumask(entry->cpumask));
 
 	entry = list_first_entry_or_null(&asym_cap_list,
 					 struct asym_cap_data, link);
@@ -1361,11 +1360,11 @@ static void asym_cpu_capacity_scan(void)
 		if (!entry || capacity != entry->capacity)
 			entry = asym_cpu_capacity_get_data(capacity);
 		if (entry)
-			__cpumask_set_cpu(cpu, entry->cpu_mask);
+			__cpumask_set_cpu(cpu, to_cpumask(entry->cpumask));
 	}
 
 	list_for_each_entry_safe(entry, next, &asym_cap_list, link) {
-		if (cpumask_empty(entry->cpu_mask)) {
+		if (cpumask_empty(to_cpumask(entry->cpumask))) {
 			list_del(&entry->link);
 			kfree(entry);
 		}

[...]
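
For completeness, a minimal sketch of how the flexible-array-member variant
hangs together (this is not part of the patch; the helper name
alloc_asym_cap_entry() is made up for illustration, and it assumes the usual
kernel/sched/topology.c context so the slab and cpumask helpers are in scope):

/*
 * Sketch only: the per-capacity cpumask lives inline at the end of the
 * struct as a flexible array member, like in struct sched_group.
 */
struct asym_cap_data {
	struct list_head link;
	unsigned long capacity;
	unsigned long cpumask[];
};

/* Hypothetical helper, for illustration only. */
static struct asym_cap_data *alloc_asym_cap_entry(unsigned long capacity)
{
	struct asym_cap_data *entry;

	/* One allocation covers the struct plus the trailing cpumask storage. */
	entry = kzalloc(sizeof(*entry) + cpumask_size(), GFP_KERNEL);
	if (!entry)
		return NULL;

	entry->capacity = capacity;
	/* No manual pointer fixup needed: to_cpumask() wraps the embedded array. */
	cpumask_clear(to_cpumask(entry->cpumask));

	return entry;
}

Compared to the explicit struct cpumask *cpu_mask member, this keeps entry and
mask in a single allocation and drops the pointer that has to be fixed up after
the allocation, which is presumably why struct sched_group, struct sched_domain
and struct em_perf_domain do it this way.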