From: Srikar Dronamraju
To: Michael Ellerman
Cc: Srikar Dronamraju, linuxppc-dev, LKML, Ingo Molnar, Peter Zijlstra, Valentin Schneider, Nick Piggin, Oliver OHalloran, Nathan Lynch, Michael Neuling, Anton Blanchard, Gautham R Shenoy, Vaidyanathan Srinivasan, Jordan Niethe
Subject: [PATCH v2 06/10] powerpc/smp: Generalize 2nd sched domain
Date: Tue, 21 Jul 2020 17:08:10 +0530
Message-Id: <20200721113814.32284-7-srikar@linux.vnet.ibm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200721113814.32284-1-srikar@linux.vnet.ibm.com>
References: <20200721113814.32284-1-srikar@linux.vnet.ibm.com>
Currently the "CACHE" domain happens to be the 2nd sched domain, as per
powerpc_topology. This domain will collapse if the cpumask of the l2-cache
is the same as that of the SMT domain. However, we could generalize this
domain such that it means either a "CACHE" domain or a "BIGCORE" domain.
While setting up the "CACHE" domain, check if shared_cache is already set.

Cc: linuxppc-dev
Cc: LKML
Cc: Michael Ellerman
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Valentin Schneider
Cc: Nick Piggin
Cc: Oliver OHalloran
Cc: Nathan Lynch
Cc: Michael Neuling
Cc: Anton Blanchard
Cc: Gautham R Shenoy
Cc: Vaidyanathan Srinivasan
Cc: Jordan Niethe
Signed-off-by: Srikar Dronamraju
---
Changelog v1 -> v2:
powerpc/smp: Generalize 2nd sched domain
	Moved shared_cache topology fixup to fixup_topology (Gautham)

 arch/powerpc/kernel/smp.c | 49 ++++++++++++++++++++++++++++-----------
 1 file changed, 35 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 57468877499a..933ebdf97432 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -85,6 +85,14 @@ EXPORT_PER_CPU_SYMBOL(cpu_l2_cache_map);
 EXPORT_PER_CPU_SYMBOL(cpu_core_map);
 EXPORT_SYMBOL_GPL(has_big_cores);
 
+enum {
+#ifdef CONFIG_SCHED_SMT
+	smt_idx,
+#endif
+	bigcore_idx,
+	die_idx,
+};
+
 #define MAX_THREAD_LIST_SIZE	8
 #define THREAD_GROUP_SHARE_L1	1
 struct thread_groups {
@@ -851,13 +859,7 @@ static int powerpc_shared_cache_flags(void)
  */
 static const struct cpumask *shared_cache_mask(int cpu)
 {
-	if (shared_caches)
-		return cpu_l2_cache_mask(cpu);
-
-	if (has_big_cores)
-		return cpu_smallcore_mask(cpu);
-
-	return per_cpu(cpu_sibling_map, cpu);
+	return per_cpu(cpu_l2_cache_map, cpu);
 }
 
 #ifdef CONFIG_SCHED_SMT
@@ -867,11 +869,16 @@ static const struct cpumask *smallcore_smt_mask(int cpu)
 }
 #endif
 
+static const struct cpumask *cpu_bigcore_mask(int cpu)
+{
+	return per_cpu(cpu_sibling_map, cpu);
+}
+
 static struct sched_domain_topology_level powerpc_topology[] = {
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
 #endif
-	{ shared_cache_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) },
+	{ cpu_bigcore_mask, SD_INIT_NAME(BIGCORE) },
 	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
 	{ NULL, },
 };
@@ -1313,7 +1320,6 @@ static void add_cpu_to_masks(int cpu)
 void start_secondary(void *unused)
 {
 	unsigned int cpu = smp_processor_id();
-	struct cpumask *(*sibling_mask)(int) = cpu_sibling_mask;
 
 	mmgrab(&init_mm);
 	current->active_mm = &init_mm;
@@ -1339,14 +1345,20 @@ void start_secondary(void *unused)
 	/* Update topology CPU masks */
 	add_cpu_to_masks(cpu);
 
-	if (has_big_cores)
-		sibling_mask = cpu_smallcore_mask;
 	/*
 	 * Check for any shared caches. Note that this must be done on a
 	 * per-core basis because one core in the pair might be disabled.
 	 */
-	if (!cpumask_equal(cpu_l2_cache_mask(cpu), sibling_mask(cpu)))
-		shared_caches = true;
+	if (!shared_caches) {
+		struct cpumask *(*sibling_mask)(int) = cpu_sibling_mask;
+		struct cpumask *mask = cpu_l2_cache_mask(cpu);
+
+		if (has_big_cores)
+			sibling_mask = cpu_smallcore_mask;
+
+		if (cpumask_weight(mask) > cpumask_weight(sibling_mask(cpu)))
+			shared_caches = true;
+	}
 
 	set_numa_node(numa_cpu_lookup_table[cpu]);
 	set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu]));
@@ -1374,10 +1386,19 @@ int setup_profiling_timer(unsigned int multiplier)
 
 static void fixup_topology(void)
 {
+	if (shared_caches) {
+		pr_info("Using shared cache scheduler topology\n");
+		powerpc_topology[bigcore_idx].mask = shared_cache_mask;
+#ifdef CONFIG_SCHED_DEBUG
+		powerpc_topology[bigcore_idx].name = "CACHE";
+#endif
+		powerpc_topology[bigcore_idx].sd_flags = powerpc_shared_cache_flags;
+	}
+
 #ifdef CONFIG_SCHED_SMT
 	if (has_big_cores) {
 		pr_info("Big cores detected but using small core scheduling\n");
-		powerpc_topology[0].mask = smallcore_smt_mask;
+		powerpc_topology[smt_idx].mask = smallcore_smt_mask;
 	}
 #endif
 }
-- 
2.17.1