From: Steve Sistare <steven.sistare@oracle.com>
To: mingo@redhat.com, peterz@infradead.org
Cc: subhra.mazumdar@oracle.com, dhaval.giani@oracle.com,
        daniel.m.jordan@oracle.com, pavel.tatashin@microsoft.com,
        matt@codeblueprint.co.uk, umgwanakikbuti@gmail.com, riel@redhat.com,
        jbacik@fb.com, juri.lelli@redhat.com, valentin.schneider@arm.com,
        vincent.guittot@linaro.org, quentin.perret@arm.com,
        steven.sistare@oracle.com, linux-kernel@vger.kernel.org
Subject: [PATCH v3 02/10] sched/topology: Provide hooks to allocate data shared per LLC
Date: Fri, 9 Nov 2018 04:50:32 -0800
Message-Id: <1541767840-93588-3-git-send-email-steven.sistare@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1541767840-93588-1-git-send-email-steven.sistare@oracle.com>
References: <1541767840-93588-1-git-send-email-steven.sistare@oracle.com>

Add functions sd_llc_alloc_all() and sd_llc_free_all() to allocate and
free data pointed to by struct sched_domain_shared at the last-level-cache
domain.  sd_llc_alloc_all() is called after the SD hierarchy is known, to
eliminate the unnecessary allocations that would occur if we instead
allocated in __sdt_alloc() and then figured out which shared nodes are
redundant.

Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
---
 kernel/sched/topology.c | 75 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 74 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 8d7f15b..3e72ce0 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -10,6 +10,12 @@ static cpumask_var_t sched_domains_tmpmask;
 static cpumask_var_t sched_domains_tmpmask2;
 
+struct s_data;
+static int sd_llc_alloc(struct sched_domain *sd);
+static void sd_llc_free(struct sched_domain *sd);
+static int sd_llc_alloc_all(const struct cpumask *cpu_map, struct s_data *d);
+static void sd_llc_free_all(const struct cpumask *cpu_map);
+
 #ifdef CONFIG_SCHED_DEBUG
 
 static int __init sched_debug_setup(char *str)
@@ -361,8 +367,10 @@ static void destroy_sched_domain(struct sched_domain *sd)
 	 */
 	free_sched_groups(sd->groups, 1);
 
-	if (sd->shared && atomic_dec_and_test(&sd->shared->ref))
+	if (sd->shared && atomic_dec_and_test(&sd->shared->ref)) {
+		sd_llc_free(sd);
 		kfree(sd->shared);
+	}
 	kfree(sd);
 }
 
@@ -996,6 +1004,7 @@ static void __free_domain_allocs(struct s_data *d, enum s_alloc what,
 		free_percpu(d->sd);
 		/* Fall through */
 	case sa_sd_storage:
+		sd_llc_free_all(cpu_map);
 		__sdt_free(cpu_map);
 		/* Fall through */
 	case sa_none:
@@ -1610,6 +1619,62 @@ static void __sdt_free(const struct cpumask *cpu_map)
 	}
 }
 
+static int sd_llc_alloc(struct sched_domain *sd)
+{
+	/* Allocate sd->shared data here. Empty for now. */
+
+	return 0;
+}
+
+static void sd_llc_free(struct sched_domain *sd)
+{
+	struct sched_domain_shared *sds = sd->shared;
+
+	if (!sds)
+		return;
+
+	/* Free data here. Empty for now. */
+}
+
+static int sd_llc_alloc_all(const struct cpumask *cpu_map, struct s_data *d)
+{
+	struct sched_domain *sd, *hsd;
+	int i;
+
+	for_each_cpu(i, cpu_map) {
+		/* Find highest domain that shares resources */
+		hsd = NULL;
+		for (sd = *per_cpu_ptr(d->sd, i); sd; sd = sd->parent) {
+			if (!(sd->flags & SD_SHARE_PKG_RESOURCES))
+				break;
+			hsd = sd;
+		}
+		if (hsd && sd_llc_alloc(hsd))
+			return 1;
+	}
+
+	return 0;
+}
+
+static void sd_llc_free_all(const struct cpumask *cpu_map)
+{
+	struct sched_domain_topology_level *tl;
+	struct sched_domain *sd;
+	struct sd_data *sdd;
+	int j;
+
+	for_each_sd_topology(tl) {
+		sdd = &tl->data;
+		if (!sdd)
+			continue;
+		for_each_cpu(j, cpu_map) {
+			sd = *per_cpu_ptr(sdd->sd, j);
+			if (sd)
+				sd_llc_free(sd);
+		}
+	}
+}
+
 static struct sched_domain *build_sched_domain(struct sched_domain_topology_level *tl,
 		const struct cpumask *cpu_map, struct sched_domain_attr *attr,
 		struct sched_domain *child, int dflags, int cpu)
@@ -1769,6 +1834,14 @@ static struct sched_domain *build_sched_domain(struct sched_domain_topology_leve
 		}
 	}
 
+	/*
+	 * Allocate shared sd data at last level cache. Must be done after
+	 * domains are built above, but before the data is used in
+	 * cpu_attach_domain and descendants below.
+	 */
+	if (sd_llc_alloc_all(cpu_map, &d))
+		goto error;
+
 	/* Attach the domains */
 	rcu_read_lock();
 	for_each_cpu(i, cpu_map) {
-- 
1.8.3.1
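
[Editor's note, not part of the patch mail above]

The sd_llc_alloc()/sd_llc_free() hooks in this patch are intentionally empty;
a follow-on change is expected to attach real per-LLC data to struct
sched_domain_shared through them.  The sketch below is purely illustrative
and hedged accordingly: the "overload_cpus" field is hypothetical and would
have to be added to struct sched_domain_shared first, but the allocation
pattern (allocate on the LLC's home NUMA node, free symmetrically) shows how
the hooks are meant to be used.

	/*
	 * Illustrative only -- not part of this patch.  Assumes a
	 * hypothetical "cpumask_var_t overload_cpus" field has been added
	 * to struct sched_domain_shared.
	 */
	static int sd_llc_alloc(struct sched_domain *sd)
	{
		struct sched_domain_shared *sds = sd->shared;
		int nid = cpu_to_node(cpumask_first(sched_domain_span(sd)));

		/* Allocate the per-LLC mask on the LLC's home node. */
		if (!zalloc_cpumask_var_node(&sds->overload_cpus,
					     GFP_KERNEL, nid))
			return 1;

		return 0;
	}

	static void sd_llc_free(struct sched_domain *sd)
	{
		struct sched_domain_shared *sds = sd->shared;

		if (!sds)
			return;

		/* Free whatever sd_llc_alloc() attached. */
		free_cpumask_var(sds->overload_cpus);
		sds->overload_cpus = NULL;
	}

Allocating here, rather than in __sdt_alloc(), means only the one
sched_domain_shared instance that ends up serving as the LLC domain for each
cache gets the extra data; that is exactly the redundant-allocation problem
the changelog describes avoiding.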