From: Mathieu Poirier <mathieu.poirier@linaro.org>
To: mingo@redhat.com, peterz@infradead.org
Cc: tj@kernel.org, vbabka@suse.cz, lizefan@huawei.com,
	akpm@linux-foundation.org, weiyongjun1@huawei.com, juri.lelli@arm.com,
	rostedt@goodmis.org, claudio@evidence.eu.com, luca.abeni@santannapisa.it,
	bristot@redhat.com, linux-kernel@vger.kernel.org,
	mathieu.poirier@linaro.org
Subject: [PATCH 1/7] sched/topology: Adding function partition_sched_domains_locked()
Date: Wed, 16 Aug 2017 15:20:37 -0600
Message-Id: <1502918443-30169-2-git-send-email-mathieu.poirier@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1502918443-30169-1-git-send-email-mathieu.poirier@linaro.org>
References: <1502918443-30169-1-git-send-email-mathieu.poirier@linaro.org>
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

Introduce partition_sched_domains_locked() by taking the mutex locking
code out of the original function. That way the work done by
partition_sched_domains_locked() can be reused without dropping the
mutex lock.

This patch doesn't change the functionality provided by the original
code.

Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
---
 include/linux/sched/topology.h |  9 +++++++++
 kernel/sched/topology.c        | 17 +++++++++++++----
 2 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 7d065abc7a47..59b568079e05 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -161,6 +161,9 @@ static inline struct cpumask *sched_domain_span(struct sched_domain *sd)
 	return to_cpumask(sd->span);
 }
 
+extern void partition_sched_domains_locked(int ndoms_new,
+					    cpumask_var_t doms_new[],
+					    struct sched_domain_attr *dattr_new);
 extern void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 				    struct sched_domain_attr *dattr_new);
 
@@ -206,6 +209,12 @@ extern void set_sched_topology(struct sched_domain_topology_level *tl);
 struct sched_domain_attr;
 
 static inline void
+partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
+			       struct sched_domain_attr *dattr_new)
+{
+}
+
+static inline void
 partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 			struct sched_domain_attr *dattr_new)
 {
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 9b4f2701ba3c..58133f2ce889 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1840,15 +1840,15 @@ static int dattrs_equal(struct sched_domain_attr *cur, int idx_cur,
  * ndoms_new == 0 is a special case for destroying existing domains,
  * and it will not create the default domain.
  *
- * Call with hotplug lock held
+ * Call with hotplug lock and sched_domains_mutex held
  */
-void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
-			     struct sched_domain_attr *dattr_new)
+void partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
+				    struct sched_domain_attr *dattr_new)
 {
 	int i, j, n;
 	int new_topology;
 
-	mutex_lock(&sched_domains_mutex);
+	lockdep_assert_held(&sched_domains_mutex);
 
 	/* Always unregister in case we don't destroy any domains: */
 	unregister_sched_domain_sysctl();
@@ -1902,7 +1902,16 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 	ndoms_cur = ndoms_new;
 
 	register_sched_domain_sysctl();
+}
 
+/*
+ * Call with hotplug lock held
+ */
+void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+			     struct sched_domain_attr *dattr_new)
+{
+	mutex_lock(&sched_domains_mutex);
+	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
 	mutex_unlock(&sched_domains_mutex);
 }
 
-- 
2.7.4
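
[Editorial note] For readers outside the kernel tree, here is a minimal
user-space sketch of the same locked/unlocked split, with a pthread mutex
standing in for sched_domains_mutex. The names domains_mutex,
rebuild_domains() and rebuild_domains_locked() are invented for this
illustration and are not part of the patch or of any kernel API.

/*
 * Sketch only: the _locked worker assumes the caller already holds the
 * mutex (the kernel version documents this with lockdep_assert_held()),
 * while the original entry point keeps taking the mutex itself.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t domains_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Worker: caller must already hold domains_mutex. */
static void rebuild_domains_locked(int ndoms)
{
        printf("rebuilding %d domain(s) with the mutex held\n", ndoms);
}

/* Original entry point: takes the mutex, then reuses the locked worker. */
static void rebuild_domains(int ndoms)
{
        pthread_mutex_lock(&domains_mutex);
        rebuild_domains_locked(ndoms);
        pthread_mutex_unlock(&domains_mutex);
}

int main(void)
{
        /* Existing callers are unchanged: they use the locking wrapper. */
        rebuild_domains(2);

        /*
         * A caller that already holds the mutex (for example, to keep
         * several updates atomic) can call the locked variant directly,
         * without dropping the lock in between.
         */
        pthread_mutex_lock(&domains_mutex);
        rebuild_domains_locked(1);
        rebuild_domains_locked(3);
        pthread_mutex_unlock(&domains_mutex);

        return 0;
}

The point of the split is the same as in the patch: external callers keep
the original entry point and its locking behaviour, while code that already
holds the mutex can reuse the worker without releasing the lock in between.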