Subject: Re: [PATCH] cgroup/cpuset: remove circular dependency deadlock
From: Prateek Sood <prsood@codeaurora.org>
To: Peter Zijlstra
Cc: tj@kernel.org, lizefan@huawei.com, cgroups@vger.kernel.org,
    mingo@kernel.org, longman@redhat.com, linux-kernel@vger.kernel.org,
    sramana@codeaurora.org, Thomas Gleixner
Date: Thu, 7 Sep 2017 14:35:06 +0530
Message-ID: <48cd3692-59fa-4e80-6aa5-25293f0fe113@codeaurora.org>
In-Reply-To: <20170907072848.2sjjddwincaeplju@hirez.programming.kicks-ass.net>

On 09/07/2017 12:58 PM, Peter Zijlstra wrote:
> On Thu, Sep 07, 2017 at 11:34:12AM +0530, Prateek Sood wrote:
>> Remove a circular dependency deadlock in the scenario where a CPU
>> hotplug operation races with cgroup and cpuset updates triggered from
>> userspace.
>>
>> Example scenario:
>> kworker/0:0 => kthreadd => init:729 => init:1 => kworker/0:0
>>
>> kworker/0:0 - percpu_down_write(&cpu_hotplug_lock) [held]
>>               flush(work) [no high prio workqueue available on CPU]
>>               wait_for_completion()
>>
>> kthreadd    - percpu_down_read(cgroup_threadgroup_rwsem) [waiting]
>>
>> init:729    - percpu_down_write(cgroup_threadgroup_rwsem) [held]
>>               lock(cpuset_mutex) [waiting]
>>
>> init:1      - lock(cpuset_mutex) [held]
>>               percpu_down_read(&cpu_hotplug_lock) [waiting]
>
> That's both unreadable and useless :/ You want to tell us which code
> paths those were, not which random tasks happened to run them.
>
>
>
>> Eliminate this dependency by reordering the locking of cpuset_mutex
>> and cpu_hotplug_lock as follows:
>> 1. Acquire cpu_hotplug_lock (read)
>> 2. Acquire cpuset_mutex
>>
>> Signed-off-by: Prateek Sood <prsood@codeaurora.org>
>> ---
>>  kernel/cgroup/cpuset.c | 70 +++++++++++++++++++++++++++++++++++++++++++-------
>>  1 file changed, 61 insertions(+), 9 deletions(-)
>>
>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>> index 2f4039b..687be57 100644
>> --- a/kernel/cgroup/cpuset.c
>> +++ b/kernel/cgroup/cpuset.c
>> @@ -843,10 +843,41 @@ static void rebuild_sched_domains_locked(void)
>>  out:
>>  	put_online_cpus();
>>  }
>> +
>> +/*
>> + * Rebuild scheduler domains.
>> + * Call with the following locks held, in this order:
>> + * 1. cpu_hotplug_lock (read)
>> + * 2. cpuset_mutex
>
> Do not put that in comments, nobody ever reads comments.
>
>> + */
>> +static void rebuild_sched_domains_unlocked(void)
>
> The common postfix for a function called with the cpu hotplug lock held
> is: _cpuslocked()
>
>> +{
>> +	struct sched_domain_attr *attr;
>> +	cpumask_var_t *doms;
>> +	int ndoms;
>
> 	lockdep_assert_cpus_held();
> 	lockdep_assert_held(&cpuset_mutex);
>
>> +
>> +	/*
>> +	 * We have raced with CPU hotplug. Don't do anything to avoid
>> +	 * passing doms with an offlined cpu to partition_sched_domains().
>> +	 * The hotplug work item will rebuild the sched domains anyway.
>> +	 */
>> +	if (!cpumask_equal(top_cpuset.effective_cpus, cpu_active_mask))
>> +		return;
>> +
>> +	/* Generate domain masks and attrs */
>> +	ndoms = generate_sched_domains(&doms, &attr);
>> +
>> +	/* Have scheduler rebuild the domains */
>> +	partition_sched_domains(ndoms, doms, attr);
>> +}
>
> And you couldn't come up with a way to share _anything_ with the
> existing rebuild_sched_domains_locked() function?
>
> *sigh*.. please try again.
>

Thanks for the suggestion, Peter. I will resend the patch.

-- 
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center,
Inc., is a member of Code Aurora Forum, a Linux Foundation Collaborative
Project