Date: Wed, 13 Apr 2016 14:57:54 -0400
From: Tejun Heo
To: Petr Mladek
Cc: cgroups@vger.kernel.org, Michal Hocko, Cyril Hrubis,
    linux-kernel@vger.kernel.org, Johannes Weiner
Subject: Re: [BUG] cgroup/workques/fork: deadlock when moving cgroups
Message-ID: <20160413185754.GI3676@htj.duckdns.org>
References: <20160413094216.GC5774@pathway.suse.cz> <20160413183309.GG3676@htj.duckdns.org>
In-Reply-To: <20160413183309.GG3676@htj.duckdns.org>

On Wed, Apr 13, 2016 at 02:33:09PM -0400, Tejun Heo wrote:
> An easy solution would be to make lru_add_drain_all() use a
> WQ_MEM_RECLAIM workqueue.  A better way would be making charge moving
> asynchronous, similar to cpuset node migration, but I don't know whether
> that's realistic.  Will prep a patch to add a rescuer to
> lru_add_drain_all().

So, something like the following.  Can you please see whether the
deadlock goes away with the patch?

diff --git a/mm/swap.c b/mm/swap.c
index a0bc206..7022872 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -664,8 +664,16 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy)
 	lru_add_drain();
 }
 
+static struct workqueue_struct *lru_add_drain_wq;
 static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
 
+static int __init lru_add_drain_wq_init(void)
+{
+	lru_add_drain_wq = alloc_workqueue("lru_add_drain", WQ_MEM_RECLAIM, 0);
+	return lru_add_drain_wq ? 0 : -ENOMEM;
+}
+core_initcall(lru_add_drain_wq_init);
+
 void lru_add_drain_all(void)
 {
 	static DEFINE_MUTEX(lock);
@@ -685,13 +693,12 @@ void lru_add_drain_all(void)
 		    pagevec_count(&per_cpu(lru_deactivate_pvecs, cpu)) ||
 		    need_activate_page_drain(cpu)) {
 			INIT_WORK(work, lru_add_drain_per_cpu);
-			schedule_work_on(cpu, work);
+			queue_work_on(cpu, lru_add_drain_wq, work);
 			cpumask_set_cpu(cpu, &has_work);
 		}
 	}
 
-	for_each_cpu(cpu, &has_work)
-		flush_work(&per_cpu(lru_add_drain_work, cpu));
+	flush_workqueue(lru_add_drain_wq);
 
 	put_online_cpus();
 	mutex_unlock(&lock);