Date: Thu, 24 May 2018 12:19:00 +0200
From: Jan Kara
To: Tejun Heo
Cc: Jens Axboe, linux-kernel@vger.kernel.org, "Paul E. McKenney",
    Jan Kara, Andrew Morton, kernel-team@fb.com
Subject: Re: [PATCH] bdi: Move cgroup bdi_writeback to a dedicated low concurrency workqueue
Message-ID: <20180524101900.m5vqd74rwaqw2pap@quack2.suse.cz>
References: <20180523175632.GO1718769@devbig577.frc2.facebook.com>
In-Reply-To: <20180523175632.GO1718769@devbig577.frc2.facebook.com>

On Wed 23-05-18 10:56:32, Tejun Heo wrote:
> From 0aa2e9b921d6db71150633ff290199554f0842a8 Mon Sep 17 00:00:00 2001
> From: Tejun Heo
> Date: Wed, 23 May 2018 10:29:00 -0700
>
> cgwb_release() punts the actual release to cgwb_release_workfn() on
> system_wq. Depending on the number of cgroups or block devices, there
> can be a lot of cgwb_release_workfn() in flight at the same time.
>
> We're periodically seeing close to 256 kworkers getting stuck with the
> following stack trace and over time the entire system gets stuck.

OK, but that means that you have to have 256 block devices, don't you?
After all, we have a bdi per device, and we call synchronize_rcu_expedited()
only when unregistering a bdi (and the corresponding request queue must be
gone at that point as well, since it otherwise holds a reference). Am I
understanding the situation correctly?

>   [] _synchronize_rcu_expedited.constprop.72+0x2fc/0x330
>   [] synchronize_rcu_expedited+0x24/0x30
>   [] bdi_unregister+0x53/0x290
>   [] release_bdi+0x89/0xc0
>   [] wb_exit+0x85/0xa0
>   [] cgwb_release_workfn+0x54/0xb0
>   [] process_one_work+0x150/0x410
>   [] worker_thread+0x6d/0x520
>   [] kthread+0x12c/0x160
>   [] ret_from_fork+0x29/0x40
>   [] 0xffffffffffffffff
>
> The events leading to the lockup are...
>
> 1. A lot of cgwb_release_workfn() is queued at the same time and all
>    system_wq kworkers are assigned to execute them.
>
> 2. They all end up calling synchronize_rcu_expedited(). One of them
>    wins and tries to perform the expedited synchronization.
>
> 3. However, that involves queueing rcu_exp_work to system_wq and
>    waiting for it. Because #1 is holding all available kworkers on
>    system_wq, rcu_exp_work can't be executed. cgwb_release_workfn()
>    is waiting for synchronize_rcu_expedited() which in turn is waiting
>    for cgwb_release_workfn() to free up some of the kworkers.
>
> We shouldn't be scheduling hundreds of cgwb_release_workfn() at the
> same time. There's nothing to be gained from that. This patch
> updates cgwb release path to use a dedicated percpu workqueue with
> @max_active of 1.

As Rik wrote, some parallelism is good to reduce the number of forced grace
periods, so raising this to 16 is good. I was thinking about whether we
could batch RCU grace periods in some explicit way, but that would be
difficult to do.

But thinking a bit more about this: if we made the bdi RCU-freed, we could
just avoid the synchronize_rcu_expedited() in bdi_remove_from_list()
altogether. The uses of the bdi list are pretty limited and everybody ends
up testing the WB_registered bit before doing anything anyway... What do
you think? Other than that you can add:

Reviewed-by: Jan Kara

to this patch when updated to increase concurrency, as that's a good
short-term solution (for stable kernels) anyway.

								Honza
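
A minimal sketch of the RCU-freed bdi idea above, assuming the 4.17-era
mm/backing-dev.c layout (the rcu field, the kfree_rcu() call and the exact
surrounding code are illustrative assumptions, not a patch from this
thread):

	struct backing_dev_info {
		/* ... existing fields ... */
		struct rcu_head rcu;	/* hypothetical: deferred freeing */
	};

	static void bdi_remove_from_list(struct backing_dev_info *bdi)
	{
		spin_lock_bh(&bdi_lock);
		list_del_rcu(&bdi->bdi_list);
		spin_unlock_bh(&bdi_lock);
		/*
		 * No synchronize_rcu_expedited() here: list walkers run
		 * under rcu_read_lock() and test WB_registered, and the
		 * structure itself now stays valid until a grace period
		 * has elapsed.
		 */
	}

	static void release_bdi(struct kref *ref)
	{
		struct backing_dev_info *bdi =
			container_of(ref, struct backing_dev_info, refcnt);

		if (test_bit(WB_registered, &bdi->wb.state))
			bdi_unregister(bdi);
		WARN_ON_ONCE(bdi->dev);
		wb_exit(&bdi->wb);
		cgwb_bdi_exit(bdi);
		kfree_rcu(bdi, rcu);	/* was: kfree(bdi) */
	}

The trade-off is that each bdi keeps its memory around for one extra grace
period, but no unregister path has to block on RCU at all, so the kworker
pile-up shown in the quoted trace cannot form.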
> While this resolves the problem at hand, it might be a good idea to
> isolate rcu_exp_work to its own workqueue too as it can be used from
> various paths and is prone to this sort of indirect A-A deadlocks.
>
> Signed-off-by: Tejun Heo
> Cc: "Paul E. McKenney"
> Cc: stable@vger.kernel.org
> ---
>  mm/backing-dev.c | 18 +++++++++++++++++-
>  1 file changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/mm/backing-dev.c b/mm/backing-dev.c
> index 7441bd9..8fe3ebd 100644
> --- a/mm/backing-dev.c
> +++ b/mm/backing-dev.c
> @@ -412,6 +412,7 @@ static void wb_exit(struct bdi_writeback *wb)
>   * protected.
>   */
>  static DEFINE_SPINLOCK(cgwb_lock);
> +static struct workqueue_struct *cgwb_release_wq;
>  
>  /**
>   * wb_congested_get_create - get or create a wb_congested
> @@ -522,7 +523,7 @@ static void cgwb_release(struct percpu_ref *refcnt)
>  {
>  	struct bdi_writeback *wb = container_of(refcnt, struct bdi_writeback,
>  						refcnt);
> -	schedule_work(&wb->release_work);
> +	queue_work(cgwb_release_wq, &wb->release_work);
>  }
>  
>  static void cgwb_kill(struct bdi_writeback *wb)
> @@ -784,6 +785,21 @@ static void cgwb_bdi_register(struct backing_dev_info *bdi)
>  	spin_unlock_irq(&cgwb_lock);
>  }
>  
> +static int __init cgwb_init(void)
> +{
> +	/*
> +	 * There can be many concurrent release work items overwhelming
> +	 * system_wq. Put them in a separate wq and limit concurrency.
> +	 * There's no point in executing many of these in parallel.
> +	 */
> +	cgwb_release_wq = alloc_workqueue("cgwb_release", 0, 1);
> +	if (!cgwb_release_wq)
> +		return -ENOMEM;
> +
> +	return 0;
> +}
> +subsys_initcall(cgwb_init);
> +
>  #else	/* CONFIG_CGROUP_WRITEBACK */
>  
>  static int cgwb_bdi_init(struct backing_dev_info *bdi)
> --
> 2.9.5
>

--
Jan Kara
SUSE Labs, CR
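
For reference, the concurrency increase requested above before the
Reviewed-by applies would only touch the alloc_workqueue() call in the
quoted cgwb_init(); a hedged variant (the value 16 comes from the
discussion with Rik, not from a merged commit):

	/*
	 * Allow a few release work items to share one expedited grace
	 * period while still staying far below the system_wq pool size.
	 */
	cgwb_release_wq = alloc_workqueue("cgwb_release", 0, 16);
	if (!cgwb_release_wq)
		return -ENOMEM;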