Subject: Re: [PATCH v3] bcache: fix deadlock in bcache_allocator
To: Andrea Righi, Kent Overstreet
Cc: linux-bcache@vger.kernel.org, linux-kernel@vger.kernel.org
References: <20190807103806.GA15450@xps-13>
From: Coly Li
Organization: SUSE Labs
Message-ID: <1360a7e6-9135-6f3e-fc30-0834779bcf69@suse.de>
Date: Wed, 7 Aug 2019 21:53:46 +0800
In-Reply-To: <20190807103806.GA15450@xps-13>

On 2019/8/7 6:38 PM, Andrea Righi wrote:
> bcache_allocator can call the following:
>
> bch_allocator_thread()
>  -> bch_prio_write()
>     -> bch_bucket_alloc()
>        -> wait on &ca->set->bucket_wait
>
> But the wake-up event on bucket_wait is supposed to come from
> bch_allocator_thread() itself => deadlock:
>
> [ 1158.490744] INFO: task bcache_allocato:15861 blocked for more than 10 seconds.
> [ 1158.495929]       Not tainted 5.3.0-050300rc3-generic #201908042232
> [ 1158.500653] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 1158.504413] bcache_allocato D    0 15861      2 0x80004000
> [ 1158.504419] Call Trace:
> [ 1158.504429]  __schedule+0x2a8/0x670
> [ 1158.504432]  schedule+0x2d/0x90
> [ 1158.504448]  bch_bucket_alloc+0xe5/0x370 [bcache]
> [ 1158.504453]  ? wait_woken+0x80/0x80
> [ 1158.504466]  bch_prio_write+0x1dc/0x390 [bcache]
> [ 1158.504476]  bch_allocator_thread+0x233/0x490 [bcache]
> [ 1158.504491]  kthread+0x121/0x140
> [ 1158.504503]  ? invalidate_buckets+0x890/0x890 [bcache]
> [ 1158.504506]  ? kthread_park+0xb0/0xb0
> [ 1158.504510]  ret_from_fork+0x35/0x40
>
> Fix by making the call to bch_prio_write() non-blocking, so that
> bch_allocator_thread() never waits on itself.
>
> Moreover, make sure to wake up the garbage collector thread when
> bch_prio_write() fails to allocate buckets.
>
> BugLink: https://bugs.launchpad.net/bugs/1784665
> BugLink: https://bugs.launchpad.net/bugs/1796292
> Signed-off-by: Andrea Righi

OK, I have added this version to my for-test directory. Once you post a
new version, I will update it. Thanks.

Coly Li
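For anyone who wants to see the shape of the hang outside the kernel:
the problem is a thread that is both the only waker of a wait queue
and, through a nested call, a waiter on it. Below is a minimal
userspace analogue (an illustrative sketch only -- the helper names are
made up, and the real code lives in drivers/md/bcache/alloc.c and
drivers/md/bcache/super.c):

/*
 * deadlock-demo.c -- userspace analogue of the bcache_allocator hang.
 * Illustrative sketch only: every name below is invented.
 *
 * Build: gcc -pthread deadlock-demo.c -o deadlock-demo
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t bucket_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  bucket_wait = PTHREAD_COND_INITIALIZER;
static int free_buckets;	/* refilled only by allocator_thread() */

/* ~ bch_bucket_alloc(ca, RESERVE_PRIO, true): blocks until a bucket is
 * free, relying on allocator_thread() to signal bucket_wait. */
static void bucket_alloc(void)
{
	pthread_mutex_lock(&bucket_lock);
	while (free_buckets == 0)
		pthread_cond_wait(&bucket_wait, &bucket_lock);
	free_buckets--;
	pthread_mutex_unlock(&bucket_lock);
}

/* ~ bch_prio_write(): needs buckets itself to write out prio data. */
static void prio_write(void)
{
	bucket_alloc();
}

/* ~ bch_allocator_thread(): the only producer of free buckets. */
static void *allocator_thread(void *unused)
{
	(void)unused;
	for (;;) {
		pthread_mutex_lock(&bucket_lock);
		free_buckets = 1;		/* "invalidate" one bucket */
		pthread_cond_broadcast(&bucket_wait);
		pthread_mutex_unlock(&bucket_lock);

		prio_write();	/* consumes the bucket it just produced */
		prio_write();	/* no bucket left: waits on bucket_wait,
				 * but the only waker is this thread --
				 * the same shape as the splat above */
	}
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, allocator_thread, NULL);
	sleep(2);
	puts("allocator thread is now blocked waiting for its own wakeup");
	return 0;
}

Running it, the second prio_write() parks forever; that is the
userspace equivalent of the hung-task report quoted above.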
> ---
> Changes in v3:
>  - prevent buckets leak in bch_prio_write()
>
>  drivers/md/bcache/alloc.c  |  5 ++++-
>  drivers/md/bcache/bcache.h |  2 +-
>  drivers/md/bcache/super.c  | 27 +++++++++++++++++++++------
>  3 files changed, 26 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
> index 6f776823b9ba..a1df0d95151c 100644
> --- a/drivers/md/bcache/alloc.c
> +++ b/drivers/md/bcache/alloc.c
> @@ -377,7 +377,10 @@ static int bch_allocator_thread(void *arg)
>  			if (!fifo_full(&ca->free_inc))
>  				goto retry_invalidate;
>
> -			bch_prio_write(ca);
> +			if (bch_prio_write(ca, false) < 0) {
> +				ca->invalidate_needs_gc = 1;
> +				wake_up_gc(ca->set);
> +			}
>  		}
>  	}
>  out:
> diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
> index 013e35a9e317..deb924e1d790 100644
> --- a/drivers/md/bcache/bcache.h
> +++ b/drivers/md/bcache/bcache.h
> @@ -977,7 +977,7 @@ bool bch_cached_dev_error(struct cached_dev *dc);
>  __printf(2, 3)
>  bool bch_cache_set_error(struct cache_set *c, const char *fmt, ...);
>
> -void bch_prio_write(struct cache *ca);
> +int bch_prio_write(struct cache *ca, bool wait);
>  void bch_write_bdev_super(struct cached_dev *dc, struct closure *parent);
>
>  extern struct workqueue_struct *bcache_wq;
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index 20ed838e9413..bd153234290d 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -529,12 +529,29 @@ static void prio_io(struct cache *ca, uint64_t bucket, int op,
>  	closure_sync(cl);
>  }
>
> -void bch_prio_write(struct cache *ca)
> +int bch_prio_write(struct cache *ca, bool wait)
>  {
>  	int i;
>  	struct bucket *b;
>  	struct closure cl;
>
> +	pr_debug("free_prio=%zu, free_none=%zu, free_inc=%zu",
> +		 fifo_used(&ca->free[RESERVE_PRIO]),
> +		 fifo_used(&ca->free[RESERVE_NONE]),
> +		 fifo_used(&ca->free_inc));
> +
> +	/*
> +	 * Pre-check if there are enough free buckets. In the non-blocking
> +	 * scenario it's better to fail early rather than starting to allocate
> +	 * buckets and do a cleanup later in case of failure.
> +	 */
> +	if (!wait) {
> +		size_t avail = fifo_used(&ca->free[RESERVE_PRIO]) +
> +			       fifo_used(&ca->free[RESERVE_NONE]);
> +		if (prio_buckets(ca) > avail)
> +			return -ENOMEM;
> +	}
> +
>  	closure_init_stack(&cl);
>
>  	lockdep_assert_held(&ca->set->bucket_lock);
> @@ -544,9 +561,6 @@ void bch_prio_write(struct cache *ca)
>  	atomic_long_add(ca->sb.bucket_size * prio_buckets(ca),
>  			&ca->meta_sectors_written);
>
> -	//pr_debug("free %zu, free_inc %zu, unused %zu", fifo_used(&ca->free),
> -	//	 fifo_used(&ca->free_inc), fifo_used(&ca->unused));
> -
>  	for (i = prio_buckets(ca) - 1; i >= 0; --i) {
>  		long bucket;
>  		struct prio_set *p = ca->disk_buckets;
> @@ -564,7 +578,7 @@ void bch_prio_write(struct cache *ca)
>  		p->magic = pset_magic(&ca->sb);
>  		p->csum = bch_crc64(&p->magic, bucket_bytes(ca) - 8);
>
> -		bucket = bch_bucket_alloc(ca, RESERVE_PRIO, true);
> +		bucket = bch_bucket_alloc(ca, RESERVE_PRIO, wait);
>  		BUG_ON(bucket == -1);
>
>  		mutex_unlock(&ca->set->bucket_lock);
> @@ -593,6 +607,7 @@ void bch_prio_write(struct cache *ca)
>
>  		ca->prio_last_buckets[i] = ca->prio_buckets[i];
>  	}
> +	return 0;
>  }
>
>  static void prio_read(struct cache *ca, uint64_t bucket)
> @@ -1954,7 +1969,7 @@ static int run_cache_set(struct cache_set *c)
>
>  	mutex_lock(&c->bucket_lock);
>  	for_each_cache(ca, c, i)
> -		bch_prio_write(ca);
> +		bch_prio_write(ca, true);
>  	mutex_unlock(&c->bucket_lock);
>
>  	err = "cannot allocate new UUID bucket";
> --

Coly Li
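P.S. A note on the new calling convention, for reviewers and
backporters: with the pre-check in place, a non-blocking
bch_prio_write() either finds enough free buckets up front or returns
-ENOMEM without taking any. A toy model of that contract (made-up
names and numbers, not the actual bcache code):

/* contract-demo.c -- toy model of the fixed bch_prio_write() contract.
 * Illustrative sketch only; everything here is invented. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

static int free_buckets   = 1;	/* ~ fifo_used(RESERVE_PRIO) + fifo_used(RESERVE_NONE) */
static int buckets_needed = 2;	/* ~ prio_buckets(ca) */

static void wake_up_gc(void)
{
	puts("kick gc so buckets get reclaimed for a later retry");
}

/* ~ int bch_prio_write(ca, wait): in the !wait case, fail before
 * taking any bucket, so there is nothing to unwind on failure. */
static int prio_write(bool wait)
{
	if (!wait && free_buckets < buckets_needed)
		return -ENOMEM;

	for (int i = 0; i < buckets_needed; i++)
		free_buckets--;	/* ~ bch_bucket_alloc(ca, RESERVE_PRIO, wait) */
	return 0;
}

int main(void)
{
	/* ~ the tail of bch_allocator_thread()'s loop after the patch */
	if (prio_write(false) < 0)
		wake_up_gc();	/* retry on a later loop iteration instead
				 * of sleeping on our own wait queue */
	return 0;
}

The v3 leak fix falls out of this fail-early shape: because nothing is
allocated before the check, the non-blocking error path never holds
partially allocated prio buckets that would need to be given back.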