Subject: Re: [PATCH] bcache: fix deadlock in bcache_allocator()
From: Coly Li
To: Andrea Righi
Cc: Kent Overstreet, linux-bcache@vger.kernel.org,
 linux-kernel@vger.kernel.org
Date: Wed, 10 Jul 2019 23:11:37 +0800
Organization: SUSE Labs
Message-ID: <82f1c5a9-9da4-3529-1ca5-af724d280580@suse.de>
In-Reply-To: <20190710093117.GA2792@xps-13>
References: <20190710093117.GA2792@xps-13>
On 2019/7/10 5:31 PM, Andrea Righi wrote:
> bcache_allocator() can call the following:
>
> bch_allocator_thread()
>   -> bch_prio_write()
>      -> bch_bucket_alloc()
>         -> wait on &ca->set->bucket_wait
>
> But the wake-up event on bucket_wait is supposed to come from
> bch_allocator_thread() itself => deadlock:
>
> [  242.888435] INFO: task bcache_allocato:9015 blocked for more than 120 seconds.
> [  242.893786]       Not tainted 4.20.0-042000rc3-generic #201811182231
> [  242.896669] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  242.900428] bcache_allocato D    0  9015      2 0x80000000
> [  242.900434] Call Trace:
> [  242.900448]  __schedule+0x2a2/0x880
> [  242.900455]  ? __schedule+0x2aa/0x880
> [  242.900462]  schedule+0x2c/0x80
> [  242.900480]  bch_bucket_alloc+0x19d/0x380 [bcache]
> [  242.900503]  ? wait_woken+0x80/0x80
> [  242.900519]  bch_prio_write+0x190/0x340 [bcache]
> [  242.900530]  bch_allocator_thread+0x482/0xd10 [bcache]
> [  242.900535]  kthread+0x120/0x140
> [  242.900546]  ? bch_invalidate_one_bucket+0x80/0x80 [bcache]
> [  242.900549]  ? kthread_park+0x90/0x90
> [  242.900554]  ret_from_fork+0x35/0x40
>
> Fix by making the call to bch_prio_write() non-blocking, so that
> bch_allocator_thread() never waits on itself.
>
> Moreover, make sure to wake up the garbage collector thread when
> bch_prio_write() fails to allocate buckets.
>
> BugLink: https://bugs.launchpad.net/bugs/1784665
> Signed-off-by: Andrea Righi

Hi Andrea,

From the BugLink, it seems several critical bcache fixes are missing
there. Could you please try the current 5.3-rc kernel and check
whether the problem still exists?

The patch itself looks good, except that I am not sure whether
invoking garbage collection is the proper method. Because
bch_prio_write() is called right after garbage collection finishes,
jumping back to retry_invalidate: again may just hide a long wait for
free space.

Could you please give me some hint on how to reproduce such a
hang-timeout situation? If I am lucky enough to reproduce the problem
on a 5.3-rc kernel, it will be very helpful for understanding exactly
what problem your patch fixes.

Thanks in advance.

Coly Li
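
The deadlock pattern above can be sketched outside the kernel. The
following is a minimal user-space analogue using plain pthreads (not
bcache code; every name in it is illustrative): a worker thread blocks
on a condition variable that only the worker itself would ever signal,
which is the shape of bch_allocator_thread() waiting on
&ca->set->bucket_wait from inside bch_prio_write().

/*
 * Minimal user-space sketch of the self-wakeup deadlock (NOT bcache
 * code; bucket_alloc() and allocator_thread() only mimic the shape
 * of bch_bucket_alloc() and bch_allocator_thread()).
 *
 * Build: cc -pthread sketch.c
 * Run:   the program hangs forever, like the hung task in the trace.
 */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t bucket_wait = PTHREAD_COND_INITIALIZER;
static bool bucket_free;        /* no free bucket at start */

/* Mimics bch_bucket_alloc(ca, RESERVE_PRIO, wait). */
static int bucket_alloc(bool wait)
{
        int ret = 0;

        pthread_mutex_lock(&lock);
        while (!bucket_free) {
                if (!wait) {
                        /* Non-blocking variant: report failure and
                         * let the caller retry after freeing some
                         * buckets itself. */
                        ret = -1;
                        break;
                }
                /*
                 * Blocking variant: sleep until someone signals
                 * bucket_wait. But the only thread that ever frees a
                 * bucket and signals is the allocator thread itself,
                 * so this sleeps forever: the deadlock in the trace.
                 */
                pthread_cond_wait(&bucket_wait, &lock);
        }
        if (!ret)
                bucket_free = false;
        pthread_mutex_unlock(&lock);
        return ret;
}

/* Mimics bch_allocator_thread(): the only producer of free buckets. */
static void *allocator_thread(void *arg)
{
        (void)arg;
        /* wait=true reproduces the bug; with wait=false the thread
         * could go back, invalidate more buckets, make progress. */
        bucket_alloc(true);
        return NULL;
}

int main(void)
{
        pthread_t t;

        pthread_create(&t, NULL, allocator_thread, NULL);
        pthread_join(&t, NULL); /* never returns */
        return 0;
}

The fix below corresponds to the wait=false path: fail the
allocation, kick the garbage collector, and retry, instead of
sleeping on a wake-up that can never arrive.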
> ---
>  drivers/md/bcache/alloc.c  |  6 +++++-
>  drivers/md/bcache/bcache.h |  2 +-
>  drivers/md/bcache/super.c  | 13 +++++++++----
>  3 files changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
> index f8986effcb50..0797587600c7 100644
> --- a/drivers/md/bcache/alloc.c
> +++ b/drivers/md/bcache/alloc.c
> @@ -377,7 +377,11 @@ static int bch_allocator_thread(void *arg)
>  			if (!fifo_full(&ca->free_inc))
>  				goto retry_invalidate;
>
> -			bch_prio_write(ca);
> +			if (bch_prio_write(ca, false) < 0) {
> +				ca->invalidate_needs_gc = 1;
> +				wake_up_gc(ca->set);
> +				goto retry_invalidate;
> +			}
>  		}
>  	}
>  out:
> diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
> index fdf75352e16a..dc5106b21260 100644
> --- a/drivers/md/bcache/bcache.h
> +++ b/drivers/md/bcache/bcache.h
> @@ -979,7 +979,7 @@ bool bch_cached_dev_error(struct cached_dev *dc);
>  __printf(2, 3)
>  bool bch_cache_set_error(struct cache_set *c, const char *fmt, ...);
>
> -void bch_prio_write(struct cache *ca);
> +int bch_prio_write(struct cache *ca, bool wait);
>  void bch_write_bdev_super(struct cached_dev *dc, struct closure *parent);
>
>  extern struct workqueue_struct *bcache_wq;
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index 1b63ac876169..6598b457df1a 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -525,7 +525,7 @@ static void prio_io(struct cache *ca, uint64_t bucket, int op,
>  	closure_sync(cl);
>  }
>
> -void bch_prio_write(struct cache *ca)
> +int bch_prio_write(struct cache *ca, bool wait)
>  {
>  	int i;
>  	struct bucket *b;
> @@ -560,8 +560,12 @@ void bch_prio_write(struct cache *ca)
>  		p->magic = pset_magic(&ca->sb);
>  		p->csum = bch_crc64(&p->magic, bucket_bytes(ca) - 8);
>
> -		bucket = bch_bucket_alloc(ca, RESERVE_PRIO, true);
> -		BUG_ON(bucket == -1);
> +		bucket = bch_bucket_alloc(ca, RESERVE_PRIO, wait);
> +		if (bucket == -1) {
> +			if (!wait)
> +				return -ENOMEM;
> +			BUG_ON(1);
> +		}
>
>  		mutex_unlock(&ca->set->bucket_lock);
>  		prio_io(ca, bucket, REQ_OP_WRITE, 0);
> @@ -589,6 +593,7 @@ void bch_prio_write(struct cache *ca)
>
>  		ca->prio_last_buckets[i] = ca->prio_buckets[i];
>  	}
> +	return 0;
>  }
>
>  static void prio_read(struct cache *ca, uint64_t bucket)
> @@ -1903,7 +1908,7 @@ static int run_cache_set(struct cache_set *c)
>
>  	mutex_lock(&c->bucket_lock);
>  	for_each_cache(ca, c, i)
> -		bch_prio_write(ca);
> +		bch_prio_write(ca, true);
>  	mutex_unlock(&c->bucket_lock);
>
>  	err = "cannot allocate new UUID bucket";
> --

-- 

Coly Li