From: Coly Li
Subject: Re: [PATCH v2] bcache: fix deadlock in bcache_allocator
To: Andrea Righi
Cc: Kent Overstreet, linux-bcache@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Wed, 7 Aug 2019 18:18:40 +0800
Organization: SUSE Labs
References: <20190806091801.GC11184@xps-13> <20190806173648.GB27866@xps-13> <20190807092554.GB23070@xps-13>
In-Reply-To: <20190807092554.GB23070@xps-13>

On 2019/8/7 5:25 PM, Andrea Righi wrote:
> On Tue, Aug 06, 2019 at 07:36:48PM +0200, Andrea Righi wrote:
>> On Tue, Aug 06, 2019 at 11:18:01AM +0200, Andrea Righi wrote:
>>> bcache_allocator() can call the following:
>>>
>>>  bch_allocator_thread()
>>>   -> bch_prio_write()
>>>      -> bch_bucket_alloc()
>>>         -> wait on &ca->set->bucket_wait
>>>
>>> But the wake up event on bucket_wait is supposed to come from
>>> bch_allocator_thread() itself => deadlock:
>>>
>>> [ 1158.490744] INFO: task bcache_allocato:15861 blocked for more than 10 seconds.
>>> [ 1158.495929]       Not tainted 5.3.0-050300rc3-generic #201908042232
>>> [ 1158.500653] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 1158.504413] bcache_allocato D    0 15861      2 0x80004000
>>> [ 1158.504419] Call Trace:
>>> [ 1158.504429]  __schedule+0x2a8/0x670
>>> [ 1158.504432]  schedule+0x2d/0x90
>>> [ 1158.504448]  bch_bucket_alloc+0xe5/0x370 [bcache]
>>> [ 1158.504453]  ? wait_woken+0x80/0x80
>>> [ 1158.504466]  bch_prio_write+0x1dc/0x390 [bcache]
>>> [ 1158.504476]  bch_allocator_thread+0x233/0x490 [bcache]
>>> [ 1158.504491]  kthread+0x121/0x140
>>> [ 1158.504503]  ? invalidate_buckets+0x890/0x890 [bcache]
>>> [ 1158.504506]  ? kthread_park+0xb0/0xb0
>>> [ 1158.504510]  ret_from_fork+0x35/0x40
>>>
>>> Fix by making the call to bch_prio_write() non-blocking, so that
>>> bch_allocator_thread() never waits on itself.
>>>
>>> Moreover, make sure to wake up the garbage collector thread when
>>> bch_prio_write() is failing to allocate buckets.
>>>
>>> BugLink: https://bugs.launchpad.net/bugs/1784665
>>> BugLink: https://bugs.launchpad.net/bugs/1796292
>>> Signed-off-by: Andrea Righi
>>> ---
>>> Changes in v2:
>>>  - prevent retry_invalidate busy loop in bch_allocator_thread()
>>>
>>>  drivers/md/bcache/alloc.c  |  5 ++++-
>>>  drivers/md/bcache/bcache.h |  2 +-
>>>  drivers/md/bcache/super.c  | 13 +++++++++----
>>>  3 files changed, 14 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
>>> index 6f776823b9ba..a1df0d95151c 100644
>>> --- a/drivers/md/bcache/alloc.c
>>> +++ b/drivers/md/bcache/alloc.c
>>> @@ -377,7 +377,10 @@ static int bch_allocator_thread(void *arg)
>>>  			if (!fifo_full(&ca->free_inc))
>>>  				goto retry_invalidate;
>>>
>>> -			bch_prio_write(ca);
>>> +			if (bch_prio_write(ca, false) < 0) {
>>> +				ca->invalidate_needs_gc = 1;
>>> +				wake_up_gc(ca->set);
>>> +			}
>>>  		}
>>>  	}
>>> out:
>>> diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
>>> index 013e35a9e317..deb924e1d790 100644
>>> --- a/drivers/md/bcache/bcache.h
>>> +++ b/drivers/md/bcache/bcache.h
>>> @@ -977,7 +977,7 @@ bool bch_cached_dev_error(struct cached_dev *dc);
>>>  __printf(2, 3)
>>>  bool bch_cache_set_error(struct cache_set *c, const char *fmt, ...);
>>>
>>> -void bch_prio_write(struct cache *ca);
>>> +int bch_prio_write(struct cache *ca, bool wait);
>>>  void bch_write_bdev_super(struct cached_dev *dc, struct closure *parent);
>>>
>>>  extern struct workqueue_struct *bcache_wq;
>>> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
>>> index 20ed838e9413..716ea272fb55 100644
>>> --- a/drivers/md/bcache/super.c
>>> +++ b/drivers/md/bcache/super.c
>>> @@ -529,7 +529,7 @@ static void prio_io(struct cache *ca, uint64_t bucket, int op,
>>>  	closure_sync(cl);
>>>  }
>>>
>>> -void bch_prio_write(struct cache *ca)
>>> +int bch_prio_write(struct cache *ca, bool wait)
>>>  {
>>>  	int i;
>>>  	struct bucket *b;
>>> @@ -564,8 +564,12 @@ void bch_prio_write(struct cache *ca)
>>>  		p->magic = pset_magic(&ca->sb);
>>>  		p->csum = bch_crc64(&p->magic, bucket_bytes(ca) - 8);
>>>
>>> -		bucket = bch_bucket_alloc(ca, RESERVE_PRIO, true);
>>> -		BUG_ON(bucket == -1);
>>> +		bucket = bch_bucket_alloc(ca, RESERVE_PRIO, wait);
>>> +		if (bucket == -1) {
>>> +			if (!wait)
>>> +				return -ENOMEM;
>>> +			BUG_ON(1);
>>> +		}
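
To make the failure mode concrete outside the kernel tree, here is a minimal, self-contained userspace C sketch of the pattern the patch addresses: the allocator thread is the only producer of free buckets, so any path on which it blocks waiting for a bucket can never be woken. Every name below (bucket_alloc, prio_write, free_buckets, gc_requested) is a hypothetical stand-in for its bcache counterpart, not the actual kernel code:

/* Sketch of the self-deadlock avoided by passing wait=false on the
 * allocator's own path, and of the gc fallback on failure. */
#include <stdbool.h>
#include <stdio.h>

static int free_buckets;      /* stand-in for ca->free_inc */
static bool gc_requested;     /* stand-in for ca->invalidate_needs_gc */

/* Stand-in for bch_bucket_alloc(): returns a bucket index or -1.
 * In the kernel, wait=true sleeps on ca->set->bucket_wait until the
 * allocator thread refills the free buckets -- which is exactly why
 * the allocator thread itself must never pass wait=true. */
static long bucket_alloc(bool wait)
{
	if (free_buckets > 0)
		return (long)--free_buckets;
	if (!wait)
		return -1;    /* non-blocking: report failure instead */
	/* a blocking caller would sleep here; never reached in this sketch */
	return -1;
}

/* Stand-in for bch_prio_write(ca, wait): needs one bucket to proceed. */
static int prio_write(bool wait)
{
	long bucket = bucket_alloc(wait);

	if (bucket == -1)
		return -1;    /* let the caller recover instead of BUG_ON() */
	printf("wrote priorities to bucket %ld\n", bucket);
	return 0;
}

int main(void)
{
	/* The allocator thread's path after the fix: never wait on itself;
	 * on failure, ask the garbage collector to free buckets instead. */
	if (prio_write(false) < 0) {
		gc_requested = true;    /* i.e. wake_up_gc(ca->set) */
		printf("no buckets free, gc_requested=%d\n", gc_requested);
	}
	return 0;
}

The design choice mirrored here is that the allocator's own call sites treat allocation failure as a signal to kick garbage collection rather than as a reason to sleep.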
>>
>> Coly,
>>
>> looking more at this change, I think we should handle the failure path
>> properly or we may leak buckets, am I right? (sorry for not realizing
>> this before). Maybe we need something like the following on top of my
>> previous patch.
>>
>> I'm going to run more stress tests with this patch applied and will try
>> to figure out if we're actually leaking buckets without it.
>>
>> ---
>> Subject: bcache: prevent leaking buckets in bch_prio_write()
>>
>> Handle the allocation failure path properly in bch_prio_write() to avoid
>> leaking buckets from the previous successful iterations.
>>
>> Signed-off-by: Andrea Righi
>
> Coly, ignore this one please. A v3 of the previous patch with a better
> fix for this potential buckets leak is on the way.

Sure, waiting for next version :-)

--
Coly Li
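
The leak Andrea describes arises because bch_prio_write() allocates one bucket per iteration of its write loop, so a failure partway through must release the buckets taken by the earlier, successful iterations. Below is a minimal userspace C sketch of that goto-based unwind pattern; the names (prio_write_all, alloc_bucket, release_bucket, N_PRIO_BUCKETS) are illustrative only, and this is not Andrea's actual v3 patch:

/* Sketch of the failure-path cleanup: a failure at iteration i must
 * release the buckets taken at iterations 0..i-1 or they are leaked. */
#include <stdio.h>

#define N_PRIO_BUCKETS 4    /* stand-in for the per-cache prio bucket count */

static long alloc_bucket(void)
{
	static long next;
	return next < 2 ? next++ : -1;    /* succeed twice, then fail */
}

static void release_bucket(long b)
{
	printf("released bucket %ld\n", b);
}

static int prio_write_all(void)
{
	long got[N_PRIO_BUCKETS];
	int i;

	for (i = 0; i < N_PRIO_BUCKETS; i++) {
		got[i] = alloc_bucket();
		if (got[i] == -1)
			goto err;    /* failed partway through */
	}
	return 0;

err:
	while (--i >= 0)    /* unwind the earlier, successful allocations */
		release_bucket(got[i]);
	return -1;
}

int main(void)
{
	if (prio_write_all() < 0)
		printf("prio_write_all failed cleanly, nothing leaked\n");
	return 0;
}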