From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Andrea Righi, Coly Li, Jens Axboe, Sasha Levin
Subject: [PATCH 4.19 162/219] bcache: fix deadlock in bcache_allocator
Date: Sun, 29 Dec 2019 18:19:24 +0100
Message-Id: <20191229162533.282910531@linuxfoundation.org>
X-Mailer: git-send-email 2.24.1
In-Reply-To:
<20191229162508.458551679@linuxfoundation.org>
References: <20191229162508.458551679@linuxfoundation.org>
User-Agent: quilt/0.66

From: Andrea Righi

[ Upstream commit 84c529aea182939e68f618ed9813740c9165c7eb ]

bcache_allocator can call the following:

bch_allocator_thread()
 -> bch_prio_write()
    -> bch_bucket_alloc()
       -> wait on &ca->set->bucket_wait

But the wake up event on bucket_wait is supposed to come from
bch_allocator_thread() itself => deadlock:

[ 1158.490744] INFO: task bcache_allocato:15861 blocked for more than 10 seconds.
[ 1158.495929]       Not tainted 5.3.0-050300rc3-generic #201908042232
[ 1158.500653] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1158.504413] bcache_allocato D    0 15861      2 0x80004000
[ 1158.504419] Call Trace:
[ 1158.504429]  __schedule+0x2a8/0x670
[ 1158.504432]  schedule+0x2d/0x90
[ 1158.504448]  bch_bucket_alloc+0xe5/0x370 [bcache]
[ 1158.504453]  ? wait_woken+0x80/0x80
[ 1158.504466]  bch_prio_write+0x1dc/0x390 [bcache]
[ 1158.504476]  bch_allocator_thread+0x233/0x490 [bcache]
[ 1158.504491]  kthread+0x121/0x140
[ 1158.504503]  ? invalidate_buckets+0x890/0x890 [bcache]
[ 1158.504506]  ? kthread_park+0xb0/0xb0
[ 1158.504510]  ret_from_fork+0x35/0x40

Fix by making the call to bch_prio_write() non-blocking, so that
bch_allocator_thread() never waits on itself.

Moreover, make sure to wake up the garbage collector thread when
bch_prio_write() is failing to allocate buckets.
BugLink: https://bugs.launchpad.net/bugs/1784665
BugLink: https://bugs.launchpad.net/bugs/1796292
Signed-off-by: Andrea Righi
Signed-off-by: Coly Li
Signed-off-by: Jens Axboe
Signed-off-by: Sasha Levin
---
 drivers/md/bcache/alloc.c  |  5 ++++-
 drivers/md/bcache/bcache.h |  2 +-
 drivers/md/bcache/super.c  | 27 +++++++++++++++++++++------
 3 files changed, 26 insertions(+), 8 deletions(-)

diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
index 9c3beb1e382b..46794cac167e 100644
--- a/drivers/md/bcache/alloc.c
+++ b/drivers/md/bcache/alloc.c
@@ -377,7 +377,10 @@ retry_invalidate:
 			if (!fifo_full(&ca->free_inc))
 				goto retry_invalidate;

-			bch_prio_write(ca);
+			if (bch_prio_write(ca, false) < 0) {
+				ca->invalidate_needs_gc = 1;
+				wake_up_gc(ca->set);
+			}
 		}
 	}
 out:
diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 83f0b91aeb90..4677b18ac281 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -959,7 +959,7 @@ bool bch_cached_dev_error(struct cached_dev *dc);
 __printf(2, 3)
 bool bch_cache_set_error(struct cache_set *c, const char *fmt, ...);

-void bch_prio_write(struct cache *ca);
+int bch_prio_write(struct cache *ca, bool wait);
 void bch_write_bdev_super(struct cached_dev *dc, struct closure *parent);

 extern struct workqueue_struct *bcache_wq;
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 2d60bcdb5b9c..c45d9ad01077 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -525,12 +525,29 @@ static void prio_io(struct cache *ca, uint64_t bucket, int op,
 	closure_sync(cl);
 }

-void bch_prio_write(struct cache *ca)
+int bch_prio_write(struct cache *ca, bool wait)
 {
 	int i;
 	struct bucket *b;
 	struct closure cl;

+	pr_debug("free_prio=%zu, free_none=%zu, free_inc=%zu",
+		 fifo_used(&ca->free[RESERVE_PRIO]),
+		 fifo_used(&ca->free[RESERVE_NONE]),
+		 fifo_used(&ca->free_inc));
+
+	/*
+	 * Pre-check if there are enough free buckets. In the non-blocking
+	 * scenario it's better to fail early rather than starting to allocate
+	 * buckets and do a cleanup later in case of failure.
+	 */
+	if (!wait) {
+		size_t avail = fifo_used(&ca->free[RESERVE_PRIO]) +
+			       fifo_used(&ca->free[RESERVE_NONE]);
+		if (prio_buckets(ca) > avail)
+			return -ENOMEM;
+	}
+
 	closure_init_stack(&cl);

 	lockdep_assert_held(&ca->set->bucket_lock);
@@ -540,9 +557,6 @@ void bch_prio_write(struct cache *ca)
 	atomic_long_add(ca->sb.bucket_size * prio_buckets(ca),
 			&ca->meta_sectors_written);

-	//pr_debug("free %zu, free_inc %zu, unused %zu", fifo_used(&ca->free),
-	//	 fifo_used(&ca->free_inc), fifo_used(&ca->unused));
-
 	for (i = prio_buckets(ca) - 1; i >= 0; --i) {
 		long bucket;
 		struct prio_set *p = ca->disk_buckets;
@@ -560,7 +574,7 @@ void bch_prio_write(struct cache *ca)
 		p->magic = pset_magic(&ca->sb);
 		p->csum = bch_crc64(&p->magic, bucket_bytes(ca) - 8);

-		bucket = bch_bucket_alloc(ca, RESERVE_PRIO, true);
+		bucket = bch_bucket_alloc(ca, RESERVE_PRIO, wait);
 		BUG_ON(bucket == -1);

 		mutex_unlock(&ca->set->bucket_lock);
@@ -589,6 +603,7 @@ void bch_prio_write(struct cache *ca)
 		ca->prio_last_buckets[i] = ca->prio_buckets[i];
 	}
+	return 0;
 }

 static void prio_read(struct cache *ca, uint64_t bucket)
@@ -1884,7 +1899,7 @@ static int run_cache_set(struct cache_set *c)
 	mutex_lock(&c->bucket_lock);
 	for_each_cache(ca, c, i)
-		bch_prio_write(ca);
+		bch_prio_write(ca, true);
 	mutex_unlock(&c->bucket_lock);

 	err = "cannot allocate new UUID bucket";
-- 
2.20.1