From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Ilya Dryomov, Robin Geuze
Subject: [PATCH 5.13 203/223] rbd: don't hold lock_rwsem while running_list is being drained
Date: Mon, 26 Jul 2021 17:39:55 +0200
Message-Id: <20210726153852.833420149@linuxfoundation.org>
In-Reply-To: <20210726153846.245305071@linuxfoundation.org>
References: <20210726153846.245305071@linuxfoundation.org>

From: Ilya Dryomov

commit ed9eb71085ecb7ded9a5118cec2ab70667cc7350 upstream.

Currently rbd_quiesce_lock() holds lock_rwsem for read while blocking
on the releasing_wait completion. On the I/O completion side, each
image request also needs to take lock_rwsem for read. Because the
rw_semaphore implementation doesn't allow new readers after a writer
has indicated interest in the lock, this can result in a deadlock if
something that needs to take lock_rwsem for write gets involved. For
example:

1. watch error occurs
2. rbd_watch_errcb() takes lock_rwsem for write, clears owner_cid and
   releases lock_rwsem
3. after reestablishing the watch, rbd_reregister_watch() takes
   lock_rwsem for write and calls rbd_reacquire_lock()
4. rbd_quiesce_lock() downgrades lock_rwsem to read and blocks on
   releasing_wait until running_list becomes empty
5. another watch error occurs
6. rbd_watch_errcb() blocks trying to take lock_rwsem for write
7. no in-flight image request can complete and delete itself from
   running_list because lock_rwsem won't be granted anymore

A similar scenario can occur with "lock has been acquired" and "lock
has been released" notification handlers, which also take lock_rwsem
for write to update owner_cid.

We don't actually get anything useful from sitting on lock_rwsem in
rbd_quiesce_lock() -- owner_cid updates certainly don't need to be
synchronized with.
In fact, the whole owner_cid tracking logic could probably be removed
from the kernel client because we don't support proxied maintenance
operations.

Cc: stable@vger.kernel.org # 5.3+
URL: https://tracker.ceph.com/issues/42757
Signed-off-by: Ilya Dryomov
Tested-by: Robin Geuze
Signed-off-by: Greg Kroah-Hartman
---
 drivers/block/rbd.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -4100,8 +4100,6 @@ again:
 
 static bool rbd_quiesce_lock(struct rbd_device *rbd_dev)
 {
-	bool need_wait;
-
 	dout("%s rbd_dev %p\n", __func__, rbd_dev);
 	lockdep_assert_held_write(&rbd_dev->lock_rwsem);
 
@@ -4113,11 +4111,11 @@ static bool rbd_quiesce_lock(struct rbd_
 	 */
 	rbd_dev->lock_state = RBD_LOCK_STATE_RELEASING;
 	rbd_assert(!completion_done(&rbd_dev->releasing_wait));
-	need_wait = !list_empty(&rbd_dev->running_list);
-	downgrade_write(&rbd_dev->lock_rwsem);
-	if (need_wait)
-		wait_for_completion(&rbd_dev->releasing_wait);
-	up_read(&rbd_dev->lock_rwsem);
+	if (list_empty(&rbd_dev->running_list))
+		return true;
+
+	up_write(&rbd_dev->lock_rwsem);
+	wait_for_completion(&rbd_dev->releasing_wait);
 
 	down_write(&rbd_dev->lock_rwsem);
 	if (rbd_dev->lock_state != RBD_LOCK_STATE_RELEASING)
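
For readers who want to try the locking pattern outside the kernel, below is a
minimal userspace sketch of what the fix does: drop the lock entirely before
blocking on the completion, then retake it and re-check state. It is not the
rbd code; it substitutes a POSIX rwlock for lock_rwsem and a mutex/condvar
pair for the completion, and every name in it (quiesce, io_complete,
running_count, releasing, done) is invented for the demo.

/*
 * Illustrative userspace sketch only -- NOT the kernel code.  It mimics
 * the pattern the fix adopts in rbd_quiesce_lock(): never block on a
 * completion while holding the rwsem; drop it, wait, then retake it and
 * re-check state.  All names here are made up for the demo.
 *
 * Build: cc -std=c11 -pthread quiesce_demo.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t lock_rwsem = PTHREAD_RWLOCK_INITIALIZER;

/* A minimal "completion": condvar + flag, so wakeups are never lost. */
static pthread_mutex_t done_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done_cond  = PTHREAD_COND_INITIALIZER;
static bool            done;

static atomic_int running_count = 2;  /* stand-in for running_list */
static bool releasing;                /* stand-in for RBD_LOCK_STATE_RELEASING */

static void complete(void)
{
	pthread_mutex_lock(&done_mutex);
	done = true;
	pthread_cond_signal(&done_cond);
	pthread_mutex_unlock(&done_mutex);
}

static void wait_for_completion(void)
{
	pthread_mutex_lock(&done_mutex);
	while (!done)
		pthread_cond_wait(&done_cond, &done_mutex);
	pthread_mutex_unlock(&done_mutex);
}

/* I/O completion path: takes the rwlock for read, like an image request. */
static void *io_complete(void *arg)
{
	(void)arg;
	sleep(1);                            /* pretend I/O was in flight */
	pthread_rwlock_rdlock(&lock_rwsem);
	if (atomic_fetch_sub(&running_count, 1) == 1 && releasing)
		complete();                  /* last request wakes the quiescer */
	pthread_rwlock_unlock(&lock_rwsem);
	return NULL;
}

/* Quiesce path, loosely modelled on the patched rbd_quiesce_lock(). */
static bool quiesce(void)
{
	pthread_rwlock_wrlock(&lock_rwsem);
	releasing = true;
	if (atomic_load(&running_count) == 0) {
		pthread_rwlock_unlock(&lock_rwsem);
		return true;
	}

	/* The point of the fix: drop the lock entirely before blocking... */
	pthread_rwlock_unlock(&lock_rwsem);
	wait_for_completion();

	/* ...then retake it and re-check state, as the real code does. */
	pthread_rwlock_wrlock(&lock_rwsem);
	bool still_releasing = releasing;
	pthread_rwlock_unlock(&lock_rwsem);
	return still_releasing;
}

int main(void)
{
	pthread_t t[2];

	for (int i = 0; i < 2; i++)
		pthread_create(&t[i], NULL, io_complete, NULL);
	printf("quiesced: %s\n", quiesce() ? "yes" : "no");
	for (int i = 0; i < 2; i++)
		pthread_join(t[i], NULL);
	return 0;
}

With the old ordering (downgrade to read and wait while still holding the
lock), a queued writer like the one in step 6 of the scenario above would
block the readers that io_complete() stands in for; dropping the lock before
wait_for_completion() is what removes that dependency.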