From: Jiri Slaby
To: stable@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Bart Van Assche, Mike Snitzer, Jiri Slaby
Subject: [PATCH 3.12 15/72] dm: mark request_queue dead before destroying the DM device
Date: Mon, 7 Nov 2016 14:04:22 +0100
Message-Id: <2210e6dc67d9dfc1b9811b8f0a82548a208e96d6.1478523828.git.jslaby@suse.cz>
In-Reply-To: <0f3caac741164dcff670ae0f4d1cfcb0a7026a1c.1478523828.git.jslaby@suse.cz>
References: <0f3caac741164dcff670ae0f4d1cfcb0a7026a1c.1478523828.git.jslaby@suse.cz>

From: Bart Van Assche

3.12-stable review patch.  If anyone has any objections, please let me know.

===============

commit 3b785fbcf81c3533772c52b717f77293099498d3 upstream.

This avoids new requests being queued while __dm_destroy() is in
progress.

[js] use md->queue instead of non-present helper

Signed-off-by: Bart Van Assche
Signed-off-by: Mike Snitzer
Signed-off-by: Jiri Slaby
---
 drivers/md/dm.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 78ab0a131cf1..8c82835a4749 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2428,6 +2428,7 @@ EXPORT_SYMBOL_GPL(dm_device_name);
 
 static void __dm_destroy(struct mapped_device *md, bool wait)
 {
+	struct request_queue *q = md->queue;
 	struct dm_table *map;
 	int srcu_idx;
 
@@ -2438,6 +2439,10 @@ static void __dm_destroy(struct mapped_device *md, bool wait)
 	set_bit(DMF_FREEING, &md->flags);
 	spin_unlock(&_minor_lock);
 
+	spin_lock_irq(q->queue_lock);
+	queue_flag_set(QUEUE_FLAG_DYING, q);
+	spin_unlock_irq(q->queue_lock);
+
	/*
	 * Take suspend_lock so that presuspend and postsuspend methods
	 * do not race with internal suspend.
-- 
2.10.2
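
For readers outside the block layer: the point of setting QUEUE_FLAG_DYING
before teardown is that the request submission paths check that flag and
refuse new work, so nothing new can be queued while __dm_destroy() runs.
Below is a minimal, stand-alone user-space sketch of that pattern. The names
(request_queue, QUEUE_FLAG_DYING, queue_flag_set, blk_queue_dying) mirror the
kernel ones only for readability; the locking, types, and return codes here
are simplified assumptions, not the actual kernel implementation.

/* Simplified model of "mark the queue dying before destroying it". */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define QUEUE_FLAG_DYING 0

struct request_queue {
	pthread_mutex_t queue_lock;	/* stands in for q->queue_lock */
	unsigned long queue_flags;
};

static void queue_flag_set(unsigned int flag, struct request_queue *q)
{
	/* caller holds queue_lock, as with the kernel helper */
	q->queue_flags |= 1UL << flag;
}

static bool blk_queue_dying(struct request_queue *q)
{
	return q->queue_flags & (1UL << QUEUE_FLAG_DYING);
}

/* submission path: refuse new requests once the queue is marked dying */
static int submit_request(struct request_queue *q)
{
	int ret = 0;

	pthread_mutex_lock(&q->queue_lock);
	if (blk_queue_dying(q))
		ret = -1;	/* the kernel would return an error such as -ENODEV */
	pthread_mutex_unlock(&q->queue_lock);
	return ret;
}

/* teardown path: set DYING first, then do the actual destruction */
static void destroy_device(struct request_queue *q)
{
	pthread_mutex_lock(&q->queue_lock);
	queue_flag_set(QUEUE_FLAG_DYING, q);
	pthread_mutex_unlock(&q->queue_lock);
	/* ... real teardown work would follow here ... */
}

int main(void)
{
	struct request_queue q = {
		.queue_lock = PTHREAD_MUTEX_INITIALIZER,
		.queue_flags = 0,
	};

	printf("before destroy: submit -> %d\n", submit_request(&q));
	destroy_device(&q);
	printf("after destroy:  submit -> %d\n", submit_request(&q));
	return 0;
}

In this model, as in the patch, the ordering is what matters: the flag is set
under the queue lock before any destruction work starts, so a submitter that
takes the lock afterwards is guaranteed to see the queue as dying and back off.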