From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Bart Van Assche, Mike Snitzer
Subject: [PATCH 4.8 034/140] dm rq: take request_queue lock while clearing QUEUE_FLAG_STOPPED
Date: Wed, 26 Oct 2016 14:21:34 +0200
Message-Id: <20161026122221.854849518@linuxfoundation.org>
In-Reply-To: <20161026122220.384323763@linuxfoundation.org>
References: <20161026122220.384323763@linuxfoundation.org>

4.8-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Mike Snitzer

commit 9dbeaeabacb26260d1621fe58f0f6fdedc8860d4 upstream.

Every call of queue_flag_clear_unlocked() after block device
initialization has finished is wrong if blk_cleanup_queue() can be
called concurrently. Convert queue_flag_clear_unlocked() into
queue_flag_clear() and protect it by the block layer queue lock.
Also, factor out dm_mq_start_queue().

Reported-by: Bart Van Assche
Signed-off-by: Mike Snitzer
Signed-off-by: Greg Kroah-Hartman

---
 drivers/md/dm-rq.c |   19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -73,15 +73,24 @@ static void dm_old_start_queue(struct re
 	spin_unlock_irqrestore(q->queue_lock, flags);
 }
 
+static void dm_mq_start_queue(struct request_queue *q)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(q->queue_lock, flags);
+	queue_flag_clear(QUEUE_FLAG_STOPPED, q);
+	spin_unlock_irqrestore(q->queue_lock, flags);
+
+	blk_mq_start_stopped_hw_queues(q, true);
+	blk_mq_kick_requeue_list(q);
+}
+
 void dm_start_queue(struct request_queue *q)
 {
 	if (!q->mq_ops)
 		dm_old_start_queue(q);
-	else {
-		queue_flag_clear_unlocked(QUEUE_FLAG_STOPPED, q);
-		blk_mq_start_stopped_hw_queues(q, true);
-		blk_mq_kick_requeue_list(q);
-	}
+	else
+		dm_mq_start_queue(q);
 }
 
 static void dm_old_stop_queue(struct request_queue *q)
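
[ Context on why the conversion matters: both helpers clear the bit
  with a non-atomic __clear_bit() on q->queue_flags, so once the queue
  is live every writer of that word must be serialized by
  q->queue_lock; otherwise an unlocked clear racing with a locked
  queue_flag_set() in blk_cleanup_queue() can lose a concurrent update
  such as QUEUE_FLAG_DYING. A minimal sketch of the 4.8-era helpers,
  paraphrased from memory of include/linux/blkdev.h (verify against
  the tree):

	static inline void queue_flag_clear_unlocked(unsigned int flag,
						     struct request_queue *q)
	{
		/* Non-atomic read-modify-write of q->queue_flags; safe
		 * only while the queue is not visible to other users. */
		__clear_bit(flag, &q->queue_flags);
	}

	static inline void queue_flag_clear(unsigned int flag,
					    struct request_queue *q)
	{
		/* With lockdep enabled, asserts that q->queue_lock is
		 * held -- which dm_mq_start_queue() now guarantees by
		 * taking the lock around the clear. */
		queue_lockdep_assert_held(q);
		__clear_bit(flag, &q->queue_flags);
	}
]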