From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Nigel Kirkland, James Smart,
    Sagi Grimberg, Christoph Hellwig, Keith Busch, Jens Axboe, Sasha Levin
Subject: [PATCH 5.5 079/176] nvme: prevent warning triggered by nvme_stop_keep_alive
Date: Tue, 3 Mar 2020 18:42:23 +0100
Message-Id: <20200303174313.772741002@linuxfoundation.org>
In-Reply-To: <20200303174304.593872177@linuxfoundation.org>
References: <20200303174304.593872177@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Nigel Kirkland

[ Upstream commit 97b2512ad000a409b4073dd1a71e4157d76675cb ]

Delayed keep-alive work is queued on the system workqueue and may be
cancelled via nvme_stop_keep_alive from nvme_reset_wq, nvme_fc_wq or
nvme_wq.

check_flush_dependency() detects mismatched attributes between the
workqueue context used to cancel the keep-alive work and the system
workqueue. Specifically, the system workqueue does not have the
WQ_MEM_RECLAIM flag, whereas the contexts used to cancel the keep-alive
work do have the WQ_MEM_RECLAIM flag.

Example warning:

  workqueue: WQ_MEM_RECLAIM nvme-reset-wq:nvme_fc_reset_ctrl_work [nvme_fc]
        is flushing !WQ_MEM_RECLAIM events:nvme_keep_alive_work [nvme_core]

To avoid the flags mismatch, delayed keep-alive work is queued on
nvme_wq. However, this creates a secondary concern where work and a
request to cancel that work may be in the same workqueue - namely,
err_work in the rdma and tcp transports, which will want to flush/cancel
the keep-alive work that will now be on nvme_wq.

After reviewing the transports, it looks like err_work can be moved to
nvme_reset_wq. In fact, that aligns them better with the transition into
RESETTING and performing related reset work in nvme_reset_wq.

Change nvme-rdma and nvme-tcp to perform err_work in nvme_reset_wq.
Signed-off-by: Nigel Kirkland
Signed-off-by: James Smart
Reviewed-by: Sagi Grimberg
Reviewed-by: Christoph Hellwig
Signed-off-by: Keith Busch
Signed-off-by: Jens Axboe
Signed-off-by: Sasha Levin
---
 drivers/nvme/host/core.c | 10 +++++-----
 drivers/nvme/host/rdma.c |  2 +-
 drivers/nvme/host/tcp.c  |  2 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 641c07347e8d8..ada59df642d29 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -66,8 +66,8 @@ MODULE_PARM_DESC(streams, "turn on support for Streams write directives");
  * nvme_reset_wq - hosts nvme reset works
  * nvme_delete_wq - hosts nvme delete works
  *
- * nvme_wq will host works such are scan, aen handling, fw activation,
- * keep-alive error recovery, periodic reconnects etc. nvme_reset_wq
+ * nvme_wq will host works such as scan, aen handling, fw activation,
+ * keep-alive, periodic reconnects etc. nvme_reset_wq
  * runs reset works which also flush works hosted on nvme_wq for
  * serialization purposes. nvme_delete_wq host controller deletion
  * works which flush reset works for serialization.
@@ -976,7 +976,7 @@ static void nvme_keep_alive_end_io(struct request *rq, blk_status_t status)
 		startka = true;
 	spin_unlock_irqrestore(&ctrl->lock, flags);
 	if (startka)
-		schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ);
+		queue_delayed_work(nvme_wq, &ctrl->ka_work, ctrl->kato * HZ);
 }
 
 static int nvme_keep_alive(struct nvme_ctrl *ctrl)
@@ -1006,7 +1006,7 @@ static void nvme_keep_alive_work(struct work_struct *work)
 		dev_dbg(ctrl->device,
 			"reschedule traffic based keep-alive timer\n");
 		ctrl->comp_seen = false;
-		schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ);
+		queue_delayed_work(nvme_wq, &ctrl->ka_work, ctrl->kato * HZ);
 		return;
 	}
 
@@ -1023,7 +1023,7 @@ static void nvme_start_keep_alive(struct nvme_ctrl *ctrl)
 	if (unlikely(ctrl->kato == 0))
 		return;
 
-	schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ);
+	queue_delayed_work(nvme_wq, &ctrl->ka_work, ctrl->kato * HZ);
 }
 
 void nvme_stop_keep_alive(struct nvme_ctrl *ctrl)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 2a47c6c5007e1..3e85c5cacefd2 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1088,7 +1088,7 @@ static void nvme_rdma_error_recovery(struct nvme_rdma_ctrl *ctrl)
 	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING))
 		return;
 
-	queue_work(nvme_wq, &ctrl->err_work);
+	queue_work(nvme_reset_wq, &ctrl->err_work);
 }
 
 static void nvme_rdma_wr_error(struct ib_cq *cq, struct ib_wc *wc,
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index f8fa5c5b79f17..49d4373b84eb3 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -422,7 +422,7 @@ static void nvme_tcp_error_recovery(struct nvme_ctrl *ctrl)
 	if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
 		return;
 
-	queue_work(nvme_wq, &to_tcp_ctrl(ctrl)->err_work);
+	queue_work(nvme_reset_wq, &to_tcp_ctrl(ctrl)->err_work);
 }
 
 static int nvme_tcp_process_nvme_cqe(struct nvme_tcp_queue *queue,
-- 
2.20.1