From: Akinobu Mita
To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: Akinobu Mita, Johannes Berg, Keith Busch, Jens Axboe, Christoph Hellwig,
    Sagi Grimberg, Minwoo Im, Kenneth Heitke, Chaitanya Kulkarni
Subject: [PATCH v4 6/7] nvme-pci: trigger device coredump on command timeout
Date: Mon, 20 May 2019 00:06:57 +0900
Message-Id: <1558278418-5702-7-git-send-email-akinobu.mita@gmail.com>
In-Reply-To: <1558278418-5702-1-git-send-email-akinobu.mita@gmail.com>
References: <1558278418-5702-1-git-send-email-akinobu.mita@gmail.com>
X-Mailer: git-send-email 2.7.4
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

This enables the nvme driver to trigger a device coredump when a command
timeout occurs, which helps diagnose and debug such issues.

This can be tested with fail_io_timeout fault injection.
 # echo 1 > /sys/kernel/debug/fail_io_timeout/probability
 # echo 1 > /sys/kernel/debug/fail_io_timeout/times
 # echo 1 > /sys/block/nvme0n1/io-timeout-fail
 # dd if=/dev/nvme0n1 of=/dev/null

Cc: Johannes Berg
Cc: Keith Busch
Cc: Jens Axboe
Cc: Christoph Hellwig
Cc: Sagi Grimberg
Cc: Minwoo Im
Cc: Kenneth Heitke
Cc: Chaitanya Kulkarni
Signed-off-by: Akinobu Mita
---
* v4
- Abandon the reset if nvme_coredump_logs() returns error code

 drivers/nvme/host/pci.c | 41 +++++++++++++++++++++++++----------------
 1 file changed, 25 insertions(+), 16 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 8a29c52..6436e72 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -87,12 +87,12 @@ MODULE_PARM_DESC(poll_queues, "Number of queues to use for polled IO.");
 
 struct nvme_dev;
 struct nvme_queue;
 
-static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown);
+static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown, bool dump);
 static bool __nvme_disable_io_queues(struct nvme_dev *dev, u8 opcode);
-static void __maybe_unused nvme_coredump_init(struct nvme_dev *dev);
-static int __maybe_unused nvme_coredump_logs(struct nvme_dev *dev);
-static void __maybe_unused nvme_coredump_complete(struct nvme_dev *dev);
+static void nvme_coredump_init(struct nvme_dev *dev);
+static int nvme_coredump_logs(struct nvme_dev *dev);
+static void nvme_coredump_complete(struct nvme_dev *dev);
 
 /*
  * Represents an NVM Express device.  Each nvme_dev is a PCI function.
@@ -1280,7 +1280,7 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
 	 */
 	if (nvme_should_reset(dev, csts)) {
 		nvme_warn_reset(dev, csts);
-		nvme_dev_disable(dev, false);
+		nvme_dev_disable(dev, false, true);
 		nvme_reset_ctrl(&dev->ctrl);
 		return BLK_EH_DONE;
 	}
@@ -1310,7 +1310,7 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
 		dev_warn_ratelimited(dev->ctrl.device,
			 "I/O %d QID %d timeout, disable controller\n",
			 req->tag, nvmeq->qid);
-		nvme_dev_disable(dev, shutdown);
+		nvme_dev_disable(dev, shutdown, true);
 		nvme_req(req)->flags |= NVME_REQ_CANCELLED;
 		return BLK_EH_DONE;
 	default:
@@ -1326,7 +1326,7 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
 		dev_warn(dev->ctrl.device,
			 "I/O %d QID %d timeout, reset controller\n",
			 req->tag, nvmeq->qid);
-		nvme_dev_disable(dev, false);
+		nvme_dev_disable(dev, false, true);
 		nvme_reset_ctrl(&dev->ctrl);
 
 		nvme_req(req)->flags |= NVME_REQ_CANCELLED;
@@ -2382,7 +2382,7 @@ static void nvme_pci_disable(struct nvme_dev *dev)
 	}
 }
 
-static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
+static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown, bool dump)
 {
 	bool dead = true;
 	struct pci_dev *pdev = to_pci_dev(dev->dev);
@@ -2407,6 +2407,9 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
 			nvme_wait_freeze_timeout(&dev->ctrl, NVME_IO_TIMEOUT);
 	}
 
+	if (dump)
+		nvme_coredump_init(dev);
+
 	nvme_stop_queues(&dev->ctrl);
 
 	if (!dead && dev->ctrl.queue_count > 0) {
@@ -2477,7 +2480,7 @@ static void nvme_remove_dead_ctrl(struct nvme_dev *dev, int status)
 	dev_warn(dev->ctrl.device,
		 "Removing after probe failure status: %d\n", status);
 	nvme_get_ctrl(&dev->ctrl);
-	nvme_dev_disable(dev, false);
+	nvme_dev_disable(dev, false, false);
 	nvme_kill_queues(&dev->ctrl);
 	if (!queue_work(nvme_wq, &dev->remove_work))
 		nvme_put_ctrl(&dev->ctrl);
@@ -2499,7 +2502,7 @@ static void nvme_reset_work(struct work_struct *work)
 	 * moving on.
 	 */
 	if (dev->ctrl.ctrl_config & NVME_CC_ENABLE)
-		nvme_dev_disable(dev, false);
+		nvme_dev_disable(dev, false, false);
 
 	mutex_lock(&dev->shutdown_lock);
 	result = nvme_pci_enable(dev);
@@ -2536,6 +2539,11 @@ static void nvme_reset_work(struct work_struct *work)
 	if (result)
 		goto out;
 
+	result = nvme_coredump_logs(dev);
+	if (result)
+		goto out;
+	nvme_coredump_complete(dev);
+
 	if (dev->ctrl.oacs & NVME_CTRL_OACS_SEC_SUPP) {
 		if (!dev->ctrl.opal_dev)
 			dev->ctrl.opal_dev =
@@ -2598,6 +2606,7 @@ static void nvme_reset_work(struct work_struct *work)
 out_unlock:
 	mutex_unlock(&dev->shutdown_lock);
 out:
+	nvme_coredump_complete(dev);
 	nvme_remove_dead_ctrl(dev, result);
 }
 
@@ -2788,7 +2797,7 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 static void nvme_reset_prepare(struct pci_dev *pdev)
 {
 	struct nvme_dev *dev = pci_get_drvdata(pdev);
-	nvme_dev_disable(dev, false);
+	nvme_dev_disable(dev, false, false);
 }
 
 static void nvme_reset_done(struct pci_dev *pdev)
@@ -2800,7 +2809,7 @@ static void nvme_reset_done(struct pci_dev *pdev)
 static void nvme_shutdown(struct pci_dev *pdev)
 {
 	struct nvme_dev *dev = pci_get_drvdata(pdev);
-	nvme_dev_disable(dev, true);
+	nvme_dev_disable(dev, true, false);
 }
 
 /*
@@ -2817,14 +2826,14 @@ static void nvme_remove(struct pci_dev *pdev)
 
 	if (!pci_device_is_present(pdev)) {
 		nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DEAD);
-		nvme_dev_disable(dev, true);
+		nvme_dev_disable(dev, true, false);
 		nvme_dev_remove_admin(dev);
 	}
 
 	flush_work(&dev->ctrl.reset_work);
 	nvme_stop_ctrl(&dev->ctrl);
 	nvme_remove_namespaces(&dev->ctrl);
-	nvme_dev_disable(dev, true);
+	nvme_dev_disable(dev, true, false);
 	nvme_release_cmb(dev);
 	nvme_free_host_mem(dev);
 	nvme_dev_remove_admin(dev);
@@ -2841,7 +2850,7 @@ static int nvme_suspend(struct device *dev)
 	struct pci_dev *pdev = to_pci_dev(dev);
 	struct nvme_dev *ndev = pci_get_drvdata(pdev);
 
-	nvme_dev_disable(ndev, true);
+	nvme_dev_disable(ndev, true, false);
 	return 0;
 }
 
@@ -3290,7 +3299,7 @@ static pci_ers_result_t nvme_error_detected(struct pci_dev *pdev,
 	case pci_channel_io_frozen:
 		dev_warn(dev->ctrl.device,
			"frozen state error detected, reset controller\n");
-		nvme_dev_disable(dev, false);
+		nvme_dev_disable(dev, false, false);
 		return PCI_ERS_RESULT_NEED_RESET;
 	case pci_channel_io_perm_failure:
 		dev_warn(dev->ctrl.device,
-- 
2.7.4
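
For convenience, the fault-injection steps from the commit message can be
collected into one script. This is only a sketch, not part of the patch: it
assumes CONFIG_FAULT_INJECTION and CONFIG_FAIL_IO_TIMEOUT are enabled, debugfs
is mounted at /sys/kernel/debug, the target namespace is nvme0n1, and it adds
a bounded dd count so the read does not walk the whole device.

```shell
#!/bin/sh
# Sketch (not part of the patch): drive the fail_io_timeout test steps
# from the commit message. Requires root, CONFIG_FAIL_IO_TIMEOUT=y,
# debugfs mounted at /sys/kernel/debug, and /dev/nvme0n1 present.
set -e

FAULT=/sys/kernel/debug/fail_io_timeout

echo 1 > "$FAULT"/probability          # fail 100% of eligible I/Os...
echo 1 > "$FAULT"/times                # ...but only one of them
echo 1 > /sys/block/nvme0n1/io-timeout-fail

# Issue direct reads; the injected timeout should fire and the timed-out
# command should take the driver's new coredump path.
dd if=/dev/nvme0n1 of=/dev/null bs=4096 count=1024 iflag=direct
```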