From: Xianting Tian <tian.xianting@h3c.com>
Subject: [PATCH] nvme: use correct upper limit for tag in nvme_handle_cqe()
Date: Fri, 18 Sep 2020 15:44:34 +0800
Message-ID: <20200918074434.6461-1-tian.xianting@h3c.com>
X-Mailer: git-send-email 2.17.1
X-Mailing-List: linux-kernel@vger.kernel.org

We hit a crash when hot-inserting an NVMe device: blk_mq_tag_to_rq()
returned NULL (req == NULL), and the crash then happened in
nvme_end_request():

	req = blk_mq_tag_to_rq();
	struct nvme_request *rq = nvme_req(req);	/* rq = req + 1 */
	rq->result = result;				/* <== crash here! */
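The oops below reports the fault at 0000000000000130. Since nvme_req()
returns the per-command driver data that sits right after struct request
(req + 1, as noted above), a NULL req turns the write to rq->result into a
store at a small non-zero offset from address 0. A minimal userspace sketch
of that arithmetic, using made-up stand-in struct sizes rather than the real
kernel layouts:

/*
 * Sketch only: fake_request/fake_nvme_req are illustrative stand-ins, not
 * the real struct request/struct nvme_request. The point is the pointer
 * arithmetic: pdu = req + 1, so a NULL req yields a small non-zero address.
 */
#include <stdio.h>
#include <stddef.h>

struct fake_request  { char opaque[288]; };		/* stand-in for struct request */
struct fake_nvme_req { void *cmd; long result; };	/* stand-in for struct nvme_request */

int main(void)
{
	/* With req == NULL, nvme_req(req) == (void *)sizeof(struct fake_request). */
	size_t fault_addr = sizeof(struct fake_request) +
			    offsetof(struct fake_nvme_req, result);

	printf("rq->result write would land at %#zx, not at 0\n", fault_addr);
	return 0;
}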
[ 1124.256246] nvme nvme5: pci function 0000:e1:00.0
[ 1124.256323] nvme 0000:e1:00.0: enabling device (0000 -> 0002)
[ 1125.720859] nvme nvme5: 96/0/0 default/read/poll queues
[ 1125.732483]  nvme5n1: p1 p2 p3
[ 1125.788049] BUG: unable to handle kernel NULL pointer dereference at 0000000000000130
[ 1125.788054] PGD 0 P4D 0
[ 1125.788057] Oops: 0002 [#1] SMP NOPTI
[ 1125.788059] CPU: 50 PID: 0 Comm: swapper/50 Kdump: loaded Tainted: G    ------- -t - 4.18.0-147.el8.x86_64 #1
[ 1125.788065] RIP: 0010:nvme_irq+0xe8/0x240 [nvme]
[ 1125.788068] RSP: 0018:ffff916b8ec83ed0 EFLAGS: 00010813
[ 1125.788069] RAX: 0000000000000000 RBX: ffff918ae9211b00 RCX: 0000000000000000
[ 1125.788070] RDX: 000000000000400b RSI: 0000000000000000 RDI: 0000000000000000
[ 1125.788071] RBP: ffff918ae8870000 R08: 0000000000000004 R09: ffff918ae8870000
[ 1125.788072] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ 1125.788073] R13: 0000000000000001 R14: 0000000000000000 R15: 0000000000000001
[ 1125.788075] FS:  0000000000000000(0000) GS:ffff916b8ec80000(0000) knlGS:0000000000000000
[ 1125.788075] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1125.788076] CR2: 0000000000000130 CR3: 0000001768f00000 CR4: 0000000000340ee0
[ 1125.788077] Call Trace:
[ 1125.788080]  <IRQ>
[ 1125.788085]  __handle_irq_event_percpu+0x40/0x180
[ 1125.788087]  handle_irq_event_percpu+0x30/0x80
[ 1125.788089]  handle_irq_event+0x36/0x53
[ 1125.788090]  handle_edge_irq+0x82/0x190
[ 1125.788094]  handle_irq+0xbf/0x100
[ 1125.788098]  do_IRQ+0x49/0xd0
[ 1125.788100]  common_interrupt+0xf/0xf

Our analysis of the likely cause is as follows.

According to our test, nvme_pci_enable() sets 'dev->q_depth' to 1024, and
nvme_create_io_queues()->nvme_alloc_queue() sets 'nvmeq->q_depth' to
'dev->q_depth'. In nvme_dev_add(), however, 'dev->tagset.queue_depth' is
set to 1023:

	dev->tagset.queue_depth = min_t(unsigned int, dev->q_depth,
					BLK_MQ_MAX_DEPTH) - 1;
	/* why -1? first introduced by commit a4aea562 */

So we end up with the following depths:

	dev->q_depth            = 1024
	dev->tagset.queue_depth = 1023
	nvmeq->q_depth          = 1024

blk_mq_alloc_rqs() therefore allocates 1023 (dev->tagset.queue_depth)
requests per hardware queue, while nvme_alloc_queue() allocates 1024
(nvmeq->q_depth) entries for nvmeq->cqes[]. When a completion is handled,
nvme_handle_cqe() fetches the cqe by index:

	struct nvme_completion *cqe = &nvmeq->cqes[idx];
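To make the mismatch concrete, here is a small standalone sketch of the two
bounds, with plain constants standing in for the fields observed above
(nvmeq->q_depth and dev->tagset.queue_depth); a command_id of 1023 passes
the old q_depth check but is outside the tag set:

/*
 * Sketch only: constants mirror the values observed in our test. The real
 * code bounds cqe->command_id by nvmeq->q_depth and then looks it up in the
 * blk-mq tag set, which only has tagset.queue_depth requests.
 */
#include <stdio.h>
#include <stdbool.h>

#define NVMEQ_Q_DEPTH		1024	/* entries in nvmeq->cqes[] */
#define TAGSET_QUEUE_DEPTH	1023	/* requests allocated per hw queue */

/* Old bound in nvme_handle_cqe(): limits the cqes[] index, not the tag. */
static bool old_check_rejects(unsigned int command_id)
{
	return command_id >= NVMEQ_Q_DEPTH;
}

/* blk_mq_tag_to_rq() only has TAGSET_QUEUE_DEPTH requests to return. */
static bool tag_in_range(unsigned int command_id)
{
	return command_id < TAGSET_QUEUE_DEPTH;
}

int main(void)
{
	unsigned int command_id = 1023;		/* the problematic value */

	printf("command_id %u: rejected by old check: %s, valid tag: %s\n",
	       command_id,
	       old_check_rejects(command_id) ? "yes" : "no",	/* no */
	       tag_in_range(command_id) ? "yes" : "no");	/* no -> NULL req */
	return 0;
}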
We also added the debug prints below in nvme_handle_cqe() and
blk_mq_tag_to_rq():

static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx)
{
	volatile struct nvme_completion *cqe = &nvmeq->cqes[idx];
	struct request *req;

	/* debug print */
	dev_warn(nvmeq->dev->ctrl.device,
		"command_id %d completed on queue %d, nvmeq q_depth %d, nvme tagset q_depth %d\n",
		cqe->command_id, le16_to_cpu(cqe->sq_id),
		nvmeq->q_depth, nvmeq->dev->tagset.queue_depth);

	if (unlikely(cqe->command_id >= nvmeq->q_depth)) {
		dev_warn(nvmeq->dev->ctrl.device,
			"invalid id %d completed on queue %d\n",
			cqe->command_id, le16_to_cpu(cqe->sq_id));
		return;
	}
	...
	req = blk_mq_tag_to_rq(nvme_queue_tagset(nvmeq), cqe->command_id);
	...
}

struct request *blk_mq_tag_to_rq(struct blk_mq_tags *tags, unsigned int tag)
{
	/* debug print */
	printk("tag, nr_tags:%d %d\n", tag, tags->nr_tags);

	if (tag < tags->nr_tags) {
		prefetch(tags->rqs[tag]);
		return tags->rqs[tag];
	}
	return NULL;
}

The output shows that the maximum number of tags (nr_tags) is 1023, while
nvmeq->q_depth is 1024 and nvmeq->cqes[] holds 1024 entries. So when
command_id (the tag) is 1023, the check
"if (unlikely(cqe->command_id >= nvmeq->q_depth))" in nvme_handle_cqe()
does not catch it, and blk_mq_tag_to_rq() returns a NULL pointer:

[   16.649973] nvme nvme0: command_id 968 completed on queue 13, nvmeq q_depth 1024, nvme tagset q_depth 1023
[   16.649974] tag, nr_tags:968 1023

This patch checks command_id against its correct upper limit,
'nvmeq->dev->tagset.queue_depth', instead of nvmeq->q_depth, so even a
command_id of 1023 can no longer lead to a NULL pointer dereference.

Signed-off-by: Xianting Tian <tian.xianting@h3c.com>
---
 drivers/nvme/host/pci.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 899d2f4d7..c681e26d0 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -940,7 +940,7 @@ static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx)
 	struct nvme_completion *cqe = &nvmeq->cqes[idx];
 	struct request *req;
 
-	if (unlikely(cqe->command_id >= nvmeq->q_depth)) {
+	if (unlikely(cqe->command_id >= nvmeq->dev->tagset.queue_depth)) {
 		dev_warn(nvmeq->dev->ctrl.device,
 			"invalid id %d completed on queue %d\n",
 			cqe->command_id, le16_to_cpu(cqe->sq_id));
-- 
2.17.1
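As a side note, independent of this patch: since the output above shows
blk_mq_tag_to_rq() returning NULL for an out-of-range tag, the completion
path could in principle also check the lookup result before dereferencing
it. A hedged userspace sketch of that shape, with stand-in types rather
than the real blk-mq API:

/*
 * Sketch only: lookup_tag()/fake_rq are stand-ins for blk_mq_tag_to_rq()
 * and struct request; the idea is simply "reject NULL instead of
 * dereferencing it".
 */
#include <stdio.h>
#include <stddef.h>

#define NR_TAGS 1023

struct fake_rq { int result; };
static struct fake_rq rq_table[NR_TAGS];

/* Models blk_mq_tag_to_rq(): NULL for any tag outside the allocated range. */
static struct fake_rq *lookup_tag(unsigned int tag)
{
	return tag < NR_TAGS ? &rq_table[tag] : NULL;
}

static void complete_one(unsigned int command_id, int result)
{
	struct fake_rq *req = lookup_tag(command_id);

	if (!req) {			/* the guard missing in the crash path */
		fprintf(stderr, "invalid id %u completed\n", command_id);
		return;
	}
	req->result = result;
}

int main(void)
{
	complete_one(968, 0);	/* in range: completes normally */
	complete_one(1023, 0);	/* out of range: rejected, no crash */
	return 0;
}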