Subject: Re: [PATCH] nvme: fix out of bounds access in nvme_cqe_pending
From: Yao HongBo
To: Christoph Hellwig
Date: Thu, 10 Jan 2019 09:54:59 +0800
Message-ID: <991b5090-adf7-78e1-ae19-0df94566c212@huawei.com>
In-Reply-To: <20190109183920.GA22070@lst.de>
References: <1546827727-49635-1-git-send-email-yaohongbo@huawei.com> <20190109183920.GA22070@lst.de>

On 1/10/2019 2:39 AM, Christoph Hellwig wrote:
> On Mon, Jan 07, 2019 at 10:22:07AM +0800, Hongbo Yao wrote:
>> There is an out of bounds array access in nvme_cqe_pending().
>>
>> When irq_thread is enabled for the nvme interrupt, there is a race
>> between updating and reading nvmeq->cq_head.
>
> Just curious: why did you enable this option? Do you have a workload
> where it matters?

Yes, there were a lot of hard interrupts reported when reading the nvme
disk; the OS could not schedule, which resulted in a soft lockup, so I
enabled irq_thread.

>> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
>> index d668682..68375d4 100644
>> --- a/drivers/nvme/host/pci.c
>> +++ b/drivers/nvme/host/pci.c
>> @@ -908,9 +908,11 @@ static void nvme_complete_cqes(struct nvme_queue *nvmeq, u16 start, u16 end)
>>
>>  static inline void nvme_update_cq_head(struct nvme_queue *nvmeq)
>>  {
>> -	if (++nvmeq->cq_head == nvmeq->q_depth) {
>> +	if (nvmeq->cq_head == (nvmeq->q_depth - 1)) {
>>  		nvmeq->cq_head = 0;
>>  		nvmeq->cq_phase = !nvmeq->cq_phase;
>> +	} else {
>> +		++nvmeq->cq_head;
>
> No need for the braces above, but otherwise this looks fine. I'll apply
> it to nvme-4.21.

Do I need to send a v2 version?
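
For reference, here is the function as I understand it should look with
the extra parentheses dropped, per your comment above -- an untested
sketch of the final form, not the patch as applied:

static inline void nvme_update_cq_head(struct nvme_queue *nvmeq)
{
	/*
	 * Compare first, then advance: the old
	 * "++nvmeq->cq_head == nvmeq->q_depth" briefly left cq_head
	 * equal to q_depth, so a reader racing with the irq thread
	 * could index one entry past the end of the CQ array.
	 */
	if (nvmeq->cq_head == nvmeq->q_depth - 1) {
		nvmeq->cq_head = 0;
		nvmeq->cq_phase = !nvmeq->cq_phase;
	} else {
		nvmeq->cq_head++;
	}
}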