Date: Wed, 9 Jan 2019 19:39:20 +0100
From: Christoph Hellwig
To: Hongbo Yao
Cc: wangxiongfeng2@huawei.com, guohanjun@huawei.com, huawei.libin@huawei.com,
	thunder.leizhen@huawei.com, tanxiaojun@huawei.com, xiexiuqi@huawei.com,
	yangyingliang@huawei.com, cj.chengjian@huawei.com,
	wxf.wang@hisilicon.com, keith.busch@intel.com, axboe@fb.com,
	hch@lst.de, sagi@grimberg.me, linux-nvme@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] nvme: fix out of bounds access in nvme_cqe_pending
Message-ID: <20190109183920.GA22070@lst.de>
References: <1546827727-49635-1-git-send-email-yaohongbo@huawei.com>
In-Reply-To: <1546827727-49635-1-git-send-email-yaohongbo@huawei.com>
User-Agent: Mutt/1.5.17 (2007-11-01)
X-Mailing-List: linux-kernel@vger.kernel.org
On Mon, Jan 07, 2019 at 10:22:07AM +0800, Hongbo Yao wrote:
> There is an out of bounds array access in nvme_cqe_pending().
>
> When irq_thread is enabled for the nvme interrupt, there is a race
> between updating and reading nvmeq->cq_head.

Just curious: why did you enable this option?  Do you have a workload
where it matters?

> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index d668682..68375d4 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -908,9 +908,11 @@ static void nvme_complete_cqes(struct nvme_queue *nvmeq, u16 start, u16 end)
>  
>  static inline void nvme_update_cq_head(struct nvme_queue *nvmeq)
>  {
> -	if (++nvmeq->cq_head == nvmeq->q_depth) {
> +	if (nvmeq->cq_head == (nvmeq->q_depth - 1)) {
>  		nvmeq->cq_head = 0;
>  		nvmeq->cq_phase = !nvmeq->cq_phase;
> +	} else {
> +		++nvmeq->cq_head;

No need for the braces above, but otherwise this looks fine.  I'll
apply it to nvme-4.21.
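
For readers following along, here is a minimal, self-contained sketch
of the race the patch closes.  The names cq_sketch,
update_cq_head_racy() and update_cq_head_fixed() are illustrative
stand-ins for the driver's struct nvme_queue and nvme_update_cq_head(),
reduced to just the fields involved:

#include <stdint.h>

struct cq_sketch {
	uint16_t cq_head;	/* consumer index into the completion ring */
	uint16_t q_depth;	/* number of entries in the ring */
	int cq_phase;		/* expected phase bit for valid entries */
};

/*
 * Pre-patch logic: "++q->cq_head == q->q_depth" first stores q_depth
 * into cq_head and only then compares and wraps it to 0.  A concurrent
 * reader such as nvme_cqe_pending(), which indexes cqes[q->cq_head],
 * can observe the transient value q_depth and read one element past
 * the end of the ring.
 */
static inline void update_cq_head_racy(struct cq_sketch *q)
{
	if (++q->cq_head == q->q_depth) {
		q->cq_head = 0;
		q->cq_phase = !q->cq_phase;
	}
}

/*
 * Patched logic: cq_head is only ever assigned values in
 * [0, q_depth - 1], so no interleaving lets a reader see an
 * out-of-range index.
 */
static inline void update_cq_head_fixed(struct cq_sketch *q)
{
	if (q->cq_head == q->q_depth - 1) {
		q->cq_head = 0;
		q->cq_phase = !q->cq_phase;
	} else {
		q->cq_head++;
	}
}

The point of the fix is not that the update becomes atomic, but that
the writer never publishes a transiently out-of-range index, which is
the property a lock-free reader of cq_head depends on.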