From: "Elliott, Robert (Persistent Memory)"
To: Jens Axboe, "linux-kernel@vger.kernel.org", "linux-block@vger.kernel.org"
Cc: "keith.busch@intel.com", "hch@infradead.org"
Subject: RE: [PATCH 4/5] NVMe: add blk polling support
Date: Fri, 6 Nov 2015 23:46:07 +0000
Message-ID: <94D0CD8314A33A4D9D801C0FE68B40295BE10552@G4W3202.americas.hpqcorp.net>
In-Reply-To: <1446830423-25027-5-git-send-email-axboe@fb.com>

> -----Original Message-----
> From: linux-kernel-owner@vger.kernel.org [mailto:linux-kernel-owner@vger.kernel.org] On Behalf Of Jens Axboe
> Sent: Friday, November 6, 2015 11:20 AM
...
> Subject: [PATCH 4/5] NVMe: add blk polling support
>
> Add nvme_poll(), which will check a specific completion queue for
> command completions. Wire that up to the new block layer poll
> mechanism.
>
> Later on we'll set up specific sq/cq pairs that don't have interrupts
> enabled, so we can do more efficient polling. As of this patch, an
> IRQ will still trigger on command completion.
...
> -static int nvme_process_cq(struct nvme_queue *nvmeq)
> +static void __nvme_process_cq(struct nvme_queue *nvmeq, unsigned int *tag)
>  {
>  	u16 head, phase;
>
> @@ -953,6 +953,8 @@ static int nvme_process_cq(struct nvme_queue *nvmeq)
>  			head = 0;
>  			phase = !phase;
>  		}
> +		if (tag && *tag == cqe.command_id)
> +			*tag = -1;
>  		ctx = nvme_finish_cmd(nvmeq, cqe.command_id, &fn);
>  		fn(nvmeq, ctx, &cqe);
>  	}

NVMe completion queue entries are 16 bytes long. Although a device most
likely writes bytes 0..15 in one PCIe Memory Write transaction, nothing
requires it to; the bytes could become visible in any order. The command
identifier could thus be updated before the other bytes, causing this code
to process stale values in the remaining fields.

When using interrupts, the MSI-X interrupt ensures the whole entry is
updated first, since the interrupt is itself delivered with a PCIe Memory
Write transaction and PCIe ordering rules do not let one upstream write
pass another.

The existing interrupt handler loops looking for additional completions,
so it is susceptible to the same problem - just less often, since the only
exposure is to completions posted while the CPU is already in the
interrupt handler.

---
Robert Elliott, HPE Persistent Memory
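
For illustration, a minimal kernel-style sketch of the usual software-side
defence follows: treat the phase tag as the only "entry is new" indicator,
and order every other read of the entry after that check with dma_rmb().
The structure layout, queue layout, and function names below are simplified
assumptions invented for this sketch, not the actual driver code, and the
barrier only helps if the device makes the phase bit visible last - which
is exactly the device-side ordering property being questioned above.

#include <linux/types.h>
#include <linux/compiler.h>	/* READ_ONCE() */
#include <asm/barrier.h>	/* dma_rmb() */

struct sketch_cqe {			/* 16-byte completion entry, simplified */
	__le32	result;
	__le32	rsvd;
	__le16	sq_head;
	__le16	sq_id;
	__le16	command_id;
	__le16	status;			/* bit 0 is the phase tag */
};

struct sketch_queue {
	struct sketch_cqe *cqes;	/* DMA-coherent completion queue */
	u16	head;
	u16	q_depth;
	u8	phase;
};

/* A new entry is signalled only by the phase tag matching the expected value. */
static bool sketch_cqe_pending(struct sketch_queue *q)
{
	u16 status = le16_to_cpu(READ_ONCE(q->cqes[q->head].status));

	return (status & 1) == q->phase;
}

static void sketch_process_cq(struct sketch_queue *q, unsigned int *tag)
{
	while (sketch_cqe_pending(q)) {
		struct sketch_cqe cqe;

		/*
		 * Order the reads of the remaining fields after the phase
		 * check; without this the CPU may load command_id and the
		 * other fields before the status word it just tested.
		 */
		dma_rmb();
		cqe = q->cqes[q->head];

		if (tag && *tag == le16_to_cpu(cqe.command_id))
			*tag = -1;

		/* complete the command identified by cqe.command_id here */

		if (++q->head == q->q_depth) {
			q->head = 0;
			q->phase = !q->phase;
		}
	}
}

If the device really can make command_id visible before the phase bit
flips, no CPU-side barrier helps; the guarantee has to come from the
device or the specification.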