From: Parav Pandit
To: linux-nvme@lists.infradead.org, willy@linux.intel.com
Cc: parav.pandit@avagotech.com, axboe@kernel.dk, linux-kernel@vger.kernel.org,
    keith.busch@intel.com, Parav Pandit
Subject: [PATCH] NVMe: Fix race between nvme_thread and probe path
Date: Thu, 18 Jun 2015 16:13:50 +0530
Message-Id: <1434624230-25050-1-git-send-email-Parav.pandit@avagotech.com>

The kernel thread nvme_thread and the driver probe path can execute in
parallel on different CPUs. If the stores in nvme_alloc_queue() become
visible out of order, nvme_thread can observe the updated queue_count
before the corresponding queues[] entry has been published and so
dereference a stale pointer. A write memory barrier in
nvme_alloc_queue() orders the queues[] store before the queue_count
update, and a data dependency read barrier in the reader thread ensures
the reading CPU observes the stores in the same order.

Signed-off-by: Parav Pandit
---
 drivers/block/nvme-core.c |   12 ++++++++++--
 1 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
index 5961ed7..90fb0ce 100644
--- a/drivers/block/nvme-core.c
+++ b/drivers/block/nvme-core.c
@@ -1403,8 +1403,10 @@ static struct nvme_queue *nvme_alloc_queue(struct nvme_dev *dev, int qid,
 	nvmeq->q_db = &dev->dbs[qid * 2 * dev->db_stride];
 	nvmeq->q_depth = depth;
 	nvmeq->qid = qid;
-	dev->queue_count++;
 	dev->queues[qid] = nvmeq;
+	/* Publish the queues[] entry before updating queue_count. */
+	smp_wmb();
+	dev->queue_count++;
 
 	return nvmeq;
 
@@ -2073,7 +2075,13 @@ static int nvme_kthread(void *data)
 				continue;
 			}
 			for (i = 0; i < dev->queue_count; i++) {
-				struct nvme_queue *nvmeq = dev->queues[i];
+				struct nvme_queue *nvmeq;
+
+				/* Make sure queue_count is read before
+				 * traversing queues[].
+				 */
+				smp_read_barrier_depends();
+				nvmeq = dev->queues[i];
 				if (!nvmeq)
 					continue;
 				spin_lock_irq(&nvmeq->q_lock);
-- 
1.7.1
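
To illustrate the ordering the two barriers are meant to enforce, below is a
minimal userspace C sketch. It is illustrative only and not part of the patch:
the names publish_queue and poll_queues and the fixed-size array are made up,
and C11 release/acquire atomics stand in for smp_wmb() and the data dependency
read barrier (acquire being a stronger substitute).

/* Userspace sketch of the publish/consume ordering, not kernel code. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_QUEUES 64

struct queue {
	int qid;
};

static struct queue *queues[MAX_QUEUES];
static atomic_int queue_count;

/*
 * Writer side (probe path): publish the queue pointer first, then make
 * the new count visible.  The release store orders the pointer store
 * before the count update, mirroring
 *   dev->queues[qid] = nvmeq; smp_wmb(); dev->queue_count++;
 */
static void publish_queue(int qid)
{
	struct queue *q = malloc(sizeof(*q));

	if (!q)
		return;
	q->qid = qid;
	queues[qid] = q;
	atomic_store_explicit(&queue_count, qid + 1, memory_order_release);
}

/*
 * Reader side (poll thread): read the count first, then walk only the
 * slots published before that count became visible.  The acquire load
 * plays the role of the data dependency read barrier in the patch.
 */
static void poll_queues(void)
{
	int count = atomic_load_explicit(&queue_count, memory_order_acquire);
	int i;

	for (i = 0; i < count; i++) {
		struct queue *q = queues[i];

		if (!q)
			continue;
		printf("servicing queue %d\n", q->qid);
	}
}

int main(void)
{
	publish_queue(0);
	publish_queue(1);
	poll_queues();
	return 0;
}

The release store on queue_count guarantees the queues[] pointer is visible
before the new count, and the acquire load guarantees that any slot below the
observed count is read afterwards; this is the same publish/consume pattern
the patch applies to dev->queues[] and dev->queue_count.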