From: Yongxin Liu <yongxin.liu@windriver.com>
Subject: [PATCH RT] nvdimm: make lane acquirement RT aware
Date: Wed, 6 Mar 2019 17:57:09 +0800
Message-ID: <20190306095709.23138-1-yongxin.liu@windriver.com>
X-Mailer: git-send-email 2.14.4
X-Mailing-List: linux-kernel@vger.kernel.org

Currently, the nvdimm driver isn't RT compatible: nd_region_acquire_lane()
disables preemption with get_cpu(), which causes "scheduling while atomic"
spews on RT when using fio to test pmem as a block device.
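To illustrate the failure mode, here is a minimal sketch (not verbatim
driver code; the variable names follow the driver): on PREEMPT_RT a
spin_lock() is a sleeping lock (an rt_mutex underneath), so taking it
while preemption is disabled is invalid.

	/*
	 * Sketch of the problematic mainline pattern. get_cpu() implies
	 * preempt_disable(); on PREEMPT_RT, spin_lock() may sleep, so
	 * acquiring it inside the get_cpu()/put_cpu() window triggers
	 * "BUG: scheduling while atomic".
	 */
	cpu = get_cpu();                /* preempt_disable() + smp_processor_id() */
	spin_lock(&ndl_lock->lock);     /* may sleep on RT -> atomic-context splat */
	/* ... use the lane ... */
	spin_unlock(&ndl_lock->lock);
	put_cpu();                      /* preempt_enable() */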
In this change, we replace get_cpu()/put_cpu() with local_lock_cpu()/
local_unlock_cpu() and introduce the per-CPU local lock "ndl_local_lock".
Because tasks stay preemptible on RT, this lock serializes tasks on the
same CPU that contend for the same lane, avoiding a race condition. When
there are more CPUs than lanes, a lane can be shared among CPUs;
"ndl_lock->lock" still protects the lane in that situation.

This patch is derived from proposals by Dan Williams and Pankaj Gupta:
https://www.mail-archive.com/linux-nvdimm@lists.01.org/msg13359.html
https://www.spinics.net/lists/linux-rt-users/msg20280.html
Many thanks to them.

Cc: Dan Williams
Cc: Pankaj Gupta
Cc: linux-rt-users
Cc: linux-nvdimm
Signed-off-by: Yongxin Liu <yongxin.liu@windriver.com>
---
 drivers/nvdimm/region_devs.c | 40 +++++++++++++++++++---------------------
 1 file changed, 19 insertions(+), 21 deletions(-)

diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
index fa37afcd43ff..6c5388cf2477 100644
--- a/drivers/nvdimm/region_devs.c
+++ b/drivers/nvdimm/region_devs.c
@@ -18,9 +18,13 @@
 #include <linux/sort.h>
 #include <linux/io.h>
 #include <linux/nd.h>
+#include <linux/locallock.h>
 #include "nd-core.h"
 #include "nd.h"
 
+/* lock for tasks on the same CPU to sequence the access to the lane */
+static DEFINE_LOCAL_IRQ_LOCK(ndl_local_lock);
+
 /*
  * For readq() and writeq() on 32-bit builds, the hi-lo, lo-hi order is
  * irrelevant.
@@ -935,18 +939,15 @@ int nd_blk_region_init(struct nd_region *nd_region)
 unsigned int nd_region_acquire_lane(struct nd_region *nd_region)
 {
 	unsigned int cpu, lane;
+	struct nd_percpu_lane *ndl_lock, *ndl_count;
 
-	cpu = get_cpu();
-	if (nd_region->num_lanes < nr_cpu_ids) {
-		struct nd_percpu_lane *ndl_lock, *ndl_count;
+	cpu = local_lock_cpu(ndl_local_lock);
 
-		lane = cpu % nd_region->num_lanes;
-		ndl_count = per_cpu_ptr(nd_region->lane, cpu);
-		ndl_lock = per_cpu_ptr(nd_region->lane, lane);
-		if (ndl_count->count++ == 0)
-			spin_lock(&ndl_lock->lock);
-	} else
-		lane = cpu;
+	lane = cpu % nd_region->num_lanes;
+	ndl_count = per_cpu_ptr(nd_region->lane, cpu);
+	ndl_lock = per_cpu_ptr(nd_region->lane, lane);
+	if (ndl_count->count++ == 0)
+		spin_lock(&ndl_lock->lock);
 
 	return lane;
 }
@@ -954,17 +955,14 @@ EXPORT_SYMBOL(nd_region_acquire_lane);
 
 void nd_region_release_lane(struct nd_region *nd_region, unsigned int lane)
 {
-	if (nd_region->num_lanes < nr_cpu_ids) {
-		unsigned int cpu = get_cpu();
-		struct nd_percpu_lane *ndl_lock, *ndl_count;
-
-		ndl_count = per_cpu_ptr(nd_region->lane, cpu);
-		ndl_lock = per_cpu_ptr(nd_region->lane, lane);
-		if (--ndl_count->count == 0)
-			spin_unlock(&ndl_lock->lock);
-		put_cpu();
-	} else
-		put_cpu();
+	struct nd_percpu_lane *ndl_lock, *ndl_count;
+	unsigned int cpu = smp_processor_id();
+
+	ndl_count = per_cpu_ptr(nd_region->lane, cpu);
+	ndl_lock = per_cpu_ptr(nd_region->lane, lane);
+	if (--ndl_count->count == 0)
+		spin_unlock(&ndl_lock->lock);
+	local_unlock_cpu(ndl_local_lock);
 }
 EXPORT_SYMBOL(nd_region_release_lane);
-- 
2.14.4
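
For reviewers skimming the locking scheme: the call-site pairing is
unchanged, only the internals moved from get_cpu()/put_cpu() to the
local lock. A hypothetical caller sketch (nd_region stands for any
initialized region; actual callers are the BTT/BLK I/O paths):

	/* Lane use is still bracketed exactly as before. */
	unsigned int lane = nd_region_acquire_lane(nd_region);
	/* ... submit I/O through the per-lane resources ... */
	nd_region_release_lane(nd_region, lane);

With more CPUs than lanes, lane = cpu % num_lanes, so with e.g. 8 CPUs
and 4 lanes, CPUs 0 and 4 both map to lane 0 and serialize on
"ndl_lock->lock". On a single CPU, "ndl_local_lock" orders the tasks,
and the ndl_count->count reference count makes only the first acquirer
on a CPU take the cross-CPU lane spinlock, so re-entrant lane use on
the same CPU does not self-deadlock.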