Date: Fri, 8 Mar 2019 01:31:08 -0500 (EST)
From: Pankaj Gupta
To: Yongxin Liu
Cc: linux-kernel@vger.kernel.org, linux-rt-users@vger.kernel.org,
    bigeasy@linutronix.de, tglx@linutronix.de,
    rostedt@goodmis.org, Dan Williams, Paul Gortmaker,
    linux-nvdimm@lists.01.org
Message-ID: <1859347572.10599669.1552026668860.JavaMail.zimbra@redhat.com>
In-Reply-To: <20190306095709.23138-1-yongxin.liu@windriver.com>
References: <20190306095709.23138-1-yongxin.liu@windriver.com>
Subject: Re: [PATCH RT] nvdimm: make lane acquirement RT aware

> Currently, the nvdimm driver isn't RT compatible:
> nd_region_acquire_lane() disables preemption with get_cpu(), which
> causes "scheduling while atomic" spews on RT when using fio to test
> pmem as a block device.
>
> In this change, we replace get_cpu()/put_cpu() with local_lock_cpu()/
> local_unlock_cpu() and introduce the per-CPU variable "ndl_local_lock".
> Since tasks stay preemptible on RT, this lock prevents races on the
> same lane between tasks running on the same CPU. When there are more
> CPUs than lanes, a lane can be shared among CPUs; "ndl_lock->lock"
> protects the lane in that situation.
>
> This patch is derived from Dan Williams and Pankaj Gupta's proposals at
> https://www.mail-archive.com/linux-nvdimm@lists.01.org/msg13359.html
> and https://www.spinics.net/lists/linux-rt-users/msg20280.html.
> Many thanks to them.
>
> Cc: Dan Williams
> Cc: Pankaj Gupta
> Cc: linux-rt-users
> Cc: linux-nvdimm
> Signed-off-by: Yongxin Liu

This patch looks good to me.

Acked-by: Pankaj Gupta

> ---
>  drivers/nvdimm/region_devs.c | 40 +++++++++++++++++++---------------------
>  1 file changed, 19 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
> index fa37afcd43ff..6c5388cf2477 100644
> --- a/drivers/nvdimm/region_devs.c
> +++ b/drivers/nvdimm/region_devs.c
> @@ -18,9 +18,13 @@
>  #include <linux/sort.h>
>  #include <linux/io.h>
>  #include <linux/nd.h>
> +#include <linux/locallock.h>
>  #include "nd-core.h"
>  #include "nd.h"
>
> +/* lock for tasks on the same CPU to sequence the access to the lane */
> +static DEFINE_LOCAL_IRQ_LOCK(ndl_local_lock);
> +
>  /*
>   * For readq() and writeq() on 32-bit builds, the hi-lo, lo-hi order is
>   * irrelevant.
> @@ -935,18 +939,15 @@ int nd_blk_region_init(struct nd_region *nd_region)
>  unsigned int nd_region_acquire_lane(struct nd_region *nd_region)
>  {
>  	unsigned int cpu, lane;
> +	struct nd_percpu_lane *ndl_lock, *ndl_count;
>
> -	cpu = get_cpu();
> -	if (nd_region->num_lanes < nr_cpu_ids) {
> -		struct nd_percpu_lane *ndl_lock, *ndl_count;
> +	cpu = local_lock_cpu(ndl_local_lock);
>
> -		lane = cpu % nd_region->num_lanes;
> -		ndl_count = per_cpu_ptr(nd_region->lane, cpu);
> -		ndl_lock = per_cpu_ptr(nd_region->lane, lane);
> -		if (ndl_count->count++ == 0)
> -			spin_lock(&ndl_lock->lock);
> -	} else
> -		lane = cpu;
> +	lane = cpu % nd_region->num_lanes;
> +	ndl_count = per_cpu_ptr(nd_region->lane, cpu);
> +	ndl_lock = per_cpu_ptr(nd_region->lane, lane);
> +	if (ndl_count->count++ == 0)
> +		spin_lock(&ndl_lock->lock);
>
>  	return lane;
>  }
> @@ -954,17 +955,14 @@ EXPORT_SYMBOL(nd_region_acquire_lane);
>
>  void nd_region_release_lane(struct nd_region *nd_region, unsigned int lane)
>  {
> -	if (nd_region->num_lanes < nr_cpu_ids) {
> -		unsigned int cpu = get_cpu();
> -		struct nd_percpu_lane *ndl_lock, *ndl_count;
> -
> -		ndl_count = per_cpu_ptr(nd_region->lane, cpu);
> -		ndl_lock = per_cpu_ptr(nd_region->lane, lane);
> -		if (--ndl_count->count == 0)
> -			spin_unlock(&ndl_lock->lock);
> -		put_cpu();
> -	}
> -	put_cpu();
> +	struct nd_percpu_lane *ndl_lock, *ndl_count;
> +	unsigned int cpu = smp_processor_id();
> +
> +	ndl_count = per_cpu_ptr(nd_region->lane, cpu);
> +	ndl_lock = per_cpu_ptr(nd_region->lane, lane);
> +	if (--ndl_count->count == 0)
> +		spin_unlock(&ndl_lock->lock);
> +	local_unlock_cpu(ndl_local_lock);
>  }
>  EXPORT_SYMBOL(nd_region_release_lane);
>
> --
> 2.14.4
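
A note for readers who don't follow the RT tree: local_lock_cpu()/
local_unlock_cpu() and DEFINE_LOCAL_IRQ_LOCK() come from the RT
patchset's <linux/locallock.h> and are not in mainline. Below is a
minimal sketch of the pattern this patch relies on; the lock and
function names are made up for illustration, only the locallock API
itself is real:

	#include <linux/locallock.h>	/* RT patchset only, not mainline */
	#include <linux/percpu.h>

	/*
	 * One lock instance per CPU. On RT this is a per-CPU sleeping
	 * spinlock, so the holder stays preemptible; on non-RT kernels
	 * the API collapses to plain get_cpu()/put_cpu().
	 */
	static DEFINE_LOCAL_IRQ_LOCK(demo_local_lock);

	static void demo_percpu_section(void)
	{
		unsigned int cpu;

		/*
		 * Returns the current CPU id and serializes all tasks on
		 * that CPU; unlike get_cpu(), it does not disable
		 * preemption on RT, and the CPU id stays usable until the
		 * matching unlock.
		 */
		cpu = local_lock_cpu(demo_local_lock);

		/* ... per-CPU work, e.g. bumping a per-CPU lane refcount
		 * as nd_region_acquire_lane() does above ... */

		local_unlock_cpu(demo_local_lock);
	}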
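
And for context on why a stable lane matters here: callers bracket
their pmem I/O with the acquire/release pair, along these lines. This
is a simplified sketch modeled on the shape of btt_write_pg() in
drivers/nvdimm/btt.c; demo_lane_write() and the elided data path are
illustrative only:

	static int demo_lane_write(struct nd_region *nd_region,
				   void *buf, size_t len)
	{
		unsigned int lane;

		/*
		 * Take a lane. With this patch the task may be preempted
		 * while holding it, but no other task on this CPU can grab
		 * the same per-CPU slot, and ndl_lock->lock covers the case
		 * where several CPUs share one lane.
		 */
		lane = nd_region_acquire_lane(nd_region);

		/* ... program per-lane resources and move data to pmem,
		 * e.g. via memcpy_flushcache() ... */

		nd_region_release_lane(nd_region, lane);
		return 0;
	}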