Date: Mon, 26 Feb 2018 09:24:40 -0700
From: Keith Busch
To: baegjae@gmail.com
Cc: axboe@fb.com, hch@lst.de, sagi@grimberg.me, linux-nvme@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] nvme-multipath: fix sysfs dangerously created links
Message-ID: <20180226162440.GB10832@localhost.localdomain>
References: <20180226085123.26120-1-baegjae@gmail.com>
In-Reply-To: <20180226085123.26120-1-baegjae@gmail.com>
User-Agent: Mutt/1.9.1 (2017-09-22)
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Feb 26, 2018 at 05:51:23PM
+0900, baegjae@gmail.com wrote:
> From: Baegjae Sung
>
> If multipathing is enabled, each NVMe subsystem creates a head
> namespace (e.g., nvme0n1) and multiple private namespaces
> (e.g., nvme0c0n1 and nvme0c1n1) in sysfs. When the links for a
> private namespace are created, the links of the head namespace are
> used, so the namespace creation order must be followed (e.g.,
> nvme0n1 -> nvme0c1n1). If the order is not followed, the sysfs
> links will be incomplete or a kernel panic will occur.
>
> The kernel panic was:
>   kernel BUG at fs/sysfs/symlink.c:27!
>   Call Trace:
>     nvme_mpath_add_disk_links+0x5d/0x80 [nvme_core]
>     nvme_validate_ns+0x5c2/0x850 [nvme_core]
>     nvme_scan_work+0x1af/0x2d0 [nvme_core]
>
> Correct order
>   Context A     Context B
>   nvme0n1
>   nvme0c0n1     nvme0c1n1
>
> Incorrect order
>   Context A     Context B
>                 nvme0c1n1
>   nvme0n1
>   nvme0c0n1
>
> The head namespace creation is moved so that the correct order is
> always maintained. We verified the code, with and without
> multipathing, using dual-port NVMe SSDs from three vendors.
>
> Signed-off-by: Baegjae Sung

Thanks, I see what you mean about the potential ordering problem here.

Calling nvme_mpath_add_disk before the 'head' has any namespace paths
available, though, looks like it will produce a lot of 'no path
available' warnings during bring-up. It should resolve itself shortly
after, but the warnings will be a bit alarming, right?
> ---
>  drivers/nvme/host/core.c | 12 +++---------
>  1 file changed, 3 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 0fe7ea35c221..28777b7352a5 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -2844,7 +2844,7 @@ static struct nvme_ns_head *nvme_alloc_ns_head(struct nvme_ctrl *ctrl,
>  }
>  
>  static int nvme_init_ns_head(struct nvme_ns *ns, unsigned nsid,
> -		struct nvme_id_ns *id, bool *new)
> +		struct nvme_id_ns *id)
>  {
>  	struct nvme_ctrl *ctrl = ns->ctrl;
>  	bool is_shared = id->nmic & (1 << 0);
> @@ -2860,8 +2860,7 @@ static int nvme_init_ns_head(struct nvme_ns *ns, unsigned nsid,
>  			ret = PTR_ERR(head);
>  			goto out_unlock;
>  		}
> -
> -		*new = true;
> +		nvme_mpath_add_disk(head);
>  	} else {
>  		struct nvme_ns_ids ids;
>  
> @@ -2873,8 +2872,6 @@ static int nvme_init_ns_head(struct nvme_ns *ns, unsigned nsid,
>  			ret = -EINVAL;
>  			goto out_unlock;
>  		}
> -
> -		*new = false;
>  	}
>  
>  	list_add_tail(&ns->siblings, &head->list);
> @@ -2945,7 +2942,6 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
>  	struct nvme_id_ns *id;
>  	char disk_name[DISK_NAME_LEN];
>  	int node = dev_to_node(ctrl->dev), flags = GENHD_FL_EXT_DEVT;
> -	bool new = true;
>  
>  	ns = kzalloc_node(sizeof(*ns), GFP_KERNEL, node);
>  	if (!ns)
> @@ -2971,7 +2967,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
>  	if (id->ncap == 0)
>  		goto out_free_id;
>  
> -	if (nvme_init_ns_head(ns, nsid, id, &new))
> +	if (nvme_init_ns_head(ns, nsid, id))
>  		goto out_free_id;
>  	nvme_setup_streams_ns(ctrl, ns);
>  
> @@ -3037,8 +3033,6 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
>  		pr_warn("%s: failed to register lightnvm sysfs group for identification\n",
>  			ns->disk->disk_name);
>  
> -	if (new)
> -		nvme_mpath_add_disk(ns->head);
>  	nvme_mpath_add_disk_links(ns);
>  	return;
>  out_unlink_ns:
> -- 
> 2.16.2