Date: Thu, 31 May 2018 08:37:39 -0400
From: Mike Snitzer
To: Sagi Grimberg
Cc: Christoph Hellwig, Johannes Thumshirn, Keith Busch, Hannes Reinecke,
    Laurence Oberman, Ewan Milne, James Smart, Linux Kernel Mailinglist,
    Linux NVMe Mailinglist, Martin K. Petersen, Martin George,
    John Meneghini
Subject: Re: [PATCH 0/3] Provide more fine grained control over multipathing
Message-ID: <20180531123738.GA10552@redhat.com>
References: <20180525125322.15398-1-jthumshirn@suse.de>
 <20180525130535.GA24239@lst.de>
 <20180525135813.GB9591@redhat.com>
 <20180530220206.GA7037@redhat.com>

On Thu, May 31 2018 at  4:37am -0400,
Sagi Grimberg wrote:

> > Wouldn't expect you guys to nurture this 'mpath_personality' knob.  So
> > when features like "dispersed namespaces" land, a negative check would
> > need to be added in the code to prevent switching from "native".
> >
> > And once something like "dispersed namespaces" lands we'd then have to
> > see about a more sophisticated switch that operates at a different
> > granularity.  Could also be that switching one subsystem that is part
> > of "dispersed namespaces" would then cascade to all other associated
> > subsystems?  Not that dissimilar from the 3rd patch in this series
> > that allows a 'device' switch to be done in terms of the subsystem.
>
> Which I think is broken by allowing to change this personality on the
> fly.

I saw your reply to the 1/3 patch..  I do agree it is broken for not
checking whether any handles are active.  But that is easily fixed, no?
Or are you suggesting some other aspect of "broken"?
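(To be concrete about the "easily fixed" part, the check I have in mind
is roughly the sketch below.  The sysfs path, the attribute name, and the
idea of an open-handle count passed in from outside are all illustrative
assumptions on my part, not the actual interface from the patches.)

```shell
# Illustrative only: refuse the per-subsystem personality switch while
# any handle is still open.  The sysfs path and the open-handle count
# are assumptions, not the real kernel interface from this patchset.
switch_personality() {
    subsys="$1"      # e.g. nvme-subsys1
    target="$2"      # "native" or "other"
    open_count="$3"  # number of active handles, however the kernel tracks it
    if [ "$open_count" -gt 0 ]; then
        echo "error: $subsys has $open_count open handles, not switching" >&2
        return 1
    fi
    echo "OK: would write '$target' to /sys/class/nvme-subsystem/$subsys/mpath_personality"
}
```

i.e. the switch simply fails (EBUSY-style) while anything still holds
the device, and succeeds otherwise.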
> > Anyway, I don't know the end from the beginning on something you just
> > told me about ;)  But we're all in this together.  And can take it as
> > it comes.
>
> I agree but this will be exposed to user-space and we will need to live
> with it for a long long time...

OK, well dm-multipath has been around for a long long time.  We cannot
simply wish it away, regardless of whatever architectural grievances are
levied against it.  There are far more customer and vendor products that
have been developed to understand and consume the dm-multipath and
multipath-tools interfaces than native NVMe multipath.

> >> Don't get me wrong, I do support your cause, and I think nvme should
> >> try to help, I just think that subsystem granularity is not the
> >> correct approach going forward.
>
> > I understand there will be limits to this 'mpath_personality' knob's
> > utility and it'll need to evolve over time.  But the burden of making
> > more advanced NVMe multipath features accessible outside of native
> > NVMe isn't intended to be on any of the NVMe maintainers (other than
> > maybe remembering to disallow the switch where it makes sense in the
> > future).
>
> I would expect that any "advanced multipath features" would be properly
> brought up with the NVMe TWG as a ratified standard and find its way
> to nvme.  So I don't think this particularly is a valid argument.

You're misreading me again.  I'm also saying: stop worrying.  Any future
native NVMe multipath features that come about don't necessarily get
immediate dm-multipath parity; native NVMe multipath would just need the
appropriate negative checks.

> >> As I said, I've been off the grid, can you remind me why a global
> >> knob is not sufficient?
>
> > Because once nvme_core.multipath=N is set, native NVMe multipath is
> > then not accessible from the same host.  The goal of this patchset is
> > to give users choice.  But not limit them to _only_ using dm-multipath
> > if they just have some legacy needs.
> >
> > Tough to be convincing with hypotheticals but I could imagine a very
> > obvious usecase for native NVMe multipathing being PCI-based embedded
> > NVMe "fabrics" (especially if/when the numa-based path selector
> > lands).  But the same host with PCI NVMe could be connected to an FC
> > network that has historically always been managed via dm-multipath..
> > but say that FC-based infrastructure gets updated to use NVMe (to
> > leverage a wider NVMe investment, whatever?) -- maybe admins would
> > still prefer to use dm-multipath for the NVMe over FC.
>
> You are referring to an array exposing media via nvmf and scsi
> simultaneously?  I'm not sure that there is a clean definition of
> how that is supposed to work (ANA/ALUA, reservations, etc..)

No, I'm referring to completely disjoint arrays that are homed to the
same host.

> >> This might sound stupid to you, but can't users that desperately must
> >> keep using dm-multipath (for its mature toolset or what-not) just
> >> stack it on the multipath nvme device?  (I might be completely off on
> >> this so feel free to correct my ignorance.)
>
> > We could certainly pursue adding multipath-tools support for native
> > NVMe multipathing.  Not opposed to it (even if just reporting topology
> > and state).  But given the extensive lengths NVMe multipath goes to to
> > hide devices, we'd need some way of piercing through the opaque nvme
> > device that native NVMe multipath exposes.  But that really is a
> > tangent relative to this patchset.  That kind of visibility would also
> > benefit the nvme cli... otherwise how are users even able to trust,
> > but verify, that native NVMe multipathing did what they expected it
> > to?
>
> Can you explain what is missing for multipath-tools to resolve topology?

I've not pored over these nvme interfaces (below I just learned nvme-cli
has since grown the capability).  So I'm not informed enough to know
whether nvme-cli has grown other new capabilities.
In any case, training multipath-tools to understand native NVMe
multipath topology doesn't replace the actual dm-multipath interface and
its associated information.  Per-device statistics are something users
want to be able to see; likewise per-device up/down state, etc.

> nvme list-subsys is doing just that, doesn't it?  It lists subsys-ctrl
> topology, but that is sort of the important information as controllers
> are the real paths.

I had nvme-cli version 1.4, which doesn't have 'nvme list-subsys'.
Which meant I had to uninstall the distro-provided
nvme-cli-1.4-3.el7.x86_64, find the relevant upstream, and build from
source...

Yes, this looks like the basic topology info I was hoping for:

# nvme list-subsys
nvme-subsys0 - NQN=nqn.2014.08.org.nvmexpress:80868086PHMB7361004R280CGN INTEL SSDPED1D280GA
\
 +- nvme0 pcie 0000:5e:00.0
nvme-subsys1 - NQN=mptestnqn
\
 +- nvme1 fc traddr=nn-0x200140111111dbcc:pn-0x100140111111dbcc host_traddr=nn-0x200140111111dac8:pn-0x100140111111dac8
 +- nvme2 fc traddr=nn-0x200140111111dbcd:pn-0x100140111111dbcd host_traddr=nn-0x200140111111dac9:pn-0x100140111111dac9
 +- nvme3 fc traddr=nn-0x200140111111dbce:pn-0x100140111111dbce host_traddr=nn-0x200140111111daca:pn-0x100140111111daca
 +- nvme4 fc traddr=nn-0x200140111111dbcf:pn-0x100140111111dbcf host_traddr=nn-0x200140111111dacb:pn-0x100140111111dacb
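(And for multipath-tools' purposes, scraping a listing like the one
above down to per-subsystem path tuples is trivial; a throwaway sketch
of mine, not anything from multipath-tools itself:)

```shell
# Throwaway sketch: reduce 'nvme list-subsys'-style output to
# "subsystem controller transport" tuples -- the minimal topology a
# path checker would need.  Reads the listing on stdin (or from
# filenames passed as arguments); real code would run the command.
parse_subsys() {
    awk '
        /^nvme-subsys/ { subsys = $1; next }    # subsystem header line
        /^ *\+- /      { print subsys, $2, $3 } # path line: name + transport
    ' "$@"
}
```

e.g. piping the listing above through parse_subsys would yield one
"nvme-subsysN nvmeM transport" line per path.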