Date: Thu, 31 May 2018 18:33:11 +0200 (CEST)
From: Christoph Hellwig
To: Mike Snitzer
Cc: Sagi Grimberg, Christoph Hellwig, Johannes Thumshirn, Keith Busch,
	Hannes Reinecke, Laurence Oberman, Ewan Milne, James Smart,
	Linux Kernel Mailinglist, Linux NVMe Mailinglist,
	"Martin K. Petersen", Martin George, John Meneghini
Subject: Re: [PATCH 0/3] Provide more fine grained control over multipathing
Message-ID: <20180531163311.GA30954@lst.de>
References: <20180525125322.15398-1-jthumshirn@suse.de>
	<20180525130535.GA24239@lst.de>
	<20180525135813.GB9591@redhat.com>
	<20180530220206.GA7037@redhat.com>
In-Reply-To: <20180530220206.GA7037@redhat.com>
On Wed, May 30, 2018 at 06:02:06PM -0400, Mike Snitzer wrote:
> Because once nvme_core.multipath=N is set, native NVMe multipath is no
> longer accessible from the same host. The goal of this patchset is to
> give users choice, not to limit them to _only_ using dm-multipath if
> they just have some legacy needs.

Choice by itself really isn't an argument. We need a really good use
case to justify all the complexity, and so far none has been presented.

> Tough to be convincing with hypotheticals, but I could imagine a very
> obvious use case for native NVMe multipathing: PCI-based embedded NVMe
> "fabrics" (especially if/when the NUMA-based path selector lands). But
> the same host with PCI NVMe could be connected to an FC network that
> has historically always been managed via dm-multipath. Now say that
> FC-based infrastructure gets updated to use NVMe (to leverage a wider
> NVMe investment, whatever) -- yet admins would still prefer to use
> dm-multipath for NVMe over FC.

That is a lot of maybes. If they prefer the good old way on FC they can
easily stay with SCSI, or for that matter use the global switch to turn
native multipathing off.

> > This might sound stupid to you, but can't users that desperately must
> > keep using dm-multipath (for its mature toolset or what-not) just
> > stack it on the multipath nvme device? (I might be completely off on
> > this, so feel free to correct my ignorance.)
>
> We could certainly pursue adding multipath-tools support for native
> NVMe multipathing. I'm not opposed to it (even if it's just reporting
> topology and state). But given the extensive lengths NVMe multipath
> goes to in order to hide devices, we'd need some way of piercing
> through the opaque nvme device that native NVMe multipath exposes.
> That really is a tangent relative to this patchset, though, since that
> kind of visibility would also benefit nvme-cli -- otherwise how are
> users even supposed to trust but verify that native NVMe multipathing
> did what they expected it to?

Just look at the nvme-cli output or sysfs. It's all been there since
the code was merged to mainline.
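
For the record, a quick sanity check looks something like this
(illustrative only -- exact device names, sysfs layout and output
format depend on the kernel and nvme-cli version in use):

    # show each NVMe subsystem with its controllers/paths
    nvme list-subsys

    # the same topology is exported through sysfs, e.g.:
    ls /sys/class/nvme-subsystem/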
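
And the global switch is just the nvme_core.multipath parameter quoted
at the top of this thread. A sketch using standard modprobe.d / kernel
command line mechanics (the .conf file name here is arbitrary):

    # disable native NVMe multipathing when nvme_core loads as a module
    echo "options nvme_core multipath=N" > /etc/modprobe.d/50-nvme.conf

    # or, with nvme_core built into the kernel, on the command line:
    nvme_core.multipath=N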