Date: Fri, 25 May 2018 10:50:56 -0400
From: Mike Snitzer <msnitzer@redhat.com>
To: Christoph Hellwig
Cc: Johannes Thumshirn, Keith Busch, Sagi Grimberg, Hannes Reinecke,
    Laurence Oberman, Ewan Milne, James Smart, Linux Kernel Mailinglist,
    Linux NVMe Mailinglist, "Martin K. Petersen", Martin George,
    John Meneghini
Subject: Re: [PATCH 0/3] Provide more fine grained control over multipathing
Message-ID: <20180525145056.GD9591@redhat.com>
References: <20180525125322.15398-1-jthumshirn@suse.de>
 <20180525130535.GA24239@lst.de> <20180525135813.GB9591@redhat.com>
 <20180525141211.GA25971@lst.de>
In-Reply-To: <20180525141211.GA25971@lst.de>
List-ID: <linux-kernel.vger.kernel.org>

On Fri, May 25 2018 at 10:12am -0400,
Christoph Hellwig wrote:

> On Fri, May 25, 2018 at 09:58:13AM -0400, Mike Snitzer wrote:
> > We all basically knew this would be your position.  But at this year's
> > LSF we pretty quickly reached consensus that we do in fact need this.
> > Except for yourself, Sagi and afaik Martin George: all on the cc were in
> > attendance and agreed.
>
> And I very much disagree, and you'd better come up with a good reason
> to override me as the author and maintainer of this code.

I hope you don't truly think this is me vs you.

Some of the reasons are:

1) we need flexibility during the transition to native NVMe multipath

2) we need to support existing customers' dm-multipath storage networks

3) asking users to use an entirely new infrastructure that conflicts
   with their dm-multipath expertise and established norms is a hard
   sell.
   Especially for environments that have a mix of traditional
   multipath (FC, iSCSI, whatever) and NVMe over fabrics.

4) Layered products (both vendor provided and user developed) have been
   trained to fully support and monitor dm-multipath; they have no
   understanding of native NVMe multipath

> > And since then we've exchanged mails to refine and test Johannes'
> > implementation.
>
> Since when was acting behind the scenes a good argument for anything?

I mentioned our continued private collaboration to establish that this
wasn't a momentary weakness by anyone at LSF.  It has had a lot of soak
time in our heads.

We did it privately because we needed a concrete proposal that works for
our needs, rather than getting shot down over some shortcoming in an
RFC-style submission.

> > Hopefully this clarifies things, thanks.
>
> It doesn't.
>
> The whole point we have native multipath in nvme is because dm-multipath
> is the wrong architecture (and has been, long predating you, nothing
> personal).  And I don't want to be stuck additional decades with this
> in nvme.  We allowed a global opt-in to ease the three people in the
> world with existing setups to keep using that, but I also said I
> won't go any step further.  And I stand by that.

Thing is you really don't get to dictate that to the industry.  Sorry.

Reality is this ability to switch "native" vs "other" gives us the
options I've been talking about absolutely needing since the start of
this NVMe multipathing debate.  Your fighting against it for so long
has prevented progress on NVMe multipath in general.

Taking this change will increase native NVMe multipath deployment.
Otherwise we're just going to have to disable native multipath entirely
for the time being.  That does users a disservice because I completely
agree that there _will_ be setups where native NVMe multipath really
does offer a huge win.
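(For anyone following along: the existing global knob referred to above
is the `multipath` module parameter of nvme_core.  A minimal sketch of
opting a whole host out of native NVMe multipathing, assuming the usual
modprobe.d conventions; file name is illustrative:)

```shell
# /etc/modprobe.d/nvme.conf -- host-wide opt-out of native NVMe multipath.
# 'multipath' is the existing nvme_core boolean parameter; it defaults to
# enabled when the kernel is built with CONFIG_NVME_MULTIPATH=y.
options nvme_core multipath=N
```

The current setting is visible at runtime under
/sys/module/nvme_core/parameters/multipath.  The point of this patch
series is precisely that this knob is all-or-nothing per host, with no
per-subsystem granularity.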
But those setups could easily be deployed on the same hosts as another
variant of NVMe that really does want to use the legacy DM multipath
stack (possibly even just for reason 4 above).

Mike