Date: Wed, 6 Jun 2018 11:51:30 +0200
From: Christoph Hellwig
To: Roland Dreier
Cc: Christoph Hellwig, Sagi Grimberg, Mike Snitzer, Johannes Thumshirn,
    Keith Busch, Hannes Reinecke, Laurence Oberman, Ewan Milne,
    James Smart, Linux Kernel Mailinglist, Linux NVMe Mailinglist,
    "Martin K. Petersen", Martin George, John Meneghini
Petersen" , Martin George , John Meneghini Subject: Re: [PATCH 0/3] Provide more fine grained control over multipathing Message-ID: <20180606095130.GA10485@lst.de> References: <20180525125322.15398-1-jthumshirn@suse.de> <20180525130535.GA24239@lst.de> <20180525135813.GB9591@redhat.com> <20180605044222.GA29384@lst.de> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.17 (2007-11-01) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Tue, Jun 05, 2018 at 03:57:05PM -0700, Roland Dreier wrote: > That makes sense but I'm not sure it covers everything. Probably the > most common way to do NVMe/RDMA will be with a single HCA that has > multiple ports, so there's no sensible CPU locality. On the other > hand we want to keep both ports to the fabric busy. Setting different > paths for different queues makes sense, but there may be > single-threaded applications that want a different policy. > > I'm not saying anything very profound, but we have to find the right > balance between too many and too few knobs. Agreed. And the philosophy here is to start with a as few knobs as possible and work from there based on actual use cases. Single threaded applications will run into issues with general blk-mq philosophy, so to work around that we'll need to dig deeper and allow borrowing of other cpu queues if we want to cater for that.