Subject: Re: virtio-blk: should num_vqs be limited by num_possible_cpus()?
From: Jason Wang
To: Dongli Zhang, Cornelia Huck
Cc: mst@redhat.com, virtualization@lists.linux-foundation.org,
    linux-block@vger.kernel.org, axboe@kernel.dk,
    linux-kernel@vger.kernel.org, Stefan Hajnoczi
Date: Wed, 20 Mar 2019 20:53:33 +0800

On 2019/3/19 10:22 AM, Dongli Zhang wrote:
> Hi Jason,
>
> On 3/18/19 3:47 PM, Jason Wang wrote:
>> On 2019/3/15 8:41 PM, Cornelia Huck wrote:
>>> On Fri, 15 Mar 2019 12:50:11 +0800
>>> Jason Wang wrote:
>>>
>>>> Or something like I proposed several years ago?
>>>> https://lkml.org/lkml/2014/12/25/169
>>>>
>>>> Btw, for virtio-net, I think we actually want to go for having a
>>>> maximum number of supported queues like what hardware does. This
>>>> would be useful for e.g. cpu hotplug or XDP (which requires a
>>>> per-cpu TX queue). But the current vector allocation doesn't
>>>> support this, which results in all virtqueues sharing a single
>>>> vector. We may indeed need a more flexible policy here.
>>> I think it should be possible for the driver to give the transport
>>> hints on how to set up its queues/interrupt structures. (The driver
>>> probably knows best about its requirements.) Perhaps whether a
>>> queue is high or low frequency, or whether it should be low
>>> latency, or even whether two queues could share a notification
>>> mechanism without drawbacks. It's up to the transport to make use
>>> of that information, if possible.
>>
>> Exactly, and that is what the above series tried to do by providing
>> hints of e.g. which queues want to share a notification.
>>
> I read about your patch set on providing more flexibility in
> queue-to-vector mapping.
>
> One use case of the patch set is that we would be able to enable more
> queues when there is a limited number of vectors.
>
> Another use case is that we may classify queues as high priority or
> low priority, as mentioned by Cornelia.
>
> For virtio-blk, we may extend virtio-blk based on this patch set to
> enable something similar to write_queues/poll_queues in nvme, when
> (set->nr_maps != 1).
>
> Yet, the question I am asking in this email thread is about a
> different scenario.
>
> The issue is not that we do not have enough vectors (although that is
> why only 1 vector is allocated for all virtio-blk queues). As
> virtio-blk so far has (set->nr_maps == 1), the block layer limits the
> number of hw queues by nr_cpu_ids, so we indeed do not need more than
> nr_cpu_ids hw queues in virtio-blk.
>
> That's why I ask why we do not change the flow to one of the options
> below when the number of supported hw queues is more than nr_cpu_ids
> (and set->nr_maps == 1; virtio-blk does not set nr_maps, and the
> block layer sets it to 1 when the driver does not specify a value):
>
> option 1:
> As nvme and xen-netfront do, limit the hw queue number by nr_cpu_ids.


How do they limit the hw queue number? A command?
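For reference, in nvme it is not a command but a driver-side clamp: the
driver never asks the controller for more I/O queues than
num_possible_cpus(), and blk-mq independently clamps nr_hw_queues to
nr_cpu_ids when the tag set is allocated. A rough sketch of the pattern
(simplified and paraphrased, not the exact nvme code; hw_max_queues
stands for whatever maximum the device advertises):

	/* Driver side: request at most one I/O queue per possible CPU;
	 * the controller may still grant fewer than requested. */
	unsigned int nr_io_queues;

	nr_io_queues = min_t(unsigned int, num_possible_cpus(),
			     hw_max_queues);
	/* ... negotiate nr_io_queues with the controller ... */

	/* Block layer side, in blk_mq_alloc_tag_set(): the tag set is
	 * clamped again, so a driver never ends up with more hw queues
	 * than nr_cpu_ids when nr_maps == 1. */
	if (set->nr_hw_queues > nr_cpu_ids)
		set->nr_hw_queues = nr_cpu_ids;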
> option 2:
> If there are not enough vectors, use the maximum number of vectors
> (indeed nr_cpu_ids) as the number of hw queues. We can share vectors
> in this case.
>
> option 3:
> We should allow more vectors even though the block layer would
> support at most nr_cpu_ids queues.
>
> I understand a new policy for queue-vector mapping is very helpful.
> I am just asking the question from the block layer's point of view.
>
> Thank you very much!
>
> Dongli Zhang


Don't know much about block, cc Stefan for more ideas.

Thanks
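For concreteness, option 1 applied to virtio-blk would amount to a
one-line clamp in init_vq(), before the virtqueues are allocated. A
minimal sketch along the lines of the existing init_vq() flow in
drivers/block/virtio_blk.c (the min_t() clamp is the proposal under
discussion, not existing code; the rest of the function is elided):

	static int init_vq(struct virtio_blk *vblk)
	{
		struct virtio_device *vdev = vblk->vdev;
		unsigned short num_vqs;
		int err;

		/* Read the queue count the device advertises when
		 * VIRTIO_BLK_F_MQ is negotiated; otherwise fall back
		 * to a single queue. */
		err = virtio_cread_feature(vdev, VIRTIO_BLK_F_MQ,
					   struct virtio_blk_config,
					   num_queues, &num_vqs);
		if (err)
			num_vqs = 1;

		/* Option 1: never ask for more vqs (and thus vectors)
		 * than the block layer can use, since nr_hw_queues is
		 * capped at nr_cpu_ids when set->nr_maps == 1. */
		num_vqs = min_t(unsigned int, nr_cpu_ids, num_vqs);

		/* ... allocate vblk->vqs and call virtio_find_vqs()
		 * as before ... */
		return 0;
	}

This would leave vector allocation itself unchanged: with
num_vqs <= nr_cpu_ids the transport can still fall back to a shared
vector when MSI-X vectors run out, but the driver no longer requests
queues the block layer would never use.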