Date: Thu, 14 Mar 2019 08:32:58 -0400
From: "Michael S. Tsirkin"
To: Dongli Zhang
Cc: virtualization@lists.linux-foundation.org, linux-block@vger.kernel.org,
    axboe@kernel.dk, jasowang@redhat.com, linux-kernel@vger.kernel.org
Subject: Re: virtio-blk: should num_vqs be limited by num_possible_cpus()?
Message-ID: <20190314082926-mutt-send-email-mst@kernel.org>

On Tue, Mar 12, 2019 at 10:22:46AM -0700, Dongli Zhang wrote:
> I observed that there is one msix vector for config and one shared vector
> for all queues in below qemu cmdline, when the num-queues for virtio-blk
> is more than the number of possible cpus:
>
> qemu: "-smp 4" while "-device virtio-blk-pci,drive=drive-0,id=virtblk0,num-queues=6"

So why do this?

> # cat /proc/interrupts
>            CPU0       CPU1       CPU2       CPU3
> ...                                        ...
>  24:          0          0          0          0   PCI-MSI 65536-edge      virtio0-config
>  25:          0          0          0         59   PCI-MSI 65537-edge      virtio0-virtqueues
> ...                                        ...
>
>
> However, when num-queues is the same as number of possible cpus:
>
> qemu: "-smp 4" while "-device virtio-blk-pci,drive=drive-0,id=virtblk0,num-queues=4"
>
> # cat /proc/interrupts
>            CPU0       CPU1       CPU2       CPU3
> ...                                        ...
>  24:          0          0          0          0   PCI-MSI 65536-edge      virtio0-config
>  25:          2          0          0          0   PCI-MSI 65537-edge      virtio0-req.0
>  26:          0         35          0          0   PCI-MSI 65538-edge      virtio0-req.1
>  27:          0          0         32          0   PCI-MSI 65539-edge      virtio0-req.2
>  28:          0          0          0          0   PCI-MSI 65540-edge      virtio0-req.3
> ...                                        ...
>
> In above case, there is one msix vector per queue.
>
>
> This is because the max number of queues is not limited by the number of
> possible cpus.
>
> By default, nvme (regardless of write_queues and poll_queues) and
> xen-blkfront limit the number of queues with num_possible_cpus().
>
>
> Is this by design, or can we fix it with the change below?
>
>
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 4bc083b..df95ce3 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -513,6 +513,8 @@ static int init_vq(struct virtio_blk *vblk)
>  	if (err)
>  		num_vqs = 1;
>  
> +	num_vqs = min(num_possible_cpus(), num_vqs);
> +
>  	vblk->vqs = kmalloc_array(num_vqs, sizeof(*vblk->vqs), GFP_KERNEL);
>  	if (!vblk->vqs)
>  		return -ENOMEM;
> --
>
>
> PS: The same issue is applicable to virtio-scsi as well.
>
> Thank you very much!
>
> Dongli Zhang

I don't think this will address the issue if there's vcpu hotplug though.
Because it's not about num_possible_cpus, it's about the # of active VCPUs,
right? Does block handle CPU hotplug generally?
We could maybe address that by switching vq to msi vector mapping in a
cpu hotplug notifier...

-- 
MST