Date: Tue, 12 Mar 2019 10:22:46 -0700 (PDT)
From: Dongli Zhang
Subject: virtio-blk: should num_vqs be limited by num_possible_cpus()?
X-Mailing-List: linux-kernel@vger.kernel.org

With the QEMU command line below, where num-queues for virtio-blk is greater than the number of possible CPUs, I observed one MSI-X vector for config and one shared vector for all queues:

qemu: "-smp 4" with "-device virtio-blk-pci,drive=drive-0,id=virtblk0,num-queues=6"

# cat /proc/interrupts
           CPU0       CPU1       CPU2       CPU3
... ...
 24:          0          0          0          0  PCI-MSI 65536-edge   virtio0-config
 25:          0          0          0         59  PCI-MSI 65537-edge   virtio0-virtqueues
... ...

However, when num-queues is the same as the number of possible CPUs:

qemu: "-smp 4" with "-device virtio-blk-pci,drive=drive-0,id=virtblk0,num-queues=4"

# cat /proc/interrupts
           CPU0       CPU1       CPU2       CPU3
... ...
 24:          0          0          0          0  PCI-MSI 65536-edge   virtio0-config
 25:          2          0          0          0  PCI-MSI 65537-edge   virtio0-req.0
 26:          0         35          0          0  PCI-MSI 65538-edge   virtio0-req.1
 27:          0          0         32          0  PCI-MSI 65539-edge   virtio0-req.2
 28:          0          0          0          0  PCI-MSI 65540-edge   virtio0-req.3
... ...

In the latter case, there is one MSI-X vector per queue. The shared-vector fallback in the first case happens because the maximum number of queues is not limited by the number of possible CPUs.

By default, nvme (regardless of write_queues and poll_queues) and xen-blkfront limit the number of queues to num_possible_cpus().

Is this by design, or can we fix it with the change below? (num_vqs is an unsigned short in init_vq() while num_possible_cpus() returns an unsigned int, so the type-strict kernel min() needs min_t() here.)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 4bc083b..df95ce3 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -513,6 +513,8 @@ static int init_vq(struct virtio_blk *vblk)
 	if (err)
 		num_vqs = 1;
 
+	num_vqs = min_t(unsigned int, num_possible_cpus(), num_vqs);
+
 	vblk->vqs = kmalloc_array(num_vqs, sizeof(*vblk->vqs), GFP_KERNEL);
 	if (!vblk->vqs)
 		return -ENOMEM;
--

PS: The same issue applies to virtio-scsi as well.

Thank you very much!

Dongli Zhang
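
PS2: For reference, a minimal userspace sketch of the capping rule in the diff above (illustrative only; cap_num_vqs() is a hypothetical helper that mirrors the min_t() line, not actual driver code):

#include <stdio.h>

/* Hypothetical stand-in for the proposed driver logic: never ask for
 * more virtqueues (and hence per-queue MSI-X vectors) than there are
 * possible CPUs. */
static unsigned int cap_num_vqs(unsigned int requested_vqs,
				unsigned int possible_cpus)
{
	return requested_vqs < possible_cpus ? requested_vqs : possible_cpus;
}

int main(void)
{
	/* The two QEMU configurations from the report: -smp 4 with
	 * num-queues=6 versus num-queues=4. */
	printf("num-queues=6, 4 CPUs -> %u vqs\n", cap_num_vqs(6, 4)); /* 4 */
	printf("num-queues=4, 4 CPUs -> %u vqs\n", cap_num_vqs(4, 4)); /* 4 */
	return 0;
}

With the cap in place, the num-queues=6 case would be trimmed to 4 virtqueues, each of which could then get its own MSI-X vector, as in the second /proc/interrupts listing above.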