From: Dongli Zhang
To: linux-scsi@vger.kernel.org, virtualization@lists.linux-foundation.org,
    linux-block@vger.kernel.org
Cc: mst@redhat.com, jasowang@redhat.com, axboe@kernel.dk,
    jejb@linux.ibm.com, martin.petersen@oracle.com, cohuck@redhat.com,
    linux-kernel@vger.kernel.org
Subject: [PATCH 0/2] Limit number of hw queues by nr_cpu_ids for virtio-blk and virtio-scsi
Date: Wed, 27 Mar 2019 18:36:33 +0800
Message-Id: <1553682995-5682-1-git-send-email-dongli.zhang@oracle.com>

When tag_set->nr_maps is 1, the block layer limits the number of hw
queues by nr_cpu_ids. No matter how many hw queues are used by
virtio-blk/virtio-scsi, as they both have (tag_set->nr_maps == 1),
they can use at most nr_cpu_ids hw queues.
In addition, specifically for the pci scenario, when the 'num-queues'
specified by qemu is more than maxcpus, virtio-blk/virtio-scsi would
not be able to allocate more than maxcpus vectors in order to have a
vector for each queue. As a result, they fall back to MSI-X with one
vector for config and one shared for all queues.

Considering the above reasons, this patch set limits the number of hw
queues by nr_cpu_ids for both virtio-blk and virtio-scsi.

-------------------------------------------------------------

Here is the test result of virtio-scsi:

qemu cmdline:

  -smp 2,maxcpus=4, \
  -device virtio-scsi-pci,id=scsi0,num_queues=8, \
  -device scsi-hd,drive=drive0,bus=scsi0.0,channel=0,scsi-id=0,lun=0, \
  -drive file=test.img,if=none,id=drive0

Although maxcpus=4 and num_queues=8, only 4 queues are used while only
2 interrupts are allocated:

# cat /proc/interrupts
... ...
 24:          0          0   PCI-MSI 65536-edge      virtio0-config
 25:          0        369   PCI-MSI 65537-edge      virtio0-virtqueues
... ...

# ls /sys/block/sda/mq/
0  1  2  3     ------> 4 queues although qemu sets num_queues=8

With the patch set, there is a per-queue interrupt:

# cat /proc/interrupts
 24:          0          0   PCI-MSI 65536-edge      virtio0-config
 25:          0          0   PCI-MSI 65537-edge      virtio0-control
 26:          0          0   PCI-MSI 65538-edge      virtio0-event
 27:        296          0   PCI-MSI 65539-edge      virtio0-request
 28:          0        139   PCI-MSI 65540-edge      virtio0-request
 29:          0          0   PCI-MSI 65541-edge      virtio0-request
 30:          0          0   PCI-MSI 65542-edge      virtio0-request

# ls /sys/block/sda/mq
0  1  2  3

-------------------------------------------------------------

Here is the test result of virtio-blk:

qemu cmdline:

  -smp 2,maxcpus=4, \
  -device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0,num-queues=8, \
  -drive file=test.img,format=raw,if=none,id=drive-virtio-disk0

Although maxcpus=4 and num-queues=8, only 4 queues are used while only
2 interrupts are allocated:

# cat /proc/interrupts
... ...
 24:          0          0   PCI-MSI 65536-edge      virtio0-config
 25:          0         65   PCI-MSI 65537-edge      virtio0-virtqueues
... ...
# ls /sys/block/vda/mq
0  1  2  3     -------> 4 queues although qemu sets num-queues=8

With the patch set, there is a per-queue interrupt:

# cat /proc/interrupts
 24:          0          0   PCI-MSI 65536-edge      virtio0-config
 25:         64          0   PCI-MSI 65537-edge      virtio0-req.0
 26:          0      10290   PCI-MSI 65538-edge      virtio0-req.1
 27:          0          0   PCI-MSI 65539-edge      virtio0-req.2
 28:          0          0   PCI-MSI 65540-edge      virtio0-req.3

# ls /sys/block/vda/mq/
0  1  2  3

Reference: https://lore.kernel.org/lkml/e4afe4c5-0262-4500-aeec-60f30734b4fc@default/

Thank you very much!

Dongli Zhang