From: Bob Liu <bob.liu@oracle.com>
To: xen-devel@lists.xen.org
Cc: linux-kernel@vger.kernel.org, roger.pau@citrix.com, konrad.wilk@oracle.com, felipe.franciosi@citrix.com, axboe@fb.com, avanzini.arianna@gmail.com, rafal.mielniczuk@citrix.com, jonathan.davies@citrix.com, david.vrabel@citrix.com
Subject: [PATCH v4 00/10] xen-block: multi hardware-queues/rings support
Date: Mon, 2 Nov 2015 12:21:36 +0800
Message-Id: <1446438106-20171-1-git-send-email-bob.liu@oracle.com>

Note: These patches are based on Arianna's original work during her internship
for GNOME's Outreach Program for Women.

After switching to the blk-mq API, a guest has more than one (nr_vcpus)
software request queue associated with each block front. These queues can be
mapped over several rings (hardware queues) to the backend, making it easy to
run multiple threads on the backend for a single virtual disk. With different
threads issuing requests at the same time, guest performance can be improved
significantly.
Testing was done with the null_blk driver:
dom0: v4.3-rc7, 16 vcpus, 10GB, "modprobe null_blk"
domU: v4.3-rc7, 16 vcpus, 10GB

fio job file:
[test]
rw=read
direct=1
ioengine=libaio
bs=4k
time_based
runtime=30
filename=/dev/xvdb
numjobs=16
iodepth=64
iodepth_batch=64
iodepth_batch_complete=64
group_reporting

        domU(orig)  4 queues      8 queues  16 queues
iops:   690k        1024k(+30%)   800k      750k

After patch 9 and 10:
        domU(orig)  4 queues      8 queues  16 queues
iops:   690k        1600k(+100%)  1450k     1320k

Chart: https://www.dropbox.com/s/agrcy2pbzbsvmwv/iops.png?dl=0

Huge improvements were also seen for write workloads and on real SSD storage.

---
v4:
 * Rebase to v4.3-rc7
 * Address comments from Roger

v3:
 * Rebased to v4.2-rc8

Bob Liu (10):
  xen/blkif: document blkif multi-queue/ring extension
  xen/blkfront: separate per ring information out of device info
  xen/blkfront: pseudo support for multi hardware queues/rings
  xen/blkfront: split per device io_lock
  xen/blkfront: negotiate number of queues/rings to be used with backend
  xen/blkback: separate ring information out of struct xen_blkif
  xen/blkback: pseudo support for multi hardware queues/rings
  xen/blkback: get the number of hardware queues/rings from blkfront
  xen/blkfront: make persistent grants per-queue
  xen/blkback: make pool of persistent grants and free pages per-queue

 drivers/block/xen-blkback/blkback.c | 386 ++++++++---------
 drivers/block/xen-blkback/common.h  |  78 ++--
 drivers/block/xen-blkback/xenbus.c  | 359 ++++++++++++------
 drivers/block/xen-blkfront.c        | 718 ++++++++++++++++++++++--------------
 include/xen/interface/io/blkif.h    |  48 +++
 5 files changed, 971 insertions(+), 618 deletions(-)

--
1.8.3.1