From: Bob Liu <bob.liu@oracle.com>
To: xen-devel@lists.xen.org
Cc: linux-kernel@vger.kernel.org, roger.pau@citrix.com, konrad.wilk@oracle.com,
    felipe.franciosi@citrix.com, axboe@fb.com, avanzini.arianna@gmail.com,
    rafal.mielniczuk@citrix.com, jonathan.davies@citrix.com,
    david.vrabel@citrix.com, Bob Liu <bob.liu@oracle.com>
Subject: [PATCH v5 00/10] xen-block: multi hardware-queues/rings support
Date: Sat, 14 Nov 2015 11:12:09 +0800
Message-Id: <1447470739-18136-1-git-send-email-bob.liu@oracle.com>

Note: These patches are based on Arianna's original work during her
internship for GNOME's Outreach Program for Women.

After switching to the blk-mq API, a guest has more than one (nr_vcpus)
software request queue associated with each block frontend. These queues
can be mapped over several rings (hardware queues) to the backend, which
makes it easy to run multiple threads on the backend for a single virtual
disk. With several threads issuing requests at the same time, guest
performance improves significantly. (A rough sketch of the blk-mq hookup
follows the changelog below.)

Tests were done with the null_blk driver:

dom0: v4.3-rc7, 16 vcpus, 10GB, "modprobe null_blk"
domU: v4.3-rc7, 16 vcpus, 10GB

fio job file:
[test]
rw=read
direct=1
ioengine=libaio
bs=4k
time_based
runtime=30
filename=/dev/xvdb
numjobs=16
iodepth=64
iodepth_batch=64
iodepth_batch_complete=64
group_reporting

Results:
iops1: after commit "xen/blkfront: make persistent grants per-queue".
iops2: after commit "xen/blkback: make persistent grants and free pages
       pool per-queue".

Queues:        1     4            8            16
IOPS orig (k): 810   1064         780          700
IOPS1 (k):     810   1230 (~20%)  1024 (~20%)  850 (~20%)
IOPS2 (k):     810   1410 (~35%)  1354 (~75%)  1440 (~100%)

Compared with the single-queue baseline (810k), 4 queues with this series
deliver ~75% more IOPS (1410k), and performance no longer drops as the
number of queues increases. The corresponding chart is here:
https://www.dropbox.com/s/agrcy2pbzbsvmwv/iops.png?dl=0

---
v5:
 * Rebased to xen/tip.git tags/for-linus-4.4-rc0-tag.
 * Addressed comments from Konrad.
v4:
 * Rebased to v4.3-rc7.
 * Addressed comments from Roger.
v3:
 * Rebased to v4.2-rc8.
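For readers who have not worked with blk-mq, here is a minimal sketch of
how a frontend can expose one hardware queue per backend ring, written
against the v4.3-era blk-mq API (which still has .map_queue). Everything
with the xlblk_ prefix, plus struct xlblk_info/xlblk_ring and the queue
depth, are hypothetical placeholders for this sketch rather than the
identifiers used in the patches; only the blk-mq calls are real API.

#include <linux/blk-mq.h>
#include <linux/blkdev.h>
#include <linux/err.h>
#include <linux/numa.h>
#include <linux/spinlock.h>
#include <linux/string.h>

/* Hypothetical per-ring and per-device state; rings and their locks are
 * assumed to be allocated and initialized during ring negotiation. */
struct xlblk_ring {
	spinlock_t lock;		/* per-ring lock, cf. patch 4 */
};

struct xlblk_info {
	unsigned int nr_rings;		/* negotiated with the backend */
	struct xlblk_ring *rings;
	struct blk_mq_tag_set tag_set;
	struct request_queue *rq;
};

/* Bind each hardware context to one backend ring at init time. */
static int xlblk_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
			   unsigned int index)
{
	struct xlblk_info *info = data;

	hctx->driver_data = &info->rings[index];
	return 0;
}

static int xlblk_queue_rq(struct blk_mq_hw_ctx *hctx,
			  const struct blk_mq_queue_data *qd)
{
	struct xlblk_ring *ring = hctx->driver_data;
	unsigned long flags;

	blk_mq_start_request(qd->rq);
	spin_lock_irqsave(&ring->lock, flags);
	/* A real driver would grant-map the request's segments and put
	 * the request on this ring's shared ring page here. */
	spin_unlock_irqrestore(&ring->lock, flags);
	return BLK_MQ_RQ_QUEUE_OK;
}

static struct blk_mq_ops xlblk_mq_ops = {
	.queue_rq  = xlblk_queue_rq,
	.map_queue = blk_mq_map_queue,	/* default cpu -> hctx spreading */
	.init_hctx = xlblk_init_hctx,
};

static int xlblk_init_queues(struct xlblk_info *info)
{
	struct blk_mq_tag_set *set = &info->tag_set;
	int err;

	memset(set, 0, sizeof(*set));
	set->ops	  = &xlblk_mq_ops;
	set->nr_hw_queues = info->nr_rings;	/* one hctx per ring */
	set->queue_depth  = 64;			/* e.g. the ring size */
	set->numa_node	  = NUMA_NO_NODE;
	set->flags	  = BLK_MQ_F_SHOULD_MERGE;
	set->driver_data  = info;

	err = blk_mq_alloc_tag_set(set);
	if (err)
		return err;

	info->rq = blk_mq_init_queue(set);
	if (IS_ERR(info->rq)) {
		blk_mq_free_tag_set(set);
		return PTR_ERR(info->rq);
	}
	return 0;
}

With nr_hw_queues > 1, blk-mq maps the per-CPU software queues onto the
hardware contexts, so requests from different vcpus land on different
rings without any extra dispatch logic in the driver.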
Bob Liu (10):
  xen/blkif: document blkif multi-queue/ring extension
  xen/blkfront: separate per ring information out of device info
  xen/blkfront: pseudo support for multi hardware queues/rings
  xen/blkfront: split per device io_lock
  xen/blkfront: negotiate number of queues/rings to be used with backend
  xen/blkback: separate ring information out of struct xen_blkif
  xen/blkback: pseudo support for multi hardware queues/rings
  xen/blkback: get the number of hardware queues/rings from blkfront
  xen/blkfront: make persistent grants per-queue
  xen/blkback: make pool of persistent grants and free pages per-queue

 drivers/block/xen-blkback/blkback.c | 386 ++++++++++---------
 drivers/block/xen-blkback/common.h  |  78 ++--
 drivers/block/xen-blkback/xenbus.c  | 359 ++++++++++++------
 drivers/block/xen-blkfront.c        | 718 ++++++++++++++++++++++--------------
 include/xen/interface/io/blkif.h    |  48 +++
 5 files changed, 971 insertions(+), 618 deletions(-)

--
1.8.3.1
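P.S. For reviewers who want the shape of the queue-count negotiation from
patch 5 without opening the diff, here is a rough, non-authoritative
sketch. The xenstore keys "multi-queue-max-queues" and
"multi-queue-num-queues" follow my reading of the protocol documented by
patch 1; the function names, the xlblk_ prefix, and the default cap are
made up for illustration.

#include <xen/xenbus.h>
#include <linux/kernel.h>

/* Assumed frontend-side cap on the number of rings it will request
 * (e.g. a module parameter; the default here is arbitrary). */
static unsigned int xen_blkif_max_queues = 4;

static unsigned int xlblk_negotiate_nr_rings(struct xenbus_device *dev)
{
	unsigned int backend_max = 1;

	/* The backend advertises how many rings it can serve... */
	if (xenbus_scanf(XBT_NIL, dev->otherend,
			 "multi-queue-max-queues", "%u", &backend_max) != 1)
		backend_max = 1;	/* legacy backend: one ring */

	/* ...and the frontend takes the minimum of the two limits. */
	return min(backend_max, xen_blkif_max_queues);
}

/* The chosen count is written back so the backend can size its rings. */
static int xlblk_announce_nr_rings(struct xenbus_device *dev,
				   unsigned int nr_rings)
{
	return xenbus_printf(XBT_NIL, dev->nodename,
			     "multi-queue-num-queues", "%u", nr_rings);
}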