Date: Tue, 11 Aug 2015 14:07:44 +0800
From: Bob Liu
To: Jens Axboe
Cc: Rafal Mielniczuk, Marcus Granado, Arianna Avanzini, Felipe Franciosi,
    linux-kernel@vger.kernel.org, Christoph Hellwig, David Vrabel,
    xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
    Jonathan Davies
Subject: Re: [Xen-devel] [PATCH RFC v2 0/5] Multi-queue support for xen-blkfront and xen-blkback

On 08/10/2015 11:52 PM, Jens Axboe wrote:
> On 08/10/2015 05:03 AM, Rafal Mielniczuk wrote:
>> On 01/07/15 04:03, Jens Axboe wrote:
>>> On 06/30/2015 08:21 AM, Marcus Granado wrote:
>>>> Hi,
>>>>
>>>> Our measurements for the multiqueue patch indicate a clear improvement
>>>> in iops when more queues are used.
>>>>
>>>> The measurements were obtained under the following conditions:
>>>>
>>>> - using blkback as the dom0 backend, with the multiqueue patch applied
>>>>   to a dom0 kernel 4.0 on 8 vcpus.
>>>>
>>>> - using a recent Ubuntu 15.04 kernel 3.19 with the multiqueue frontend
>>>>   patch applied, used as a guest on 4 vcpus.
>>>>
>>>> - using a Micron RealSSD P320h as the underlying local storage on a
>>>>   Dell PowerEdge R720 with 2 Xeon E5-2643 v2 CPUs.
>>>>
>>>> - fio 2.2.7-22-g36870 as the generator of synthetic loads in the
>>>>   guest. We used direct I/O to skip caching in the guest and ran fio
>>>>   for 60s, reading a number of block sizes ranging from 512 bytes to
>>>>   4 MiB. A queue depth of 32 per queue was used to saturate the
>>>>   individual vcpus in the guest.
>>>>
>>>> We were interested in observing storage iops for different block
>>>> sizes. Our expectation was that iops would improve when increasing
>>>> the number of queues, because both the guest and dom0 would be able
>>>> to make use of more vcpus to handle these requests.
>>>>
>>>> These are the results (as aggregate iops for all the fio threads)
>>>> that we got for the conditions above with sequential reads:
>>>>
>>>> fio_threads  io_depth  block_size  1-queue_iops  8-queue_iops
>>>>           8        32         512          158K          264K
>>>>           8        32          1K          157K          260K
>>>>           8        32          2K          157K          258K
>>>>           8        32          4K          148K          257K
>>>>           8        32          8K          124K          207K
>>>>           8        32         16K           84K          105K
>>>>           8        32         32K           50K           54K
>>>>           8        32         64K           24K           27K
>>>>           8        32        128K           11K           13K
>>>>
>>>> 8-queue iops were better than single-queue iops for all block sizes.
>>>> There were also very good improvements for sequential writes with
>>>> block size 4K (from 80K iops with a single queue to 230K iops with 8
>>>> queues), and no regressions were visible in any measurement performed.
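
(For reference: the workload described above corresponds roughly to the
fio job below. This is only a sketch reconstructed from the parameters
quoted in this thread -- 8 threads, iodepth 32, direct I/O, 60s runs --
not the job file that was actually used, and the guest device path is a
placeholder.)

  [seqread]
  ioengine=libaio
  rw=read
  direct=1
  time_based
  runtime=60
  iodepth=32
  numjobs=8
  group_reporting
  filename=/dev/xvdb   ; placeholder guest block device
  bs=4k                ; swept from 512 to 4M across runs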
>>> Great results! And I don't know why this code has lingered for so
>>> long, so thanks for helping get some attention to this again.
>>>
>>> Personally I'd be really interested in the results for the same set
>>> of tests, but without the blk-mq patches. Do you have them, or could
>>> you potentially run them?
>>>
>> Hello,
>>
>> We re-ran the tests for sequential reads with identical settings, but
>> with Bob Liu's multiqueue patches reverted from the dom0 and guest
>> kernels. The results we obtained were *better* than the results we got
>> with the multiqueue patches applied:
>>
>> fio_threads  io_depth  block_size  1-queue_iops  8-queue_iops  *no-mq-patches_iops*
>>           8        32         512          158K          264K                  321K
>>           8        32          1K          157K          260K                  328K
>>           8        32          2K          157K          258K                  336K
>>           8        32          4K          148K          257K                  308K
>>           8        32          8K          124K          207K                  188K
>>           8        32         16K           84K          105K                   82K
>>           8        32         32K           50K           54K                   36K
>>           8        32         64K           24K           27K                   16K
>>           8        32        128K           11K           13K                   11K
>>
>> We noticed that requests are not merged by the guest when the
>> multiqueue patches are applied, which results in a regression for
>> small block sizes (the RealSSD P320h's optimal block size is around
>> 32-64KB).
>>
>> We observed a similar regression for the Dell MZ-5EA1000-0D3 100 GB
>> 2.5" internal SSD.
>>
>> As I understand it, the blk-mq layer bypasses the I/O scheduler, which
>> also effectively disables merges. Could you explain why it is
>> difficult to enable merging in the blk-mq layer? That could help close
>> the performance gap we observed.
>>
>> Otherwise, the tests show that the multiqueue patches do not improve
>> performance, at least for sequential read/write operations.
>
> blk-mq still provides merging, there should be no difference there. Do
> the xen patches set BLK_MQ_F_SHOULD_MERGE?
>

Yes. Could it be that the xen-blkfront driver dequeues requests too
quickly once there are multiple hardware queues? New requests would then
have no chance to merge with older requests that have already been
dequeued and issued.
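
For reference, the frontend opts in to merging along these lines. This
is a sketch rather than the exact patch code -- blkfront_mq_ops,
nr_rings and BLK_RING_SIZE below stand in for the driver's real ops
table and ring parameters:

#include <linux/blk-mq.h>
#include <linux/string.h>
#include <linux/err.h>

/*
 * Sketch of the tag-set setup, not the exact patch code.  'blkfront_info',
 * 'blkfront_mq_ops', 'nr_rings' and BLK_RING_SIZE stand in for the
 * frontend's real per-device state and ring parameters.
 */
static int blkfront_init_tag_set(struct blkfront_info *info)
{
	struct blk_mq_tag_set *set = &info->tag_set;
	int err;

	memset(set, 0, sizeof(*set));
	set->ops = &blkfront_mq_ops;		/* ->queue_rq() etc. */
	set->nr_hw_queues = info->nr_rings;	/* one hw queue per ring */
	set->queue_depth = BLK_RING_SIZE;	/* in-flight reqs per ring */
	set->numa_node = NUMA_NO_NODE;
	/*
	 * BLK_MQ_F_SHOULD_MERGE makes blk-mq try to merge a new bio into
	 * requests still parked in the per-cpu software queues.  Once a
	 * request has been dequeued and handed to ->queue_rq(), it is no
	 * longer a merge candidate.
	 */
	set->flags = BLK_MQ_F_SHOULD_MERGE;
	set->driver_data = info;

	err = blk_mq_alloc_tag_set(set);
	if (err)
		return err;

	info->rq = blk_mq_init_queue(set);
	if (IS_ERR(info->rq)) {
		blk_mq_free_tag_set(set);
		return PTR_ERR(info->rq);
	}
	return 0;
}

The flag only makes merging possible; if the software queues are drained
as soon as requests arrive, there is still nothing to merge against.

--
Regards,
-Bob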