Date: Fri, 24 Apr 2015 10:59:08 +0800
From: Ming Lei
To: "Justin M. Forbes"
Cc: linux-kernel, tom.leiming@gmail.com
Subject: Re: loop block-mq conversion scalability issues
Message-ID: <20150424105908.47de489c@tom-T450>
In-Reply-To: <1429823050.26534.9.camel@redhat.com>

Hi Justin,

Thanks for the report.

On Thu, 23 Apr 2015 16:04:10 -0500
"Justin M. Forbes" wrote:

> The block-mq conversion for loop in 4.0 kernels is showing us an
> interesting scalability problem with live CDs (ro, squashfs). It was
> noticed when testing the Fedora beta that the more CPUs a liveCD image
> was given, the slower it would boot. A 4 core qemu instance or bare
> metal instance took more than twice as long to boot compared to a
> single CPU instance. After investigating, this was traced directly to
> the block-mq conversion; reverting these 4 patches restores
> performance. More details are available at
> https://bugzilla.redhat.com/show_bug.cgi?id=1210857
>
> I don't think that reverting the patches is the ideal solution, so I
> am looking for other options. Since you know this code a bit better
> than I do, I thought I would run it by you while I am looking as well.
I can understand the issue: the default @max_active for alloc_workqueue()
is quite big (512), which may cause too many context switches and degrade
loop I/O performance.

Actually I have written a kernel dio/aio based patch that decreases both
CPU and memory utilization without sacrificing I/O performance, and I
will try to improve and push that patch during this cycle in the hope
that it can be merged (the kernel/aio.c change has been dropped; only a
filesystem change in fs/direct-io.c is needed).

But the following change should help in your case; could you test it?

---
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index c6b3726..b1cb41d 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1831,7 +1831,7 @@ static int __init loop_init(void)
 	}
 
 	loop_wq = alloc_workqueue("kloopd",
-			WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 0);
+			WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 32);
 	if (!loop_wq) {
 		err = -ENOMEM;
 		goto misc_out;

Thanks,
Ming Lei
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
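[Editor's note: for readers unfamiliar with the workqueue API under
discussion, the call being patched looks like the sketch below. This is
an in-kernel interface, shown for illustration only (it is not
standalone, compilable code); the comment paraphrases the max_active
semantics as documented for kernels of this era.]

	#include <linux/workqueue.h>

	/*
	 * alloc_workqueue(fmt, flags, max_active):
	 *
	 * max_active caps how many work items queued on this workqueue
	 * may execute concurrently; passing 0 selects the (large)
	 * kernel default. Capping it at a small value, as the patch
	 * above does with 32, bounds the number of kloopd workers
	 * running at once, which is what reduces the context-switch
	 * overhead described in this thread.
	 */
	struct workqueue_struct *loop_wq;

	loop_wq = alloc_workqueue("kloopd",
			WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 32);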