Date: Fri, 22 May 2015 21:32:34 +0800
Subject: Re: [PATCH 2/2] block: loop: avoiding too many pending per work I/O
From: Ming Lei <ming.lei@canonical.com>
To: Josh Boyer
Cc: Jens Axboe, linux-kernel@vger.kernel.org, "Justin M. Forbes",
    Jeff Moyer, Tejun Heo, Christoph Hellwig, stable@vger.kernel.org (v4.0)

On Fri, May 22, 2015 at 8:36 PM, Josh Boyer wrote:
> On Tue, May 5, 2015 at 7:49 AM, Ming Lei wrote:
>> If there is too much pending per-work I/O, too many
>> high-priority worker threads can be generated, and
>> system performance can suffer as a result.
>>
>> This patch limits the workqueue's max_active parameter to 16.
>>
>> This patch fixes a Fedora 22 live-boot performance
>> regression seen when booting from squashfs over dm
>> on top of loop; the following factors appear to be
>> related to the problem:
>>
>> - unlike other filesystems (such as ext4), squashfs
>>   is a bit special: I observed that increasing the number
>>   of I/O jobs accessing files in squashfs improves I/O
>>   performance only a little, while it makes a big
>>   difference for ext4
>>
>> - nested loop: both squashfs.img and ext3fs.img are
>>   mounted as loop block devices, and ext3fs.img lives
>>   inside the squashfs
>>
>> - during boot, lots of tasks may run concurrently
>>
>> Fixes: b5dd2f6047ca108001328aac0e8588edd15f1778
>> Cc: stable@vger.kernel.org (v4.0)
>> Cc: Justin M. Forbes
>> Signed-off-by: Ming Lei
>
> Did we ever come to a conclusion on this and patch 1/2 in the
> series? Fedora has them applied to its 4.0.y based kernels to fix
> the performance regression we saw, and we're carrying them in
> rawhide as well. I'm curious whether these will go into 4.1 or
> whether they're queued at all for 4.2?

I saw it queued in the for-next branch of the block tree, so it
should be merged in 4.2.

> josh
>
>> ---
>>  drivers/block/loop.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/block/loop.c b/drivers/block/loop.c
>> index 3dc1598..1bee523 100644
>> --- a/drivers/block/loop.c
>> +++ b/drivers/block/loop.c
>> @@ -725,7 +725,7 @@ static int loop_set_fd(struct loop_device *lo, fmode_t mode,
>>                 goto out_putf;
>>         error = -ENOMEM;
>>         lo->wq = alloc_workqueue("kloopd%d",
>> -                        WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 0,
>> +                        WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 16,
>>                         lo->lo_number);
>>         if (!lo->wq)
>>                 goto out_putf;
>> --
>> 1.9.1
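
For readers unfamiliar with the workqueue API, a minimal sketch follows
of how the max_active argument to alloc_workqueue() caps concurrency.
This is illustrative module code, not the loop driver itself; the
demo_* names are invented for the example. Note that passing 0, as the
pre-patch loop code did, selects the kernel's default limit
(WQ_DFL_ACTIVE), which is much larger than 16.

#include <linux/module.h>
#include <linux/workqueue.h>

/*
 * Illustrative sketch: with max_active = 16, at most 16 work items
 * from this workqueue run at once; further queued items wait on the
 * queue instead of causing more worker threads to execute them.
 */
static struct workqueue_struct *demo_wq;
static struct work_struct demo_work;

static void demo_work_fn(struct work_struct *work)
{
	/* the per-item I/O would be issued here */
}

static int __init demo_init(void)
{
	demo_wq = alloc_workqueue("demo_wq",
				  WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND,
				  16);	/* cap max_active at 16 */
	if (!demo_wq)
		return -ENOMEM;

	INIT_WORK(&demo_work, demo_work_fn);
	queue_work(demo_wq, &demo_work);
	return 0;
}

static void __exit demo_exit(void)
{
	destroy_workqueue(demo_wq);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");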