From: Jeff Moyer
To: Jens Axboe
Cc: "Vitaly V. Bursov", linux-kernel@vger.kernel.org
Subject: Re: Slow file transfer speeds with CFQ IO scheduler in some cases
Date: Mon, 24 Nov 2008 10:33:05 -0500
In-Reply-To: <20081112190227.GS26778@kernel.dk> (Jens Axboe's message of "Wed, 12 Nov 2008 20:02:28 +0100")
References: <4917263D.2090904@telenet.dn.ua> <20081110104423.GA26778@kernel.dk> <20081110135618.GI26778@kernel.dk> <20081112190227.GS26778@kernel.dk>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/23.0.60 (gnu/linux)

Jens Axboe writes:

> nfsd aside (which does seem to have some different behaviour skewing the
> results), the original patch came about because dump(8) has a really
> stupid design that offloads IO to a number of processes. This basically
> makes fairly sequential IO more random with CFQ, since each process gets
> its own io context. My feeling is that we should fix dump instead of
> introducing a fair bit of complexity (and slowdown) in CFQ. I'm not
> aware of any other good programs out there that would do something
> similar, so I don't think there's a lot of merit to spending cycles on
> detecting cooperating processes.
>
> Jeff will take a look at fixing dump instead, and I may have promised
> him that santa will bring him something nice this year if he does (since
> I'm sure it'll be painful on the eyes).

Sorry to bring this topic up once again, but we've recently run into
another case where the close cooperator patch helps significantly: KVM
using the virtio disk driver. The host side uses posix_aio calls to
issue I/O on behalf of the guest. It's worth noting that pthread_create
does not pass CLONE_IO (at least, that was my reading of the code), so
each AIO worker thread ends up with its own io context. It's
questionable whether pthread_create really should pass it, since that
would change the I/O scheduling dynamics.

So, Jens, what do you think? Should we collect some performance numbers
to make sure that the close cooperator patch doesn't hurt the common
case?

Cheers,
Jeff
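
A minimal sketch, not taken from this thread, of the CLONE_IO distinction
discussed above: a child created with clone(2) and CLONE_IO shares the
parent's io_context, so CFQ treats its requests as part of the parent's
stream, whereas pthread_create does not request CLONE_IO and each thread
gets its own io context on first I/O. The fallback #define, the stack
size, and the trivial worker are illustrative assumptions, not code from
the mail or from glibc.

/* clone_io_demo.c: spawn a worker that shares the parent's io_context. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#ifndef CLONE_IO
#define CLONE_IO 0x80000000	/* share I/O context with the parent */
#endif

static int worker(void *arg)
{
	/* Any I/O issued here is accounted to the parent's io_context,
	 * because we were cloned with CLONE_IO below. */
	printf("worker pid %d running\n", getpid());
	return 0;
}

int main(void)
{
	const size_t stack_size = 64 * 1024;
	char *stack = malloc(stack_size);

	if (!stack)
		return 1;

	/* Stack grows down on common architectures, so pass the top.
	 * CLONE_VM shares the address space; CLONE_IO shares the
	 * io_context; SIGCHLD lets the parent reap the child. */
	pid_t pid = clone(worker, stack + stack_size,
			  CLONE_VM | CLONE_IO | SIGCHLD, NULL);
	if (pid < 0) {
		perror("clone");
		return 1;
	}

	waitpid(pid, NULL, 0);
	free(stack);
	return 0;
}

Without CLONE_IO in the flag mask (which is effectively what the posix_aio
helper threads get via pthread_create), each worker is scheduled as an
independent io context, and nearly sequential I/O split across workers
looks random to CFQ, which is the situation the close cooperator patch is
meant to detect.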