Date: Tue, 2 Mar 2004 19:44:32 -0800
From: jbarnes@sgi.com (Jesse Barnes)
To: "Chen, Kenneth W"
Cc: linux-kernel@vger.kernel.org, Andrew Morton
Subject: Re: per-cpu blk_plug_list
Message-ID: <20040303034432.GA31277@sgi.com>

On Mon, Mar 01, 2004 at 01:18:40PM -0800, Chen, Kenneth W wrote:
> blk_plug_list/blk_plug_lock manage the plug/unplug action.  When lots
> of CPUs simultaneously submit I/O, there is a lot of movement of
> device queues on and off that global list.  Our measurements showed
> that blk_plug_lock contention prevents the linux-2.6.3 kernel from
> scaling past 40 thousand I/Os per second in the I/O submit path.

This helped out our machines quite a bit too.  Without the patch we
weren't able to scale above 80,000 IOPS, but now we exceed 110,000
(and reach parity with our internal XSCSI-based tree).

Maybe the plug lists and locks should be per-device though, rather
than per-cpu?  That would make the migration case easier, I think.
Is that possible?

Thanks,
Jesse
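
P.S.  For anyone reading along without the patch handy, here is a
rough sketch of the shape of the change, against 2.6-era APIs.  This
is an illustration only, not Ken's actual patch: struct blk_plug_cpu
and plug_device() are made-up names, and the boot-time initialization
of the per-CPU lists and locks is omitted.

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/percpu.h>
#include <linux/blkdev.h>

/* Before: one global list and one global lock; every plug/unplug
 * from every CPU serializes on blk_plug_lock. */
static LIST_HEAD(blk_plug_list);
static spinlock_t blk_plug_lock = SPIN_LOCK_UNLOCKED;

/* After (sketch): one list and lock per CPU, so submitters running
 * on different CPUs no longer contend with each other. */
struct blk_plug_cpu {
	struct list_head list;
	spinlock_t lock;
};
static DEFINE_PER_CPU(struct blk_plug_cpu, blk_plug);

static void plug_device(request_queue_t *q)
{
	/* get_cpu_var() disables preemption until put_cpu_var() */
	struct blk_plug_cpu *plug = &get_cpu_var(blk_plug);

	spin_lock(&plug->lock);
	/* only queue the device if it isn't already plugged */
	if (list_empty(&q->plug_list))
		list_add_tail(&q->plug_list, &plug->list);
	spin_unlock(&plug->lock);
	put_cpu_var(blk_plug);
}

The migration headache I mentioned is visible right in the sketch: a
queue plugged on one CPU lands on that CPU's list, so an unplug issued
from another CPU has to go find it.  Hanging the plug state off the
queue itself (per-device) would avoid that cross-CPU walk entirely.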