Date: Wed, 8 Jan 2014 06:03:43 -0800
From: Christoph Hellwig
To: Jan Kara
Cc: Christoph Hellwig, Sergey Meirovich, linux-scsi,
	Linux Kernel Mailing List, Gluk
Subject: Re: Terrible performance of sequential O_DIRECT 4k writes in SAN
	environment. ~3 times slower than Solaris 10 with the same HBA/Storage.
Message-ID: <20140108140343.GB588@infradead.org>
In-Reply-To: <20140108011713.GA5212@quack.suse.cz>
References: <20140106201032.GA13491@quack.suse.cz>
	<20140107155830.GA28395@infradead.org>
	<20140108011713.GA5212@quack.suse.cz>

On Wed, Jan 08, 2014 at 02:17:13AM +0100, Jan Kara wrote:
> Well, I was specifically worried about i_mutex locking. In particular:
> before we report appending IO completion we need to update i_size, and
> to update i_size we need to grab i_mutex.
>
> Now this is unpleasant, because inode_dio_wait() happens under i_mutex,
> so the above would create a lock inversion. And we cannot really do
> inode_dio_done() before grabbing i_mutex either, as that would open
> interesting races between truncate decreasing i_size and DIO increasing
> it.

Yeah, XFS splits this between the ilock and the iolock, which just makes
life in this area a whole lot easier.
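
A minimal userspace sketch of the inversion Jan describes (toy model only:
the pthread mutex, the hand-rolled DIO counter and the function names are
illustrative stand-ins, not the kernel's actual code paths):

/* Build with: gcc -pthread inversion.c
 *
 * One thread plays "truncate": it takes the toy i_mutex and then waits for
 * the outstanding-DIO count to drain (the inode_dio_wait() step).  The other
 * plays "appending DIO completion": it must take the toy i_mutex to update
 * i_size before it may drop the DIO count (the inode_dio_done() step).
 * Running this hangs forever, which is exactly the inversion.
 */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t i_mutex  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t dio_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  dio_cv   = PTHREAD_COND_INITIALIZER;
static int  dio_count = 1;			/* one appending DIO in flight */
static long i_size    = 4096;

static void *truncate_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&i_mutex);		/* truncate holds i_mutex ... */
	pthread_mutex_lock(&dio_lock);
	while (dio_count > 0)			/* ... and waits for DIO to drain */
		pthread_cond_wait(&dio_cv, &dio_lock);
	pthread_mutex_unlock(&dio_lock);
	i_size = 0;				/* shrink the file */
	pthread_mutex_unlock(&i_mutex);
	return NULL;
}

static void *dio_completion_path(void *arg)
{
	(void)arg;
	sleep(1);				/* let "truncate" win the race */
	pthread_mutex_lock(&i_mutex);		/* blocks forever: deadlock */
	i_size = 8192;				/* extend past the old EOF */
	pthread_mutex_unlock(&i_mutex);

	pthread_mutex_lock(&dio_lock);		/* never reached */
	dio_count--;
	pthread_cond_signal(&dio_cv);
	pthread_mutex_unlock(&dio_lock);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, truncate_path, NULL);
	pthread_create(&t2, NULL, dio_completion_path, NULL);
	pthread_join(t1, NULL);			/* never returns */
	pthread_join(t2, NULL);
	return 0;
}

Dropping the DIO count before taking i_mutex would avoid the hang in this
toy, but then nothing orders the shrinking and extending i_size stores,
which is the truncate-vs-appending-DIO race Jan points out.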