Subject: Re: [Lsf-pc] [LSF/MM TOPIC] really large storage sectors - going beyond 4096 bytes
From: James Bottomley
To: Dave Chinner
Cc: Chris Mason, linux-scsi@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, linux-ide@vger.kernel.org, mgorman@suse.de,
    linux-fsdevel@vger.kernel.org, akpm@linux-foundation.org,
    lsf-pc@lists.linux-foundation.org, rwheeler@redhat.com
Date: Thu, 23 Jan 2014 07:47:53 -0800
Message-ID: <1390492073.2372.118.camel@dabdike.int.hansenpartnership.com>
In-Reply-To: <20140123082734.GP13997@dastard>
References: <52DF353D.6050300@redhat.com> <20140122093435.GS4963@suse.de>
    <52DFD168.8080001@redhat.com> <20140122143452.GW4963@suse.de>
    <52DFDCA6.1050204@redhat.com> <20140122151913.GY4963@suse.de>
    <1390410233.1198.7.camel@ret.masoncoding.com>
    <1390411300.2372.33.camel@dabdike.int.hansenpartnership.com>
    <1390413819.1198.20.camel@ret.masoncoding.com>
    <1390414439.2372.53.camel@dabdike.int.hansenpartnership.com>
    <20140123082734.GP13997@dastard>

On Thu, 2014-01-23 at 19:27 +1100, Dave Chinner wrote:
> On Wed, Jan 22, 2014 at 10:13:59AM -0800, James Bottomley wrote:
> > On Wed, 2014-01-22 at 18:02 +0000, Chris Mason wrote:
> > > On Wed, 2014-01-22 at 09:21 -0800, James Bottomley wrote:
> > > > On Wed, 2014-01-22 at 17:02 +0000, Chris Mason wrote:
> > >
> > > [ I like big sectors and I cannot lie ]
> >
> > I think I might be sceptical, but I don't think that's showing in my
> > concerns ...
> > > > > I really think that if we want to make progress on this one, we need
> > > > > code and someone that owns it. Nick's work was impressive, but it was
> > > > > mostly there for getting rid of buffer heads. If we have a device that
> > > > > needs it and someone working to enable that device, we'll go forward
> > > > > much faster.
> > > >
> > > > Do we even need to do that (eliminate buffer heads)? We cope with 4k
> > > > sector only devices just fine today because the bh mechanisms now
> > > > operate on top of the page cache and can do the RMW necessary to update
> > > > a bh in the page cache itself which allows us to do only 4k chunked
> > > > writes, so we could keep the bh system and just alter the granularity of
> > > > the page cache.
> > > >
> > > We're likely to have people mixing 4K drives and <fill in some other
> > > size here> on the same box. We could just go with the biggest size and
> > > use the existing bh code for the sub-pagesized blocks, but I really
> > > hesitate to change VM fundamentals for this.
> >
> > If the page cache had a variable granularity per device, that would cope
> > with this. It's the variable granularity that's the VM problem.
> >
> > > From a pure code point of view, it may be less work to change it once in
> > > the VM. But from an overall system impact point of view, it's a big
> > > change in how the system behaves just for filesystem metadata.
> >
> > Agreed, but only if we don't do RMW in the buffer cache ...
> > which may be
> > a good reason to keep it.
> >
> > > > The other question is if the drive does RMW between 4k and whatever its
> > > > physical sector size, do we need to do anything to take advantage of
> > > > it ... as in what would altering the granularity of the page cache buy
> > > > us?
> > > >
> > > The real benefit is when and how the reads get scheduled. We're able to
> > > do a much better job pipelining the reads, controlling our caches and
> > > reducing write latency by having the reads done up in the OS instead of
> > > the drive.
> >
> > I agree with all of that, but my question is still can we do this by
> > propagating alignment and chunk size information (i.e. the physical
> > sector size) like we do today. If the FS knows the optimal I/O patterns
> > and tries to follow them, the odd cockup won't impact performance
> > dramatically. The real question is can the FS make use of this layout
> > information *without* changing the page cache granularity? Only if you
> > answer me "no" to this do I think we need to worry about changing page
> > cache granularity.
>
> We already do this today.
>
> The problem is that we are limited by the page cache assumption that
> the block device/filesystem never need to manage multiple pages as
> an atomic unit of change. Hence we can't use the generic
> infrastructure as it stands to handle block/sector sizes larger than
> a page size...

If the compound page infrastructure exists today and is usable for this,
what else do we need to do? ... because if it's a couple of trivial
changes and a few minor patches to filesystems to take advantage of it,
we might as well do it anyway. I was only objecting on the grounds that
the last time we looked at it, it was major VM surgery. Can someone give
a summary of how far we are away from being able to do this with the VM
system today and what extra work is needed (and how big is this piece of
work)?

James
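
P.S. For concreteness, the "alignment and chunk size information" the
block layer already propagates is visible even from userspace. Here is a
rough, untested sketch (the device path is only an example) that asks a
block device for the limits a filesystem would key its I/O patterns off:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "/dev/sda"; /* example path */
	int fd = open(dev, O_RDONLY);
	int lbs = 0;                       /* logical sector size */
	unsigned int pbs = 0, io_min = 0, io_opt = 0;

	if (fd < 0) {
		perror(dev);
		return 1;
	}
	/* error checking of the ioctls elided for brevity */
	ioctl(fd, BLKSSZGET, &lbs);        /* logical block size */
	ioctl(fd, BLKPBSZGET, &pbs);       /* physical block size */
	ioctl(fd, BLKIOMIN, &io_min);      /* queue limit: minimum I/O size */
	ioctl(fd, BLKIOOPT, &io_opt);      /* queue limit: optimal I/O size */

	printf("%s: logical %d, physical %u, io_min %u, io_opt %u\n",
	       dev, lbs, pbs, io_min, io_opt);
	close(fd);
	return 0;
}

These are the same physical block size and io_min/io_opt hints the FS
sees via the request queue limits, so discovering a larger physical
sector needs nothing new.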
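
P.P.S. And this is roughly the read-modify-write that has to happen
somewhere (in the drive firmware or up in the bh layer) whenever a 4k
update lands inside a larger physical sector. Purely illustrative: the
function name, the sizes and the already-open device fd are assumptions
for the sketch, not anything in the tree, and phys_sz is assumed to be a
power of two:

#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>

/*
 * Illustrative only: update one logical block that sits inside a larger
 * physical sector by rewriting the whole sector.  lblk_sz/phys_sz would
 * be 4096/16384 for the case discussed above.
 */
int rmw_update(int fd, off_t off, const void *new_blk,
	       size_t lblk_sz, size_t phys_sz)
{
	off_t start = off & ~((off_t)phys_sz - 1);  /* align down to sector */
	void *buf;
	int ret = -1;

	if (posix_memalign(&buf, phys_sz, phys_sz))
		return -1;

	if (pread(fd, buf, phys_sz, start) != (ssize_t)phys_sz)
		goto out;                           /* read the whole sector */
	memcpy((char *)buf + (off - start), new_blk, lblk_sz); /* modify 4k */
	if (pwrite(fd, buf, phys_sz, start) != (ssize_t)phys_sz)
		goto out;                           /* write it all back */
	ret = 0;
out:
	free(buf);
	return ret;
}

The whole granularity argument is really about which layer gets to
schedule that read.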
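
P.P.P.S. On the compound page question: the allocation side has existed
for a long time. A minimal sketch (illustrative helper names, not a
patch) of what an order-2 compound allocation, i.e. 16k on a 4k
PAGE_SIZE machine, looks like:

#include <linux/gfp.h>
#include <linux/mm.h>

/* __GFP_COMP ties the tail pages to the head page so the allocation
 * can be treated as a single unit. */
static struct page *alloc_large_block(unsigned int order)
{
	return alloc_pages(GFP_KERNEL | __GFP_COMP, order);
}

static void free_large_block(struct page *page, unsigned int order)
{
	__free_pages(page, order);
}

The open question in this thread isn't the allocation, it's everything
the page cache and the filesystems would have to do around such a unit,
which is exactly the "how big is this piece of work" part.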