Date: Tue, 18 Aug 2015 21:38:04 +0000
From: tytso@mit.edu
To: Brice Goglin
Cc: LKML
Subject: Re: Why is SECTOR_SIZE = 512 inside kernel ?
Message-ID: <20150818213804.GB8806@thunk.org>
References: <20150817135450.GB27202@thunk.org> <55D39E67.3010009@gmail.com>
In-Reply-To: <55D39E67.3010009@gmail.com>

On Tue, Aug 18, 2015 at 11:06:47PM +0200, Brice Goglin wrote:
> On 17/08/2015 15:54, Theodore Ts'o wrote:
> >
> > It's cast in stone.  There are too many places all over the kernel,
> > especially in a huge number of file systems, which assume that the
> > sector size is 512 bytes.  So above the block layer, the sector size
> > is always going to be 512.
>
> Could this be a problem when using pmem/nvdimm devices with
> byte-granularity (no BTT layer)?  (hw_sector_size reports 512 in
> this case, while we could expect 1 instead.)  Or does it just not
> matter, because BTT is the only way to use these devices with
> filesystems, like other block devices?

Right now there are very few applications that understand how to use
pmem/nvdimm devices as memory.  And even where they do, they will need
some kind of file system to provide resource isolation in case more
than one application or more than one user wants to use the
pmem/nvdimm.  In that case, they will probably mmap a file and then
access the nvdimm directly, so they won't be using the block device
layer at all, and they won't care about the advertised hw_sector_size.

The challenge with pmem-aware applications is that they need to be
able to update their in-memory data structures in such a way that they
can correctly recover after an arbitrary power failure.  That means
they have to use atomic updates and/or copy-on-write update schemes,
and I suspect most application writers just aren't going to be able to
get this right.

So many legacy applications will still read in the file "foo", make
changes in local memory, write the new contents to the file "foo.new",
and then rename "foo.new" on top of "foo".  These applications will
effectively use the nvdimm as super-fast flash, and so they will use
file systems as file systems.  And since file systems today all use
block sizes which are multiples of the traditional 512-byte sector
size, changing something as fundamental as the kernel's internal
sector size doesn't have any real value, at least not as far as
pmem/nvdimm support is concerned.

					- Ted
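
As an aside, the hw_sector_size Brice asks about is what userspace sees
through the standard BLKSSZGET ioctl.  A minimal sketch of reading it
(the /dev/pmem0 path in the usage comment is only an example device):

/* sectorsize.c: print the logical sector size a block device advertises.
 * Build: cc -o sectorsize sectorsize.c
 * Usage: ./sectorsize /dev/pmem0
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(int argc, char **argv)
{
	int fd, sector_size;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <block-device>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* BLKSSZGET reports the logical sector size; as discussed in
	 * the mail above, a pmem device will still report 512. */
	if (ioctl(fd, BLKSSZGET, &sector_size) < 0) {
		perror("ioctl(BLKSSZGET)");
		close(fd);
		return 1;
	}
	printf("%s: logical sector size = %d bytes\n", argv[1], sector_size);
	close(fd);
	return 0;
}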
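
To illustrate why the atomic-update problem Ted describes is hard to
get right, here is a rough copy-on-write sketch over a region assumed
to be mmap'd from a pmem device.  It assumes an x86-64 machine with
64-byte cache lines and uses the _mm_clflush/_mm_sfence intrinsics as
a crude stand-in for a real persistence API; struct cow_cell and
cow_update() are hypothetical names, and a production application
would use a dedicated pmem library with proper compiler barriers
rather than this:

/* cow_update.c: sketch of a crash-consistent update in mapped pmem.
 * Two copies of the record live side by side; an index selects the
 * currently valid one.  The update never touches the live copy. */
#include <stdint.h>
#include <string.h>
#include <immintrin.h>	/* _mm_clflush, _mm_sfence (x86) */

struct record {
	uint64_t data[7];
};

struct cow_cell {
	struct record slot[2];
	uint64_t active;	/* index of the valid slot: 0 or 1 */
};

static void flush_range(const void *addr, size_t len)
{
	/* Flush every cache line covering [addr, addr+len), assuming
	 * 64-byte lines, then fence so the flushes are ordered before
	 * anything that follows. */
	const char *p = (const char *)((uintptr_t)addr & ~63UL);
	const char *end = (const char *)addr + len;

	for (; p < end; p += 64)
		_mm_clflush(p);
	_mm_sfence();
}

void cow_update(struct cow_cell *cell, const struct record *new_val)
{
	uint64_t spare = 1 - cell->active;

	/* 1. Write the new version into the inactive slot and make
	 *    it durable before it becomes reachable. */
	memcpy(&cell->slot[spare], new_val, sizeof(*new_val));
	flush_range(&cell->slot[spare], sizeof(*new_val));

	/* 2. Flip the validity index; this (assumed aligned) 8-byte
	 *    store is the commit point.  A crash before it leaves the
	 *    old copy intact, a crash after it leaves the new copy
	 *    intact. */
	cell->active = spare;
	flush_range(&cell->active, sizeof(cell->active));
}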
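
The legacy write-and-rename pattern, by contrast, gets crash
consistency from the file system with none of that machinery, because
rename() atomically replaces the old file.  A minimal sketch, with
error handling abbreviated (replace_file() is a hypothetical helper;
"foo" and "foo.new" come from the mail):

/* rename_update.c: the classic "write foo.new, fsync, rename over foo"
 * pattern.  Readers see either the old file or the complete new one,
 * never a partial write. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int replace_file(const char *path, const char *tmp_path,
		 const void *buf, size_t len)
{
	int fd = open(tmp_path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (fd < 0)
		return -1;
	/* Write the new contents and force them to stable storage
	 * before the rename makes them visible under the old name. */
	if (write(fd, buf, len) != (ssize_t)len || fsync(fd) < 0) {
		close(fd);
		unlink(tmp_path);
		return -1;
	}
	close(fd);
	/* rename() atomically replaces "foo" with "foo.new". */
	return rename(tmp_path, path);
}

/* Usage: replace_file("foo", "foo.new", data, data_len); */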