Date: Wed, 27 Sep 2017 09:39:18 -0600
From: Ross Zwisler
To: Jan Kara
Cc: Dan Williams, Ross Zwisler, Andrew Morton, linux-kernel@vger.kernel.org,
    "Darrick J. Wong", "J. Bruce Fields", Christoph Hellwig, Dave Chinner,
    Jeff Layton, linux-fsdevel, Linux MM, linux-nvdimm@lists.01.org,
    linux-xfs@vger.kernel.org
Subject: Re: [PATCH 6/7] mm, fs: introduce file_operations->post_mmap()
Message-ID: <20170927153918.GA24314@linux.intel.com>
In-Reply-To: <20170927113527.GD25746@quack2.suse.cz>

On Wed, Sep 27, 2017 at 01:35:27PM +0200, Jan Kara wrote:
> On Tue 26-09-17 14:41:53, Dan Williams wrote:
> > On Tue, Sep 26, 2017 at 2:06 PM, Ross Zwisler wrote:
> > > On Tue, Sep 26, 2017 at 12:19:21PM -0700, Dan Williams wrote:
> > >> On Tue, Sep 26, 2017 at 11:57 AM, Ross Zwisler wrote:
> > >> > This decision can only be made (in this proposed scheme) *after*
> > >> > the inode->i_mapping->i_mmap tree has been populated, which means
> > >> > we need another call into the filesystem after this insertion has
> > >> > happened.
> > >>
> > >> I get that, but it seems over-engineered and something that can
> > >> also be safely cleaned up after the fact by the code path that is
> > >> disabling DAX.
> > >
> > > I don't think you can safely clean it up after the fact because some
> > > thread might have already called ->mmap() to set up the
> > > vma->vm_flags for their new mapping, but not yet added it to
> > > inode->i_mapping->i_mmap.
> >
> > If madvise(MADV_NOHUGEPAGE) can dynamically change vm_flags, then the
> > DAX disable path can as well. VM_MIXEDMAP looks to be a nop for normal
> > memory mappings.
> >
> > > The inode->i_mapping->i_mmap tree is the only way (that I know of at
> > > least) that the filesystem has any idea about the mapping. This is
> > > the method by which we would try to clean up mapping flags, if we
> > > were to do so, and it's the only way that the filesystem can know
> > > whether or not mappings exist.
> > >
> > > The only way that I could think of to make this work safely is to
> > > have the insertion into the inode->i_mapping->i_mmap tree be our
> > > sync point. After that the filesystem and the mapping code can
> > > communicate on the state of DAX, but before that I think it's
> > > basically indeterminate.
> > If we lose the race and leak VM_HUGEPAGE to a non-DAX mapping, what
> > breaks? I'd rather be in favor of not setting VM_HUGEPAGE at all in
> > the ->mmap() handler and letting the default THP policy take over. In
> > fact, per transparent_hugepage_enabled() we already auto-enable huge
> > page support for DAX mappings regardless of VM_HUGEPAGE.
>
> Hum, this is an interesting option. So do you suggest that filesystems
> supporting DAX would always set up mappings with VM_MIXEDMAP and without
> VM_HUGEPAGE, and thus we'd get rid of the dependency on the S_DAX flag
> in ->mmap? That could actually work. The only possible issue I can see
> is that VM_MIXEDMAP is still slightly different from normal page
> mappings and could have some performance implications - e.g.
> copy_page_range() does more work on VM_MIXEDMAP mappings than on normal
> page mappings.

It looks like having VM_MIXEDMAP always set for filesystems that support
DAX might also affect NUMA migration of their memory in the non-DAX case?

  8e76d4e ("sched, numa: do not hint for NUMA balancing on VM_MIXEDMAP
  mappings")
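
Concretely, after that commit task_numa_work() just skips VM_MIXEDMAP
vmas when choosing ranges to hint on - something like this (paraphrased
from kernel/sched/fair.c of that era, so the surrounding context may
differ slightly):

	for (; vma; vma = vma->vm_next) {
		/*
		 * The VM_MIXEDMAP test is what 8e76d4e added: such vmas
		 * never take NUMA hinting faults, so their memory is
		 * never considered for automatic NUMA balancing or
		 * migration.
		 */
		if (!vma_migratable(vma) || !vma_policy_mof(vma) ||
		    is_vm_hugetlb_page(vma) ||
		    (vma->vm_flags & VM_MIXEDMAP))
			continue;

		/* ... otherwise pick a range of this vma and hint on it ... */
	}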
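
And for reference on the other half of the discussion: the flag setup
itself lives in the filesystem ->mmap() handlers. XFS currently does
roughly this (again paraphrased, from fs/xfs/xfs_file.c):

	STATIC int
	xfs_file_mmap(
		struct file		*filp,
		struct vm_area_struct	*vma)
	{
		file_accessed(filp);
		vma->vm_ops = &xfs_file_vm_ops;
		/*
		 * This S_DAX-based decision happens before mmap_region()
		 * links the vma into inode->i_mapping->i_mmap, which is
		 * the race window discussed above.
		 */
		if (IS_DAX(file_inode(filp)))
			vma->vm_flags |= VM_MIXEDMAP | VM_HUGEPAGE;
		return 0;
	}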