2016-11-01 19:54:02

by Ross Zwisler

Subject: [PATCH v9 00/16] re-enable DAX PMD support

DAX PMDs have been disabled since Jan Kara introduced DAX radix tree based
locking. This series allows DAX PMDs to participate in the DAX radix tree
based locking scheme so that they can be re-enabled.

Previously we had talked about this series going through the XFS tree, but
Jan has a patch set that will need to build on this series and it heavily
modifies the MM code. I think he would prefer that series to go through
Andrew Morton's -mm tree, so it probably makes sense for this series to go
through that same tree.

For reference, here is the series from Jan that I was talking about:
https://marc.info/?l=linux-mm&m=147499252322902&w=2

Andrew, can you please pick this up for the v4.10 merge window?
This series is currently based on v4.9-rc3. I tried to rebase onto a -mm
branch or tag, but couldn't find one that contained the DAX iomap changes
that were merged as part of the v4.9 merge window. I'm happy to rebase &
test on a v4.9-rc* based -mm branch or tag whenever one is available.

Changes since v8:
- Rebased onto v4.9-rc3.
- Updated the DAX PMD fault path so that on fallback we always check
  whether we are dealing with a transparent huge page, and if so we split
  it (see the sketch below). This was already happening for one of the
  fallback cases via a patch from Toshi, and Jan hit a deadlock in another
  fallback case where the same splitting was needed. (Jan & Toshi)
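
For reference, here is a minimal sketch of the new fallback behavior
(illustrative only; the helper name dax_pmd_fallback() is hypothetical,
but pmd_trans_huge() and split_huge_pmd() are the kernel primitives
involved):

	/* Needs <linux/mm.h> and <linux/huge_mm.h>. */
	static int dax_pmd_fallback(struct vm_area_struct *vma, pmd_t *pmd,
			unsigned long address)
	{
		/*
		 * If a transparent huge page is already mapped at this
		 * address, split it before falling back so that the
		 * subsequent PTE fault can make progress instead of
		 * deadlocking against the huge mapping.
		 */
		if (pmd_trans_huge(*pmd))
			split_huge_pmd(vma, pmd, address);
		return VM_FAULT_FALLBACK;
	}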

This series has passed all my xfstests testing, including the test that was
hitting the deadlock with v8.

Here is a tree containing my changes:
https://git.kernel.org/cgit/linux/kernel/git/zwisler/linux.git/log/?h=dax_pmd_v9

Ross Zwisler (16):
ext4: tell DAX the size of allocation holes
dax: remove buffer_size_valid()
ext2: remove support for DAX PMD faults
dax: make 'wait_table' global variable static
dax: remove the last BUG_ON() from fs/dax.c
dax: consistent variable naming for DAX entries
dax: coordinate locking for offsets in PMD range
dax: remove dax_pmd_fault()
dax: correct dax iomap code namespace
dax: add dax_iomap_sector() helper function
dax: dax_iomap_fault() needs to call iomap_end()
dax: move RADIX_DAX_* defines to dax.h
dax: move put_(un)locked_mapping_entry() in dax.c
dax: add struct iomap based DAX PMD support
xfs: use struct iomap based DAX PMD fault path
dax: remove "depends on BROKEN" from FS_DAX_PMD

 fs/Kconfig          |   1 -
 fs/dax.c            | 826 +++++++++++++++++++++++++++++-----------------------
 fs/ext2/file.c      |  35 +--
 fs/ext4/inode.c     |   3 +
 fs/xfs/xfs_aops.c   |  26 +-
 fs/xfs/xfs_aops.h   |   3 -
 fs/xfs/xfs_file.c   |  10 +-
 include/linux/dax.h |  58 +++-
 mm/filemap.c        |   5 +-
 9 files changed, 537 insertions(+), 430 deletions(-)

--
2.7.4


2016-11-01 19:54:08

by Ross Zwisler

Subject: [PATCH v9 06/16] dax: consistent variable naming for DAX entries

No functional change.

Consistently use the variable name 'entry' instead of 'ret' for DAX radix
tree entries. This was already happening in most of the code, so update
get_unlocked_mapping_entry(), grab_mapping_entry() and
dax_unlock_mapping_entry().

Signed-off-by: Ross Zwisler <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
---
fs/dax.c | 34 +++++++++++++++++-----------------
1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 219fa2b..835e7f0 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -357,7 +357,7 @@ static inline void *unlock_slot(struct address_space *mapping, void **slot)
static void *get_unlocked_mapping_entry(struct address_space *mapping,
pgoff_t index, void ***slotp)
{
- void *ret, **slot;
+ void *entry, **slot;
struct wait_exceptional_entry_queue ewait;
wait_queue_head_t *wq = dax_entry_waitqueue(mapping, index);

@@ -367,13 +367,13 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
ewait.key.index = index;

for (;;) {
- ret = __radix_tree_lookup(&mapping->page_tree, index, NULL,
+ entry = __radix_tree_lookup(&mapping->page_tree, index, NULL,
&slot);
- if (!ret || !radix_tree_exceptional_entry(ret) ||
+ if (!entry || !radix_tree_exceptional_entry(entry) ||
!slot_locked(mapping, slot)) {
if (slotp)
*slotp = slot;
- return ret;
+ return entry;
}
prepare_to_wait_exclusive(wq, &ewait.wait,
TASK_UNINTERRUPTIBLE);
@@ -396,13 +396,13 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
*/
static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index)
{
- void *ret, **slot;
+ void *entry, **slot;

restart:
spin_lock_irq(&mapping->tree_lock);
- ret = get_unlocked_mapping_entry(mapping, index, &slot);
+ entry = get_unlocked_mapping_entry(mapping, index, &slot);
/* No entry for given index? Make sure radix tree is big enough. */
- if (!ret) {
+ if (!entry) {
int err;

spin_unlock_irq(&mapping->tree_lock);
@@ -410,10 +410,10 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index)
mapping_gfp_mask(mapping) & ~__GFP_HIGHMEM);
if (err)
return ERR_PTR(err);
- ret = (void *)(RADIX_TREE_EXCEPTIONAL_ENTRY |
+ entry = (void *)(RADIX_TREE_EXCEPTIONAL_ENTRY |
RADIX_DAX_ENTRY_LOCK);
spin_lock_irq(&mapping->tree_lock);
- err = radix_tree_insert(&mapping->page_tree, index, ret);
+ err = radix_tree_insert(&mapping->page_tree, index, entry);
radix_tree_preload_end();
if (err) {
spin_unlock_irq(&mapping->tree_lock);
@@ -425,11 +425,11 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index)
/* Good, we have inserted empty locked entry into the tree. */
mapping->nrexceptional++;
spin_unlock_irq(&mapping->tree_lock);
- return ret;
+ return entry;
}
/* Normal page in radix tree? */
- if (!radix_tree_exceptional_entry(ret)) {
- struct page *page = ret;
+ if (!radix_tree_exceptional_entry(entry)) {
+ struct page *page = entry;

get_page(page);
spin_unlock_irq(&mapping->tree_lock);
@@ -442,9 +442,9 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index)
}
return page;
}
- ret = lock_slot(mapping, slot);
+ entry = lock_slot(mapping, slot);
spin_unlock_irq(&mapping->tree_lock);
- return ret;
+ return entry;
}

void dax_wake_mapping_entry_waiter(struct address_space *mapping,
@@ -469,11 +469,11 @@ void dax_wake_mapping_entry_waiter(struct address_space *mapping,

void dax_unlock_mapping_entry(struct address_space *mapping, pgoff_t index)
{
- void *ret, **slot;
+ void *entry, **slot;

spin_lock_irq(&mapping->tree_lock);
- ret = __radix_tree_lookup(&mapping->page_tree, index, NULL, &slot);
- if (WARN_ON_ONCE(!ret || !radix_tree_exceptional_entry(ret) ||
+ entry = __radix_tree_lookup(&mapping->page_tree, index, NULL, &slot);
+ if (WARN_ON_ONCE(!entry || !radix_tree_exceptional_entry(entry) ||
!slot_locked(mapping, slot))) {
spin_unlock_irq(&mapping->tree_lock);
return;
--
2.7.4


2016-11-01 19:54:05

by Ross Zwisler

Subject: [PATCH v9 03/16] ext2: remove support for DAX PMD faults

DAX PMD support was added via the following commit:

commit e7b1ea2ad658 ("ext2: huge page fault support")

I believe this path to be untested as ext2 doesn't reliably provide block
allocations that are aligned to 2MiB. In my testing I've been unable to
get ext2 to actually fault in a PMD. It always fails with a "pfn
unaligned" message because the sector returned by ext2_get_block() isn't
aligned.
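
For reference, the check that produces this fallback looks roughly like
the following (a simplified sketch of the DAX PMD fault path, not the
exact fs/dax.c code; 'dax' is a struct blk_dax_ctl describing the mapped
block and 'length' the mapped length):

	/* Low PFN bits that must be zero for a 2MiB (PMD-sized) mapping. */
	#define PG_PMD_COLOUR	((PMD_SIZE >> PAGE_SHIFT) - 1)

	if (length < PMD_SIZE || (pfn_t_to_pfn(dax.pfn) & PG_PMD_COLOUR))
		goto fallback;	/* reported as "pfn unaligned" */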

I've tried various settings for the "stride" and "stripe_width" extended
options to mkfs.ext2, without any luck.

Since we can't reliably get PMDs, remove support so that we don't have an
untested code path that we may someday traverse when we happen to get an
aligned block allocation. This should also make 4k DAX faults in ext2 a
bit faster since they will no longer have to call the PMD fault handler
only to get a response of VM_FAULT_FALLBACK.

Signed-off-by: Ross Zwisler <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
---
fs/ext2/file.c | 29 ++++++-----------------------
1 file changed, 6 insertions(+), 23 deletions(-)

diff --git a/fs/ext2/file.c b/fs/ext2/file.c
index a0e1478..fb88b51 100644
--- a/fs/ext2/file.c
+++ b/fs/ext2/file.c
@@ -107,27 +107,6 @@ static int ext2_dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
return ret;
}

-static int ext2_dax_pmd_fault(struct vm_area_struct *vma, unsigned long addr,
- pmd_t *pmd, unsigned int flags)
-{
- struct inode *inode = file_inode(vma->vm_file);
- struct ext2_inode_info *ei = EXT2_I(inode);
- int ret;
-
- if (flags & FAULT_FLAG_WRITE) {
- sb_start_pagefault(inode->i_sb);
- file_update_time(vma->vm_file);
- }
- down_read(&ei->dax_sem);
-
- ret = dax_pmd_fault(vma, addr, pmd, flags, ext2_get_block);
-
- up_read(&ei->dax_sem);
- if (flags & FAULT_FLAG_WRITE)
- sb_end_pagefault(inode->i_sb);
- return ret;
-}
-
static int ext2_dax_pfn_mkwrite(struct vm_area_struct *vma,
struct vm_fault *vmf)
{
@@ -154,7 +133,11 @@ static int ext2_dax_pfn_mkwrite(struct vm_area_struct *vma,

static const struct vm_operations_struct ext2_dax_vm_ops = {
.fault = ext2_dax_fault,
- .pmd_fault = ext2_dax_pmd_fault,
+ /*
+ * .pmd_fault is not supported for DAX because allocation in ext2
+ * cannot be reliably aligned to huge page sizes and so pmd faults
+ * will always fail and fall back to regular faults.
+ */
.page_mkwrite = ext2_dax_fault,
.pfn_mkwrite = ext2_dax_pfn_mkwrite,
};
@@ -166,7 +149,7 @@ static int ext2_file_mmap(struct file *file, struct vm_area_struct *vma)

file_accessed(file);
vma->vm_ops = &ext2_dax_vm_ops;
- vma->vm_flags |= VM_MIXEDMAP | VM_HUGEPAGE;
+ vma->vm_flags |= VM_MIXEDMAP;
return 0;
}
#else
--
2.7.4


2016-11-01 19:54:09

by Ross Zwisler

Subject: [PATCH v9 07/16] dax: coordinate locking for offsets in PMD range

DAX radix tree locking currently locks entries based on the unique
combination of the 'mapping' pointer and the pgoff_t 'index' for the entry.
This works for PTEs, but as we move to PMDs we will need all the offsets
within the range covered by the PMD to map to the same bit lock.
To accomplish this, for ranges covered by a PMD entry we will instead lock
based on the page offset of the beginning of the PMD entry. The 'mapping'
pointer is still used in the same way.
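
As a worked example (assuming x86_64, where PMD_SHIFT - PAGE_SHIFT == 9,
so a PMD covers 512 pages), the PMD-aligned index used for the wait queue
key is computed as:

	/* Indices 512..1023 within one PMD entry all map to index 512. */
	pgoff_t wait_index = index & ~((1UL << (PMD_SHIFT - PAGE_SHIFT)) - 1);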

Signed-off-by: Ross Zwisler <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
---
 fs/dax.c            | 65 +++++++++++++++++++++++++++++++++--------------------
 include/linux/dax.h |  2 +-
 mm/filemap.c        |  2 +-
 3 files changed, 43 insertions(+), 26 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 835e7f0..7238702 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -64,14 +64,6 @@ static int __init init_dax_wait_table(void)
}
fs_initcall(init_dax_wait_table);

-static wait_queue_head_t *dax_entry_waitqueue(struct address_space *mapping,
- pgoff_t index)
-{
- unsigned long hash = hash_long((unsigned long)mapping ^ index,
- DAX_WAIT_TABLE_BITS);
- return wait_table + hash;
-}
-
static long dax_map_atomic(struct block_device *bdev, struct blk_dax_ctl *dax)
{
struct request_queue *q = bdev->bd_queue;
@@ -285,7 +277,7 @@ EXPORT_SYMBOL_GPL(dax_do_io);
*/
struct exceptional_entry_key {
struct address_space *mapping;
- unsigned long index;
+ pgoff_t entry_start;
};

struct wait_exceptional_entry_queue {
@@ -293,6 +285,26 @@ struct wait_exceptional_entry_queue {
struct exceptional_entry_key key;
};

+static wait_queue_head_t *dax_entry_waitqueue(struct address_space *mapping,
+ pgoff_t index, void *entry, struct exceptional_entry_key *key)
+{
+ unsigned long hash;
+
+ /*
+ * If 'entry' is a PMD, align the 'index' that we use for the wait
+ * queue to the start of that PMD. This ensures that all offsets in
+ * the range covered by the PMD map to the same bit lock.
+ */
+ if (RADIX_DAX_TYPE(entry) == RADIX_DAX_PMD)
+ index &= ~((1UL << (PMD_SHIFT - PAGE_SHIFT)) - 1);
+
+ key->mapping = mapping;
+ key->entry_start = index;
+
+ hash = hash_long((unsigned long)mapping ^ index, DAX_WAIT_TABLE_BITS);
+ return wait_table + hash;
+}
+
static int wake_exceptional_entry_func(wait_queue_t *wait, unsigned int mode,
int sync, void *keyp)
{
@@ -301,7 +313,7 @@ static int wake_exceptional_entry_func(wait_queue_t *wait, unsigned int mode,
container_of(wait, struct wait_exceptional_entry_queue, wait);

if (key->mapping != ewait->key.mapping ||
- key->index != ewait->key.index)
+ key->entry_start != ewait->key.entry_start)
return 0;
return autoremove_wake_function(wait, mode, sync, NULL);
}
@@ -359,12 +371,10 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
{
void *entry, **slot;
struct wait_exceptional_entry_queue ewait;
- wait_queue_head_t *wq = dax_entry_waitqueue(mapping, index);
+ wait_queue_head_t *wq;

init_wait(&ewait.wait);
ewait.wait.func = wake_exceptional_entry_func;
- ewait.key.mapping = mapping;
- ewait.key.index = index;

for (;;) {
entry = __radix_tree_lookup(&mapping->page_tree, index, NULL,
@@ -375,6 +385,8 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
*slotp = slot;
return entry;
}
+
+ wq = dax_entry_waitqueue(mapping, index, entry, &ewait.key);
prepare_to_wait_exclusive(wq, &ewait.wait,
TASK_UNINTERRUPTIBLE);
spin_unlock_irq(&mapping->tree_lock);
@@ -447,10 +459,20 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index)
return entry;
}

+/*
+ * We do not necessarily hold the mapping->tree_lock when we call this
+ * function so it is possible that 'entry' is no longer a valid item in the
+ * radix tree. This is okay, though, because all we really need to do is to
+ * find the correct waitqueue where tasks might be sleeping waiting for that
+ * old 'entry' and wake them.
+ */
void dax_wake_mapping_entry_waiter(struct address_space *mapping,
- pgoff_t index, bool wake_all)
+ pgoff_t index, void *entry, bool wake_all)
{
- wait_queue_head_t *wq = dax_entry_waitqueue(mapping, index);
+ struct exceptional_entry_key key;
+ wait_queue_head_t *wq;
+
+ wq = dax_entry_waitqueue(mapping, index, entry, &key);

/*
* Checking for locked entry and prepare_to_wait_exclusive() happens
@@ -458,13 +480,8 @@ void dax_wake_mapping_entry_waiter(struct address_space *mapping,
* So at this point all tasks that could have seen our entry locked
* must be in the waitqueue and the following check will see them.
*/
- if (waitqueue_active(wq)) {
- struct exceptional_entry_key key;
-
- key.mapping = mapping;
- key.index = index;
+ if (waitqueue_active(wq))
__wake_up(wq, TASK_NORMAL, wake_all ? 0 : 1, &key);
- }
}

void dax_unlock_mapping_entry(struct address_space *mapping, pgoff_t index)
@@ -480,7 +497,7 @@ void dax_unlock_mapping_entry(struct address_space *mapping, pgoff_t index)
}
unlock_slot(mapping, slot);
spin_unlock_irq(&mapping->tree_lock);
- dax_wake_mapping_entry_waiter(mapping, index, false);
+ dax_wake_mapping_entry_waiter(mapping, index, entry, false);
}

static void put_locked_mapping_entry(struct address_space *mapping,
@@ -505,7 +522,7 @@ static void put_unlocked_mapping_entry(struct address_space *mapping,
return;

/* We have to wake up next waiter for the radix tree entry lock */
- dax_wake_mapping_entry_waiter(mapping, index, false);
+ dax_wake_mapping_entry_waiter(mapping, index, entry, false);
}

/*
@@ -532,7 +549,7 @@ int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index)
radix_tree_delete(&mapping->page_tree, index);
mapping->nrexceptional--;
spin_unlock_irq(&mapping->tree_lock);
- dax_wake_mapping_entry_waiter(mapping, index, true);
+ dax_wake_mapping_entry_waiter(mapping, index, entry, true);

return 1;
}
diff --git a/include/linux/dax.h b/include/linux/dax.h
index add6c4b..a41a747 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -22,7 +22,7 @@ int iomap_dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
int dax_fault(struct vm_area_struct *, struct vm_fault *, get_block_t);
int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index);
void dax_wake_mapping_entry_waiter(struct address_space *mapping,
- pgoff_t index, bool wake_all);
+ pgoff_t index, void *entry, bool wake_all);

#ifdef CONFIG_FS_DAX
struct page *read_dax_sector(struct block_device *bdev, sector_t n);
diff --git a/mm/filemap.c b/mm/filemap.c
index c7fe2f1..8709f1e9 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -143,7 +143,7 @@ static int page_cache_tree_insert(struct address_space *mapping,
if (node)
workingset_node_pages_dec(node);
/* Wakeup waiters for exceptional entry lock */
- dax_wake_mapping_entry_waiter(mapping, page->index,
+ dax_wake_mapping_entry_waiter(mapping, page->index, p,
false);
}
}
--
2.7.4


2016-11-03 01:58:26

by Dave Chinner

Subject: Re: [PATCH v9 00/16] re-enable DAX PMD support

On Tue, Nov 01, 2016 at 01:54:02PM -0600, Ross Zwisler wrote:
> DAX PMDs have been disabled since Jan Kara introduced DAX radix tree based
> locking. This series allows DAX PMDs to participate in the DAX radix tree
> based locking scheme so that they can be re-enabled.

I've seen patch 0/16 - where did you send the other 16? I need to
pick up the bug fix that is in this patch set...

> Previously we had talked about this series going through the XFS tree, but
> Jan has a patch set that will need to build on this series and it heavily
> modifies the MM code. I think he would prefer that series to go through
> Andrew Morton's -mm tree, so it probably makes sense for this series to go
> through that same tree.

Seriously, I was 10 minutes away from pushing out the previous
version of this patchset as a stable topic branch, just as has
been discussed several times over the past week. Indeed, I
mentioned that I was planning on pushing out this topic branch today
not more than 4 hours ago, and you were on the cc list.

The -mm tree is not the place to merge patchsets with dependencies
like this because it's an unstable, rebasing tree. Hence it cannot
be shared and used as the base of common development between
multiple git trees like we have for the fs/ subsystem.

This needs to go out as a stable topic branch so that other
dependent work can reliably build on top of it for the next merge
window. e.g. the ext4 DAX iomap patch series that is likely to be
merged through the ext4 tree, so it needs a stable branch. There's
iomap direct IO patches for XFS pending, and they conflict with this
patchset. i.e. we need a stable git base to work from...

Cheers,

Dave.
--
Dave Chinner
[email protected]


2016-11-03 17:51:02

by Ross Zwisler

Subject: Re: [PATCH v9 00/16] re-enable DAX PMD support

On Thu, Nov 03, 2016 at 12:58:26PM +1100, Dave Chinner wrote:
> On Tue, Nov 01, 2016 at 01:54:02PM -0600, Ross Zwisler wrote:
> > DAX PMDs have been disabled since Jan Kara introduced DAX radix tree based
> > locking. This series allows DAX PMDs to participate in the DAX radix tree
> > based locking scheme so that they can be re-enabled.
>
> I've seen patch 0/16 - where did you send the other 16? I need to
> pick up the bug fix that is in this patch set...

I CC'd your "[email protected]" address on the entire set, as well as all
the usual lists (linux-xfs, linux-fsdevel, linux-nvdimm, etc).

They are also available via the libnvdimm patchwork:

https://patchwork.kernel.org/project/linux-nvdimm/list/

or via my tree:

https://git.kernel.org/cgit/linux/kernel/git/zwisler/linux.git/log/?h=dax_pmd_v9

The only patch that is different between v8 and v9 is:
[PATCH v9 14/16] dax: add struct iomap based DAX PMD support

> > Previously we had talked about this series going through the XFS tree, but
> > Jan has a patch set that will need to build on this series and it heavily
> > modifies the MM code. I think he would prefer that series to go through
> > Andrew Morton's -mm tree, so it probably makes sense for this series to go
> > through that same tree.
>
> Seriously, I was 10 minutes away from pushing out the previous
> version of this patchset as a stable topic branch, just as has
> been discussed several times over the past week. Indeed, I
> mentioned that I was planning on pushing out this topic branch today
> not more than 4 hours ago, and you were on the cc list.

I'm confused - I sent v9 of this series out 2 days ago, on Tuesday?
I have seen multiple messages from you this week saying you were going to pick
this series up, but I saw them all after I had already sent this series out.

> The -mm tree is not the place to merge patchsets with dependencies
> like this because it's an unstable, rebasing tree. Hence it cannot
> be shared and used as the base of common development between
> multiple git trees like we have for the fs/ subsystem.
>
> This needs to go out as a stable topic branch so that other
> dependent work can reliably build on top of it for the next merge
> window. e.g. the ext4 DAX iomap patch series that is likely to be
> merged through the ext4 tree, so it needs a stable branch. There's
> iomap direct IO patches for XFS pending, and they conflict with this
> patchset. i.e. we need a stable git base to work from...

Yea, my apologies. Really this comes down to a lack of understanding on my
part about which series should be merged via which maintainers, and how
stable topic branches can be shared. I didn't realize that if you make a
stable branch it could be easily used by other trees, and that, for example,
Jan's MM or ext4 based patches could be merged by another maintainer but be
based on your topic branch.

Sorry for the confusion, I was just trying to figure out a way that Jan's
changes could also be merged. Please do pick up v9 of my PMD set. :)


2016-11-03 21:16:46

by Dave Chinner

Subject: Re: [PATCH v9 00/16] re-enable DAX PMD support

On Thu, Nov 03, 2016 at 11:51:02AM -0600, Ross Zwisler wrote:
> On Thu, Nov 03, 2016 at 12:58:26PM +1100, Dave Chinner wrote:
> > On Tue, Nov 01, 2016 at 01:54:02PM -0600, Ross Zwisler wrote:
> > > DAX PMDs have been disabled since Jan Kara introduced DAX radix tree based
> > > locking. This series allows DAX PMDs to participate in the DAX radix tree
> > > based locking scheme so that they can be re-enabled.
> >
> > I've seen patch 0/16 - where did you send the other 16? I need to
> > pick up the bug fix that is in this patch set...
>
> I CC'd your "[email protected]" address on the entire set, as well as all
> the usual lists (linux-xfs, linux-fsdevel, linux-nvdimm, etc).

Ok, now I'm /really/ confused. Procmail logs show:

>From [email protected] Wed Nov 02 06:56:46 2016
Subject: [PATCH v9 00/16] re-enable DAX PMD support
Folder: incoming/xfs-linux/new/1478030206.9177_1.dastard 5348
>From [email protected] Wed Nov 02 06:56:48 2016
Subject: [PATCH v9 01/16] ext4: tell DAX the size of allocation holes
Folder: incoming/xfs-linux/new/1478030208.9182_1.dastard 3725
>From [email protected] Wed Nov 02 06:56:49 2016
Subject: [PATCH v9 02/16] dax: remove buffer_size_valid()
Folder: incoming/xfs-linux/new/1478030209.9187_1.dastard 4692
.....

so procmail has seen them, and put them all in the same bucket like
it has for everything else.

But only patch 0 appeared in my linux-xfs mailbox - the rest of the
files logged by procmail don't exist. No errors or indications of
failures anywhere. They've just vanished into thin air...

> They are also available via the libnvdimm patchwork:
>
> https://patchwork.kernel.org/project/linux-nvdimm/list/
>
> or via my tree:
>
> https://git.kernel.org/cgit/linux/kernel/git/zwisler/linux.git/log/?h=dax_pmd_v9
>
> The only patch that is different between v8 and v9 is:
> [PATCH v9 14/16] dax: add struct iomap based DAX PMD support

OK, thanks, I'll pull it in.

>
> > > Previously we had talked about this series going through the XFS tree, but
> > > Jan has a patch set that will need to build on this series and it heavily
> > > modifies the MM code. I think he would prefer that series to go through
> > > Andrew Morton's -mm tree, so it probably makes sense for this series to go
> > > through that same tree.
> >
> > Seriously, I was 10 minutes away from pushing out the previous
> > version of this patchset as a stable topic branch, just as has
> > been discussed several times over the past week. Indeed, I
> > mentioned that I was planning on pushing out this topic branch today
> > not more than 4 hours ago, and you were on the cc list.
>
> I'm confused - I sent v9 of this series out 2 days ago, on Tuesday?
> I have seen multiple messages from you this week saying you were going to pick
> this series up, but I saw them all after I had already sent this series out.

That's what has me really confused - I replied immediately after this
email appeared in my inbox - I was working from v8 because I didn't
know this version existed. This v9 patch zero email hit procmail on
"Wed Nov 02 06:56:46 2016" and I replied immediately when I saw it:
"On Thu, Nov 03, 2016 at 12:58:26PM +1100,"

So there's some 30 hours between it passing through procmail and
mutt adding it to my inbox. And mutt hasn't seen any of the other
emails in the thread.

/me sighs and wonders how much other email has been going missing
lately....

> Sorry for the confusion,

Clearly not your fault, Ross.

> I was just trying to figure out a way that Jan's
> changes could also be merged. Please do pick up v9 of my PMD set. :)

Will do, but I've got to find my way out of WTF-Landia first...

Cheers,

Dave.

--
Dave Chinner
[email protected]
