2012-12-13 08:07:40

by Darrick J. Wong

Subject: [PATCH v2.3 0/4] mm/fs: Implement faster stable page writes on filesystems

Hi all,

This patchset ("stable page writes, part 2") makes some key modifications to
the original 'stable page writes' patchset. First, it provides the creators
(devices and filesystems) of a backing_dev_info with a flag that declares
whether or not it is necessary to ensure that page contents cannot change
during writeout. It is no longer assumed that this is true of all devices
(which was never true anyway). Second, the flag is used to relax the
wait_on_page_writeback calls so that the wait only occurs if the device needs
it. Third, it fixes up the remaining disk-backed filesystems to use this
improved conditional-wait logic to provide stable page writes.
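
Condensed to its essence, the conditional wait looks like this (these are the
helpers introduced in patches 1 and 2 below, trimmed for illustration):

static inline bool bdi_cap_stable_pages_required(struct backing_dev_info *bdi)
{
	return bdi->capabilities & BDI_CAP_STABLE_WRITES;
}

void wait_for_stable_page(struct page *page)
{
	struct backing_dev_info *bdi = page_mapping(page)->backing_dev_info;

	/* Only devices that need stable pages (e.g. integrity/DIF) set this. */
	if (bdi_cap_stable_pages_required(bdi))
		wait_on_page_writeback(page);
}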

It is hoped that (for people not using checksumming devices, anyway) this
patchset will give back the performance that was unnecessarily lost when the
original stable page write patchset went into 3.0. Sorry about not fixing it
sooner.

Complaints were registered by several people about the long write latencies
introduced by the original stable page write patchset. Generally speaking, the
kernel ought to allocate as little extra memory as possible to facilitate
writeout, but for people who simply cannot wait, a second page stability
strategy is (re)introduced: snapshotting page contents. The waiting behavior
is still the default strategy; to enable page snapshotting, a superblock flag
(MS_SNAP_STABLE) must be set. This flag is primarily used to bandaid^Henable
stable page writeout on ext3[1], but a mount option is provided for impatient
ext4 users.
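
The snapshot decision itself lives in the bounce code; condensed from patch 4
below (illustration only), it looks roughly like:

static int should_snapshot_stable_pages(struct page *page, int rw)
{
	struct address_space *mapping = page_mapping(page);

	if (!mapping || !bdi_cap_stable_pages_required(mapping->backing_dev_info))
		return 0;

	return (mapping->host->i_sb->s_flags & MS_SNAP_STABLE) && rw == WRITE;
}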

Given that there are already a few storage devices and network FSes that have
rolled their own page stability wait/page snapshot code, it would be nice to
move towards consolidating all of these. It seems possible that iscsi and
raid5 may wish to use the new stable page write support to enable zero-copy
writeout.

In the future, it would be useful to develop a heuristic to select a strategy
automatically rather than leaving it up to manual control.

This patchset has been lightly tested on 3.7.0 on x64 with ext3, ext4, and xfs.

--D

[1] The alternative fixes to ext3 include fixing the locking order and page bit
handling like we did for ext4 (but then why not just use ext4?), or setting
PG_writeback so early that ext3 becomes extremely slow. I tried that, but the
number of write()s I could initiate dropped by nearly an order of magnitude.
That was a bit much even for the author of the stable page series! :)


2012-12-13 08:09:33

by Darrick J. Wong

Subject: [PATCH 3/4] 9pfs: Fix filesystem to wait for stable page writeback

Fix up the ->page_mkwrite handler to provide stable page writes if necessary.

Signed-off-by: Darrick J. Wong <[email protected]>
---
fs/9p/vfs_file.c | 1 +
1 file changed, 1 insertion(+)


diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
index c2483e9..357260b 100644
--- a/fs/9p/vfs_file.c
+++ b/fs/9p/vfs_file.c
@@ -620,6 +620,7 @@ v9fs_vm_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
lock_page(page);
if (page->mapping != inode->i_mapping)
goto out_unlock;
+ wait_for_stable_page(page);

return VM_FAULT_LOCKED;
out_unlock:


2012-12-13 08:07:55

by Darrick J. Wong

Subject: [PATCH 2/4] mm: Only enforce stable page writes if the backing device requires it

Create a helper function to check if a backing device requires stable page
writes and, if so, perform the necessary wait. Then, make it so that all
points in the memory manager that handle making pages writable use the helper
function. This should provide stable page write support to most filesystems,
while eliminating unnecessary waiting for devices that don't require the
feature.

Signed-off-by: Darrick J. Wong <[email protected]>
---
fs/buffer.c | 2 +-
fs/ext4/inode.c | 2 +-
include/linux/pagemap.h | 1 +
mm/filemap.c | 3 ++-
mm/page-writeback.c | 20 ++++++++++++++++++++
5 files changed, 25 insertions(+), 3 deletions(-)


diff --git a/fs/buffer.c b/fs/buffer.c
index ec0aca8..c562136 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2363,7 +2363,7 @@ int __block_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf,
if (unlikely(ret < 0))
goto out_unlock;
set_page_dirty(page);
- wait_on_page_writeback(page);
+ wait_for_stable_page(page);
return 0;
out_unlock:
unlock_page(page);
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index b3c243b..1bfc79f 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -4814,7 +4814,7 @@ int ext4_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
if (!walk_page_buffers(NULL, page_buffers(page), 0, len, NULL,
ext4_bh_unmapped)) {
/* Wait so that we don't change page under IO */
- wait_on_page_writeback(page);
+ wait_for_stable_page(page);
ret = VM_FAULT_LOCKED;
goto out;
}
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index e42c762..ea631ee 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -398,6 +398,7 @@ static inline void wait_on_page_writeback(struct page *page)
}

extern void end_page_writeback(struct page *page);
+void wait_for_stable_page(struct page *page);

/*
* Add an arbitrary waiter to a page's wait queue
diff --git a/mm/filemap.c b/mm/filemap.c
index 83efee7..5577dc8 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1728,6 +1728,7 @@ int filemap_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
* see the dirty page and writeprotect it again.
*/
set_page_dirty(page);
+ wait_for_stable_page(page);
out:
sb_end_pagefault(inode->i_sb);
return ret;
@@ -2274,7 +2275,7 @@ repeat:
return NULL;
}
found:
- wait_on_page_writeback(page);
+ wait_for_stable_page(page);
return page;
}
EXPORT_SYMBOL(grab_cache_page_write_begin);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 830893b..3e4a8cc 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2275,3 +2275,23 @@ int mapping_tagged(struct address_space *mapping, int tag)
return radix_tree_tagged(&mapping->page_tree, tag);
}
EXPORT_SYMBOL(mapping_tagged);
+
+/**
+ * wait_for_stable_page() - wait for writeback to finish, if necessary.
+ * @page: The page to wait on.
+ *
+ * This function determines if the given page is related to a backing device
+ * that requires page contents to be held stable during writeback. If so, then
+ * it will wait for any pending writeback to complete.
+ */
+void wait_for_stable_page(struct page *page)
+{
+ struct address_space *mapping = page_mapping(page);
+ struct backing_dev_info *bdi = mapping->backing_dev_info;
+
+ if (!bdi_cap_stable_pages_required(bdi))
+ return;
+
+ wait_on_page_writeback(page);
+}
+EXPORT_SYMBOL_GPL(wait_for_stable_page);

2012-12-13 08:08:11

by Darrick J. Wong

Subject: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

Several complaints have been received regarding long file write latencies when
memory pages must be held stable during writeback. Since it might not be
acceptable to stall programs for the entire duration of a page write (which may
take many milliseconds even on good hardware), enable a second strategy wherein
pages are snapshotted as part of submit_bio; the snapshot can be held stable
while writes continue.

This provides a band-aid to provide stable page writes on jbd without needing
to backport the fixed locking scheme in jbd2. A mount option is added to ext4
to allow administrators to enable it there.

For those wondering about the ext3 bandage -- fixing the jbd locking (which was
done as part of ext4dev years ago) is a lot of surgery, and setting
PG_writeback on data pages when we actually hold the page lock dropped ext3
performance by nearly an order of magnitude. If we're going to migrate iscsi
and raid to use stable page writes, the complaints about high latency will
likely return. We might as well centralize their page snapshotting thing to
one place.

Signed-off-by: Darrick J. Wong <[email protected]>
---
arch/tile/Kconfig | 6 ------
block/blk-core.c | 8 +++++---
fs/ext3/super.c | 1 +
fs/ext4/super.c | 7 ++++++-
include/uapi/linux/fs.h | 1 +
mm/Kconfig | 10 ++++++++++
mm/bounce.c | 38 ++++++++++++++++++++++++++++++++++++--
mm/page-writeback.c | 4 ++++
8 files changed, 63 insertions(+), 12 deletions(-)


diff --git a/arch/tile/Kconfig b/arch/tile/Kconfig
index 875d008..c671fda 100644
--- a/arch/tile/Kconfig
+++ b/arch/tile/Kconfig
@@ -410,12 +410,6 @@ config TILE_USB
Provides USB host adapter support for the built-in EHCI and OHCI
interfaces on TILE-Gx chips.

-# USB OHCI needs the bounce pool since tilegx will often have more
-# than 4GB of memory, but we don't currently use the IOTLB to present
-# a 32-bit address to OHCI. So we need to use a bounce pool instead.
-config NEED_BOUNCE_POOL
- def_bool USB_OHCI_HCD
-
source "drivers/pci/hotplug/Kconfig"

endmenu
diff --git a/block/blk-core.c b/block/blk-core.c
index 3c95c4d..85e7705 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1433,6 +1433,11 @@ void blk_queue_bio(struct request_queue *q, struct bio *bio)
*/
blk_queue_bounce(q, &bio);

+ if (bio_integrity_enabled(bio) && bio_integrity_prep(bio)) {
+ bio_endio(bio, -EIO);
+ return;
+ }
+
if (bio->bi_rw & (REQ_FLUSH | REQ_FUA)) {
spin_lock_irq(q->queue_lock);
where = ELEVATOR_INSERT_FLUSH;
@@ -1673,9 +1678,6 @@ generic_make_request_checks(struct bio *bio)
*/
blk_partition_remap(bio);

- if (bio_integrity_enabled(bio) && bio_integrity_prep(bio))
- goto end_io;
-
if (bio_check_eod(bio, nr_sectors))
goto end_io;

diff --git a/fs/ext3/super.c b/fs/ext3/super.c
index 5366393..d251793 100644
--- a/fs/ext3/super.c
+++ b/fs/ext3/super.c
@@ -2068,6 +2068,7 @@ static int ext3_fill_super (struct super_block *sb, void *data, int silent)
test_opt(sb,DATA_FLAGS) == EXT3_MOUNT_JOURNAL_DATA ? "journal":
test_opt(sb,DATA_FLAGS) == EXT3_MOUNT_ORDERED_DATA ? "ordered":
"writeback");
+ sb->s_flags |= MS_SNAP_STABLE;

return 0;

diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 80928f7..64d7bc8 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -1222,7 +1222,7 @@ enum {
Opt_inode_readahead_blks, Opt_journal_ioprio,
Opt_dioread_nolock, Opt_dioread_lock,
Opt_discard, Opt_nodiscard, Opt_init_itable, Opt_noinit_itable,
- Opt_max_dir_size_kb,
+ Opt_max_dir_size_kb, Opt_snap_stable,
};

static const match_table_t tokens = {
@@ -1297,6 +1297,7 @@ static const match_table_t tokens = {
{Opt_init_itable, "init_itable"},
{Opt_noinit_itable, "noinit_itable"},
{Opt_max_dir_size_kb, "max_dir_size_kb=%u"},
+ {Opt_snap_stable, "snapshot_stable"},
{Opt_removed, "check=none"}, /* mount option from ext2/3 */
{Opt_removed, "nocheck"}, /* mount option from ext2/3 */
{Opt_removed, "reservation"}, /* mount option from ext2/3 */
@@ -1478,6 +1479,7 @@ static const struct mount_opts {
{Opt_jqfmt_vfsv0, QFMT_VFS_V0, MOPT_QFMT},
{Opt_jqfmt_vfsv1, QFMT_VFS_V1, MOPT_QFMT},
{Opt_max_dir_size_kb, 0, MOPT_GTE0},
+ {Opt_snap_stable, 0, 0},
{Opt_err, 0, 0}
};

@@ -1549,6 +1551,9 @@ static int handle_mount_opt(struct super_block *sb, char *opt, int token,
return -1;
*journal_ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, arg);
return 1;
+ case Opt_snap_stable:
+ sb->s_flags |= MS_SNAP_STABLE;
+ return 1;
}

for (m = ext4_mount_opts; m->token != Opt_err; m++) {
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index 780d4c6..0144fbb 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -69,6 +69,7 @@ struct inodes_stat_t {
#define MS_REMOUNT 32 /* Alter flags of a mounted FS */
#define MS_MANDLOCK 64 /* Allow mandatory locks on an FS */
#define MS_DIRSYNC 128 /* Directory modifications are synchronous */
+#define MS_SNAP_STABLE 256 /* Snapshot pages during writeback, if needed */
#define MS_NOATIME 1024 /* Do not update access times. */
#define MS_NODIRATIME 2048 /* Do not update directory access times */
#define MS_BIND 4096
diff --git a/mm/Kconfig b/mm/Kconfig
index a3f8ddd..78db0e1 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -224,6 +224,16 @@ config BOUNCE
def_bool y
depends on BLOCK && MMU && (ZONE_DMA || HIGHMEM)

+# On the 'tile' arch, USB OHCI needs the bounce pool since tilegx will often
+# have more than 4GB of memory, but we don't currently use the IOTLB to present
+# a 32-bit address to OHCI. So we need to use a bounce pool instead.
+#
+# We also use the bounce pool to provide stable page writes for users that
+# don't (or can't) afford the wait latency.
+config NEED_BOUNCE_POOL
+ bool
+ default y if (TILE && USB_OHCI_HCD) || (BLK_DEV_INTEGRITY && (EXT3_FS || EXT4_FS))
+
config NR_QUICK
int
depends on QUICKLIST
diff --git a/mm/bounce.c b/mm/bounce.c
index 0420867..fa11935 100644
--- a/mm/bounce.c
+++ b/mm/bounce.c
@@ -178,6 +178,38 @@ static void bounce_end_io_read_isa(struct bio *bio, int err)
__bounce_end_io_read(bio, isa_page_pool, err);
}

+#ifdef CONFIG_NEED_BOUNCE_POOL
+static int might_snapshot_stable_page_write(struct bio **bio_orig)
+{
+ return bio_data_dir(*bio_orig) == WRITE;
+}
+
+static int should_snapshot_stable_pages(struct page *page, int rw)
+{
+ struct backing_dev_info *bdi;
+ struct address_space *mapping = page_mapping(page);
+
+ if (!mapping)
+ return 0;
+ bdi = mapping->backing_dev_info;
+ if (!bdi_cap_stable_pages_required(bdi))
+ return 0;
+
+ return mapping->host->i_sb->s_flags & MS_SNAP_STABLE &&
+ rw == WRITE;
+}
+#else
+static int might_snapshot_stable_page_write(struct bio **bio_orig)
+{
+ return 0;
+}
+
+static int should_snapshot_static_pages(struct page *page, int rw)
+{
+ return 0;
+}
+#endif /* CONFIG_NEED_BOUNCE_POOL */
+
static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
mempool_t *pool)
{
@@ -192,7 +224,8 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
/*
* is destination page below bounce pfn?
*/
- if (page_to_pfn(page) <= queue_bounce_pfn(q))
+ if (page_to_pfn(page) <= queue_bounce_pfn(q) &&
+ !should_snapshot_stable_pages(page, rw))
continue;

/*
@@ -284,7 +317,8 @@ void blk_queue_bounce(struct request_queue *q, struct bio **bio_orig)
* don't waste time iterating over bio segments
*/
if (!(q->bounce_gfp & GFP_DMA)) {
- if (queue_bounce_pfn(q) >= blk_max_pfn)
+ if (queue_bounce_pfn(q) >= blk_max_pfn &&
+ !might_snapshot_stable_page_write(bio_orig))
return;
pool = page_pool;
} else {
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 3e4a8cc..fbd8efb 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2291,6 +2291,10 @@ void wait_for_stable_page(struct page *page)

if (!bdi_cap_stable_pages_required(bdi))
return;
+#ifdef CONFIG_NEED_BOUNCE_POOL
+ if (mapping->host->i_sb->s_flags & MS_SNAP_STABLE)
+ return;
+#endif /* CONFIG_NEED_BOUNCE_POOL */

wait_on_page_writeback(page);
}


2012-12-13 08:07:47

by Darrick J. Wong

Subject: [PATCH 1/4] bdi: Allow block devices to say that they require stable page writes

This creates a per-backing-device flag that tracks whether or not pages must be
held immutable during writeout. Eventually it will be used to waive
wait_on_page_writeback() if nobody requires stable pages.

Signed-off-by: Darrick J. Wong <[email protected]>
---
Documentation/ABI/testing/sysfs-class-bdi | 5 +++++
block/blk-integrity.c | 4 ++++
include/linux/backing-dev.h | 6 ++++++
mm/backing-dev.c | 11 +++++++++++
4 files changed, 26 insertions(+)


diff --git a/Documentation/ABI/testing/sysfs-class-bdi b/Documentation/ABI/testing/sysfs-class-bdi
index 5f50097..d773d56 100644
--- a/Documentation/ABI/testing/sysfs-class-bdi
+++ b/Documentation/ABI/testing/sysfs-class-bdi
@@ -48,3 +48,8 @@ max_ratio (read-write)
most of the write-back cache. For example in case of an NFS
mount that is prone to get stuck, or a FUSE mount which cannot
be trusted to play fair.
+
+stable_pages_required (read-only)
+
+ If set, the backing device requires that all pages comprising a write
+ request must not be changed until writeout is complete.
diff --git a/block/blk-integrity.c b/block/blk-integrity.c
index da2a818..dabd221 100644
--- a/block/blk-integrity.c
+++ b/block/blk-integrity.c
@@ -420,6 +420,8 @@ int blk_integrity_register(struct gendisk *disk, struct blk_integrity *template)
} else
bi->name = bi_unsupported_name;

+ disk->queue->backing_dev_info.capabilities |= BDI_CAP_STABLE_WRITES;
+
return 0;
}
EXPORT_SYMBOL(blk_integrity_register);
@@ -438,6 +440,8 @@ void blk_integrity_unregister(struct gendisk *disk)
if (!disk || !disk->integrity)
return;

+ disk->queue->backing_dev_info.capabilities &= ~BDI_CAP_STABLE_WRITES;
+
bi = disk->integrity;

kobject_uevent(&bi->kobj, KOBJ_REMOVE);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 2a9a9ab..085501f 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -253,6 +253,7 @@ int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned int max_ratio);
#define BDI_CAP_EXEC_MAP 0x00000040
#define BDI_CAP_NO_ACCT_WB 0x00000080
#define BDI_CAP_SWAP_BACKED 0x00000100
+#define BDI_CAP_STABLE_WRITES 0x00000200

#define BDI_CAP_VMFLAGS \
(BDI_CAP_READ_MAP | BDI_CAP_WRITE_MAP | BDI_CAP_EXEC_MAP)
@@ -307,6 +308,11 @@ long wait_iff_congested(struct zone *zone, int sync, long timeout);
int pdflush_proc_obsolete(struct ctl_table *table, int write,
void __user *buffer, size_t *lenp, loff_t *ppos);

+static inline bool bdi_cap_stable_pages_required(struct backing_dev_info *bdi)
+{
+ return bdi->capabilities & BDI_CAP_STABLE_WRITES;
+}
+
static inline bool bdi_cap_writeback_dirty(struct backing_dev_info *bdi)
{
return !(bdi->capabilities & BDI_CAP_NO_WRITEBACK);
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index d3ca2b3..41733c5 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -221,12 +221,23 @@ static ssize_t max_ratio_store(struct device *dev,
}
BDI_SHOW(max_ratio, bdi->max_ratio)

+static ssize_t stable_pages_required_show(struct device *dev,
+ struct device_attribute *attr,
+ char *page)
+{
+ struct backing_dev_info *bdi = dev_get_drvdata(dev);
+
+ return snprintf(page, PAGE_SIZE-1, "%d\n",
+ bdi_cap_stable_pages_required(bdi) ? 1 : 0);
+}
+
#define __ATTR_RW(attr) __ATTR(attr, 0644, attr##_show, attr##_store)

static struct device_attribute bdi_dev_attrs[] = {
__ATTR_RW(read_ahead_kb),
__ATTR_RW(min_ratio),
__ATTR_RW(max_ratio),
+ __ATTR_RO(stable_pages_required),
__ATTR_NULL,
};


2012-12-14 01:48:09

by Andy Lutomirski

Subject: Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

On 12/13/2012 12:08 AM, Darrick J. Wong wrote:
> Several complaints have been received regarding long file write latencies when
> memory pages must be held stable during writeback. Since it might not be
> acceptable to stall programs for the entire duration of a page write (which may
> take many milliseconds even on good hardware), enable a second strategy wherein
> pages are snapshotted as part of submit_bio; the snapshot can be held stable
> while writes continue.
>
> This provides a band-aid to provide stable page writes on jbd without needing
> to backport the fixed locking scheme in jbd2. A mount option is added to ext4
> to allow administrators to enable it there.

I'm a bit confused as to what it has to do with ext3. Wouldn't this be
useful as a mount option everywhere, though?

If this becomes widely used, would it be better to snapshot on
wait_for_stable_page instead of on io submission?

FWIW, I'm about to pound pretty hard on this whole patchset on a box
that doesn't need stable pages. I'll let you know how it goes.

--Andy

2012-12-14 02:10:49

by Darrick J. Wong

Subject: Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

On Thu, Dec 13, 2012 at 05:48:06PM -0800, Andy Lutomirski wrote:
> On 12/13/2012 12:08 AM, Darrick J. Wong wrote:
> > Several complaints have been received regarding long file write latencies when
> > memory pages must be held stable during writeback. Since it might not be
> > acceptable to stall programs for the entire duration of a page write (which may
> > take many milliseconds even on good hardware), enable a second strategy wherein
> > pages are snapshotted as part of submit_bio; the snapshot can be held stable
> > while writes continue.
> >
> > This provides a band-aid to provide stable page writes on jbd without needing
> > to backport the fixed locking scheme in jbd2. A mount option is added to ext4
> > to allow administrators to enable it there.
>
> I'm a bit confused as to what it has to do with ext3. Wouldn't this be
> useful as a mount option everywhere, though?

ext3 requires snapshots; the rest are ok with either strategy.

*If* snapshotting is generally liked, then yes I'll go redo it as a vfs mount
option.

> If this becomes widely used, would it be better to snapshot on
> wait_for_stable_page instead of on io submission?

That really depends on how long you can afford to wait and how much free
memory you have. :) It's all a big tradeoff between write latency and
consumption of memory pages and bandwidth, and one that I doubt I'm qualified
to make for everyone.

> FWIW, I'm about to pound pretty hard on this whole patchset on a box
> that doesn't need stable pages. I'll let you know how it goes.

Yay!

--D

2012-12-14 03:33:39

by Dave Chinner

Subject: Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

On Thu, Dec 13, 2012 at 06:10:49PM -0800, Darrick J. Wong wrote:
> On Thu, Dec 13, 2012 at 05:48:06PM -0800, Andy Lutomirski wrote:
> > On 12/13/2012 12:08 AM, Darrick J. Wong wrote:
> > > Several complaints have been received regarding long file write latencies when
> > > memory pages must be held stable during writeback. Since it might not be
> > > acceptable to stall programs for the entire duration of a page write (which may
> > > take many milliseconds even on good hardware), enable a second strategy wherein
> > > pages are snapshotted as part of submit_bio; the snapshot can be held stable
> > > while writes continue.
> > >
> > > This provides a band-aid to provide stable page writes on jbd without needing
> > > to backport the fixed locking scheme in jbd2. A mount option is added to ext4
> > > to allow administrators to enable it there.
> >
> > I'm a bit confused as to what it has to do with ext3. Wouldn't this be
> > useful as a mount option everywhere, though?
>
> ext3 requires snapshots; the rest are ok with either strategy.
>
> *If* snapshotting is generally liked, then yes I'll go redo it as a vfs mount
> option.

It's copying every single IO, right? If so, then please don't
propagate any further than is necessary to fix the broken
filesystems...

Cheers,

Dave.
--
Dave Chinner
[email protected]

2012-12-14 19:43:40

by Darrick J. Wong

Subject: Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

On Fri, Dec 14, 2012 at 02:33:34PM +1100, Dave Chinner wrote:
> On Thu, Dec 13, 2012 at 06:10:49PM -0800, Darrick J. Wong wrote:
> > On Thu, Dec 13, 2012 at 05:48:06PM -0800, Andy Lutomirski wrote:
> > > On 12/13/2012 12:08 AM, Darrick J. Wong wrote:
> > > > Several complaints have been received regarding long file write latencies when
> > > > memory pages must be held stable during writeback. Since it might not be
> > > > acceptable to stall programs for the entire duration of a page write (which may
> > > > take many milliseconds even on good hardware), enable a second strategy wherein
> > > > pages are snapshotted as part of submit_bio; the snapshot can be held stable
> > > > while writes continue.
> > > >
> > > > This provides a band-aid to provide stable page writes on jbd without needing
> > > > to backport the fixed locking scheme in jbd2. A mount option is added to ext4
> > > > to allow administrators to enable it there.
> > >
> > > I'm a bit confused as to what it has to do with ext3. Wouldn't this be
> > > useful as a mount option everywhere, though?
> >
> > ext3 requires snapshots; the rest are ok with either strategy.
> >
> > *If* snapshotting is generally liked, then yes I'll go redo it as a vfs mount
> > option.
>
> It's copying every single IO, right? If so, then please don't
> propagate any further than is necessary to fix the broken
> filesystems...

Yup. I wasn't intending this flag for general service, though I /am/ curious
to hear if anyone sees any substantive performance difference with snapshots.

--D
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> [email protected]

2012-12-15 01:12:37

by Andy Lutomirski

Subject: Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

On Thu, Dec 13, 2012 at 6:10 PM, Darrick J. Wong
<[email protected]> wrote:
> On Thu, Dec 13, 2012 at 05:48:06PM -0800, Andy Lutomirski wrote:
>> On 12/13/2012 12:08 AM, Darrick J. Wong wrote:
>> > Several complaints have been received regarding long file write latencies when
>> > memory pages must be held stable during writeback. Since it might not be
>> > acceptable to stall programs for the entire duration of a page write (which may
>> > take many milliseconds even on good hardware), enable a second strategy wherein
>> > pages are snapshotted as part of submit_bio; the snapshot can be held stable
>> > while writes continue.
>> >
>> > This provides a band-aid to provide stable page writes on jbd without needing
>> > to backport the fixed locking scheme in jbd2. A mount option is added to ext4
>> > to allow administrators to enable it there.
>>
>> I'm a bit confused as to what it has to do with ext3. Wouldn't this be
>> useful as a mount option everywhere, though?
>
> ext3 requires snapshots; the rest are ok with either strategy.
>
> *If* snapshotting is generally liked, then yes I'll go redo it as a vfs mount
> option.
>
>> If this becomes widely used, would it be better to snapshot on
>> wait_for_stable_page instead of on io submission?
>
> That really depends on how long you can afford to wait and how much free
> memory you have. :) It's all a big tradeoff between write latency and
> consumption of memory pages and bandwidth, and one that I doubt I'm qualified
> to make for everyone.
>
>> FWIW, I'm about to pound pretty hard on this whole patchset on a box
>> that doesn't need stable pages. I'll let you know how it goes.
>
> Yay!
>
> --D

It survived. I hit at least one mm bug, but I really don't think it's
a problem with your code. (I have not tried this workload on Linux
3.7 at all before. It normally runs on 3.5.) The box in question is
ext4 on LVM on dm-crypt on (hardware) RAID 5 on hpsa, which should not
need stable pages.

The majority of the data written (that wasn't unlinked before it was
dropped from cache) was checksummed when written and verified later.
Most of this data was written using mmap. This workload hammers the
vm concurrently in several threads, and it frequently stalls when
stable pages are enabled, so it's probably exercising the code
decently well.

Feel free to add Tested-by: Andy Lutomirski <[email protected]>

--Andy

2012-12-15 02:03:21

by Darrick J. Wong

Subject: Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

On Fri, Dec 14, 2012 at 05:12:37PM -0800, Andy Lutomirski wrote:
> On Thu, Dec 13, 2012 at 6:10 PM, Darrick J. Wong
> <[email protected]> wrote:
> > On Thu, Dec 13, 2012 at 05:48:06PM -0800, Andy Lutomirski wrote:
> >> On 12/13/2012 12:08 AM, Darrick J. Wong wrote:
> >> > Several complaints have been received regarding long file write latencies when
> >> > memory pages must be held stable during writeback. Since it might not be
> >> > acceptable to stall programs for the entire duration of a page write (which may
> >> > take many milliseconds even on good hardware), enable a second strategy wherein
> >> > pages are snapshotted as part of submit_bio; the snapshot can be held stable
> >> > while writes continue.
> >> >
> >> > This provides a band-aid to provide stable page writes on jbd without needing
> >> > to backport the fixed locking scheme in jbd2. A mount option is added to ext4
> >> > to allow administrators to enable it there.
> >>
> >> I'm a bit confused as to what it has to do with ext3. Wouldn't this be
> >> useful as a mount option everywhere, though?
> >
> > ext3 requires snapshots; the rest are ok with either strategy.
> >
> > *If* snapshotting is generally liked, then yes I'll go redo it as a vfs mount
> > option.
> >
> >> If this becomes widely used, would it be better to snapshot on
> >> wait_for_stable_page instead of on io submission?
> >
> > That really depends on how long you can afford to wait and how much free
> > memory you have. :) It's all a big tradeoff between write latency and
> > consumption of memory pages and bandwidth, and one that I doubt I'm qualified
> > to make for everyone.
> >
> >> FWIW, I'm about to pound pretty hard on this whole patchset on a box
> >> that doesn't need stable pages. I'll let you know how it goes.
> >
> > Yay!
> >
> > --D
>
> It survived. I hit at least one mm bug, but I really don't think it's
> a problem with your code. (I have not tried this workload on Linux
> 3.7 at all before. It normally runs on 3.5.) The box in question is

Would you mind sending along the bug report so I can make sure?

> ext4 on LVM on dm-crypt on (hardware) RAID 5 on hpsa, which should not
> need stable pages.
>
> The majority of the data written (that wasn't unlinked before it was
> dropped from cache) was checksummed when written and verified later.
> Most of this data was written using mmap. This workload hammers the
> vm concurrently in several threads, and it frequently stalls when
> stable pages are enabled, so it's probably exercising the code
> decently well.

Did you observe any change in performance?

> Feel free to add Tested-by: Andy Lutomirski <[email protected]>

Will do! Thanks for the testing!

--D

2012-12-15 02:07:11

by Andy Lutomirski

Subject: Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

On Fri, Dec 14, 2012 at 6:01 PM, Darrick J. Wong
<[email protected]> wrote:
> On Fri, Dec 14, 2012 at 05:12:37PM -0800, Andy Lutomirski wrote:
>> It survived. I hit at least one mm bug, but I really don't think it's
>> a problem with your code. (I have not tried this workload on Linux
>> 3.7 at all before. It normally runs on 3.5.) The box in question is
>
> Would you mind sending along the bug report so I can make sure?

http://marc.info/?l=linux-mm&m=135553342803210&w=2

>
>> ext4 on LVM on dm-crypt on (hardware) RAID 5 on hpsa, which should not
>> need stable pages.
>>
>> The majority of the data written (that wasn't unlinked before it was
>> dropped from cache) was checksummed when written and verified later.
>> Most of this data was written using mmap. This workload hammers the
>> vm concurrently in several threads, and it frequently stalls when
>> stable pages are enabled, so it's probably exercising the code
>> decently well.
>
> Did you observe any change in performance?

No. But I'm comparing to 3.5 + butchery to remove stable pages. With
stable pages on, this workload performs terribly. (It's a soft
real-time thing, as you can possibly guess from my domain name, and
various latency monitoring things go nuts when stable pages are
active.)

Actually, performance appears to be improved, probably due to
https://lkml.org/lkml/2012/12/14/14, which I tested concurrently.

>
>> Feel free to add Tested-by: Andy Lutomirski <[email protected]>
>
> Will do! Thanks for the testing!

My pleasure. When these changes go in to an upstream kernel, they'll
represent a significant reduction in how much our kernel differs from
kernel.org's :) Thanks for fixing this.

--Andy

2012-12-16 16:13:36

by Zheng Liu

Subject: Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

On Thu, Dec 13, 2012 at 12:08:11AM -0800, Darrick J. Wong wrote:
> diff --git a/mm/bounce.c b/mm/bounce.c
> index 0420867..fa11935 100644
> --- a/mm/bounce.c
> +++ b/mm/bounce.c
> @@ -178,6 +178,38 @@ static void bounce_end_io_read_isa(struct bio *bio, int err)
> __bounce_end_io_read(bio, isa_page_pool, err);
> }
>
> +#ifdef CONFIG_NEED_BOUNCE_POOL
> +static int might_snapshot_stable_page_write(struct bio **bio_orig)
> +{
> + return bio_data_dir(*bio_orig) == WRITE;
> +}
> +
> +static int should_snapshot_stable_pages(struct page *page, int rw)
> +{
> + struct backing_dev_info *bdi;
> + struct address_space *mapping = page_mapping(page);
> +
> + if (!mapping)
> + return 0;
> + bdi = mapping->backing_dev_info;
> + if (!bdi_cap_stable_pages_required(bdi))
> + return 0;
> +
> + return mapping->host->i_sb->s_flags & MS_SNAP_STABLE &&
> + rw == WRITE;
> +}
> +#else
> +static int might_snapshot_stable_page_write(struct bio **bio_orig)
> +{
> + return 0;
> +}
> +
> +static int should_snapshot_static_pages(struct page *page, int rw)
^^^^^^
It should be _stable_.

Regards,
- Zheng
> +{
> + return 0;
> +}
> +#endif /* CONFIG_NEED_BOUNCE_POOL */
> +
> static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
> mempool_t *pool)
> {
> @@ -192,7 +224,8 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
> /*
> * is destination page below bounce pfn?
> */
> - if (page_to_pfn(page) <= queue_bounce_pfn(q))
> + if (page_to_pfn(page) <= queue_bounce_pfn(q) &&
> + !should_snapshot_stable_pages(page, rw))
> continue;
>
> /*
> @@ -284,7 +317,8 @@ void blk_queue_bounce(struct request_queue *q, struct bio **bio_orig)
> * don't waste time iterating over bio segments
> */
> if (!(q->bounce_gfp & GFP_DMA)) {
> - if (queue_bounce_pfn(q) >= blk_max_pfn)
> + if (queue_bounce_pfn(q) >= blk_max_pfn &&
> + !might_snapshot_stable_page_write(bio_orig))
> return;
> pool = page_pool;
> } else {

2012-12-17 09:04:56

by Jan Kara

Subject: Re: [PATCH 1/4] bdi: Allow block devices to say that they require stable page writes

On Thu 13-12-12 00:07:47, Darrick J. Wong wrote:
> This creates a per-backing-device flag that tracks whether or not pages must be
> held immutable during writeout. Eventually it will be used to waive
> wait_on_page_writeback() if nobody requires stable pages.
>
> Signed-off-by: Darrick J. Wong <[email protected]>
Looks good. You can add:
Reviewed-by: Jan Kara <[email protected]>

Honza
> ---
> Documentation/ABI/testing/sysfs-class-bdi | 5 +++++
> block/blk-integrity.c | 4 ++++
> include/linux/backing-dev.h | 6 ++++++
> mm/backing-dev.c | 11 +++++++++++
> 4 files changed, 26 insertions(+)
>
>
> diff --git a/Documentation/ABI/testing/sysfs-class-bdi b/Documentation/ABI/testing/sysfs-class-bdi
> index 5f50097..d773d56 100644
> --- a/Documentation/ABI/testing/sysfs-class-bdi
> +++ b/Documentation/ABI/testing/sysfs-class-bdi
> @@ -48,3 +48,8 @@ max_ratio (read-write)
> most of the write-back cache. For example in case of an NFS
> mount that is prone to get stuck, or a FUSE mount which cannot
> be trusted to play fair.
> +
> +stable_pages_required (read-only)
> +
> + If set, the backing device requires that all pages comprising a write
> + request must not be changed until writeout is complete.
> diff --git a/block/blk-integrity.c b/block/blk-integrity.c
> index da2a818..dabd221 100644
> --- a/block/blk-integrity.c
> +++ b/block/blk-integrity.c
> @@ -420,6 +420,8 @@ int blk_integrity_register(struct gendisk *disk, struct blk_integrity *template)
> } else
> bi->name = bi_unsupported_name;
>
> + disk->queue->backing_dev_info.capabilities |= BDI_CAP_STABLE_WRITES;
> +
> return 0;
> }
> EXPORT_SYMBOL(blk_integrity_register);
> @@ -438,6 +440,8 @@ void blk_integrity_unregister(struct gendisk *disk)
> if (!disk || !disk->integrity)
> return;
>
> + disk->queue->backing_dev_info.capabilities &= ~BDI_CAP_STABLE_WRITES;
> +
> bi = disk->integrity;
>
> kobject_uevent(&bi->kobj, KOBJ_REMOVE);
> diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
> index 2a9a9ab..085501f 100644
> --- a/include/linux/backing-dev.h
> +++ b/include/linux/backing-dev.h
> @@ -253,6 +253,7 @@ int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned int max_ratio);
> #define BDI_CAP_EXEC_MAP 0x00000040
> #define BDI_CAP_NO_ACCT_WB 0x00000080
> #define BDI_CAP_SWAP_BACKED 0x00000100
> +#define BDI_CAP_STABLE_WRITES 0x00000200
>
> #define BDI_CAP_VMFLAGS \
> (BDI_CAP_READ_MAP | BDI_CAP_WRITE_MAP | BDI_CAP_EXEC_MAP)
> @@ -307,6 +308,11 @@ long wait_iff_congested(struct zone *zone, int sync, long timeout);
> int pdflush_proc_obsolete(struct ctl_table *table, int write,
> void __user *buffer, size_t *lenp, loff_t *ppos);
>
> +static inline bool bdi_cap_stable_pages_required(struct backing_dev_info *bdi)
> +{
> + return bdi->capabilities & BDI_CAP_STABLE_WRITES;
> +}
> +
> static inline bool bdi_cap_writeback_dirty(struct backing_dev_info *bdi)
> {
> return !(bdi->capabilities & BDI_CAP_NO_WRITEBACK);
> diff --git a/mm/backing-dev.c b/mm/backing-dev.c
> index d3ca2b3..41733c5 100644
> --- a/mm/backing-dev.c
> +++ b/mm/backing-dev.c
> @@ -221,12 +221,23 @@ static ssize_t max_ratio_store(struct device *dev,
> }
> BDI_SHOW(max_ratio, bdi->max_ratio)
>
> +static ssize_t stable_pages_required_show(struct device *dev,
> + struct device_attribute *attr,
> + char *page)
> +{
> + struct backing_dev_info *bdi = dev_get_drvdata(dev);
> +
> + return snprintf(page, PAGE_SIZE-1, "%d\n",
> + bdi_cap_stable_pages_required(bdi) ? 1 : 0);
> +}
> +
> #define __ATTR_RW(attr) __ATTR(attr, 0644, attr##_show, attr##_store)
>
> static struct device_attribute bdi_dev_attrs[] = {
> __ATTR_RW(read_ahead_kb),
> __ATTR_RW(min_ratio),
> __ATTR_RW(max_ratio),
> + __ATTR_RO(stable_pages_required),
> __ATTR_NULL,
> };
>
>
--
Jan Kara <[email protected]>
SUSE Labs, CR

2012-12-17 09:16:59

by Jan Kara

Subject: Re: [PATCH 2/4] mm: Only enforce stable page writes if the backing device requires it

On Thu 13-12-12 00:07:55, Darrick J. Wong wrote:
> Create a helper function to check if a backing device requires stable page
> writes and, if so, perform the necessary wait. Then, make it so that all
> points in the memory manager that handle making pages writable use the helper
> function. This should provide stable page write support to most filesystems,
> while eliminating unnecessary waiting for devices that don't require the
> feature.
There are two more places that would seem to need conversion from
wait_on_page_writeback() to wait_for_stable_page():
gfs2_page_mkwrite()
nilfs_page_mkwrite()
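
Presumably the conversion there is the same one-line substitution as in the
other ->page_mkwrite handlers in this series; roughly (sketch only -- the
actual context in gfs2_page_mkwrite()/nilfs_page_mkwrite() is not shown here):

	lock_page(page);
	/* ... handler-specific checks and buffer mapping ... */
	set_page_dirty(page);
	wait_for_stable_page(page);	/* was: wait_on_page_writeback(page) */
	return VM_FAULT_LOCKED;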

Otherwise the patch looks OK.

Honza


> Signed-off-by: Darrick J. Wong <[email protected]>
> ---
> fs/buffer.c | 2 +-
> fs/ext4/inode.c | 2 +-
> include/linux/pagemap.h | 1 +
> mm/filemap.c | 3 ++-
> mm/page-writeback.c | 20 ++++++++++++++++++++
> 5 files changed, 25 insertions(+), 3 deletions(-)
>
>
> diff --git a/fs/buffer.c b/fs/buffer.c
> index ec0aca8..c562136 100644
> --- a/fs/buffer.c
> +++ b/fs/buffer.c
> @@ -2363,7 +2363,7 @@ int __block_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf,
> if (unlikely(ret < 0))
> goto out_unlock;
> set_page_dirty(page);
> - wait_on_page_writeback(page);
> + wait_for_stable_page(page);
> return 0;
> out_unlock:
> unlock_page(page);
> diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> index b3c243b..1bfc79f 100644
> --- a/fs/ext4/inode.c
> +++ b/fs/ext4/inode.c
> @@ -4814,7 +4814,7 @@ int ext4_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
> if (!walk_page_buffers(NULL, page_buffers(page), 0, len, NULL,
> ext4_bh_unmapped)) {
> /* Wait so that we don't change page under IO */
> - wait_on_page_writeback(page);
> + wait_for_stable_page(page);
> ret = VM_FAULT_LOCKED;
> goto out;
> }
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index e42c762..ea631ee 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -398,6 +398,7 @@ static inline void wait_on_page_writeback(struct page *page)
> }
>
> extern void end_page_writeback(struct page *page);
> +void wait_for_stable_page(struct page *page);
>
> /*
> * Add an arbitrary waiter to a page's wait queue
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 83efee7..5577dc8 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1728,6 +1728,7 @@ int filemap_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
> * see the dirty page and writeprotect it again.
> */
> set_page_dirty(page);
> + wait_for_stable_page(page);
> out:
> sb_end_pagefault(inode->i_sb);
> return ret;
> @@ -2274,7 +2275,7 @@ repeat:
> return NULL;
> }
> found:
> - wait_on_page_writeback(page);
> + wait_for_stable_page(page);
> return page;
> }
> EXPORT_SYMBOL(grab_cache_page_write_begin);
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index 830893b..3e4a8cc 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -2275,3 +2275,23 @@ int mapping_tagged(struct address_space *mapping, int tag)
> return radix_tree_tagged(&mapping->page_tree, tag);
> }
> EXPORT_SYMBOL(mapping_tagged);
> +
> +/**
> + * wait_for_stable_page() - wait for writeback to finish, if necessary.
> + * @page: The page to wait on.
> + *
> + * This function determines if the given page is related to a backing device
> + * that requires page contents to be held stable during writeback. If so, then
> + * it will wait for any pending writeback to complete.
> + */
> +void wait_for_stable_page(struct page *page)
> +{
> + struct address_space *mapping = page_mapping(page);
> + struct backing_dev_info *bdi = mapping->backing_dev_info;
> +
> + if (!bdi_cap_stable_pages_required(bdi))
> + return;
> +
> + wait_on_page_writeback(page);
> +}
> +EXPORT_SYMBOL_GPL(wait_for_stable_page);
>
--
Jan Kara <[email protected]>
SUSE Labs, CR

2012-12-17 10:11:46

by Jan Kara

Subject: Re: [PATCH 3/4] 9pfs: Fix filesystem to wait for stable page writeback

On Thu 13-12-12 00:08:02, Darrick J. Wong wrote:
> Fix up the ->page_mkwrite handler to provide stable page writes if necessary.
>
> Signed-off-by: Darrick J. Wong <[email protected]>
Looks good. Also UBIFS and OCFS2 seem to need similar treatment. Patches
attached...

Honza
> ---
> fs/9p/vfs_file.c | 1 +
> 1 file changed, 1 insertion(+)
>
>
> diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
> index c2483e9..357260b 100644
> --- a/fs/9p/vfs_file.c
> +++ b/fs/9p/vfs_file.c
> @@ -620,6 +620,7 @@ v9fs_vm_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
> lock_page(page);
> if (page->mapping != inode->i_mapping)
> goto out_unlock;
> + wait_for_stable_page(page);
>
> return VM_FAULT_LOCKED;
> out_unlock:
>
--
Jan Kara <[email protected]>
SUSE Labs, CR


Attachments:
0001-ocfs2-Wait-for-page-writeback-to-provide-stable-page.patch (964.00 B)
0002-ubifs-Wait-for-page-writeback-to-provide-stable-page.patch (946.00 B)

2012-12-17 10:24:13

by Jan Kara

Subject: Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

On Thu 13-12-12 00:08:11, Darrick J. Wong wrote:
> Several complaints have been received regarding long file write latencies when
> memory pages must be held stable during writeback. Since it might not be
> acceptable to stall programs for the entire duration of a page write (which may
> take many milliseconds even on good hardware), enable a second strategy wherein
> pages are snapshotted as part of submit_bio; the snapshot can be held stable
> while writes continue.
>
> This provides a band-aid to provide stable page writes on jbd without needing
> to backport the fixed locking scheme in jbd2. A mount option is added to ext4
> to allow administrators to enable it there.
>
> For those wondering about the ext3 bandage -- fixing the jbd locking (which was
> done as part of ext4dev years ago) is a lot of surgery, and setting
> PG_writeback on data pages when we actually hold the page lock dropped ext3
> performance by nearly an order of magnitude. If we're going to migrate iscsi
> and raid to use stable page writes, the complaints about high latency will
> likely return. We might as well centralize their page snapshotting thing to
> one place.
Umm, I kind of like this solution for ext3...

> diff --git a/mm/Kconfig b/mm/Kconfig
> index a3f8ddd..78db0e1 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -224,6 +224,16 @@ config BOUNCE
> def_bool y
> depends on BLOCK && MMU && (ZONE_DMA || HIGHMEM)
>
> +# On the 'tile' arch, USB OHCI needs the bounce pool since tilegx will often
> +# have more than 4GB of memory, but we don't currently use the IOTLB to present
> +# a 32-bit address to OHCI. So we need to use a bounce pool instead.
> +#
> +# We also use the bounce pool to provide stable page writes for users that
> +# don't (or can't) afford the wait latency.
> +config NEED_BOUNCE_POOL
> + bool
> + default y if (TILE && USB_OHCI_HCD) || (BLK_DEV_INTEGRITY && (EXT3_FS || EXT4_FS))
> +
This means that NEED_BOUNCE_POOL is going to be enabled on pretty much
any distro kernel...

> config NR_QUICK
> int
> depends on QUICKLIST
> diff --git a/mm/bounce.c b/mm/bounce.c
> index 0420867..fa11935 100644
> --- a/mm/bounce.c
> +++ b/mm/bounce.c
> @@ -178,6 +178,38 @@ static void bounce_end_io_read_isa(struct bio *bio, int err)
> __bounce_end_io_read(bio, isa_page_pool, err);
> }
>
> +#ifdef CONFIG_NEED_BOUNCE_POOL
> +static int might_snapshot_stable_page_write(struct bio **bio_orig)
> +{
> + return bio_data_dir(*bio_orig) == WRITE;
> +}
... so might_snapshot_stable_page_write() will be true for each
write. And thus blk_queue_bounce() becomes considerably more expensive?
Also calling should_snapshot_stable_pages() for every page seems to be
stupid since its result is going to be the same for all the pages in the
bio (well, I could imagine setups where it won't be but do we want to
support them?).

So cannot we just make a function like should_snapshot_stable_pages() to
test whether we really need the bouncing, use it in blk_queue_bounce() and
then pass the information to __blk_queue_bounce() if needed?

Honza

> +static int should_snapshot_stable_pages(struct page *page, int rw)
> +{
> + struct backing_dev_info *bdi;
> + struct address_space *mapping = page_mapping(page);
> +
> + if (!mapping)
> + return 0;
> + bdi = mapping->backing_dev_info;
> + if (!bdi_cap_stable_pages_required(bdi))
> + return 0;
> +
> + return mapping->host->i_sb->s_flags & MS_SNAP_STABLE &&
> + rw == WRITE;
> +}
> +#else
> +static int might_snapshot_stable_page_write(struct bio **bio_orig)
> +{
> + return 0;
> +}
> +
> +static int should_snapshot_static_pages(struct page *page, int rw)
> +{
> + return 0;
> +}
> +#endif /* CONFIG_NEED_BOUNCE_POOL */
> +
> static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
> mempool_t *pool)
> {
> @@ -192,7 +224,8 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
> /*
> * is destination page below bounce pfn?
> */
> - if (page_to_pfn(page) <= queue_bounce_pfn(q))
> + if (page_to_pfn(page) <= queue_bounce_pfn(q) &&
> + !should_snapshot_stable_pages(page, rw))
> continue;
>
> /*
> @@ -284,7 +317,8 @@ void blk_queue_bounce(struct request_queue *q, struct bio **bio_orig)
> * don't waste time iterating over bio segments
> */
> if (!(q->bounce_gfp & GFP_DMA)) {
> - if (queue_bounce_pfn(q) >= blk_max_pfn)
> + if (queue_bounce_pfn(q) >= blk_max_pfn &&
> + !might_snapshot_stable_page_write(bio_orig))
> return;
> pool = page_pool;
> } else {
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index 3e4a8cc..fbd8efb 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -2291,6 +2291,10 @@ void wait_for_stable_page(struct page *page)
>
> if (!bdi_cap_stable_pages_required(bdi))
> return;
> +#ifdef CONFIG_NEED_BOUNCE_POOL
> + if (mapping->host->i_sb->s_flags & MS_SNAP_STABLE)
> + return;
> +#endif /* CONFIG_NEED_BOUNCE_POOL */
>
> wait_on_page_writeback(page);
> }
>
--
Jan Kara <[email protected]>
SUSE Labs, CR

2012-12-17 22:55:54

by Darrick J. Wong

Subject: Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

On Fri, Dec 14, 2012 at 06:06:50PM -0800, Andy Lutomirski wrote:
> On Fri, Dec 14, 2012 at 6:01 PM, Darrick J. Wong
> <[email protected]> wrote:
> > On Fri, Dec 14, 2012 at 05:12:37PM -0800, Andy Lutomirski wrote:
> >> It survived. I hit at least one mm bug, but I really don't think it's
> >> a problem with your code. (I have not tried this workload on Linux
> >> 3.7 at all before. It normally runs on 3.5.) The box in question is
> >
> > Would you mind sending along the bug report so I can make sure?
>
> http://marc.info/?l=linux-mm&m=135553342803210&w=2

Hm, this looks like a hugepages thing, which (afaik) doesn't touch fs code at
all. Looks like this patchset is in the clear.

> >
> >> ext4 on LVM on dm-crypt on (hardware) RAID 5 on hpsa, which should not
> >> need stable pages.
> >>
> >> The majority of the data written (that wasn't unlinked before it was
> >> dropped from cache) was checksummed when written and verified later.
> >> Most of this data was written using mmap. This workload hammers the
> >> vm concurrently in several threads, and it frequently stalls when
> >> stable pages are enabled, so it's probably exercising the code
> >> decently well.
> >
> > Did you observe any change in performance?
>
> No. But I'm comparing to 3.5 + butchery to remove stable pages. With
> stable pages on, this workload performs terribly. (It's a soft
> real-time thing, as you can possibly guess from my domain name, and
> various latency monitoring things go nuts when stable pages are
> active.)

Well, I guess that's good. :)

> Actually, performance appears to be improved, probably due to
> https://lkml.org/lkml/2012/12/14/14, which I tested concurrently.
>
> >
> >> Feel free to add Tested-by: Andy Lutomirski <[email protected]>
> >
> > Will do! Thanks for the testing!
>
> My pleasure. When these changes go in to an upstream kernel, they'll
> represent a significant reduction in how much our kernel differs from
> kernel.org's :) Thanks for fixing this.

No problem!

--D

2012-12-17 22:57:40

by Darrick J. Wong

Subject: Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

On Mon, Dec 17, 2012 at 12:13:36AM +0800, Zheng Liu wrote:
> On Thu, Dec 13, 2012 at 12:08:11AM -0800, Darrick J. Wong wrote:
> > diff --git a/mm/bounce.c b/mm/bounce.c
> > index 0420867..fa11935 100644
> > --- a/mm/bounce.c
> > +++ b/mm/bounce.c
> > @@ -178,6 +178,38 @@ static void bounce_end_io_read_isa(struct bio *bio, int err)
> > __bounce_end_io_read(bio, isa_page_pool, err);
> > }
> >
> > +#ifdef CONFIG_NEED_BOUNCE_POOL
> > +static int might_snapshot_stable_page_write(struct bio **bio_orig)
> > +{
> > + return bio_data_dir(*bio_orig) == WRITE;
> > +}
> > +
> > +static int should_snapshot_stable_pages(struct page *page, int rw)
> > +{
> > + struct backing_dev_info *bdi;
> > + struct address_space *mapping = page_mapping(page);
> > +
> > + if (!mapping)
> > + return 0;
> > + bdi = mapping->backing_dev_info;
> > + if (!bdi_cap_stable_pages_required(bdi))
> > + return 0;
> > +
> > + return mapping->host->i_sb->s_flags & MS_SNAP_STABLE &&
> > + rw == WRITE;
> > +}
> > +#else
> > +static int might_snapshot_stable_page_write(struct bio **bio_orig)
> > +{
> > + return 0;
> > +}
> > +
> > +static int should_snapshot_static_pages(struct page *page, int rw)
> ^^^^^^
> It should be _stable_.

Good catch! Thank you!

--D
>
> Regards,
> - Zheng
> > +{
> > + return 0;
> > +}
> > +#endif /* CONFIG_NEED_BOUNCE_POOL */
> > +
> > static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
> > mempool_t *pool)
> > {
> > @@ -192,7 +224,8 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
> > /*
> > * is destination page below bounce pfn?
> > */
> > - if (page_to_pfn(page) <= queue_bounce_pfn(q))
> > + if (page_to_pfn(page) <= queue_bounce_pfn(q) &&
> > + !should_snapshot_stable_pages(page, rw))
> > continue;
> >
> > /*
> > @@ -284,7 +317,8 @@ void blk_queue_bounce(struct request_queue *q, struct bio **bio_orig)
> > * don't waste time iterating over bio segments
> > */
> > if (!(q->bounce_gfp & GFP_DMA)) {
> > - if (queue_bounce_pfn(q) >= blk_max_pfn)
> > + if (queue_bounce_pfn(q) >= blk_max_pfn &&
> > + !might_snapshot_stable_page_write(bio_orig))
> > return;
> > pool = page_pool;
> > } else {

2012-12-17 23:20:24

by Darrick J. Wong

Subject: Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

On Mon, Dec 17, 2012 at 11:23:59AM +0100, Jan Kara wrote:
> On Thu 13-12-12 00:08:11, Darrick J. Wong wrote:
> > Several complaints have been received regarding long file write latencies when
> > memory pages must be held stable during writeback. Since it might not be
> > acceptable to stall programs for the entire duration of a page write (which may
> > take many milliseconds even on good hardware), enable a second strategy wherein
> > pages are snapshotted as part of submit_bio; the snapshot can be held stable
> > while writes continue.
> >
> > This provides a band-aid to provide stable page writes on jbd without needing
> > to backport the fixed locking scheme in jbd2. A mount option is added to ext4
> > to allow administrators to enable it there.
> >
> > For those wondering about the ext3 bandage -- fixing the jbd locking (which was
> > done as part of ext4dev years ago) is a lot of surgery, and setting
> > PG_writeback on data pages when we actually hold the page lock dropped ext3
> > performance by nearly an order of magnitude. If we're going to migrate iscsi
> > and raid to use stable page writes, the complaints about high latency will
> > likely return. We might as well centralize their page snapshotting thing to
> > one place.
> Umm, I kind of like this solution for ext3...

:)

> > diff --git a/mm/Kconfig b/mm/Kconfig
> > index a3f8ddd..78db0e1 100644
> > --- a/mm/Kconfig
> > +++ b/mm/Kconfig
> > @@ -224,6 +224,16 @@ config BOUNCE
> > def_bool y
> > depends on BLOCK && MMU && (ZONE_DMA || HIGHMEM)
> >
> > +# On the 'tile' arch, USB OHCI needs the bounce pool since tilegx will often
> > +# have more than 4GB of memory, but we don't currently use the IOTLB to present
> > +# a 32-bit address to OHCI. So we need to use a bounce pool instead.
> > +#
> > +# We also use the bounce pool to provide stable page writes for users that
> > +# don't (or can't) afford the wait latency.
> > +config NEED_BOUNCE_POOL
> > + bool
> > + default y if (TILE && USB_OHCI_HCD) || (BLK_DEV_INTEGRITY && (EXT3_FS || EXT4_FS))
> > +
> This means that NEED_BOUNCE_POOL is going to be enabled on pretty much
> any distro kernel...

I want to drop EXT4_FS from that default line. Then we only need it in the
case where EXT4_USE_FOR_EXT23=y hasn't taken the place of EXT3_FS=y.

<shrug> Has anyone actually done that?

Heh, even *I* haven't done that.

> > config NR_QUICK
> > int
> > depends on QUICKLIST
> > diff --git a/mm/bounce.c b/mm/bounce.c
> > index 0420867..fa11935 100644
> > --- a/mm/bounce.c
> > +++ b/mm/bounce.c
> > @@ -178,6 +178,38 @@ static void bounce_end_io_read_isa(struct bio *bio, int err)
> > __bounce_end_io_read(bio, isa_page_pool, err);
> > }
> >
> > +#ifdef CONFIG_NEED_BOUNCE_POOL
> > +static int might_snapshot_stable_page_write(struct bio **bio_orig)
> > +{
> > + return bio_data_dir(*bio_orig) == WRITE;
> > +}
> ... so might_snapshot_stable_page_write() will be true for each
> write. And thus blk_queue_bounce() becomes considerably more expensive?
> Also calling should_snapshot_stable_pages() for every page seems to be
> stupid since its result is going to be the same for all the pages in the
> bio (well, I could imagine setups where it won't be but do we want to
> support them?).
>
> So cannot we just make a function like should_snapshot_stable_pages() to
> test whether we really need the bouncing, use it in blk_queue_bounce() and
> then pass the information to __blk_queue_bounce() if needed?

Yes. I'd actually considered simply calling should_snapshot() on the first
bio_vec page and passing that information through, but thought I should play it
safe for the first revision.

However, a bio targets a single block device, so this seems like a safe
optimization.
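
Something along these lines, perhaps (rough sketch only; the extra argument to
__blk_queue_bounce() is hypothetical and the ISA pool handling is elided):

void blk_queue_bounce(struct request_queue *q, struct bio **bio_orig)
{
	/* Decide once per bio, based on the first bio_vec. */
	struct page *first_page = (*bio_orig)->bi_io_vec[0].bv_page;
	int force_bounce = should_snapshot_stable_pages(first_page,
						bio_data_dir(*bio_orig));
	mempool_t *pool = page_pool;

	if (!(q->bounce_gfp & GFP_DMA)) {
		if (queue_bounce_pfn(q) >= blk_max_pfn && !force_bounce)
			return;
	} else {
		pool = isa_page_pool;	/* ISA setup elided */
	}

	__blk_queue_bounce(q, bio_orig, pool, force_bounce);
}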

--D
>
> Honza
>
> > +static int should_snapshot_stable_pages(struct page *page, int rw)
> > +{
> > + struct backing_dev_info *bdi;
> > + struct address_space *mapping = page_mapping(page);
> > +
> > + if (!mapping)
> > + return 0;
> > + bdi = mapping->backing_dev_info;
> > + if (!bdi_cap_stable_pages_required(bdi))
> > + return 0;
> > +
> > + return mapping->host->i_sb->s_flags & MS_SNAP_STABLE &&
> > + rw == WRITE;
> > +}
> > +#else
> > +static int might_snapshot_stable_page_write(struct bio **bio_orig)
> > +{
> > + return 0;
> > +}
> > +
> > +static int should_snapshot_stable_pages(struct page *page, int rw)
> > +{
> > + return 0;
> > +}
> > +#endif /* CONFIG_NEED_BOUNCE_POOL */
> > +
> > static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
> > mempool_t *pool)
> > {
> > @@ -192,7 +224,8 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
> > /*
> > * is destination page below bounce pfn?
> > */
> > - if (page_to_pfn(page) <= queue_bounce_pfn(q))
> > + if (page_to_pfn(page) <= queue_bounce_pfn(q) &&
> > + !should_snapshot_stable_pages(page, rw))
> > continue;
> >
> > /*
> > @@ -284,7 +317,8 @@ void blk_queue_bounce(struct request_queue *q, struct bio **bio_orig)
> > * don't waste time iterating over bio segments
> > */
> > if (!(q->bounce_gfp & GFP_DMA)) {
> > - if (queue_bounce_pfn(q) >= blk_max_pfn)
> > + if (queue_bounce_pfn(q) >= blk_max_pfn &&
> > + !might_snapshot_stable_page_write(bio_orig))
> > return;
> > pool = page_pool;
> > } else {
> > diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> > index 3e4a8cc..fbd8efb 100644
> > --- a/mm/page-writeback.c
> > +++ b/mm/page-writeback.c
> > @@ -2291,6 +2291,10 @@ void wait_for_stable_page(struct page *page)
> >
> > if (!bdi_cap_stable_pages_required(bdi))
> > return;
> > +#ifdef CONFIG_NEED_BOUNCE_POOL
> > + if (mapping->host->i_sb->s_flags & MS_SNAP_STABLE)
> > + return;
> > +#endif /* CONFIG_NEED_BOUNCE_POOL */
> >
> > wait_on_page_writeback(page);
> > }
> >
> --
> Jan Kara <[email protected]>
> SUSE Labs, CR

2012-12-27 19:14:49

by OGAWA Hirofumi

[permalink] [raw]
Subject: Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

"Darrick J. Wong" <[email protected]> writes:

> diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
> index 780d4c6..0144fbb 100644
> --- a/include/uapi/linux/fs.h
> +++ b/include/uapi/linux/fs.h
> @@ -69,6 +69,7 @@ struct inodes_stat_t {
> #define MS_REMOUNT 32 /* Alter flags of a mounted FS */
> #define MS_MANDLOCK 64 /* Allow mandatory locks on an FS */
> #define MS_DIRSYNC 128 /* Directory modifications are synchronous */
> +#define MS_SNAP_STABLE 256 /* Snapshot pages during writeback, if needed */
> #define MS_NOATIME 1024 /* Do not update access times. */
> #define MS_NODIRATIME 2048 /* Do not update directory access times */
> #define MS_BIND 4096

I think this flag should be separated into "FS provides stable pages" and
"FS needs a bounce buffer for stable pages".

My fs (I guess btrfs too) provides stable pages in a better way, and
doesn't need to wait on the writeback flag either. What it needs is just
a way to avoid all that stable page machinery.
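
One possible shape of that split (purely a sketch; MS_OWN_STABLE is a
made-up name and is not part of the posted patches) would be a second
superblock flag that wait_for_stable_page() also honours:

        /*
         * Sketch only: MS_OWN_STABLE is a hypothetical flag meaning "the
         * filesystem keeps pages stable by itself".  Such a filesystem
         * would skip the generic wait here, and since it never sets
         * MS_SNAP_STABLE it would not be routed through the bounce pool
         * either.
         */
        void wait_for_stable_page(struct page *page)
        {
                struct address_space *mapping = page_mapping(page);
                struct backing_dev_info *bdi;

                if (!mapping)
                        return;
                bdi = mapping->backing_dev_info;
                if (!bdi_cap_stable_pages_required(bdi))
                        return;
                if (mapping->host->i_sb->s_flags &
                    (MS_SNAP_STABLE | MS_OWN_STABLE))
                        return;

                wait_on_page_writeback(page);
        }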

Thanks.
--
OGAWA Hirofumi <[email protected]>

2012-12-27 21:42:33

by Darrick J. Wong

[permalink] [raw]
Subject: Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

On Fri, Dec 28, 2012 at 04:14:49AM +0900, OGAWA Hirofumi wrote:
> "Darrick J. Wong" <[email protected]> writes:
>
> > diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
> > index 780d4c6..0144fbb 100644
> > --- a/include/uapi/linux/fs.h
> > +++ b/include/uapi/linux/fs.h
> > @@ -69,6 +69,7 @@ struct inodes_stat_t {
> > #define MS_REMOUNT 32 /* Alter flags of a mounted FS */
> > #define MS_MANDLOCK 64 /* Allow mandatory locks on an FS */
> > #define MS_DIRSYNC 128 /* Directory modifications are synchronous */
> > +#define MS_SNAP_STABLE 256 /* Snapshot pages during writeback, if needed */
> > #define MS_NOATIME 1024 /* Do not update access times. */
> > #define MS_NODIRATIME 2048 /* Do not update directory access times */
> > #define MS_BIND 4096
>
> I think this flag should be separated into "FS provides stable pages" and
> "FS needs a bounce buffer for stable pages".
>
> My fs (I guess btrfs too) provides stable pages in a better way, and
> doesn't need to wait on the writeback flag either. What it needs is just
> a way to avoid all that stable page machinery.

How does your fs (are we talking about vfat?) provide stable pages?

btrfs creates its own bdi and doesn't set the "stable pages required" flag, so
it already skips all the stable page stuff.

--D
>
> Thanks.
> --
> OGAWA Hirofumi <[email protected]>

2012-12-27 21:48:35

by OGAWA Hirofumi

[permalink] [raw]
Subject: Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

"Darrick J. Wong" <[email protected]> writes:

>> I think this flag should be separated into "FS provides stable pages" and
>> "FS needs a bounce buffer for stable pages".
>>
>> My fs (I guess btrfs too) provides stable pages in a better way, and
>> doesn't need to wait on the writeback flag either. What it needs is just
>> a way to avoid all that stable page machinery.
>
> How does your fs (are we talking about vfat?) provide stable pages?

Basically it copies the data to another page before modifying the data, but
only if the page has the writeback flag set. (This is about a new fs that is
still at the development stage.)
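
A very rough sketch of that idea (illustrative only, not tux3 code; every
name below is made up):

        /*
         * Sketch only: if the page is currently under writeback, give the
         * writer a freshly copied page to modify so the in-flight I/O keeps
         * seeing the old, stable contents.  How the copy is later hooked
         * back into the mapping is filesystem-specific and omitted here.
         */
        static struct page *examplefs_page_for_write(struct page *page)
        {
                struct page *copy;

                if (!PageWriteback(page))
                        return page;    /* no I/O in flight; modify in place */

                copy = alloc_page(GFP_NOFS);
                if (!copy)
                        return ERR_PTR(-ENOMEM);
                copy_highpage(copy, page);      /* duplicate current contents */

                return copy;
        }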

> btrfs creates its own bdi and doesn't set the "stable pages required"
> flag, so it already skips all the stable page stuff.

I see.
--
OGAWA Hirofumi <[email protected]>

2013-01-07 20:44:36

by Darrick J. Wong

[permalink] [raw]
Subject: Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

On Fri, Dec 28, 2012 at 06:48:35AM +0900, OGAWA Hirofumi wrote:
> "Darrick J. Wong" <[email protected]> writes:
>
> >> I think this flag should be separated into "FS provides stable pages" and
> >> "FS needs a bounce buffer for stable pages".
> >>
> >> My fs (I guess btrfs too) provides stable pages in a better way, and
> >> doesn't need to wait on the writeback flag either. What it needs is just
> >> a way to avoid all that stable page machinery.
> >
> > How does your fs (are we talking about vfat?) provide stable pages?
>
> Basically it copies the data to another page before modifying the data, but
> only if the page has the writeback flag set. (This is about a new fs that is
> still at the development stage.)

Aha, we're talking about tux3. Since (afaik) there aren't any other
filesystems requesting this flag and tux3 uses the device bdi, should I just
create a patch and send it to you so it'll be included when tux3 goes in?

--D

>
> > btrfs creates its own bdi and doesn't set the "stable pages required"
> > flag, so it already skips all the stable page stuff.
>
> I see.
> --
> OGAWA Hirofumi <[email protected]>

2013-01-08 09:45:28

by OGAWA Hirofumi

[permalink] [raw]
Subject: Re: [PATCH 4/4] block: Optionally snapshot page contents to provide stable pages during write

"Darrick J. Wong" <[email protected]> writes:

> On Fri, Dec 28, 2012 at 06:48:35AM +0900, OGAWA Hirofumi wrote:
>> "Darrick J. Wong" <[email protected]> writes:
>>
>> >> I think this flag should be separated into "FS provides stable pages" and
>> >> "FS needs a bounce buffer for stable pages".
>> >>
>> >> My fs (I guess btrfs too) provides stable pages in a better way, and
>> >> doesn't need to wait on the writeback flag either. What it needs is just
>> >> a way to avoid all that stable page machinery.
>> >
>> > How does your fs (are we talking about vfat?) provide stable pages?
>>
>> Basically it copies the data to another page before modifying the data, but
>> only if the page has the writeback flag set. (This is about a new fs that is
>> still at the development stage.)
>
> Aha, we're talking about tux3. Since (afaik) there aren't any other
> filesystems requesting this flag and tux3 uses the device bdi, should I just
> create a patch and send it to you so it'll be included when tux3 goes in?

If the core doesn't provide it, there's no need to waste your time. I will do it.

Thanks.
--
OGAWA Hirofumi <[email protected]>