2024-03-14 13:47:06

by Pankaj Raghav (Samsung)

Subject: [PATCH v3 00/11] enable bs > ps in XFS

From: Pankaj Raghav <[email protected]>

This is the third version of the series that enables block size > page size
(Large Block Size) in XFS. The context and motivation can be seen in the cover
letter of the RFC v1 [1]. We also recorded a talk about this effort at LPC [3],
for anyone who would like more context.

A lot of emphasis has been put on testing using kdevops. The testing has
been split into regression and progression.

Regression testing:
In regression testing, we ran the whole test suite to check for
regressions* on existing profiles due to the page cache changes.

No regressions were found with the patches applied on top.

*The regression baseline was created using a SOAK_DURATION of 2.5 hours,
with about 7-8 XFS test clusters looping fstests over 70 times. We then
scraped for critical failures (crashes, XFS or page cache asserts, or hung
tasks) and reported these to the community as well [4].

Progression testing:
For progression testing, we tested 8k, 16k, 32k and 64k block sizes.
To compare against existing support, an ARM VM with a 64k base page size
(without our patches) was used as a reference to identify failures actually
caused by LBS support on a 4k base page size system.

There are some common upstream failures for bs=64k that need to be
fixed [5].
There are also some tests that assume block size < page size and need to
be fixed. I have a tree with fixes for xfstests here [6], which I will be
sending to the list soon.

No new failures were found with the LBS support.

We've done some preliminary performance tests with fio on XFS with a 4k
block size against pmem and NVMe, with buffered IO and Direct IO, comparing
vanilla v6.8-rc4 vs. v6.8-rc4 with these patches applied, and detected no
regressions.

We also wrote an eBPF tool called blkalgn [7] to check whether the IO sent
to the device is aligned and at least one filesystem block in length.

Git tree:
https://github.com/linux-kdevops/linux/tree/large-block-minorder-6.8

[1] https://lore.kernel.org/lkml/[email protected]/
[2] https://lore.kernel.org/linux-xfs/[email protected]/
[3] https://www.youtube.com/watch?v=ar72r5Xf7x4
[4] https://github.com/linux-kdevops/kdevops/blob/master/docs/xfs-bugs.md
489 non-critical issues and 55 critical issues. We've determined and reported
that the 55 critical issues all fall into 5 common XFS asserts or hung tasks
and 2 memory management asserts.
[5] https://lore.kernel.org/linux-xfs/[email protected]/
[6] https://github.com/Panky-codes/xfstests/tree/lbs-fixes
[7] https://github.com/iovisor/bcc/pull/4813

Changes since v2:
- Simplified the filemap and readahead changes. (Thanks willy)
- Removed DEFINE_READAHEAD_ALIGN.
- Added minorder support to readahead_expand().

Changes since v1:
- Round up to nearest min nr pages in ra_init
- Calculate index in filemap_create instead of doing in
filemap_get_pages
- Remove unnecessary BUG_ONs in the delete path
- Use check_shl_overflow instead of check_mul_overflow
- Cast to uint32_t instead of unsigned long in xfs_stat_blksize

Changes since RFC v2:
- Move order 1 patch above the 1st patch
- Remove order == 1 conditional in `fs: Allow fine-grained control of
folio sizes`. This fixed generic/630 that was reported in the previous version.
- Hide the max order and expose `mapping_set_folio_min_order` instead.
- Add new helper mapping_start_index_align and DEFINE_READAHEAD_ALIGN
- Don't call `page_cache_ra_order` with min order in do_sync_mmap_readahead
- Simplify ondemand readahead by only aligning the start index at the end
- Don't cap ra_pages based on bdi->io_pages
- Use `check_mul_overflow` while calculating bytes in validate_fsb
- Remove config lbs option
- Add a warning while mounting a LBS kernel
- Add Acked-by and Reviewed-by from Hannes and Darrick.

Changes since RFC v1:
- Added willy's patch to enable order-1 folios.
- Unified common page cache effort from Hannes LBS work.
- Added a new helper min_nrpages and added CONFIG_THP for enabling mapping_large_folio_support
- Don't split a folio if it has minorder set. Remove the old code where we set extra pins if it has that requirement.
- Split the code in XFS between the validation of mapping count. Put the icache code changes with enabling bs > ps.
- Added CONFIG_XFS_LBS option
- Align the index in do_read_cache_folio()
- Removed truncate changes
- Fixed generic/091 with iomap changes to iomap_dio_zero function.
- Took care of folio truncation scenario in page_cache_ra_unbounded() that happens after read_pages if a folio was found.
- Squashed and moved commits around
- Rebased on top of v6.8-rc4

Hannes Reinecke (1):
readahead: rework loop in page_cache_ra_unbounded()

Luis Chamberlain (2):
filemap: allocate mapping_min_order folios in the page cache
readahead: round up file_ra_state->ra_pages to mapping_min_nrpages

Matthew Wilcox (Oracle) (2):
mm: Support order-1 folios in the page cache
fs: Allow fine-grained control of folio sizes

Pankaj Raghav (6):
readahead: allocate folios with mapping_min_order in readahead
mm: do not split a folio if it has minimum folio order requirement
iomap: fix iomap_dio_zero() for fs bs > system page size
xfs: expose block size in stat
xfs: make the calculation generic in xfs_sb_validate_fsb_count()
xfs: enable block size larger than page size support

fs/iomap/direct-io.c | 13 ++++-
fs/xfs/libxfs/xfs_ialloc.c | 5 ++
fs/xfs/libxfs/xfs_shared.h | 3 ++
fs/xfs/xfs_icache.c | 6 ++-
fs/xfs/xfs_iops.c | 2 +-
fs/xfs/xfs_mount.c | 10 +++-
fs/xfs/xfs_super.c | 10 +---
include/linux/huge_mm.h | 7 ++-
include/linux/pagemap.h | 100 ++++++++++++++++++++++++++++--------
mm/filemap.c | 26 ++++++----
mm/huge_memory.c | 36 +++++++++++--
mm/internal.h | 4 +-
mm/readahead.c | 101 +++++++++++++++++++++++++++++--------
13 files changed, 247 insertions(+), 76 deletions(-)


base-commit: e8f897f4afef0031fe618a8e94127a0934896aba
--
2.43.0



2024-03-14 13:47:12

by Pankaj Raghav (Samsung)

Subject: [PATCH v3 06/11] readahead: round up file_ra_state->ra_pages to mapping_min_nrpages

From: Luis Chamberlain <[email protected]>

As we will be adding multiples of mapping_min_nrpages to the page cache,
initialize file_ra_state->ra_pages with bdi->ra_pages rounded up to the
nearest mapping_min_nrpages.

Signed-off-by: Luis Chamberlain <[email protected]>
Signed-off-by: Pankaj Raghav <[email protected]>
---
mm/readahead.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 650834c033f0..50194fddecf1 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -138,7 +138,8 @@
void
file_ra_state_init(struct file_ra_state *ra, struct address_space *mapping)
{
- ra->ra_pages = inode_to_bdi(mapping->host)->ra_pages;
+ ra->ra_pages = round_up(inode_to_bdi(mapping->host)->ra_pages,
+ mapping_min_folio_nrpages(mapping));
ra->prev_pos = -1;
}
EXPORT_SYMBOL_GPL(file_ra_state_init);
--
2.43.0
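
As a quick illustration of the rounding above (assumed values, purely
illustrative): with a 16k minimum folio size on a 4k page system,
mapping_min_folio_nrpages() is 4, so a bdi ra_pages value that is not
already a multiple of 4 is rounded up to the next multiple.

/* Illustration only; round_up_pow2() mirrors the kernel's round_up()
 * for power-of-two alignment. */
#include <stdio.h>

static unsigned long round_up_pow2(unsigned long x, unsigned long y)
{
	return ((x - 1) | (y - 1)) + 1;	/* y must be a power of two */
}

int main(void)
{
	unsigned long min_nrpages = 4;	/* assumed: 16k block size, 4k pages */

	printf("%lu\n", round_up_pow2(30, min_nrpages));	/* 32 */
	printf("%lu\n", round_up_pow2(32, min_nrpages));	/* stays 32 */
	return 0;
}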


2024-03-14 13:47:19

by Pankaj Raghav (Samsung)

Subject: [PATCH v3 09/11] xfs: expose block size in stat

From: Pankaj Raghav <[email protected]>

For block size larger than page size, the unit of efficient IO is
the block size, not the page size. Leaving stat() to report
PAGE_SIZE as the block size causes test programs like fsx to issue
illegal ranges for operations that require block size alignment
(e.g. fallocate() insert range). Hence update the preferred IO size
to reflect the block size in this case.

This change is based on a patch originally from Dave Chinner.[1]

[1] https://lwn.net/ml/linux-fsdevel/[email protected]/

Signed-off-by: Pankaj Raghav <[email protected]>
Signed-off-by: Luis Chamberlain <[email protected]>
---
fs/xfs/xfs_iops.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c
index a0d77f5f512e..7ee829f7d708 100644
--- a/fs/xfs/xfs_iops.c
+++ b/fs/xfs/xfs_iops.c
@@ -543,7 +543,7 @@ xfs_stat_blksize(
return 1U << mp->m_allocsize_log;
}

- return PAGE_SIZE;
+ return max_t(uint32_t, PAGE_SIZE, mp->m_sb.sb_blocksize);
}

STATIC int
--
2.43.0
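
To illustrate why the reported value matters (a userspace sketch, purely
illustrative and not part of the series): tools like fsx take st_blksize
from stat() as the preferred I/O size and use it to align operations such
as FALLOC_FL_INSERT_RANGE, whose offset and length must be block aligned.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	struct stat st;
	int fd;

	if (argc < 2)
		return 1;
	fd = open(argv[1], O_RDWR);
	if (fd < 0 || fstat(fd, &st) < 0)
		return 1;

	/* With this patch, a file on an LBS XFS filesystem reports its
	 * block size here instead of PAGE_SIZE. */
	printf("preferred I/O size: %ld\n", (long)st.st_blksize);

	/* Block-aligned insert range derived from st_blksize. */
	return fallocate(fd, FALLOC_FL_INSERT_RANGE,
			 3 * (off_t)st.st_blksize, (off_t)st.st_blksize);
}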


2024-03-14 14:06:36

by Pankaj Raghav (Samsung)

Subject: [PATCH v3 04/11] readahead: rework loop in page_cache_ra_unbounded()

From: Hannes Reinecke <[email protected]>

Rework the loop in page_cache_ra_unbounded() to advance with
the number of pages in a folio instead of just one page at a time.

Signed-off-by: Hannes Reinecke <[email protected]>
Co-developed-by: Pankaj Raghav <[email protected]>
Signed-off-by: Pankaj Raghav <[email protected]>
Acked-by: Darrick J. Wong <[email protected]>
---
mm/readahead.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 369c70e2be42..37b938f4b54f 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -208,7 +208,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
struct address_space *mapping = ractl->mapping;
unsigned long index = readahead_index(ractl);
gfp_t gfp_mask = readahead_gfp_mask(mapping);
- unsigned long i;
+ unsigned long i = 0;

/*
* Partway through the readahead operation, we will have added
@@ -226,7 +226,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
/*
* Preallocate as many pages as we will need.
*/
- for (i = 0; i < nr_to_read; i++) {
+ while (i < nr_to_read) {
struct folio *folio = xa_load(&mapping->i_pages, index + i);

if (folio && !xa_is_value(folio)) {
@@ -239,8 +239,8 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
* not worth getting one just for that.
*/
read_pages(ractl);
- ractl->_index++;
- i = ractl->_index + ractl->_nr_pages - index - 1;
+ ractl->_index += folio_nr_pages(folio);
+ i = ractl->_index + ractl->_nr_pages - index;
continue;
}

@@ -252,13 +252,14 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
folio_put(folio);
read_pages(ractl);
ractl->_index++;
- i = ractl->_index + ractl->_nr_pages - index - 1;
+ i = ractl->_index + ractl->_nr_pages - index;
continue;
}
if (i == nr_to_read - lookahead_size)
folio_set_readahead(folio);
ractl->_workingset |= folio_test_workingset(folio);
- ractl->_nr_pages++;
+ ractl->_nr_pages += folio_nr_pages(folio);
+ i += folio_nr_pages(folio);
}

/*
--
2.43.0


2024-03-14 14:08:51

by Pankaj Raghav (Samsung)

Subject: [PATCH v3 10/11] xfs: make the calculation generic in xfs_sb_validate_fsb_count()

From: Pankaj Raghav <[email protected]>

Instead of assuming that PAGE_SHIFT is always higher than the blocklog,
make the calculation generic so that page cache count can be calculated
correctly for LBS.

Signed-off-by: Pankaj Raghav <[email protected]>
---
fs/xfs/xfs_mount.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
index aabb25dc3efa..9cf800586da7 100644
--- a/fs/xfs/xfs_mount.c
+++ b/fs/xfs/xfs_mount.c
@@ -133,9 +133,16 @@ xfs_sb_validate_fsb_count(
{
ASSERT(PAGE_SHIFT >= sbp->sb_blocklog);
ASSERT(sbp->sb_blocklog >= BBSHIFT);
+ uint64_t max_index;
+ uint64_t max_bytes;
+
+ if (check_shl_overflow(nblocks, sbp->sb_blocklog, &max_bytes))
+ return -EFBIG;

/* Limited by ULONG_MAX of page cache index */
- if (nblocks >> (PAGE_SHIFT - sbp->sb_blocklog) > ULONG_MAX)
+ max_index = max_bytes >> PAGE_SHIFT;
+
+ if (max_index > ULONG_MAX)
return -EFBIG;
return 0;
}
--
2.43.0


2024-03-14 15:32:38

by Pankaj Raghav (Samsung)

Subject: [PATCH v3 02/11] fs: Allow fine-grained control of folio sizes

From: "Matthew Wilcox (Oracle)" <[email protected]>

Some filesystems want to be able to ensure that folios that are added to
the page cache are at least a certain size.
Add mapping_set_folio_min_order() to allow this level of control.

Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Co-developed-by: Pankaj Raghav <[email protected]>
Signed-off-by: Pankaj Raghav <[email protected]>
Signed-off-by: Luis Chamberlain <[email protected]>
Reviewed-by: Hannes Reinecke <[email protected]>
---
include/linux/pagemap.h | 100 ++++++++++++++++++++++++++++++++--------
1 file changed, 80 insertions(+), 20 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 2df35e65557d..fc8eb9c94e9c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -202,13 +202,18 @@ enum mapping_flags {
AS_EXITING = 4, /* final truncate in progress */
/* writeback related tags are not used */
AS_NO_WRITEBACK_TAGS = 5,
- AS_LARGE_FOLIO_SUPPORT = 6,
- AS_RELEASE_ALWAYS, /* Call ->release_folio(), even if no private data */
- AS_STABLE_WRITES, /* must wait for writeback before modifying
+ AS_RELEASE_ALWAYS = 6, /* Call ->release_folio(), even if no private data */
+ AS_STABLE_WRITES = 7, /* must wait for writeback before modifying
folio contents */
- AS_UNMOVABLE, /* The mapping cannot be moved, ever */
+ AS_FOLIO_ORDER_MIN = 8,
+ AS_FOLIO_ORDER_MAX = 13, /* Bit 8-17 are used for FOLIO_ORDER */
+ AS_UNMOVABLE = 18, /* The mapping cannot be moved, ever */
};

+#define AS_FOLIO_ORDER_MIN_MASK 0x00001f00
+#define AS_FOLIO_ORDER_MAX_MASK 0x0003e000
+#define AS_FOLIO_ORDER_MASK (AS_FOLIO_ORDER_MIN_MASK | AS_FOLIO_ORDER_MAX_MASK)
+
/**
* mapping_set_error - record a writeback error in the address_space
* @mapping: the mapping in which an error should be set
@@ -344,9 +349,47 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
m->gfp_mask = mask;
}

+/*
+ * There are some parts of the kernel which assume that PMD entries
+ * are exactly HPAGE_PMD_ORDER. Those should be fixed, but until then,
+ * limit the maximum allocation order to PMD size. I'm not aware of any
+ * assumptions about maximum order if THP are disabled, but 8 seems like
+ * a good order (that's 1MB if you're using 4kB pages)
+ */
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#define MAX_PAGECACHE_ORDER HPAGE_PMD_ORDER
+#else
+#define MAX_PAGECACHE_ORDER 8
+#endif
+
+/*
+ * mapping_set_folio_min_order() - Set the minimum folio order
+ * @mapping: The address_space.
+ * @min: Minimum folio order (between 0-MAX_PAGECACHE_ORDER inclusive).
+ *
+ * The filesystem should call this function in its inode constructor to
+ * indicate which base size of folio the VFS can use to cache the contents
+ * of the file. This should only be used if the filesystem needs special
+ * handling of folio sizes (ie there is something the core cannot know).
+ * Do not tune it based on, eg, i_size.
+ *
+ * Context: This should not be called while the inode is active as it
+ * is non-atomic.
+ */
+static inline void mapping_set_folio_min_order(struct address_space *mapping,
+ unsigned int min)
+{
+ if (min > MAX_PAGECACHE_ORDER)
+ min = MAX_PAGECACHE_ORDER;
+
+ mapping->flags = (mapping->flags & ~AS_FOLIO_ORDER_MASK) |
+ (min << AS_FOLIO_ORDER_MIN) |
+ (MAX_PAGECACHE_ORDER << AS_FOLIO_ORDER_MAX);
+}
+
/**
* mapping_set_large_folios() - Indicate the file supports large folios.
- * @mapping: The file.
+ * @mapping: The address_space.
*
* The filesystem should call this function in its inode constructor to
* indicate that the VFS can use large folios to cache the contents of
@@ -357,7 +400,37 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
*/
static inline void mapping_set_large_folios(struct address_space *mapping)
{
- __set_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
+ mapping_set_folio_min_order(mapping, 0);
+}
+
+static inline unsigned int mapping_max_folio_order(struct address_space *mapping)
+{
+ return (mapping->flags & AS_FOLIO_ORDER_MAX_MASK) >> AS_FOLIO_ORDER_MAX;
+}
+
+static inline unsigned int mapping_min_folio_order(struct address_space *mapping)
+{
+ return (mapping->flags & AS_FOLIO_ORDER_MIN_MASK) >> AS_FOLIO_ORDER_MIN;
+}
+
+static inline unsigned long mapping_min_folio_nrpages(struct address_space *mapping)
+{
+ return 1UL << mapping_min_folio_order(mapping);
+}
+
+/**
+ * mapping_align_start_index() - Align starting index based on the min
+ * folio order of the page cache.
+ * @mapping: The address_space.
+ *
+ * Ensure the index used is aligned to the minimum folio order when adding
+ * new folios to the page cache by rounding down to the nearest minimum
+ * folio number of pages.
+ */
+static inline pgoff_t mapping_align_start_index(struct address_space *mapping,
+ pgoff_t index)
+{
+ return round_down(index, mapping_min_folio_nrpages(mapping));
}

/*
@@ -367,7 +440,7 @@ static inline void mapping_set_large_folios(struct address_space *mapping)
static inline bool mapping_large_folio_support(struct address_space *mapping)
{
return IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
- test_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
+ (mapping_max_folio_order(mapping) > 0);
}

static inline int filemap_nr_thps(struct address_space *mapping)
@@ -528,19 +601,6 @@ static inline void *detach_page_private(struct page *page)
return folio_detach_private(page_folio(page));
}

-/*
- * There are some parts of the kernel which assume that PMD entries
- * are exactly HPAGE_PMD_ORDER. Those should be fixed, but until then,
- * limit the maximum allocation order to PMD size. I'm not aware of any
- * assumptions about maximum order if THP are disabled, but 8 seems like
- * a good order (that's 1MB if you're using 4kB pages)
- */
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-#define MAX_PAGECACHE_ORDER HPAGE_PMD_ORDER
-#else
-#define MAX_PAGECACHE_ORDER 8
-#endif
-
#ifdef CONFIG_NUMA
struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order);
#else
--
2.43.0


2024-03-25 18:43:31

by Matthew Wilcox

Subject: Re: [PATCH v3 02/11] fs: Allow fine-grained control of folio sizes

On Wed, Mar 13, 2024 at 06:02:44PM +0100, Pankaj Raghav (Samsung) wrote:
> +/*
> + * mapping_set_folio_min_order() - Set the minimum folio order
> + * @mapping: The address_space.
> + * @min: Minimum folio order (between 0-MAX_PAGECACHE_ORDER inclusive).
> + *
> + * The filesystem should call this function in its inode constructor to
> + * indicate which base size of folio the VFS can use to cache the contents
> + * of the file. This should only be used if the filesystem needs special
> + * handling of folio sizes (ie there is something the core cannot know).
> + * Do not tune it based on, eg, i_size.
> + *
> + * Context: This should not be called while the inode is active as it
> + * is non-atomic.
> + */
> +static inline void mapping_set_folio_min_order(struct address_space *mapping,
> + unsigned int min)
> +{
> + if (min > MAX_PAGECACHE_ORDER)
> + min = MAX_PAGECACHE_ORDER;
> +
> + mapping->flags = (mapping->flags & ~AS_FOLIO_ORDER_MASK) |
> + (min << AS_FOLIO_ORDER_MIN) |
> + (MAX_PAGECACHE_ORDER << AS_FOLIO_ORDER_MAX);
> +}

I was surprised when I read this, which indicates it might be surprising
for others too. I think it at least needs a comment to say that the
maximum will be set to the MAX_PAGECACHE_ORDER, because I was expecting
it to set max == min. I guess that isn't what XFS wants, but someone
doing this to, eg, ext4 is going to have an unpleasant surprise when
they call into block_read_full_folio() and overrun 'arr'.

I'm still not entirely convinced this wouldn't be better to do as
mapping_set_folio_order_range() and have

static inline void mapping_set_folio_min_order(struct address_space *mapping,
					       unsigned int min)
{
	mapping_set_folio_range(mapping, min, MAX_PAGECACHE_ORDER);
}


2024-03-26 00:13:15

by Matthew Wilcox

Subject: Re: [PATCH v3 00/11] enable bs > ps in XFS

On Wed, Mar 13, 2024 at 06:02:42PM +0100, Pankaj Raghav (Samsung) wrote:
> This is the third version of the series that enables block size > page size
> (Large Block Size) in XFS. The context and motivation can be seen in cover
> letter of the RFC v1[1]. We also recorded a talk about this effort at LPC [3],
> if someone would like more context on this effort.

Thank you. This is a lot better.

I'm still trying to understand your opinion on the contents of the
file_ra_state. Is it supposed to be properly aligned at all times, or
do we work with it in the terms of "desired number of pages" and then
force it to conform to the minimum-block-size reality right at the end?
Because you seem to be doing both at various points.

2024-03-26 00:20:23

by Matthew Wilcox

Subject: Re: [PATCH v3 10/11] xfs: make the calculation generic in xfs_sb_validate_fsb_count()

On Wed, Mar 13, 2024 at 06:02:52PM +0100, Pankaj Raghav (Samsung) wrote:
> From: Pankaj Raghav <[email protected]>
>
> Instead of assuming that PAGE_SHIFT is always higher than the blocklog,
> make the calculation generic so that page cache count can be calculated
> correctly for LBS.
>
> Signed-off-by: Pankaj Raghav <[email protected]>
> ---
> fs/xfs/xfs_mount.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
> index aabb25dc3efa..9cf800586da7 100644
> --- a/fs/xfs/xfs_mount.c
> +++ b/fs/xfs/xfs_mount.c
> @@ -133,9 +133,16 @@ xfs_sb_validate_fsb_count(
> {
> ASSERT(PAGE_SHIFT >= sbp->sb_blocklog);

but ... you're still asserting that PAGE_SHIFT is larger than blocklog.
Shouldn't you delete that assertion?

> ASSERT(sbp->sb_blocklog >= BBSHIFT);
> + uint64_t max_index;
> + uint64_t max_bytes;
> +
> + if (check_shl_overflow(nblocks, sbp->sb_blocklog, &max_bytes))
> + return -EFBIG;
>
> /* Limited by ULONG_MAX of page cache index */
> - if (nblocks >> (PAGE_SHIFT - sbp->sb_blocklog) > ULONG_MAX)
> + max_index = max_bytes >> PAGE_SHIFT;
> +
> + if (max_index > ULONG_MAX)
> return -EFBIG;

This kind of depends on the implementation details of the page cache.
We have MAX_LFS_FILESIZE to abstract that; maybe that should be used
here?

2024-03-26 00:46:58

by Matthew Wilcox

Subject: Re: [PATCH v3 04/11] readahead: rework loop in page_cache_ra_unbounded()

On Wed, Mar 13, 2024 at 06:02:46PM +0100, Pankaj Raghav (Samsung) wrote:
> @@ -239,8 +239,8 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
> * not worth getting one just for that.
> */
> read_pages(ractl);
> - ractl->_index++;
> - i = ractl->_index + ractl->_nr_pages - index - 1;
> + ractl->_index += folio_nr_pages(folio);
> + i = ractl->_index + ractl->_nr_pages - index;
> continue;
> }
>
> @@ -252,13 +252,14 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
> folio_put(folio);
> read_pages(ractl);
> ractl->_index++;
> - i = ractl->_index + ractl->_nr_pages - index - 1;
> + i = ractl->_index + ractl->_nr_pages - index;
> continue;
> }

You changed index++ in the first hunk, but not the second hunk. Is that
intentional?

2024-03-26 08:44:30

by Pankaj Raghav (Samsung)

Subject: Re: [PATCH v3 02/11] fs: Allow fine-grained control of folio sizes

On Mon, Mar 25, 2024 at 06:29:30PM +0000, Matthew Wilcox wrote:
> On Wed, Mar 13, 2024 at 06:02:44PM +0100, Pankaj Raghav (Samsung) wrote:
> > +/*
> > + * mapping_set_folio_min_order() - Set the minimum folio order
> > + * @mapping: The address_space.
> > + * @min: Minimum folio order (between 0-MAX_PAGECACHE_ORDER inclusive).
> > + *
> > + * The filesystem should call this function in its inode constructor to
> > + * indicate which base size of folio the VFS can use to cache the contents
> > + * of the file. This should only be used if the filesystem needs special
> > + * handling of folio sizes (ie there is something the core cannot know).
> > + * Do not tune it based on, eg, i_size.
> > + *
> > + * Context: This should not be called while the inode is active as it
> > + * is non-atomic.
> > + */
> > +static inline void mapping_set_folio_min_order(struct address_space *mapping,
> > + unsigned int min)
> > +{
> > + if (min > MAX_PAGECACHE_ORDER)
> > + min = MAX_PAGECACHE_ORDER;
> > +
> > + mapping->flags = (mapping->flags & ~AS_FOLIO_ORDER_MASK) |
> > + (min << AS_FOLIO_ORDER_MIN) |
> > + (MAX_PAGECACHE_ORDER << AS_FOLIO_ORDER_MAX);
> > +}
>
> I was surprised when I read this, which indicates it might be surprising
> for others too. I think it at least needs a comment to say that the
> maximum will be set to the MAX_PAGECACHE_ORDER, because I was expecting
> it to set max == min. I guess that isn't what XFS wants, but someone
> doing this to, eg, ext4 is going to have an unpleasant surprise when
> they call into block_read_full_folio() and overrun 'arr'.
>
> I'm still not entirely convinced this wouldn't be better to do as
> mapping_set_folio_order_range() and have
>
> static inline void mapping_set_folio_min_order(struct address_space *mapping,
> unsigned int min)
> {
> mapping_set_folio_range(mapping, min, MAX_PAGECACHE_ORDER);
> }

I agree. Having a helper like this will make it more explicit. The
limits checking can also be done in this helper itself.
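
Roughly something like this (just a sketch; the name and the exact
clamping may change in the next version):

static inline void mapping_set_folio_order_range(struct address_space *mapping,
						 unsigned int min,
						 unsigned int max)
{
	if (min > MAX_PAGECACHE_ORDER)
		min = MAX_PAGECACHE_ORDER;
	if (max > MAX_PAGECACHE_ORDER)
		max = MAX_PAGECACHE_ORDER;
	if (max < min)
		max = min;

	mapping->flags = (mapping->flags & ~AS_FOLIO_ORDER_MASK) |
			 (min << AS_FOLIO_ORDER_MIN) |
			 (max << AS_FOLIO_ORDER_MAX);
}

static inline void mapping_set_folio_min_order(struct address_space *mapping,
					       unsigned int min)
{
	mapping_set_folio_order_range(mapping, min, MAX_PAGECACHE_ORDER);
}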

It also makes mapping_set_large_folios() clearer:

static inline void mapping_set_large_folios(struct address_space *mapping)
{
	mapping_set_folio_range(mapping, 0, MAX_PAGECACHE_ORDER);
}

instead of just calling mapping_set_folio_min_order(). Thanks.

2024-03-26 08:56:31

by Pankaj Raghav (Samsung)

Subject: Re: [PATCH v3 04/11] readahead: rework loop in page_cache_ra_unbounded()

On Mon, Mar 25, 2024 at 06:41:01PM +0000, Matthew Wilcox wrote:
> On Wed, Mar 13, 2024 at 06:02:46PM +0100, Pankaj Raghav (Samsung) wrote:
> > @@ -239,8 +239,8 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
> > * not worth getting one just for that.
> > */
> > read_pages(ractl);
> > - ractl->_index++;
> > - i = ractl->_index + ractl->_nr_pages - index - 1;
> > + ractl->_index += folio_nr_pages(folio);
> > + i = ractl->_index + ractl->_nr_pages - index;
> > continue;
> > }
> >
> > @@ -252,13 +252,14 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
> > folio_put(folio);
> > read_pages(ractl);
> > ractl->_index++;
> > - i = ractl->_index + ractl->_nr_pages - index - 1;
> > + i = ractl->_index + ractl->_nr_pages - index;
> > continue;
> > }
>
> You changed index++ in the first hunk, but not the second hunk. Is that
> intentional?

The reason I didn't use folio_nr_pages(folio) in the second hunk is
that we have already `put` the folio, so it is no longer valid to call
folio_nr_pages() on it. We take the folio reference in
filemap_alloc_folio() and put it if the add fails.

Plus, in the second hunk, adding the order-0 folio failed at that index,
so we just move on to the next index. Once we have min order support,
if adding a min order folio fails, we move by the min order.
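
Roughly, once min order support is in place, that path could look
something like this (sketch only, not the actual next patch; it reuses
the mapping_min_folio_* helpers from patch 02):

	folio = filemap_alloc_folio(gfp_mask,
			mapping_min_folio_order(mapping));
	if (!folio)
		break;
	if (filemap_add_folio(mapping, folio, index + i, gfp_mask) < 0) {
		folio_put(folio);
		read_pages(ractl);
		/* advance by the size we know we tried to add */
		ractl->_index += mapping_min_folio_nrpages(mapping);
		i = ractl->_index + ractl->_nr_pages - index;
		continue;
	}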

And your comment on the next patch:

> Hah, you changed this here. Please move into previous patch.

We can't do that either because I am introducing the concept of min
order in the next patch.

2024-03-26 09:48:10

by Pankaj Raghav (Samsung)

Subject: Re: [PATCH v3 04/11] readahead: rework loop in page_cache_ra_unbounded()

Hi Hannes,

On 26/03/2024 10:39, Hannes Reinecke wrote:
> On 3/25/24 19:41, Matthew Wilcox wrote:
>> On Wed, Mar 13, 2024 at 06:02:46PM +0100, Pankaj Raghav (Samsung) wrote:
>>> @@ -239,8 +239,8 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
>>>                * not worth getting one just for that.
>>>                */
>>>               read_pages(ractl);
>>> -            ractl->_index++;
>>> -            i = ractl->_index + ractl->_nr_pages - index - 1;
>>> +            ractl->_index += folio_nr_pages(folio);
>>> +            i = ractl->_index + ractl->_nr_pages - index;
>>>               continue;
>>>           }
>>>   @@ -252,13 +252,14 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
>>>               folio_put(folio);
>>>               read_pages(ractl);
>>>               ractl->_index++;
>>> -            i = ractl->_index + ractl->_nr_pages - index - 1;
>>> +            i = ractl->_index + ractl->_nr_pages - index;
>>>               continue;
>>>           }
>>
>> You changed index++ in the first hunk, but not the second hunk.  Is that
>> intentional?
>
> Hmm. Looks you are right; it should be modified, too.
> Will be fixing it up.
>
You initially also had this in the second hunk:
ractl->_index += folio_nr_pages(folio);

and I changed it to what it is now.

The reason is in my reply to willy:
https://lore.kernel.org/linux-xfs/s4jn4t4betknd3y4ltfccqxyfktzdljiz7klgbqsrccmv3rwrd@orlwjz77oyxo/

Let me know if you agree with it.

> Cheers,
>
> Hannes
>

2024-03-26 09:54:20

by Hannes Reinecke

Subject: Re: [PATCH v3 00/11] enable bs > ps in XFS

On 3/25/24 20:19, Matthew Wilcox wrote:
> On Wed, Mar 13, 2024 at 06:02:42PM +0100, Pankaj Raghav (Samsung) wrote:
>> This is the third version of the series that enables block size > page size
>> (Large Block Size) in XFS. The context and motivation can be seen in cover
>> letter of the RFC v1[1]. We also recorded a talk about this effort at LPC [3],
>> if someone would like more context on this effort.
>
> Thank you. This is a lot better.
>
> I'm still trying to understand your opinion on the contents of the
> file_ra_state. Is it supposed to be properly aligned at all times, or
> do we work with it in the terms of "desired number of pages" and then
> force it to conform to the minimum-block-size reality right at the end?
> Because you seem to be doing both at various points.

Guess what, that's what I had been pondering, too.
Each way has its benefits, I guess.

Question really is do we keep the readahead iterator in units of pages,
and convert the result, or do we modify the readahead iterator to work
on folios, and convert the inputs.

Doesn't really matter much, but we need to decide. The former is
probably easier on the caller, and the latter is easier on the consumer.
Take your pick; I really don't mind.

But we should document the result :-)

Cheers,

Hannes


2024-03-26 09:54:38

by Pankaj Raghav (Samsung)

Subject: Re: [PATCH v3 10/11] xfs: make the calculation generic in xfs_sb_validate_fsb_count()

On Mon, Mar 25, 2024 at 07:15:59PM +0000, Matthew Wilcox wrote:
> On Wed, Mar 13, 2024 at 06:02:52PM +0100, Pankaj Raghav (Samsung) wrote:
> > From: Pankaj Raghav <[email protected]>
> >
> > Instead of assuming that PAGE_SHIFT is always higher than the blocklog,
> > make the calculation generic so that page cache count can be calculated
> > correctly for LBS.
> >
> > Signed-off-by: Pankaj Raghav <[email protected]>
> > ---
> > fs/xfs/xfs_mount.c | 9 ++++++++-
> > 1 file changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
> > index aabb25dc3efa..9cf800586da7 100644
> > --- a/fs/xfs/xfs_mount.c
> > +++ b/fs/xfs/xfs_mount.c
> > @@ -133,9 +133,16 @@ xfs_sb_validate_fsb_count(
> > {
> > ASSERT(PAGE_SHIFT >= sbp->sb_blocklog);
>
> but ... you're still asserting that PAGE_SHIFT is larger than blocklog.
> Shouldn't you delete that assertion?

I do that in the next patch, where we enable LBS support in XFS by setting
the min order. So we still need the assertion here; it is removed in the
next patch.
>
> > ASSERT(sbp->sb_blocklog >= BBSHIFT);
> > + uint64_t max_index;
> > + uint64_t max_bytes;
> > +
> > + if (check_shl_overflow(nblocks, sbp->sb_blocklog, &max_bytes))
> > + return -EFBIG;
> >
> > /* Limited by ULONG_MAX of page cache index */
> > - if (nblocks >> (PAGE_SHIFT - sbp->sb_blocklog) > ULONG_MAX)
> > + max_index = max_bytes >> PAGE_SHIFT;
> > +
> > + if (max_index > ULONG_MAX)
> > return -EFBIG;
>
> This kind of depends on the implementation details of the page cache.
> We have MAX_LFS_FILESIZE to abstract that; maybe that should be used
> here?

Makes sense. I will use it instead of using ULONG_MAX.
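
Something along these lines (sketch only; not necessarily the exact code
that will appear in the next version):

	uint64_t	max_bytes;

	ASSERT(sbp->sb_blocklog >= BBSHIFT);

	if (check_shl_overflow(nblocks, sbp->sb_blocklog, &max_bytes))
		return -EFBIG;

	/* Limited by what the page cache can index */
	if (max_bytes > MAX_LFS_FILESIZE)
		return -EFBIG;

	return 0;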

2024-03-26 10:00:45

by Hannes Reinecke

Subject: Re: [PATCH v3 04/11] readahead: rework loop in page_cache_ra_unbounded()

On 3/26/24 10:44, Pankaj Raghav wrote:
> Hi Hannes,
>
> On 26/03/2024 10:39, Hannes Reinecke wrote:
>> On 3/25/24 19:41, Matthew Wilcox wrote:
>>> On Wed, Mar 13, 2024 at 06:02:46PM +0100, Pankaj Raghav (Samsung) wrote:
>>>> @@ -239,8 +239,8 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
>>>>                * not worth getting one just for that.
>>>>                */
>>>>               read_pages(ractl);
>>>> -            ractl->_index++;
>>>> -            i = ractl->_index + ractl->_nr_pages - index - 1;
>>>> +            ractl->_index += folio_nr_pages(folio);
>>>> +            i = ractl->_index + ractl->_nr_pages - index;
>>>>               continue;
>>>>           }
>>>>   @@ -252,13 +252,14 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
>>>>               folio_put(folio);
>>>>               read_pages(ractl);
>>>>               ractl->_index++;
>>>> -            i = ractl->_index + ractl->_nr_pages - index - 1;
>>>> +            i = ractl->_index + ractl->_nr_pages - index;
>>>>               continue;
>>>>           }
>>>
>>> You changed index++ in the first hunk, but not the second hunk.  Is that
>>> intentional?
>>
>> Hmm. Looks you are right; it should be modified, too.
>> Will be fixing it up.
>>
> You initially had also in the second hunk:
> ractl->index += folio_nr_pages(folio);
>
> and I changed it to what it is now.
>
> The reason is in my reply to willy:
> https://lore.kernel.org/linux-xfs/s4jn4t4betknd3y4ltfccqxyfktzdljiz7klgbqsrccmv3rwrd@orlwjz77oyxo/
>
> Let me know if you agree with it.
>
Bah. That really is overly complicated. When we attempt a conversion,
that conversion should be stand-alone and not rely on modifications in
some later patch.
We definitely need to work on that to make it easier to review, even
without having to read the mail thread.

Cheers,

Hannes


2024-03-26 10:04:11

by Hannes Reinecke

Subject: Re: [PATCH v3 04/11] readahead: rework loop in page_cache_ra_unbounded()

On 3/25/24 19:41, Matthew Wilcox wrote:
> On Wed, Mar 13, 2024 at 06:02:46PM +0100, Pankaj Raghav (Samsung) wrote:
>> @@ -239,8 +239,8 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
>> * not worth getting one just for that.
>> */
>> read_pages(ractl);
>> - ractl->_index++;
>> - i = ractl->_index + ractl->_nr_pages - index - 1;
>> + ractl->_index += folio_nr_pages(folio);
>> + i = ractl->_index + ractl->_nr_pages - index;
>> continue;
>> }
>>
>> @@ -252,13 +252,14 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
>> folio_put(folio);
>> read_pages(ractl);
>> ractl->_index++;
>> - i = ractl->_index + ractl->_nr_pages - index - 1;
>> + i = ractl->_index + ractl->_nr_pages - index;
>> continue;
>> }
>
> You changed index++ in the first hunk, but not the second hunk. Is that
> intentional?

Hmm. Looks like you are right; it should be modified, too.
Will be fixing it up.

Cheers,

Hannes


2024-03-26 10:07:10

by Pankaj Raghav (Samsung)

Subject: Re: [PATCH v3 04/11] readahead: rework loop in page_cache_ra_unbounded()

On 26/03/2024 11:00, Hannes Reinecke wrote:
> On 3/26/24 10:44, Pankaj Raghav wrote:
>> Hi Hannes,
>>
>> On 26/03/2024 10:39, Hannes Reinecke wrote:
>>> On 3/25/24 19:41, Matthew Wilcox wrote:
>>>> On Wed, Mar 13, 2024 at 06:02:46PM +0100, Pankaj Raghav (Samsung) wrote:
>>>>> @@ -239,8 +239,8 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
>>>>>                 * not worth getting one just for that.
>>>>>                 */
>>>>>                read_pages(ractl);
>>>>> -            ractl->_index++;
>>>>> -            i = ractl->_index + ractl->_nr_pages - index - 1;
>>>>> +            ractl->_index += folio_nr_pages(folio);
>>>>> +            i = ractl->_index + ractl->_nr_pages - index;
>>>>>                continue;
>>>>>            }
>>>>>    @@ -252,13 +252,14 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
>>>>>                folio_put(folio);
>>>>>                read_pages(ractl);
>>>>>                ractl->_index++;
>>>>> -            i = ractl->_index + ractl->_nr_pages - index - 1;
>>>>> +            i = ractl->_index + ractl->_nr_pages - index;
>>>>>                continue;
>>>>>            }
>>>>
>>>> You changed index++ in the first hunk, but not the second hunk.  Is that
>>>> intentional?
>>>
>>> Hmm. Looks you are right; it should be modified, too.
>>> Will be fixing it up.
>>>
>> You initially had also in the second hunk:
>> ractl->index += folio_nr_pages(folio);
>>
>> and I changed it to what it is now.
>>
>> The reason is in my reply to willy:
>> https://lore.kernel.org/linux-xfs/s4jn4t4betknd3y4ltfccqxyfktzdljiz7klgbqsrccmv3rwrd@orlwjz77oyxo/
>>
>> Let me know if you agree with it.
>>
> Bah. That really is overly complicated. When we attempt a conversion that conversion should be
> stand-alone, not rely on some other patch modifications later on.
> We definitely need to work on that to make it easier to review, even
> without having to read the mail thread.
>

I don't understand what you mean by overly complicated. This conversion is
standalone, and it is wrong to use folio_nr_pages() after we `put` the folio.
This patch just reworks the loop; the next patch adds min order support to
readahead.

This patch doesn't depend on the next patch.

> Cheers,
>
> Hannes
>

2024-03-26 11:11:58

by Hannes Reinecke

Subject: Re: [PATCH v3 04/11] readahead: rework loop in page_cache_ra_unbounded()

On 3/26/24 11:06, Pankaj Raghav wrote:
> On 26/03/2024 11:00, Hannes Reinecke wrote:
>> On 3/26/24 10:44, Pankaj Raghav wrote:
>>> Hi Hannes,
>>>
>>> On 26/03/2024 10:39, Hannes Reinecke wrote:
>>>> On 3/25/24 19:41, Matthew Wilcox wrote:
>>>>> On Wed, Mar 13, 2024 at 06:02:46PM +0100, Pankaj Raghav (Samsung) wrote:
>>>>>> @@ -239,8 +239,8 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
>>>>>>                 * not worth getting one just for that.
>>>>>>                 */
>>>>>>                read_pages(ractl);
>>>>>> -            ractl->_index++;
>>>>>> -            i = ractl->_index + ractl->_nr_pages - index - 1;
>>>>>> +            ractl->_index += folio_nr_pages(folio);
>>>>>> +            i = ractl->_index + ractl->_nr_pages - index;
>>>>>>                continue;
>>>>>>            }
>>>>>>    @@ -252,13 +252,14 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
>>>>>>                folio_put(folio);
>>>>>>                read_pages(ractl);
>>>>>>                ractl->_index++;
>>>>>> -            i = ractl->_index + ractl->_nr_pages - index - 1;
>>>>>> +            i = ractl->_index + ractl->_nr_pages - index;
>>>>>>                continue;
>>>>>>            }
>>>>>
>>>>> You changed index++ in the first hunk, but not the second hunk.  Is that
>>>>> intentional?
>>>>
>>>> Hmm. Looks you are right; it should be modified, too.
>>>> Will be fixing it up.
>>>>
>>> You initially had also in the second hunk:
>>> ractl->index += folio_nr_pages(folio);
>>>
>>> and I changed it to what it is now.
>>>
>>> The reason is in my reply to willy:
>>> https://lore.kernel.org/linux-xfs/s4jn4t4betknd3y4ltfccqxyfktzdljiz7klgbqsrccmv3rwrd@orlwjz77oyxo/
>>>
>>> Let me know if you agree with it.
>>>
>> Bah. That really is overly complicated. When we attempt a conversion that conversion should be
>> stand-alone, not rely on some other patch modifications later on.
>> We definitely need to work on that to make it easier to review, even
>> without having to read the mail thread.
>>
>
> I don't know understand what you mean by overly complicated. This conversion is standalone and it is
> wrong to use folio_nr_pages after we `put` the folio. This patch just reworks the loop and in the
> next patch I add min order support to readahead.
>
> This patch doesn't depend on the next patch.
>

Let me rephrase: what does 'ractl->_index' signify?
From my understanding it should be the index of the
first folio/page in ractl, right?

If so, I find it hard to understand how we _could_ increase it by one;
_index should _always_ be in units of the minimal pagemap size.
And if we don't have that here (as you suggested in the mail thread)
I'd rather move this patch _after_ the minimal page size is introduced
to ensure that _index is always incremented by the right amount.

Cheers,

Hannes


2024-03-26 13:45:26

by Pankaj Raghav (Samsung)

Subject: Re: [PATCH v3 04/11] readahead: rework loop in page_cache_ra_unbounded()

On Tue, Mar 26, 2024 at 11:55:06AM +0100, Hannes Reinecke wrote:
> > > Bah. That really is overly complicated. When we attempt a conversion that conversion should be
> > > stand-alone, not rely on some other patch modifications later on.
> > > We definitely need to work on that to make it easier to review, even
> > > without having to read the mail thread.
> > >
> >
> > I don't know understand what you mean by overly complicated. This conversion is standalone and it is
> > wrong to use folio_nr_pages after we `put` the folio. This patch just reworks the loop and in the
> > next patch I add min order support to readahead.
> >
> > This patch doesn't depend on the next patch.
> >
>
> Let me rephrase: what does 'ractl->_index' signify?
> From my understanding it should be the index of the
> first folio/page in ractl, right?
>
> If so I find it hard to understand how we _could_ increase it by one; _index
> should _always_ in units of the minimal pagemap size.

I still have not introduced the minimal pagemap size concept here. That
comes in the next patch. This patch only reworks the loop and should not
have any functional changes. So the minimal pagemap size unit here is 1.

And to your next question, how we could increase it by only one here:

// We come here if we didn't find any folio at index + i
...
folio = filemap_alloc_folio(gfp_mask, 0); // order 0 => 1 page
if (!folio)
	break;
if (filemap_add_folio(mapping, folio, index + i,
		gfp_mask) < 0) {
	folio_put(folio);
	read_pages(ractl);
	ractl->_index++;
	...

If we fail to add a folio of order 0 at (index + i), we put the folio
and start read_pages() on whatever pages we have added so far
(ractl->_index to ractl->_index + ractl->_nr_pages).

read_pages() updates ractl->_index to ractl->_index + ractl->_nr_pages,
so after read_pages() it should point to (index + i). As we failed to
add a folio of order 0 there, we skip that index by incrementing
ractl->_index by 1.

Does this clarify? In your original patch, you used folio_nr_pages()
here. As I said before, we already know the size of the folio we tried
to add was 1, so we can just increment by 1, and we should not use the
folio to deduce its size after folio_put(), as that is a use-after-free.

> And if we don't have it here (as you suggested in the mailthread)
> I'd rather move this patch _after_ the minimal pagesize is introduced
> to ensure that _index is always incremented by the right amount.
>

I intended to have it as two atomic changes, where a non-functional
change helps with the functional change that comes later. If it is
confusing, I could also combine this with the next patch?

Or, I could have it as the first patch, before I start adding the
concept of folio_min_order. That would make it clear that it is intended
to be a non-functional change.
--
Pankaj

2024-03-26 15:05:08

by Pankaj Raghav (Samsung)

Subject: Re: [PATCH v3 00/11] enable bs > ps in XFS

On Mon, Mar 25, 2024 at 07:19:07PM +0000, Matthew Wilcox wrote:
> On Wed, Mar 13, 2024 at 06:02:42PM +0100, Pankaj Raghav (Samsung) wrote:
> > This is the third version of the series that enables block size > page size
> > (Large Block Size) in XFS. The context and motivation can be seen in cover
> > letter of the RFC v1[1]. We also recorded a talk about this effort at LPC [3],
> > if someone would like more context on this effort.
>
> Thank you. This is a lot better.

Thanks.
>
> I'm still trying to understand your opinion on the contents of the
> file_ra_state. Is it supposed to be properly aligned at all times, or
> do we work with it in the terms of "desired number of pages" and then
> force it to conform to the minimum-block-size reality right at the end?

The intention of the patches is to do the latter. Apart from the patch
that rounds up file_ra_state->ra_pages, I don't poke file_ra_state
anywhere, and it is updated only at the end, after we enforce the
minimum-block-size constraint (in page_cache_ra_order()).

> Because you seem to be doing both at various points.
Could you elaborate more on where I do both? Maybe I am missing
something and I could change it in the next series.

The previous series was trying to do both, but in this series I
intentionally stuck to updating the ra_state only at the end.

2024-03-26 15:12:01

by Pankaj Raghav (Samsung)

Subject: Re: [PATCH v3 04/11] readahead: rework loop in page_cache_ra_unbounded()

On Mon, Mar 25, 2024 at 06:41:01PM +0000, Matthew Wilcox wrote:
> On Wed, Mar 13, 2024 at 06:02:46PM +0100, Pankaj Raghav (Samsung) wrote:
> > @@ -239,8 +239,8 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
> > * not worth getting one just for that.
> > */
> > read_pages(ractl);
> > - ractl->_index++;
> > - i = ractl->_index + ractl->_nr_pages - index - 1;
> > + ractl->_index += folio_nr_pages(folio);
> > + i = ractl->_index + ractl->_nr_pages - index;
> > continue;
> > }
> >
> > @@ -252,13 +252,14 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
> > folio_put(folio);
> > read_pages(ractl);
> > ractl->_index++;
> > - i = ractl->_index + ractl->_nr_pages - index - 1;
> > + i = ractl->_index + ractl->_nr_pages - index;
> > continue;
> > }
>
> You changed index++ in the first hunk, but not the second hunk. Is that
> intentional?
After having some back and forth with Hannes, I see where the confusion
is coming from.

I intended this to be a non-functional change that helps with adding
min_order support later.

As this is a non-functional change, I will move this patch to the start
of the series as a preparation patch, before we start adding min_order
helpers and support.

--
Pankaj

2024-03-26 15:16:05

by Pankaj Raghav (Samsung)

Subject: Re: [PATCH v3 00/11] enable bs > ps in XFS

On 26/03/2024 10:53, Hannes Reinecke wrote:
> On 3/25/24 20:19, Matthew Wilcox wrote:
>> On Wed, Mar 13, 2024 at 06:02:42PM +0100, Pankaj Raghav (Samsung) wrote:
>>> This is the third version of the series that enables block size > page size
>>> (Large Block Size) in XFS. The context and motivation can be seen in cover
>>> letter of the RFC v1[1]. We also recorded a talk about this effort at LPC [3],
>>> if someone would like more context on this effort.
>>
>> Thank you.  This is a lot better.
>>
>> I'm still trying to understand your opinion on the contents of the
>> file_ra_state.  Is it supposed to be properly aligned at all times, or
>> do we work with it in the terms of "desired number of pages" and then
>> force it to conform to the minimum-block-size reality right at the end?
>> Because you seem to be doing both at various points.
>
> Guess what, that's what I had been pondering, too.
> Each way has its benefits, I guess.
>
> Question really is do we keep the readahead iterator in units of pages,
> and convert the result, or do we modify the readahead iterator to work
> on folios, and convert the inputs.
>
> Doesn't really matter much, but we need to decide. The former is probably easier on the caller, and
> the latter is easier on the consumer.
> Take your pick; I really don't mind.
>
> But we should document the result :-)
>

Having experimented with both approaches, I prefer the latter, as it looks more
consistent and keeps the changes contained to a few functions.

> Cheers,
>
> Hannes
>