2022-09-01 13:25:06

by Zhang Yi

Subject: [PATCH v2 00/14] fs/buffer: remove ll_rw_block()

Changes since v1:
- Remove redundant uptodate check in __bh_read(), use
bh_uptodate_or_lock() in bh_read() and bh_read_nowait().
- Improve the lock order in __bh_read_batch().
- Add return value 1 to bh_read() to indicate that the buffer was
already uptodate and no read IO needed to be submitted; the ext2 code
in patch 13 needs to distinguish this case.
- Remove bh_read_locked() helper.
- Exchange the parameter order of the bhs[] array and its count in the
bh_read[*]_batch() helpers.

v1: https://lore.kernel.org/linux-fsdevel/[email protected]/T/#t

Thanks,
Yi.

ll_rw_block() skips locked buffers before submitting IO, on the
assumption that a locked buffer is already under IO. This assumption is
not always true, because we cannot guarantee that every path that locks
a buffer also submits IO. After commit 88dbcbb3a484 ("blkdev: avoid
migration stalls for blkdev pages"), buffer_migrate_folio_norefs()
became one such exception, and there may be others. So ll_rw_block() is
not safe on the sync read path: we could get a false-positive EIO
return value when a filesystem reads metadata. It should only be used
on the readahead path.
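The failure mode above can be sketched in a few lines of plain C. This
is a simplified user-space model, not kernel code; all model_* names
are invented for illustration, and -5 stands in for -EIO:

```c
#include <stdbool.h>

struct model_bh {
	bool locked;	/* BH_Lock held by someone else */
	bool uptodate;	/* BH_Uptodate: contents are valid */
};

/* ll_rw_block()-style read: silently skips a locked buffer */
static void model_ll_rw_block_read(struct model_bh *bh)
{
	if (bh->locked)
		return;		/* assumes the lock holder submits IO */
	bh->uptodate = true;	/* model a successful read completion */
}

/* The sync-read pattern many filesystems used for metadata */
static int model_sync_read(struct model_bh *bh)
{
	model_ll_rw_block_read(bh);
	/* wait_on_buffer() returns immediately: no IO was ever queued */
	return bh->uptodate ? 0 : -5;	/* -5 models -EIO */
}
```

When the lock is held by a path that never submits IO (as with
buffer_migrate_folio_norefs() above), the skipped submission leaves the
buffer !uptodate and the caller misreads that as an IO error.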

Unfortunately, many filesystems misuse ll_rw_block() on the sync read
path. This patch set removes ll_rw_block() and adds new, friendlier
helpers that prevent false-positive EIO on the metadata read path.
Thanks to Jan for the suggestion; the original discussion is at [1].

patch 1: remove unused helpers in fs/buffer.c
patch 2: add new bh_read_[*] helpers
patches 3-11: remove all ll_rw_block() calls in filesystems
patches 12-14: do some leftover cleanups

[1]. https://lore.kernel.org/linux-mm/[email protected]/

Zhang Yi (14):
fs/buffer: remove __breadahead_gfp()
fs/buffer: add some new buffer read helpers
fs/buffer: replace ll_rw_block()
gfs2: replace ll_rw_block()
isofs: replace ll_rw_block()
jbd2: replace ll_rw_block()
ntfs3: replace ll_rw_block()
ocfs2: replace ll_rw_block()
reiserfs: replace ll_rw_block()
udf: replace ll_rw_block()
ufs: replace ll_rw_block()
fs/buffer: remove ll_rw_block() helper
ext2: replace bh_submit_read() helper with bh_read_locked()
fs/buffer: remove bh_submit_read() helper

fs/buffer.c | 154 +++++++++++++++---------------------
fs/ext2/balloc.c | 7 +-
fs/gfs2/meta_io.c | 7 +-
fs/gfs2/quota.c | 8 +-
fs/isofs/compress.c | 2 +-
fs/jbd2/journal.c | 15 ++--
fs/jbd2/recovery.c | 16 ++--
fs/ntfs3/inode.c | 7 +-
fs/ocfs2/aops.c | 2 +-
fs/ocfs2/super.c | 4 +-
fs/reiserfs/journal.c | 11 +--
fs/reiserfs/stree.c | 4 +-
fs/reiserfs/super.c | 4 +-
fs/udf/dir.c | 2 +-
fs/udf/directory.c | 2 +-
fs/udf/inode.c | 8 +-
fs/ufs/balloc.c | 12 +--
include/linux/buffer_head.h | 48 ++++++++---
18 files changed, 146 insertions(+), 167 deletions(-)

--
2.31.1


2022-09-01 13:25:09

by Zhang Yi

Subject: [PATCH v2 04/14] gfs2: replace ll_rw_block()

ll_rw_block() is not safe for the sync read path because it cannot
guarantee that read IO is always submitted if the buffer is locked, so
stop using it. We also switch to the new bh_readahead() helper for the
readahead path.

Signed-off-by: Zhang Yi <[email protected]>
---
fs/gfs2/meta_io.c | 7 ++-----
fs/gfs2/quota.c | 8 ++------
2 files changed, 4 insertions(+), 11 deletions(-)

diff --git a/fs/gfs2/meta_io.c b/fs/gfs2/meta_io.c
index 7e70e0ba5a6c..6ed728aae9a5 100644
--- a/fs/gfs2/meta_io.c
+++ b/fs/gfs2/meta_io.c
@@ -525,8 +525,7 @@ struct buffer_head *gfs2_meta_ra(struct gfs2_glock *gl, u64 dblock, u32 extlen)

if (buffer_uptodate(first_bh))
goto out;
- if (!buffer_locked(first_bh))
- ll_rw_block(REQ_OP_READ | REQ_META | REQ_PRIO, 1, &first_bh);
+ bh_read_nowait(first_bh, REQ_META | REQ_PRIO);

dblock++;
extlen--;
@@ -534,9 +533,7 @@ struct buffer_head *gfs2_meta_ra(struct gfs2_glock *gl, u64 dblock, u32 extlen)
while (extlen) {
bh = gfs2_getbuf(gl, dblock, CREATE);

- if (!buffer_uptodate(bh) && !buffer_locked(bh))
- ll_rw_block(REQ_OP_READ | REQ_RAHEAD | REQ_META |
- REQ_PRIO, 1, &bh);
+ bh_readahead(bh, REQ_RAHEAD | REQ_META | REQ_PRIO);
brelse(bh);
dblock++;
extlen--;
diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
index f201eaf59d0d..1ed17226d9ed 100644
--- a/fs/gfs2/quota.c
+++ b/fs/gfs2/quota.c
@@ -745,12 +745,8 @@ static int gfs2_write_buf_to_page(struct gfs2_inode *ip, unsigned long index,
}
if (PageUptodate(page))
set_buffer_uptodate(bh);
- if (!buffer_uptodate(bh)) {
- ll_rw_block(REQ_OP_READ | REQ_META | REQ_PRIO, 1, &bh);
- wait_on_buffer(bh);
- if (!buffer_uptodate(bh))
- goto unlock_out;
- }
+ if (bh_read(bh, REQ_META | REQ_PRIO) < 0)
+ goto unlock_out;
if (gfs2_is_jdata(ip))
gfs2_trans_add_data(ip->i_gl, bh);
else
--
2.31.1

2022-09-01 13:25:23

by Zhang Yi

Subject: [PATCH v2 03/14] fs/buffer: replace ll_rw_block()

ll_rw_block() is not safe for the sync IO path because it skips
buffers that have been locked by others, which can lead to a
false-positive EIO when submitting read IO. So stop using
ll_rw_block() and switch to the new helpers, which guarantee that the
buffer is locked and IO is submitted if needed.

Signed-off-by: Zhang Yi <[email protected]>
---
fs/buffer.c | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index a6bc769e665d..aec568b3ae52 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -562,7 +562,7 @@ void write_boundary_block(struct block_device *bdev,
struct buffer_head *bh = __find_get_block(bdev, bblock + 1, blocksize);
if (bh) {
if (buffer_dirty(bh))
- ll_rw_block(REQ_OP_WRITE, 1, &bh);
+ write_dirty_buffer(bh, 0);
put_bh(bh);
}
}
@@ -1342,7 +1342,7 @@ void __breadahead(struct block_device *bdev, sector_t block, unsigned size)
{
struct buffer_head *bh = __getblk(bdev, block, size);
if (likely(bh)) {
- ll_rw_block(REQ_OP_READ | REQ_RAHEAD, 1, &bh);
+ bh_readahead(bh, REQ_RAHEAD);
brelse(bh);
}
}
@@ -2022,7 +2022,7 @@ int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
if (!buffer_uptodate(bh) && !buffer_delay(bh) &&
!buffer_unwritten(bh) &&
(block_start < from || block_end > to)) {
- ll_rw_block(REQ_OP_READ, 1, &bh);
+ bh_read_nowait(bh, 0);
*wait_bh++=bh;
}
}
@@ -2582,11 +2582,9 @@ int block_truncate_page(struct address_space *mapping,
set_buffer_uptodate(bh);

if (!buffer_uptodate(bh) && !buffer_delay(bh) && !buffer_unwritten(bh)) {
- err = -EIO;
- ll_rw_block(REQ_OP_READ, 1, &bh);
- wait_on_buffer(bh);
+ err = bh_read(bh, 0);
/* Uhhuh. Read error. Complain and punt. */
- if (!buffer_uptodate(bh))
+ if (err < 0)
goto unlock;
}

--
2.31.1

2022-09-01 13:25:27

by Zhang Yi

Subject: [PATCH v2 02/14] fs/buffer: add some new buffer read helpers

The current ll_rw_block() helper is fragile because it assumes that a
locked buffer is under IO submitted by whoever else holds the lock, so
it skips any buffer whose lock it fails to take. That makes it safe
only on the readahead path. Unfortunately, many filesystems still
mistakenly use this helper on the sync metadata read path. There is no
guarantee that the lock holder always submits IO (e.g.
buffer_migrate_folio_norefs() after commit 88dbcbb3a484 ("blkdev:
avoid migration stalls for blkdev pages")), which can lead to a
false-positive -EIO when submitting read IO.

This patch adds some friendly buffer read helpers to prepare for
replacing ll_rw_block() and similar calls. Only the bh_readahead_[*]
helpers may be used on the readahead paths.
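The calling convention the new helpers introduce can be sketched as
follows. This is a hedged user-space model with invented model_* names,
not the kernel implementation; -5 stands in for -EIO:

```c
#include <stdbool.h>

struct model_bh {
	bool uptodate;
	bool io_ok;	/* would the device read succeed? */
};

/*
 * Models bh_read(): returns 1 if the buffer was already uptodate
 * (no IO submitted), 0 if a read was submitted and succeeded, and
 * -5 (modeling -EIO) if the read failed.
 */
static int model_bh_read(struct model_bh *bh)
{
	if (bh->uptodate)
		return 1;	/* bh_uptodate_or_lock() fast path */
	/* __bh_read(bh, op_flags, true): submit and wait */
	bh->uptodate = bh->io_ok;
	return bh->uptodate ? 0 : -5;
}
```

The distinct return value 1 is what lets callers such as the ext2
conversion later in the series tell "already cached" apart from
"freshly read".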

Signed-off-by: Zhang Yi <[email protected]>
---
fs/buffer.c | 65 +++++++++++++++++++++++++++++++++++++
include/linux/buffer_head.h | 38 ++++++++++++++++++++++
2 files changed, 103 insertions(+)

diff --git a/fs/buffer.c b/fs/buffer.c
index a0b70b3239f3..a6bc769e665d 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -3017,6 +3017,71 @@ int bh_uptodate_or_lock(struct buffer_head *bh)
}
EXPORT_SYMBOL(bh_uptodate_or_lock);

+/**
+ * __bh_read - Submit read for a locked buffer
+ * @bh: struct buffer_head
+ * @op_flags: appending REQ_OP_* flags besides REQ_OP_READ
+ * @wait: wait until reading finish
+ *
+ * Returns zero on success or don't wait, and -EIO on error.
+ */
+int __bh_read(struct buffer_head *bh, blk_opf_t op_flags, bool wait)
+{
+ int ret = 0;
+
+ BUG_ON(!buffer_locked(bh));
+
+ get_bh(bh);
+ bh->b_end_io = end_buffer_read_sync;
+ submit_bh(REQ_OP_READ | op_flags, bh);
+ if (wait) {
+ wait_on_buffer(bh);
+ if (!buffer_uptodate(bh))
+ ret = -EIO;
+ }
+ return ret;
+}
+EXPORT_SYMBOL(__bh_read);
+
+/**
+ * __bh_read_batch - Submit read for a batch of unlocked buffers
+ * @nr: entry number of the buffer batch
+ * @bhs: a batch of struct buffer_head
+ * @op_flags: appending REQ_OP_* flags besides REQ_OP_READ
+ * @force_lock: force to get a lock on the buffer if set, otherwise drops any
+ * buffer that cannot lock.
+ *
+ * Returns zero on success or don't wait, and -EIO on error.
+ */
+void __bh_read_batch(int nr, struct buffer_head *bhs[],
+ blk_opf_t op_flags, bool force_lock)
+{
+ int i;
+
+ for (i = 0; i < nr; i++) {
+ struct buffer_head *bh = bhs[i];
+
+ if (buffer_uptodate(bh))
+ continue;
+
+ if (force_lock)
+ lock_buffer(bh);
+ else
+ if (!trylock_buffer(bh))
+ continue;
+
+ if (buffer_uptodate(bh)) {
+ unlock_buffer(bh);
+ continue;
+ }
+
+ bh->b_end_io = end_buffer_read_sync;
+ get_bh(bh);
+ submit_bh(REQ_OP_READ | op_flags, bh);
+ }
+}
+EXPORT_SYMBOL(__bh_read_batch);
+
/**
* bh_submit_read - Submit a locked buffer for reading
* @bh: struct buffer_head
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index c3863c417b00..6d09785bed9f 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -232,6 +232,9 @@ void write_boundary_block(struct block_device *bdev,
sector_t bblock, unsigned blocksize);
int bh_uptodate_or_lock(struct buffer_head *bh);
int bh_submit_read(struct buffer_head *bh);
+int __bh_read(struct buffer_head *bh, blk_opf_t op_flags, bool wait);
+void __bh_read_batch(int nr, struct buffer_head *bhs[],
+ blk_opf_t op_flags, bool force_lock);

extern int buffer_heads_over_limit;

@@ -399,6 +402,41 @@ static inline struct buffer_head *__getblk(struct block_device *bdev,
return __getblk_gfp(bdev, block, size, __GFP_MOVABLE);
}

+static inline void bh_readahead(struct buffer_head *bh, blk_opf_t op_flags)
+{
+ if (!buffer_uptodate(bh) && trylock_buffer(bh)) {
+ if (!buffer_uptodate(bh))
+ __bh_read(bh, op_flags, false);
+ else
+ unlock_buffer(bh);
+ }
+}
+
+static inline void bh_read_nowait(struct buffer_head *bh, blk_opf_t op_flags)
+{
+ if (!bh_uptodate_or_lock(bh))
+ __bh_read(bh, op_flags, false);
+}
+
+/* Returns 1 if buffer uptodated, 0 on success, and -EIO on error. */
+static inline int bh_read(struct buffer_head *bh, blk_opf_t op_flags)
+{
+ if (bh_uptodate_or_lock(bh))
+ return 1;
+ return __bh_read(bh, op_flags, true);
+}
+
+static inline void bh_read_batch(int nr, struct buffer_head *bhs[])
+{
+ __bh_read_batch(nr, bhs, 0, true);
+}
+
+static inline void bh_readahead_batch(int nr, struct buffer_head *bhs[],
+ blk_opf_t op_flags)
+{
+ __bh_read_batch(nr, bhs, op_flags, false);
+}
+
/**
* __bread() - reads a specified block and returns the bh
* @bdev: the block_device to read from
--
2.31.1

2022-09-01 13:25:35

by Zhang Yi

Subject: [PATCH v2 05/14] isofs: replace ll_rw_block()

ll_rw_block() is not safe for the sync read path because it cannot
guarantee that read IO is submitted if the buffer is locked. We could
get a false-positive EIO return from zisofs_uncompress_block() if the
buffer has been locked by others. So stop using ll_rw_block() and
switch to the sync helper instead.

Signed-off-by: Zhang Yi <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
---
fs/isofs/compress.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/isofs/compress.c b/fs/isofs/compress.c
index b466172eec25..59b03d74ecbe 100644
--- a/fs/isofs/compress.c
+++ b/fs/isofs/compress.c
@@ -82,7 +82,7 @@ static loff_t zisofs_uncompress_block(struct inode *inode, loff_t block_start,
return 0;
}
haveblocks = isofs_get_blocks(inode, blocknum, bhs, needblocks);
- ll_rw_block(REQ_OP_READ, haveblocks, bhs);
+ bh_read_batch(haveblocks, bhs);

curbh = 0;
curpage = 0;
--
2.31.1

2022-09-01 13:25:37

by Zhang Yi

Subject: [PATCH v2 11/14] ufs: replace ll_rw_block()

ll_rw_block() is not safe for the sync read path because it cannot
guarantee that read IO is submitted if the buffer is locked. We could
get a false-positive EIO after wait_on_buffer() if the buffer has been
locked by others. So stop using ll_rw_block() in ufs.

Signed-off-by: Zhang Yi <[email protected]>
---
fs/ufs/balloc.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/fs/ufs/balloc.c b/fs/ufs/balloc.c
index bd810d8239f2..2436e3f82147 100644
--- a/fs/ufs/balloc.c
+++ b/fs/ufs/balloc.c
@@ -295,14 +295,10 @@ static void ufs_change_blocknr(struct inode *inode, sector_t beg,

if (!buffer_mapped(bh))
map_bh(bh, inode->i_sb, oldb + pos);
- if (!buffer_uptodate(bh)) {
- ll_rw_block(REQ_OP_READ, 1, &bh);
- wait_on_buffer(bh);
- if (!buffer_uptodate(bh)) {
- ufs_error(inode->i_sb, __func__,
- "read of block failed\n");
- break;
- }
+ if (bh_read(bh, 0) < 0) {
+ ufs_error(inode->i_sb, __func__,
+ "read of block failed\n");
+ break;
}

UFSD(" change from %llu to %llu, pos %u\n",
--
2.31.1

2022-09-01 13:25:45

by Zhang Yi

Subject: [PATCH v2 07/14] ntfs3: replace ll_rw_block()

ll_rw_block() is not safe for the sync read path because it cannot
guarantee that read IO is submitted if the buffer is locked. We could
get a false-positive EIO after wait_on_buffer() if the buffer has been
locked by others. So stop using ll_rw_block() in
ntfs_get_block_vbo().

Signed-off-by: Zhang Yi <[email protected]>
---
fs/ntfs3/inode.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
index 51363d4e8636..cadbfa111539 100644
--- a/fs/ntfs3/inode.c
+++ b/fs/ntfs3/inode.c
@@ -630,12 +630,9 @@ static noinline int ntfs_get_block_vbo(struct inode *inode, u64 vbo,
bh->b_size = block_size;
off = vbo & (PAGE_SIZE - 1);
set_bh_page(bh, page, off);
- ll_rw_block(REQ_OP_READ, 1, &bh);
- wait_on_buffer(bh);
- if (!buffer_uptodate(bh)) {
- err = -EIO;
+ err = bh_read(bh, 0);
+ if (err < 0)
goto out;
- }
zero_user_segment(page, off + voff, off + block_size);
}
}
--
2.31.1

2022-09-01 13:26:25

by Zhang Yi

Subject: [PATCH v2 08/14] ocfs2: replace ll_rw_block()

ll_rw_block() is not safe for the sync read path because it cannot
guarantee that read IO is submitted if the buffer is locked. We could
get a false-positive EIO after wait_on_buffer() if the buffer has been
locked by others. So stop using ll_rw_block() in ocfs2.

Signed-off-by: Zhang Yi <[email protected]>
---
fs/ocfs2/aops.c | 2 +-
fs/ocfs2/super.c | 4 +---
2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
index af4157f61927..1d65f6ef00ca 100644
--- a/fs/ocfs2/aops.c
+++ b/fs/ocfs2/aops.c
@@ -636,7 +636,7 @@ int ocfs2_map_page_blocks(struct page *page, u64 *p_blkno,
!buffer_new(bh) &&
ocfs2_should_read_blk(inode, page, block_start) &&
(block_start < from || block_end > to)) {
- ll_rw_block(REQ_OP_READ, 1, &bh);
+ bh_read_nowait(bh, 0);
*wait_bh++=bh;
}

diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
index e2cc9eec287c..26b4c2bfee49 100644
--- a/fs/ocfs2/super.c
+++ b/fs/ocfs2/super.c
@@ -1764,9 +1764,7 @@ static int ocfs2_get_sector(struct super_block *sb,
if (!buffer_dirty(*bh))
clear_buffer_uptodate(*bh);
unlock_buffer(*bh);
- ll_rw_block(REQ_OP_READ, 1, bh);
- wait_on_buffer(*bh);
- if (!buffer_uptodate(*bh)) {
+ if (bh_read(*bh, 0) < 0) {
mlog_errno(-EIO);
brelse(*bh);
*bh = NULL;
--
2.31.1

2022-09-01 13:26:53

by Zhang Yi

Subject: [PATCH v2 09/14] reiserfs: replace ll_rw_block()

ll_rw_block() is not safe for the sync read/write path because it
cannot guarantee that read/write IO is submitted if the buffer is
locked. We could get a false-positive EIO after wait_on_buffer() on
the read path if the buffer has been locked by others. So stop using
ll_rw_block() in reiserfs. We also switch to the new
bh_readahead_batch() helper for the buffer array readahead path.

Signed-off-by: Zhang Yi <[email protected]>
---
fs/reiserfs/journal.c | 11 ++++++-----
fs/reiserfs/stree.c | 4 ++--
fs/reiserfs/super.c | 4 +---
3 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/fs/reiserfs/journal.c b/fs/reiserfs/journal.c
index 94addfcefede..9f62da7471c9 100644
--- a/fs/reiserfs/journal.c
+++ b/fs/reiserfs/journal.c
@@ -868,7 +868,7 @@ static int write_ordered_buffers(spinlock_t * lock,
*/
if (buffer_dirty(bh) && unlikely(bh->b_page->mapping == NULL)) {
spin_unlock(lock);
- ll_rw_block(REQ_OP_WRITE, 1, &bh);
+ write_dirty_buffer(bh, 0);
spin_lock(lock);
}
put_bh(bh);
@@ -1054,7 +1054,7 @@ static int flush_commit_list(struct super_block *s,
if (tbh) {
if (buffer_dirty(tbh)) {
depth = reiserfs_write_unlock_nested(s);
- ll_rw_block(REQ_OP_WRITE, 1, &tbh);
+ write_dirty_buffer(tbh, 0);
reiserfs_write_lock_nested(s, depth);
}
put_bh(tbh) ;
@@ -2240,7 +2240,7 @@ static int journal_read_transaction(struct super_block *sb,
}
}
/* read in the log blocks, memcpy to the corresponding real block */
- ll_rw_block(REQ_OP_READ, get_desc_trans_len(desc), log_blocks);
+ bh_read_batch(get_desc_trans_len(desc), log_blocks);
for (i = 0; i < get_desc_trans_len(desc); i++) {

wait_on_buffer(log_blocks[i]);
@@ -2342,10 +2342,11 @@ static struct buffer_head *reiserfs_breada(struct block_device *dev,
} else
bhlist[j++] = bh;
}
- ll_rw_block(REQ_OP_READ, j, bhlist);
+ bh = bhlist[0];
+ bh_read_nowait(bh, 0);
+ bh_readahead_batch(j - 1, &bhlist[1], 0);
for (i = 1; i < j; i++)
brelse(bhlist[i]);
- bh = bhlist[0];
wait_on_buffer(bh);
if (buffer_uptodate(bh))
return bh;
diff --git a/fs/reiserfs/stree.c b/fs/reiserfs/stree.c
index 9a293609a022..84c12a1947b2 100644
--- a/fs/reiserfs/stree.c
+++ b/fs/reiserfs/stree.c
@@ -579,7 +579,7 @@ static int search_by_key_reada(struct super_block *s,
if (!buffer_uptodate(bh[j])) {
if (depth == -1)
depth = reiserfs_write_unlock_nested(s);
- ll_rw_block(REQ_OP_READ | REQ_RAHEAD, 1, bh + j);
+ bh_readahead(bh[j], REQ_RAHEAD);
}
brelse(bh[j]);
}
@@ -685,7 +685,7 @@ int search_by_key(struct super_block *sb, const struct cpu_key *key,
if (!buffer_uptodate(bh) && depth == -1)
depth = reiserfs_write_unlock_nested(sb);

- ll_rw_block(REQ_OP_READ, 1, &bh);
+ bh_read_nowait(bh, 0);
wait_on_buffer(bh);

if (depth != -1)
diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c
index c88cd2ce0665..a5ffec0c7517 100644
--- a/fs/reiserfs/super.c
+++ b/fs/reiserfs/super.c
@@ -1702,9 +1702,7 @@ static int read_super_block(struct super_block *s, int offset)
/* after journal replay, reread all bitmap and super blocks */
static int reread_meta_blocks(struct super_block *s)
{
- ll_rw_block(REQ_OP_READ, 1, &SB_BUFFER_WITH_SB(s));
- wait_on_buffer(SB_BUFFER_WITH_SB(s));
- if (!buffer_uptodate(SB_BUFFER_WITH_SB(s))) {
+ if (bh_read(SB_BUFFER_WITH_SB(s), 0) < 0) {
reiserfs_warning(s, "reiserfs-2504", "error reading the super");
return 1;
}
--
2.31.1

2022-09-01 13:27:05

by Zhang Yi

Subject: [PATCH v2 10/14] udf: replace ll_rw_block()

ll_rw_block() is not safe for the sync read path because it cannot
guarantee that read IO is submitted if the buffer is locked. We could
get a false-positive EIO after wait_on_buffer() if the buffer has been
locked by others. So stop using ll_rw_block(). We also switch to the
new bh_readahead_batch() helper for the buffer array readahead path.

Signed-off-by: Zhang Yi <[email protected]>
---
fs/udf/dir.c | 2 +-
fs/udf/directory.c | 2 +-
fs/udf/inode.c | 8 +-------
3 files changed, 3 insertions(+), 9 deletions(-)

diff --git a/fs/udf/dir.c b/fs/udf/dir.c
index cad3772f9dbe..be640f4b2f2c 100644
--- a/fs/udf/dir.c
+++ b/fs/udf/dir.c
@@ -130,7 +130,7 @@ static int udf_readdir(struct file *file, struct dir_context *ctx)
brelse(tmp);
}
if (num) {
- ll_rw_block(REQ_OP_READ | REQ_RAHEAD, num, bha);
+ bh_readahead_batch(num, bha, REQ_RAHEAD);
for (i = 0; i < num; i++)
brelse(bha[i]);
}
diff --git a/fs/udf/directory.c b/fs/udf/directory.c
index a2adf6293093..16bcf2c6b8b3 100644
--- a/fs/udf/directory.c
+++ b/fs/udf/directory.c
@@ -89,7 +89,7 @@ struct fileIdentDesc *udf_fileident_read(struct inode *dir, loff_t *nf_pos,
brelse(tmp);
}
if (num) {
- ll_rw_block(REQ_OP_READ | REQ_RAHEAD, num, bha);
+ bh_readahead_batch(num, bha, REQ_RAHEAD);
for (i = 0; i < num; i++)
brelse(bha[i]);
}
diff --git a/fs/udf/inode.c b/fs/udf/inode.c
index 8d06daed549f..dce6ae9ae306 100644
--- a/fs/udf/inode.c
+++ b/fs/udf/inode.c
@@ -1211,13 +1211,7 @@ struct buffer_head *udf_bread(struct inode *inode, udf_pblk_t block,
if (!bh)
return NULL;

- if (buffer_uptodate(bh))
- return bh;
-
- ll_rw_block(REQ_OP_READ, 1, &bh);
-
- wait_on_buffer(bh);
- if (buffer_uptodate(bh))
+ if (bh_read(bh, 0) >= 0)
return bh;

brelse(bh);
--
2.31.1

2022-09-01 13:27:17

by Zhang Yi

Subject: [PATCH v2 14/14] fs/buffer: remove bh_submit_read() helper

bh_submit_read() has no users anymore, just remove it.

Signed-off-by: Zhang Yi <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
---
fs/buffer.c | 25 -------------------------
include/linux/buffer_head.h | 1 -
2 files changed, 26 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 2cccc7586b99..b4c9fff3ab6c 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -3025,31 +3025,6 @@ void __bh_read_batch(int nr, struct buffer_head *bhs[],
}
EXPORT_SYMBOL(__bh_read_batch);

-/**
- * bh_submit_read - Submit a locked buffer for reading
- * @bh: struct buffer_head
- *
- * Returns zero on success and -EIO on error.
- */
-int bh_submit_read(struct buffer_head *bh)
-{
- BUG_ON(!buffer_locked(bh));
-
- if (buffer_uptodate(bh)) {
- unlock_buffer(bh);
- return 0;
- }
-
- get_bh(bh);
- bh->b_end_io = end_buffer_read_sync;
- submit_bh(REQ_OP_READ, bh);
- wait_on_buffer(bh);
- if (buffer_uptodate(bh))
- return 0;
- return -EIO;
-}
-EXPORT_SYMBOL(bh_submit_read);
-
void __init buffer_init(void)
{
unsigned long nrpages;
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index b415d8bc2a09..9b6556d3f110 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -230,7 +230,6 @@ int submit_bh(blk_opf_t, struct buffer_head *);
void write_boundary_block(struct block_device *bdev,
sector_t bblock, unsigned blocksize);
int bh_uptodate_or_lock(struct buffer_head *bh);
-int bh_submit_read(struct buffer_head *bh);
int __bh_read(struct buffer_head *bh, blk_opf_t op_flags, bool wait);
void __bh_read_batch(int nr, struct buffer_head *bhs[],
blk_opf_t op_flags, bool force_lock);
--
2.31.1

2022-09-01 13:27:45

by Zhang Yi

Subject: [PATCH v2 13/14] ext2: replace bh_submit_read() helper with bh_read_locked()

bh_submit_read() and the uptodate check logic in bh_uptodate_or_lock()
have been integrated into the bh_read() helper, so switch to using it
directly.
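The caller-side pattern this conversion relies on can be sketched as a
small model. This is a user-space sketch with invented model_* names,
not the real ext2 code; -5 stands in for -EIO:

```c
#include <stdbool.h>

struct model_bh {
	bool uptodate;
	bool io_ok;	/* would the device read succeed? */
};

/* Same bh_read() model as in patch 2: 1 = already uptodate,
 * 0 = read submitted and succeeded, -5 (-EIO) = read failed. */
static int model_bh_read(struct model_bh *bh)
{
	if (bh->uptodate)
		return 1;
	bh->uptodate = bh->io_ok;
	return bh->uptodate ? 0 : -5;
}

/* Modeled read_block_bitmap()-style caller after the conversion */
static const char *model_read_block_bitmap(struct model_bh *bh)
{
	int ret = model_bh_read(bh);

	if (ret > 0)
		return "cached";	/* was already uptodate */
	if (ret < 0)
		return "error";		/* would brelse() and report */
	return "read";			/* freshly read from disk */
}
```

The ret > 0 branch is why bh_read() gained the return value 1 in v2 of
the series: the fast path and the freshly-read path stay distinct.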

Signed-off-by: Zhang Yi <[email protected]>
---
fs/ext2/balloc.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/fs/ext2/balloc.c b/fs/ext2/balloc.c
index c17ccc19b938..5dc0a31f4a08 100644
--- a/fs/ext2/balloc.c
+++ b/fs/ext2/balloc.c
@@ -126,6 +126,7 @@ read_block_bitmap(struct super_block *sb, unsigned int block_group)
struct ext2_group_desc * desc;
struct buffer_head * bh = NULL;
ext2_fsblk_t bitmap_blk;
+ int ret;

desc = ext2_get_group_desc(sb, block_group, NULL);
if (!desc)
@@ -139,10 +140,10 @@ read_block_bitmap(struct super_block *sb, unsigned int block_group)
block_group, le32_to_cpu(desc->bg_block_bitmap));
return NULL;
}
- if (likely(bh_uptodate_or_lock(bh)))
+ ret = bh_read(bh, 0);
+ if (ret > 0)
return bh;
-
- if (bh_submit_read(bh) < 0) {
+ if (ret < 0) {
brelse(bh);
ext2_error(sb, __func__,
"Cannot read block bitmap - "
--
2.31.1

2022-09-01 13:28:09

by Zhang Yi

Subject: [PATCH v2 12/14] fs/buffer: remove ll_rw_block() helper

Now that all ll_rw_block() users have been replaced with the new safe
helpers, just remove it.

Signed-off-by: Zhang Yi <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
---
fs/buffer.c | 63 +++----------------------------------
include/linux/buffer_head.h | 1 -
2 files changed, 4 insertions(+), 60 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index aec568b3ae52..2cccc7586b99 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -152,7 +152,7 @@ static void __end_buffer_read_notouch(struct buffer_head *bh, int uptodate)

/*
* Default synchronous end-of-IO handler.. Just mark it up-to-date and
- * unlock the buffer. This is what ll_rw_block uses too.
+ * unlock the buffer.
*/
void end_buffer_read_sync(struct buffer_head *bh, int uptodate)
{
@@ -491,8 +491,8 @@ int inode_has_buffers(struct inode *inode)
* all already-submitted IO to complete, but does not queue any new
* writes to the disk.
*
- * To do O_SYNC writes, just queue the buffer writes with ll_rw_block as
- * you dirty the buffers, and then use osync_inode_buffers to wait for
+ * To do O_SYNC writes, just queue the buffer writes with write_dirty_buffer
+ * as you dirty the buffers, and then use osync_inode_buffers to wait for
* completion. Any other dirty buffers which are not yet queued for
* write will not be flushed to disk by the osync.
*/
@@ -1806,7 +1806,7 @@ int __block_write_full_page(struct inode *inode, struct page *page,
/*
* The page was marked dirty, but the buffers were
* clean. Someone wrote them back by hand with
- * ll_rw_block/submit_bh. A rare case.
+ * write_dirty_buffer/submit_bh. A rare case.
*/
end_page_writeback(page);

@@ -2713,61 +2713,6 @@ int submit_bh(blk_opf_t opf, struct buffer_head *bh)
}
EXPORT_SYMBOL(submit_bh);

-/**
- * ll_rw_block: low-level access to block devices (DEPRECATED)
- * @opf: block layer request operation and flags.
- * @nr: number of &struct buffer_heads in the array
- * @bhs: array of pointers to &struct buffer_head
- *
- * ll_rw_block() takes an array of pointers to &struct buffer_heads, and
- * requests an I/O operation on them, either a %REQ_OP_READ or a %REQ_OP_WRITE.
- * @opf contains flags modifying the detailed I/O behavior, most notably
- * %REQ_RAHEAD.
- *
- * This function drops any buffer that it cannot get a lock on (with the
- * BH_Lock state bit), any buffer that appears to be clean when doing a write
- * request, and any buffer that appears to be up-to-date when doing read
- * request. Further it marks as clean buffers that are processed for
- * writing (the buffer cache won't assume that they are actually clean
- * until the buffer gets unlocked).
- *
- * ll_rw_block sets b_end_io to simple completion handler that marks
- * the buffer up-to-date (if appropriate), unlocks the buffer and wakes
- * any waiters.
- *
- * All of the buffers must be for the same device, and must also be a
- * multiple of the current approved size for the device.
- */
-void ll_rw_block(const blk_opf_t opf, int nr, struct buffer_head *bhs[])
-{
- const enum req_op op = opf & REQ_OP_MASK;
- int i;
-
- for (i = 0; i < nr; i++) {
- struct buffer_head *bh = bhs[i];
-
- if (!trylock_buffer(bh))
- continue;
- if (op == REQ_OP_WRITE) {
- if (test_clear_buffer_dirty(bh)) {
- bh->b_end_io = end_buffer_write_sync;
- get_bh(bh);
- submit_bh(opf, bh);
- continue;
- }
- } else {
- if (!buffer_uptodate(bh)) {
- bh->b_end_io = end_buffer_read_sync;
- get_bh(bh);
- submit_bh(opf, bh);
- continue;
- }
- }
- unlock_buffer(bh);
- }
-}
-EXPORT_SYMBOL(ll_rw_block);
-
void write_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags)
{
lock_buffer(bh);
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index 6d09785bed9f..b415d8bc2a09 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -223,7 +223,6 @@ struct buffer_head *alloc_buffer_head(gfp_t gfp_flags);
void free_buffer_head(struct buffer_head * bh);
void unlock_buffer(struct buffer_head *bh);
void __lock_buffer(struct buffer_head *bh);
-void ll_rw_block(blk_opf_t, int, struct buffer_head * bh[]);
int sync_dirty_buffer(struct buffer_head *bh);
int __sync_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags);
void write_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags);
--
2.31.1

2022-09-01 13:33:05

by Zhang Yi

Subject: [PATCH v2 06/14] jbd2: replace ll_rw_block()

ll_rw_block() is not safe for the sync read path because it cannot
guarantee that read IO is submitted if the buffer is locked. We could
get a false-positive EIO after wait_on_buffer() if the buffer has been
locked by others. So stop using ll_rw_block() in
journal_get_superblock(). We also switch to the new
bh_readahead_batch() helper for the buffer array readahead path.
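Why skipping locked buffers is acceptable here but not on the sync
path: readahead is advisory. A rough user-space model (invented
model_* names, not the kernel helper, which returns void):

```c
#include <stdbool.h>

struct model_bh {
	bool locked;	/* trylock would fail */
	bool uptodate;
};

/*
 * Models bh_readahead_batch(): best-effort, skips buffers that are
 * already uptodate or whose lock cannot be taken. Returns how many
 * reads were submitted, for illustration only.
 */
static int model_readahead_batch(struct model_bh *bhs[], int nr)
{
	int submitted = 0;

	for (int i = 0; i < nr; i++) {
		struct model_bh *bh = bhs[i];

		if (bh->uptodate)
			continue;
		if (bh->locked)		/* trylock failed: fine for readahead */
			continue;
		bh->uptodate = true;	/* model a completed async read */
		submitted++;
	}
	return submitted;
}
```

A caller that actually needs the data, like jread() above, must still
wait_on_buffer() and check uptodate itself; the batch helper makes no
correctness guarantee.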

Signed-off-by: Zhang Yi <[email protected]>
---
fs/jbd2/journal.c | 15 ++++++---------
fs/jbd2/recovery.c | 16 ++++++++++------
2 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
index 6350d3857c89..140b070471c0 100644
--- a/fs/jbd2/journal.c
+++ b/fs/jbd2/journal.c
@@ -1893,19 +1893,16 @@ static int journal_get_superblock(journal_t *journal)
{
struct buffer_head *bh;
journal_superblock_t *sb;
- int err = -EIO;
+ int err;

bh = journal->j_sb_buffer;

J_ASSERT(bh != NULL);
- if (!buffer_uptodate(bh)) {
- ll_rw_block(REQ_OP_READ, 1, &bh);
- wait_on_buffer(bh);
- if (!buffer_uptodate(bh)) {
- printk(KERN_ERR
- "JBD2: IO error reading journal superblock\n");
- goto out;
- }
+ err = bh_read(bh, 0);
+ if (err < 0) {
+ printk(KERN_ERR
+ "JBD2: IO error reading journal superblock\n");
+ goto out;
}

if (buffer_verified(bh))
diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
index f548479615c6..1f878c315b03 100644
--- a/fs/jbd2/recovery.c
+++ b/fs/jbd2/recovery.c
@@ -100,7 +100,7 @@ static int do_readahead(journal_t *journal, unsigned int start)
if (!buffer_uptodate(bh) && !buffer_locked(bh)) {
bufs[nbufs++] = bh;
if (nbufs == MAXBUF) {
- ll_rw_block(REQ_OP_READ, nbufs, bufs);
+ bh_readahead_batch(nbufs, bufs, 0);
journal_brelse_array(bufs, nbufs);
nbufs = 0;
}
@@ -109,7 +109,7 @@ static int do_readahead(journal_t *journal, unsigned int start)
}

if (nbufs)
- ll_rw_block(REQ_OP_READ, nbufs, bufs);
+ bh_readahead_batch(nbufs, bufs, 0);
err = 0;

failed:
@@ -152,9 +152,14 @@ static int jread(struct buffer_head **bhp, journal_t *journal,
return -ENOMEM;

if (!buffer_uptodate(bh)) {
- /* If this is a brand new buffer, start readahead.
- Otherwise, we assume we are already reading it. */
- if (!buffer_req(bh))
+ /*
+ * If this is a brand new buffer, start readahead.
+ * Otherwise, we assume we are already reading it.
+ */
+ bool need_readahead = !buffer_req(bh);
+
+ bh_read_nowait(bh, 0);
+ if (need_readahead)
do_readahead(journal, offset);
wait_on_buffer(bh);
}
@@ -687,7 +692,6 @@ static int do_one_pass(journal_t *journal,
mark_buffer_dirty(nbh);
BUFFER_TRACE(nbh, "marking uptodate");
++info->nr_replays;
- /* ll_rw_block(WRITE, 1, &nbh); */
unlock_buffer(nbh);
brelse(obh);
brelse(nbh);
--
2.31.1

2022-09-01 15:53:53

by Jan Kara

Subject: Re: [PATCH v2 02/14] fs/buffer: add some new buffer read helpers

On Thu 01-09-22 21:34:53, Zhang Yi wrote:
> The current ll_rw_block() helper is fragile because it assumes that a
> locked buffer is under IO submitted by whoever else holds the lock, so
> it skips any buffer whose lock it fails to take. That makes it safe
> only on the readahead path. Unfortunately, many filesystems still
> mistakenly use this helper on the sync metadata read path. There is no
> guarantee that the lock holder always submits IO (e.g.
> buffer_migrate_folio_norefs() after commit 88dbcbb3a484 ("blkdev:
> avoid migration stalls for blkdev pages")), which can lead to a
> false-positive -EIO when submitting read IO.
>
> This patch adds some friendly buffer read helpers to prepare for
> replacing ll_rw_block() and similar calls. Only the bh_readahead_[*]
> helpers may be used on the readahead paths.
>
> Signed-off-by: Zhang Yi <[email protected]>

Looks good to me. Feel free to add:

Reviewed-by: Jan Kara <[email protected]>

Honza

> ---
> fs/buffer.c | 65 +++++++++++++++++++++++++++++++++++++
> include/linux/buffer_head.h | 38 ++++++++++++++++++++++
> 2 files changed, 103 insertions(+)
>
> diff --git a/fs/buffer.c b/fs/buffer.c
> index a0b70b3239f3..a6bc769e665d 100644
> --- a/fs/buffer.c
> +++ b/fs/buffer.c
> @@ -3017,6 +3017,71 @@ int bh_uptodate_or_lock(struct buffer_head *bh)
> }
> EXPORT_SYMBOL(bh_uptodate_or_lock);
>
> +/**
> + * __bh_read - Submit read for a locked buffer
> + * @bh: struct buffer_head
> + * @op_flags: appending REQ_OP_* flags besides REQ_OP_READ
> + * @wait: wait until reading finish
> + *
> + * Returns zero on success or if @wait is false, and -EIO on error.
> + */
> +int __bh_read(struct buffer_head *bh, blk_opf_t op_flags, bool wait)
> +{
> + int ret = 0;
> +
> + BUG_ON(!buffer_locked(bh));
> +
> + get_bh(bh);
> + bh->b_end_io = end_buffer_read_sync;
> + submit_bh(REQ_OP_READ | op_flags, bh);
> + if (wait) {
> + wait_on_buffer(bh);
> + if (!buffer_uptodate(bh))
> + ret = -EIO;
> + }
> + return ret;
> +}
> +EXPORT_SYMBOL(__bh_read);
> +
> +/**
> + * __bh_read_batch - Submit read for a batch of unlocked buffers
> + * @nr: entry number of the buffer batch
> + * @bhs: a batch of struct buffer_head
> + * @op_flags: appending REQ_OP_* flags besides REQ_OP_READ
> + * @force_lock: wait to acquire the buffer lock if set, otherwise skip any
> + * buffer that cannot be locked.
> + *
> + * No return value; unreadable buffers are simply skipped.
> + */
> +void __bh_read_batch(int nr, struct buffer_head *bhs[],
> + blk_opf_t op_flags, bool force_lock)
> +{
> + int i;
> +
> + for (i = 0; i < nr; i++) {
> + struct buffer_head *bh = bhs[i];
> +
> + if (buffer_uptodate(bh))
> + continue;
> +
> + if (force_lock)
> + lock_buffer(bh);
> + else
> + if (!trylock_buffer(bh))
> + continue;
> +
> + if (buffer_uptodate(bh)) {
> + unlock_buffer(bh);
> + continue;
> + }
> +
> + bh->b_end_io = end_buffer_read_sync;
> + get_bh(bh);
> + submit_bh(REQ_OP_READ | op_flags, bh);
> + }
> +}
> +EXPORT_SYMBOL(__bh_read_batch);
> +
> /**
> * bh_submit_read - Submit a locked buffer for reading
> * @bh: struct buffer_head
> diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
> index c3863c417b00..6d09785bed9f 100644
> --- a/include/linux/buffer_head.h
> +++ b/include/linux/buffer_head.h
> @@ -232,6 +232,9 @@ void write_boundary_block(struct block_device *bdev,
> sector_t bblock, unsigned blocksize);
> int bh_uptodate_or_lock(struct buffer_head *bh);
> int bh_submit_read(struct buffer_head *bh);
> +int __bh_read(struct buffer_head *bh, blk_opf_t op_flags, bool wait);
> +void __bh_read_batch(int nr, struct buffer_head *bhs[],
> + blk_opf_t op_flags, bool force_lock);
>
> extern int buffer_heads_over_limit;
>
> @@ -399,6 +402,41 @@ static inline struct buffer_head *__getblk(struct block_device *bdev,
> return __getblk_gfp(bdev, block, size, __GFP_MOVABLE);
> }
>
> +static inline void bh_readahead(struct buffer_head *bh, blk_opf_t op_flags)
> +{
> + if (!buffer_uptodate(bh) && trylock_buffer(bh)) {
> + if (!buffer_uptodate(bh))
> + __bh_read(bh, op_flags, false);
> + else
> + unlock_buffer(bh);
> + }
> +}
> +
> +static inline void bh_read_nowait(struct buffer_head *bh, blk_opf_t op_flags)
> +{
> + if (!bh_uptodate_or_lock(bh))
> + __bh_read(bh, op_flags, false);
> +}
> +
> +/* Returns 1 if the buffer is uptodate, 0 on success, and -EIO on error. */
> +static inline int bh_read(struct buffer_head *bh, blk_opf_t op_flags)
> +{
> + if (bh_uptodate_or_lock(bh))
> + return 1;
> + return __bh_read(bh, op_flags, true);
> +}
> +
> +static inline void bh_read_batch(int nr, struct buffer_head *bhs[])
> +{
> + __bh_read_batch(nr, bhs, 0, true);
> +}
> +
> +static inline void bh_readahead_batch(int nr, struct buffer_head *bhs[],
> + blk_opf_t op_flags)
> +{
> + __bh_read_batch(nr, bhs, op_flags, false);
> +}
> +
> /**
> * __bread() - reads a specified block and returns the bh
> * @bdev: the block_device to read from
> --
> 2.31.1
>
--
Jan Kara <[email protected]>
SUSE Labs, CR

2022-09-01 15:57:02

by Jan Kara

[permalink] [raw]
Subject: Re: [PATCH v2 03/14] fs/buffer: replace ll_rw_block()

On Thu 01-09-22 21:34:54, Zhang Yi wrote:
> ll_rw_block() is not safe for the sync IO path because it skips buffers
> that have been locked by others, which can lead to a false positive EIO
> when submitting read IO. So stop using ll_rw_block() and switch to the
> new helpers, which guarantee that the buffer is locked and submit IO if
> needed.
>
> Signed-off-by: Zhang Yi <[email protected]>

Looks good. Feel free to add:

Reviewed-by: Jan Kara <[email protected]>

Honza

> ---
> fs/buffer.c | 12 +++++-------
> 1 file changed, 5 insertions(+), 7 deletions(-)
>
> diff --git a/fs/buffer.c b/fs/buffer.c
> index a6bc769e665d..aec568b3ae52 100644
> --- a/fs/buffer.c
> +++ b/fs/buffer.c
> @@ -562,7 +562,7 @@ void write_boundary_block(struct block_device *bdev,
> struct buffer_head *bh = __find_get_block(bdev, bblock + 1, blocksize);
> if (bh) {
> if (buffer_dirty(bh))
> - ll_rw_block(REQ_OP_WRITE, 1, &bh);
> + write_dirty_buffer(bh, 0);
> put_bh(bh);
> }
> }
> @@ -1342,7 +1342,7 @@ void __breadahead(struct block_device *bdev, sector_t block, unsigned size)
> {
> struct buffer_head *bh = __getblk(bdev, block, size);
> if (likely(bh)) {
> - ll_rw_block(REQ_OP_READ | REQ_RAHEAD, 1, &bh);
> + bh_readahead(bh, REQ_RAHEAD);
> brelse(bh);
> }
> }
> @@ -2022,7 +2022,7 @@ int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
> if (!buffer_uptodate(bh) && !buffer_delay(bh) &&
> !buffer_unwritten(bh) &&
> (block_start < from || block_end > to)) {
> - ll_rw_block(REQ_OP_READ, 1, &bh);
> + bh_read_nowait(bh, 0);
> *wait_bh++=bh;
> }
> }
> @@ -2582,11 +2582,9 @@ int block_truncate_page(struct address_space *mapping,
> set_buffer_uptodate(bh);
>
> if (!buffer_uptodate(bh) && !buffer_delay(bh) && !buffer_unwritten(bh)) {
> - err = -EIO;
> - ll_rw_block(REQ_OP_READ, 1, &bh);
> - wait_on_buffer(bh);
> + err = bh_read(bh, 0);
> /* Uhhuh. Read error. Complain and punt. */
> - if (!buffer_uptodate(bh))
> + if (err < 0)
> goto unlock;
> }
>
> --
> 2.31.1
>
--
Jan Kara <[email protected]>
SUSE Labs, CR

2022-09-01 15:57:48

by Jan Kara

[permalink] [raw]
Subject: Re: [PATCH v2 04/14] gfs2: replace ll_rw_block()

On Thu 01-09-22 21:34:55, Zhang Yi wrote:
> ll_rw_block() is not safe for the sync read path because it cannot
> guarantee that read IO is always submitted if the buffer has been
> locked, so stop using it. We also switch to the new bh_readahead()
> helper for the readahead path.
>
> Signed-off-by: Zhang Yi <[email protected]>

Looks good to me. Feel free to add:

Reviewed-by: Jan Kara <[email protected]>

Honza

> ---
> fs/gfs2/meta_io.c | 7 ++-----
> fs/gfs2/quota.c | 8 ++------
> 2 files changed, 4 insertions(+), 11 deletions(-)
>
> diff --git a/fs/gfs2/meta_io.c b/fs/gfs2/meta_io.c
> index 7e70e0ba5a6c..6ed728aae9a5 100644
> --- a/fs/gfs2/meta_io.c
> +++ b/fs/gfs2/meta_io.c
> @@ -525,8 +525,7 @@ struct buffer_head *gfs2_meta_ra(struct gfs2_glock *gl, u64 dblock, u32 extlen)
>
> if (buffer_uptodate(first_bh))
> goto out;
> - if (!buffer_locked(first_bh))
> - ll_rw_block(REQ_OP_READ | REQ_META | REQ_PRIO, 1, &first_bh);
> + bh_read_nowait(first_bh, REQ_META | REQ_PRIO);
>
> dblock++;
> extlen--;
> @@ -534,9 +533,7 @@ struct buffer_head *gfs2_meta_ra(struct gfs2_glock *gl, u64 dblock, u32 extlen)
> while (extlen) {
> bh = gfs2_getbuf(gl, dblock, CREATE);
>
> - if (!buffer_uptodate(bh) && !buffer_locked(bh))
> - ll_rw_block(REQ_OP_READ | REQ_RAHEAD | REQ_META |
> - REQ_PRIO, 1, &bh);
> + bh_readahead(bh, REQ_RAHEAD | REQ_META | REQ_PRIO);
> brelse(bh);
> dblock++;
> extlen--;
> diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
> index f201eaf59d0d..1ed17226d9ed 100644
> --- a/fs/gfs2/quota.c
> +++ b/fs/gfs2/quota.c
> @@ -745,12 +745,8 @@ static int gfs2_write_buf_to_page(struct gfs2_inode *ip, unsigned long index,
> }
> if (PageUptodate(page))
> set_buffer_uptodate(bh);
> - if (!buffer_uptodate(bh)) {
> - ll_rw_block(REQ_OP_READ | REQ_META | REQ_PRIO, 1, &bh);
> - wait_on_buffer(bh);
> - if (!buffer_uptodate(bh))
> - goto unlock_out;
> - }
> + if (bh_read(bh, REQ_META | REQ_PRIO) < 0)
> + goto unlock_out;
> if (gfs2_is_jdata(ip))
> gfs2_trans_add_data(ip->i_gl, bh);
> else
> --
> 2.31.1
>
--
Jan Kara <[email protected]>
SUSE Labs, CR

2022-09-01 15:59:17

by Jan Kara

[permalink] [raw]
Subject: Re: [PATCH v2 07/14] ntfs3: replace ll_rw_block()

On Thu 01-09-22 21:34:58, Zhang Yi wrote:
> ll_rw_block() is not safe for the sync read path because it cannot
> guarantee that read IO is submitted if the buffer has been locked. We
> could get a false positive EIO after wait_on_buffer() if the buffer has
> been locked by others. So stop using ll_rw_block() in
> ntfs_get_block_vbo().
>
> Signed-off-by: Zhang Yi <[email protected]>

Looks good. Feel free to add:

Reviewed-by: Jan Kara <[email protected]>

Honza

> ---
> fs/ntfs3/inode.c | 7 ++-----
> 1 file changed, 2 insertions(+), 5 deletions(-)
>
> diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
> index 51363d4e8636..cadbfa111539 100644
> --- a/fs/ntfs3/inode.c
> +++ b/fs/ntfs3/inode.c
> @@ -630,12 +630,9 @@ static noinline int ntfs_get_block_vbo(struct inode *inode, u64 vbo,
> bh->b_size = block_size;
> off = vbo & (PAGE_SIZE - 1);
> set_bh_page(bh, page, off);
> - ll_rw_block(REQ_OP_READ, 1, &bh);
> - wait_on_buffer(bh);
> - if (!buffer_uptodate(bh)) {
> - err = -EIO;
> + err = bh_read(bh, 0);
> + if (err < 0)
> goto out;
> - }
> zero_user_segment(page, off + voff, off + block_size);
> }
> }
> --
> 2.31.1
>
--
Jan Kara <[email protected]>
SUSE Labs, CR

2022-09-01 16:00:46

by Jan Kara

[permalink] [raw]
Subject: Re: [PATCH v2 06/14] jbd2: replace ll_rw_block()

On Thu 01-09-22 21:34:57, Zhang Yi wrote:
> ll_rw_block() is not safe for the sync read path because it cannot
> guarantee that read IO is submitted if the buffer has been locked. We
> could get a false positive EIO after wait_on_buffer() if the buffer has
> been locked by others. So stop using ll_rw_block() in
> journal_get_superblock(). We also switch to the new bh_readahead_batch()
> helper for the buffer array readahead path.
>
> Signed-off-by: Zhang Yi <[email protected]>

Looks good to me. Feel free to add:

Reviewed-by: Jan Kara <[email protected]>

Honza

> ---
> fs/jbd2/journal.c | 15 ++++++---------
> fs/jbd2/recovery.c | 16 ++++++++++------
> 2 files changed, 16 insertions(+), 15 deletions(-)
>
> diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
> index 6350d3857c89..140b070471c0 100644
> --- a/fs/jbd2/journal.c
> +++ b/fs/jbd2/journal.c
> @@ -1893,19 +1893,16 @@ static int journal_get_superblock(journal_t *journal)
> {
> struct buffer_head *bh;
> journal_superblock_t *sb;
> - int err = -EIO;
> + int err;
>
> bh = journal->j_sb_buffer;
>
> J_ASSERT(bh != NULL);
> - if (!buffer_uptodate(bh)) {
> - ll_rw_block(REQ_OP_READ, 1, &bh);
> - wait_on_buffer(bh);
> - if (!buffer_uptodate(bh)) {
> - printk(KERN_ERR
> - "JBD2: IO error reading journal superblock\n");
> - goto out;
> - }
> + err = bh_read(bh, 0);
> + if (err < 0) {
> + printk(KERN_ERR
> + "JBD2: IO error reading journal superblock\n");
> + goto out;
> }
>
> if (buffer_verified(bh))
> diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
> index f548479615c6..1f878c315b03 100644
> --- a/fs/jbd2/recovery.c
> +++ b/fs/jbd2/recovery.c
> @@ -100,7 +100,7 @@ static int do_readahead(journal_t *journal, unsigned int start)
> if (!buffer_uptodate(bh) && !buffer_locked(bh)) {
> bufs[nbufs++] = bh;
> if (nbufs == MAXBUF) {
> - ll_rw_block(REQ_OP_READ, nbufs, bufs);
> + bh_readahead_batch(nbufs, bufs, 0);
> journal_brelse_array(bufs, nbufs);
> nbufs = 0;
> }
> @@ -109,7 +109,7 @@ static int do_readahead(journal_t *journal, unsigned int start)
> }
>
> if (nbufs)
> - ll_rw_block(REQ_OP_READ, nbufs, bufs);
> + bh_readahead_batch(nbufs, bufs, 0);
> err = 0;
>
> failed:
> @@ -152,9 +152,14 @@ static int jread(struct buffer_head **bhp, journal_t *journal,
> return -ENOMEM;
>
> if (!buffer_uptodate(bh)) {
> - /* If this is a brand new buffer, start readahead.
> - Otherwise, we assume we are already reading it. */
> - if (!buffer_req(bh))
> + /*
> + * If this is a brand new buffer, start readahead.
> + * Otherwise, we assume we are already reading it.
> + */
> + bool need_readahead = !buffer_req(bh);
> +
> + bh_read_nowait(bh, 0);
> + if (need_readahead)
> do_readahead(journal, offset);
> wait_on_buffer(bh);
> }
> @@ -687,7 +692,6 @@ static int do_one_pass(journal_t *journal,
> mark_buffer_dirty(nbh);
> BUFFER_TRACE(nbh, "marking uptodate");
> ++info->nr_replays;
> - /* ll_rw_block(WRITE, 1, &nbh); */
> unlock_buffer(nbh);
> brelse(obh);
> brelse(nbh);
> --
> 2.31.1
>
--
Jan Kara <[email protected]>
SUSE Labs, CR

2022-09-01 16:01:47

by Jan Kara

[permalink] [raw]
Subject: Re: [PATCH v2 08/14] ocfs2: replace ll_rw_block()

On Thu 01-09-22 21:34:59, Zhang Yi wrote:
> ll_rw_block() is not safe for the sync read path because it cannot
> guarantee that read IO is submitted if the buffer has been locked. We
> could get a false positive EIO after wait_on_buffer() if the buffer has
> been locked by others. So stop using ll_rw_block() in ocfs2.
>
> Signed-off-by: Zhang Yi <[email protected]>

Looks good to me. Feel free to add:

Reviewed-by: Jan Kara <[email protected]>

Honza

> ---
> fs/ocfs2/aops.c | 2 +-
> fs/ocfs2/super.c | 4 +---
> 2 files changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
> index af4157f61927..1d65f6ef00ca 100644
> --- a/fs/ocfs2/aops.c
> +++ b/fs/ocfs2/aops.c
> @@ -636,7 +636,7 @@ int ocfs2_map_page_blocks(struct page *page, u64 *p_blkno,
> !buffer_new(bh) &&
> ocfs2_should_read_blk(inode, page, block_start) &&
> (block_start < from || block_end > to)) {
> - ll_rw_block(REQ_OP_READ, 1, &bh);
> + bh_read_nowait(bh, 0);
> *wait_bh++=bh;
> }
>
> diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
> index e2cc9eec287c..26b4c2bfee49 100644
> --- a/fs/ocfs2/super.c
> +++ b/fs/ocfs2/super.c
> @@ -1764,9 +1764,7 @@ static int ocfs2_get_sector(struct super_block *sb,
> if (!buffer_dirty(*bh))
> clear_buffer_uptodate(*bh);
> unlock_buffer(*bh);
> - ll_rw_block(REQ_OP_READ, 1, bh);
> - wait_on_buffer(*bh);
> - if (!buffer_uptodate(*bh)) {
> + if (bh_read(*bh, 0) < 0) {
> mlog_errno(-EIO);
> brelse(*bh);
> *bh = NULL;
> --
> 2.31.1
>
--
Jan Kara <[email protected]>
SUSE Labs, CR

2022-09-01 16:04:17

by Jan Kara

[permalink] [raw]
Subject: Re: [PATCH v2 09/14] reiserfs: replace ll_rw_block()

On Thu 01-09-22 21:35:00, Zhang Yi wrote:
> ll_rw_block() is not safe for the sync read/write path because it cannot
> guarantee that read/write IO is submitted if the buffer has been locked.
> We could get a false positive EIO after wait_on_buffer() in the read
> path if the buffer has been locked by others. So stop using
> ll_rw_block() in reiserfs. We also switch to the new bh_readahead_batch()
> helper for the buffer array readahead path.
>
> Signed-off-by: Zhang Yi <[email protected]>


Looks good to me. Feel free to add:

Reviewed-by: Jan Kara <[email protected]>

Honza

> ---
> fs/reiserfs/journal.c | 11 ++++++-----
> fs/reiserfs/stree.c | 4 ++--
> fs/reiserfs/super.c | 4 +---
> 3 files changed, 9 insertions(+), 10 deletions(-)
>
> diff --git a/fs/reiserfs/journal.c b/fs/reiserfs/journal.c
> index 94addfcefede..9f62da7471c9 100644
> --- a/fs/reiserfs/journal.c
> +++ b/fs/reiserfs/journal.c
> @@ -868,7 +868,7 @@ static int write_ordered_buffers(spinlock_t * lock,
> */
> if (buffer_dirty(bh) && unlikely(bh->b_page->mapping == NULL)) {
> spin_unlock(lock);
> - ll_rw_block(REQ_OP_WRITE, 1, &bh);
> + write_dirty_buffer(bh, 0);
> spin_lock(lock);
> }
> put_bh(bh);
> @@ -1054,7 +1054,7 @@ static int flush_commit_list(struct super_block *s,
> if (tbh) {
> if (buffer_dirty(tbh)) {
> depth = reiserfs_write_unlock_nested(s);
> - ll_rw_block(REQ_OP_WRITE, 1, &tbh);
> + write_dirty_buffer(tbh, 0);
> reiserfs_write_lock_nested(s, depth);
> }
> put_bh(tbh) ;
> @@ -2240,7 +2240,7 @@ static int journal_read_transaction(struct super_block *sb,
> }
> }
> /* read in the log blocks, memcpy to the corresponding real block */
> - ll_rw_block(REQ_OP_READ, get_desc_trans_len(desc), log_blocks);
> + bh_read_batch(get_desc_trans_len(desc), log_blocks);
> for (i = 0; i < get_desc_trans_len(desc); i++) {
>
> wait_on_buffer(log_blocks[i]);
> @@ -2342,10 +2342,11 @@ static struct buffer_head *reiserfs_breada(struct block_device *dev,
> } else
> bhlist[j++] = bh;
> }
> - ll_rw_block(REQ_OP_READ, j, bhlist);
> + bh = bhlist[0];
> + bh_read_nowait(bh, 0);
> + bh_readahead_batch(j - 1, &bhlist[1], 0);
> for (i = 1; i < j; i++)
> brelse(bhlist[i]);
> - bh = bhlist[0];
> wait_on_buffer(bh);
> if (buffer_uptodate(bh))
> return bh;
> diff --git a/fs/reiserfs/stree.c b/fs/reiserfs/stree.c
> index 9a293609a022..84c12a1947b2 100644
> --- a/fs/reiserfs/stree.c
> +++ b/fs/reiserfs/stree.c
> @@ -579,7 +579,7 @@ static int search_by_key_reada(struct super_block *s,
> if (!buffer_uptodate(bh[j])) {
> if (depth == -1)
> depth = reiserfs_write_unlock_nested(s);
> - ll_rw_block(REQ_OP_READ | REQ_RAHEAD, 1, bh + j);
> + bh_readahead(bh[j], REQ_RAHEAD);
> }
> brelse(bh[j]);
> }
> @@ -685,7 +685,7 @@ int search_by_key(struct super_block *sb, const struct cpu_key *key,
> if (!buffer_uptodate(bh) && depth == -1)
> depth = reiserfs_write_unlock_nested(sb);
>
> - ll_rw_block(REQ_OP_READ, 1, &bh);
> + bh_read_nowait(bh, 0);
> wait_on_buffer(bh);
>
> if (depth != -1)
> diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c
> index c88cd2ce0665..a5ffec0c7517 100644
> --- a/fs/reiserfs/super.c
> +++ b/fs/reiserfs/super.c
> @@ -1702,9 +1702,7 @@ static int read_super_block(struct super_block *s, int offset)
> /* after journal replay, reread all bitmap and super blocks */
> static int reread_meta_blocks(struct super_block *s)
> {
> - ll_rw_block(REQ_OP_READ, 1, &SB_BUFFER_WITH_SB(s));
> - wait_on_buffer(SB_BUFFER_WITH_SB(s));
> - if (!buffer_uptodate(SB_BUFFER_WITH_SB(s))) {
> + if (bh_read(SB_BUFFER_WITH_SB(s), 0) < 0) {
> reiserfs_warning(s, "reiserfs-2504", "error reading the super");
> return 1;
> }
> --
> 2.31.1
>
--
Jan Kara <[email protected]>
SUSE Labs, CR

2022-09-01 16:05:37

by Jan Kara

[permalink] [raw]
Subject: Re: [PATCH v2 10/14] udf: replace ll_rw_block()

On Thu 01-09-22 21:35:01, Zhang Yi wrote:
> ll_rw_block() is not safe for the sync read path because it cannot
> guarantee that read IO is submitted if the buffer has been locked. We
> could get a false positive EIO after wait_on_buffer() if the buffer has
> been locked by others. So stop using ll_rw_block(). We also switch to
> the new bh_readahead_batch() helper for the buffer array readahead path.
>
> Signed-off-by: Zhang Yi <[email protected]>

Looks good to me.

Honza

> ---
> fs/udf/dir.c | 2 +-
> fs/udf/directory.c | 2 +-
> fs/udf/inode.c | 8 +-------
> 3 files changed, 3 insertions(+), 9 deletions(-)
>
> diff --git a/fs/udf/dir.c b/fs/udf/dir.c
> index cad3772f9dbe..be640f4b2f2c 100644
> --- a/fs/udf/dir.c
> +++ b/fs/udf/dir.c
> @@ -130,7 +130,7 @@ static int udf_readdir(struct file *file, struct dir_context *ctx)
> brelse(tmp);
> }
> if (num) {
> - ll_rw_block(REQ_OP_READ | REQ_RAHEAD, num, bha);
> + bh_readahead_batch(num, bha, REQ_RAHEAD);
> for (i = 0; i < num; i++)
> brelse(bha[i]);
> }
> diff --git a/fs/udf/directory.c b/fs/udf/directory.c
> index a2adf6293093..16bcf2c6b8b3 100644
> --- a/fs/udf/directory.c
> +++ b/fs/udf/directory.c
> @@ -89,7 +89,7 @@ struct fileIdentDesc *udf_fileident_read(struct inode *dir, loff_t *nf_pos,
> brelse(tmp);
> }
> if (num) {
> - ll_rw_block(REQ_OP_READ | REQ_RAHEAD, num, bha);
> + bh_readahead_batch(num, bha, REQ_RAHEAD);
> for (i = 0; i < num; i++)
> brelse(bha[i]);
> }
> diff --git a/fs/udf/inode.c b/fs/udf/inode.c
> index 8d06daed549f..dce6ae9ae306 100644
> --- a/fs/udf/inode.c
> +++ b/fs/udf/inode.c
> @@ -1211,13 +1211,7 @@ struct buffer_head *udf_bread(struct inode *inode, udf_pblk_t block,
> if (!bh)
> return NULL;
>
> - if (buffer_uptodate(bh))
> - return bh;
> -
> - ll_rw_block(REQ_OP_READ, 1, &bh);
> -
> - wait_on_buffer(bh);
> - if (buffer_uptodate(bh))
> + if (bh_read(bh, 0) >= 0)
> return bh;
>
> brelse(bh);
> --
> 2.31.1
>
--
Jan Kara <[email protected]>
SUSE Labs, CR

2022-09-01 16:06:18

by Jan Kara

[permalink] [raw]
Subject: Re: [PATCH v2 13/14] ext2: replace bh_submit_read() helper with bh_read_locked()

On Thu 01-09-22 21:35:04, Zhang Yi wrote:
> bh_submit_read() and the uptodate check logic in bh_uptodate_or_lock()
> have been integrated into the bh_read() helper, so switch to using it
> directly.
> Signed-off-by: Zhang Yi <[email protected]>

Looks good to me. Feel free to add:

Reviewed-by: Jan Kara <[email protected]>

Honza

> ---
> fs/ext2/balloc.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/fs/ext2/balloc.c b/fs/ext2/balloc.c
> index c17ccc19b938..5dc0a31f4a08 100644
> --- a/fs/ext2/balloc.c
> +++ b/fs/ext2/balloc.c
> @@ -126,6 +126,7 @@ read_block_bitmap(struct super_block *sb, unsigned int block_group)
> struct ext2_group_desc * desc;
> struct buffer_head * bh = NULL;
> ext2_fsblk_t bitmap_blk;
> + int ret;
>
> desc = ext2_get_group_desc(sb, block_group, NULL);
> if (!desc)
> @@ -139,10 +140,10 @@ read_block_bitmap(struct super_block *sb, unsigned int block_group)
> block_group, le32_to_cpu(desc->bg_block_bitmap));
> return NULL;
> }
> - if (likely(bh_uptodate_or_lock(bh)))
> + ret = bh_read(bh, 0);
> + if (ret > 0)
> return bh;
> -
> - if (bh_submit_read(bh) < 0) {
> + if (ret < 0) {
> brelse(bh);
> ext2_error(sb, __func__,
> "Cannot read block bitmap - "
> --
> 2.31.1
>
--
Jan Kara <[email protected]>
SUSE Labs, CR
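The ext2 hunk above depends on the three-way return convention that v2
added to bh_read(): 1 when the buffer was already uptodate (no IO
needed), 0 when a read was submitted and completed, negative on error.
A hypothetical userspace model of that convention and the caller
pattern (mock names, not the kernel implementation):

```c
/* Userspace model of the bh_read() return convention relied on by the
 * ext2 read_block_bitmap() conversion. Mock types and names only.
 */
#include <assert.h>
#include <stdbool.h>

struct mock_bh {
	bool uptodate;
	bool io_fails;
};

static int mock_bh_read(struct mock_bh *bh)
{
	if (bh->uptodate)
		return 1;		/* already uptodate: no IO submitted */
	if (bh->io_fails)
		return -5;		/* -EIO */
	bh->uptodate = true;		/* read submitted and "completed" */
	return 0;
}

/* Caller pattern from the ext2 hunk: ret > 0 fast path, ret < 0 error */
static bool read_bitmap(struct mock_bh *bh)
{
	int ret = mock_bh_read(bh);

	if (ret > 0)
		return true;		/* was already uptodate */
	if (ret < 0)
		return false;		/* report the read error */
	return true;			/* freshly read */
}
```

This is why the cover letter notes that bh_read() returns 1 for the
already-uptodate case: ext2 wants to return the buffer immediately on
the fast path without treating it like a completed read.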

2022-09-01 16:06:20

by Jan Kara

[permalink] [raw]
Subject: Re: [PATCH v2 11/14] ufs: replace ll_rw_block()

On Thu 01-09-22 21:35:02, Zhang Yi wrote:
> ll_rw_block() is not safe for the sync read path because it cannot
> guarantee that read IO is submitted if the buffer has been locked. We
> could get a false positive EIO after wait_on_buffer() if the buffer has
> been locked by others. So stop using ll_rw_block() in ufs.
>
> Signed-off-by: Zhang Yi <[email protected]>

Looks good to me. Feel free to add:

Reviewed-by: Jan Kara <[email protected]>

Honza

> ---
> fs/ufs/balloc.c | 12 ++++--------
> 1 file changed, 4 insertions(+), 8 deletions(-)
>
> diff --git a/fs/ufs/balloc.c b/fs/ufs/balloc.c
> index bd810d8239f2..2436e3f82147 100644
> --- a/fs/ufs/balloc.c
> +++ b/fs/ufs/balloc.c
> @@ -295,14 +295,10 @@ static void ufs_change_blocknr(struct inode *inode, sector_t beg,
>
> if (!buffer_mapped(bh))
> map_bh(bh, inode->i_sb, oldb + pos);
> - if (!buffer_uptodate(bh)) {
> - ll_rw_block(REQ_OP_READ, 1, &bh);
> - wait_on_buffer(bh);
> - if (!buffer_uptodate(bh)) {
> - ufs_error(inode->i_sb, __func__,
> - "read of block failed\n");
> - break;
> - }
> + if (bh_read(bh, 0) < 0) {
> + ufs_error(inode->i_sb, __func__,
> + "read of block failed\n");
> + break;
> }
>
> UFSD(" change from %llu to %llu, pos %u\n",
> --
> 2.31.1
>
--
Jan Kara <[email protected]>
SUSE Labs, CR

2022-09-02 00:33:25

by Al Viro

[permalink] [raw]
Subject: Re: [PATCH v2 13/14] ext2: replace bh_submit_read() helper with bh_read_locked()

On Thu, Sep 01, 2022 at 09:35:04PM +0800, Zhang Yi wrote:
> bh_submit_read() and the uptodate check logic in bh_uptodate_or_lock()
> have been integrated into the bh_read() helper, so switch to using it
> directly.

s/bh_read_locked/bh_read/ in the summary?

2022-09-02 01:35:45

by Zhang Yi

[permalink] [raw]
Subject: Re: [PATCH v2 13/14] ext2: replace bh_submit_read() helper with bh_read_locked()

On 2022/9/2 8:30, Al Viro wrote:
> On Thu, Sep 01, 2022 at 09:35:04PM +0800, Zhang Yi wrote:
>> bh_submit_read() and the uptodate check logic in bh_uptodate_or_lock()
>> have been integrated into the bh_read() helper, so switch to using it
>> directly.
>
> s/bh_read_locked/bh_read/ in the summary?
>

Sorry, I don't get your question. I have already replaced bh_read_locked()
with bh_read() in the commit message; there is no bh_read_locked() in the
whole patch. Am I missing something?

Thanks,
Yi.

2022-09-02 01:59:27

by Al Viro

[permalink] [raw]
Subject: Re: [PATCH v2 13/14] ext2: replace bh_submit_read() helper with bh_read_locked()

On Fri, Sep 02, 2022 at 09:32:53AM +0800, Zhang Yi wrote:
> On 2022/9/2 8:30, Al Viro wrote:
> > On Thu, Sep 01, 2022 at 09:35:04PM +0800, Zhang Yi wrote:
> >> bh_submit_read() and the uptodate check logic in bh_uptodate_or_lock()
> >> have been integrated into the bh_read() helper, so switch to using it
> >> directly.
> >
> > s/bh_read_locked/bh_read/ in the summary?
> >
>
> Sorry, I don't get your question. I have already replaced bh_read_locked()
> with bh_read() in the commit message; there is no bh_read_locked() in the
> whole patch. Am I missing something?

Take a look at the subject ;-)

2022-09-02 01:59:51

by Zhang Yi

[permalink] [raw]
Subject: Re: [PATCH v2 13/14] ext2: replace bh_submit_read() helper with bh_read_locked()



On 2022/9/2 9:51, Al Viro wrote:
> On Fri, Sep 02, 2022 at 09:32:53AM +0800, Zhang Yi wrote:
>> On 2022/9/2 8:30, Al Viro wrote:
>>> On Thu, Sep 01, 2022 at 09:35:04PM +0800, Zhang Yi wrote:
>>>> bh_submit_read() and the uptodate check logic in bh_uptodate_or_lock()
>>>> have been integrated into the bh_read() helper, so switch to using it
>>>> directly.
>>>
>>> s/bh_read_locked/bh_read/ in the summary?
>>>
>>
>> Sorry, I don't get your question. I have already replaced bh_read_locked()
>> with bh_read() in the commit message; there is no bh_read_locked() in the
>> whole patch. Am I missing something?
>
> Take a look at the subject ;-)

Oh, yes, I forgot to update the subject, will do.

Thanks,
Yi.

2022-09-05 05:56:24

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH v2 03/14] fs/buffer: replace ll_rw_block()

Looks good:

Reviewed-by: Christoph Hellwig <[email protected]>

2022-09-05 05:56:42

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH v2 04/14] gfs2: replace ll_rw_block()

Looks good:

Reviewed-by: Christoph Hellwig <[email protected]>

2022-09-05 05:57:02

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH v2 05/14] isofs: replace ll_rw_block()

Looks good:

Reviewed-by: Christoph Hellwig <[email protected]>

2022-09-05 05:57:19

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH v2 06/14] jbd2: replace ll_rw_block()

Looks good:

Reviewed-by: Christoph Hellwig <[email protected]>

2022-09-05 05:57:34

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH v2 09/14] reiserfs: replace ll_rw_block()

Looks good:

Reviewed-by: Christoph Hellwig <[email protected]>

2022-09-05 05:57:35

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH v2 07/14] ntfs3: replace ll_rw_block()

Looks good:

Reviewed-by: Christoph Hellwig <[email protected]>

2022-09-05 05:57:40

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH v2 08/14] ocfs2: replace ll_rw_block()

Looks good:

Reviewed-by: Christoph Hellwig <[email protected]>

2022-09-05 05:57:57

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH v2 10/14] udf: replace ll_rw_block()

Looks good:

Reviewed-by: Christoph Hellwig <[email protected]>

2022-09-05 05:58:03

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH v2 11/14] ufs: replace ll_rw_block()

Looks good:

Reviewed-by: Christoph Hellwig <[email protected]>

2022-09-05 05:59:05

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH v2 12/14] fs/buffer: remove ll_rw_block() helper

Looks good:

Reviewed-by: Christoph Hellwig <[email protected]>

2022-09-05 06:07:03

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH v2 14/14] fs/buffer: remove bh_submit_read() helper

Looks good:

Reviewed-by: Christoph Hellwig <[email protected]>

2022-09-05 08:28:04

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [PATCH v2 06/14] jbd2: replace ll_rw_block()

On Thu, Sep 01, 2022 at 09:34:57PM +0800, Zhang Yi wrote:
> ll_rw_block() is not safe for the sync read path because it cannot
> guarantee that read IO is submitted if the buffer has been locked. We
> could get a false positive EIO after wait_on_buffer() if the buffer has
> been locked by others. So stop using ll_rw_block() in
> journal_get_superblock(). We also switch to the new bh_readahead_batch()
> helper for the buffer array readahead path.
>
> Signed-off-by: Zhang Yi <[email protected]>

Thanks, looks good.

Reviewed-by: Theodore Ts'o <[email protected]>


- Ted

2022-09-06 01:29:53

by Joseph Qi

[permalink] [raw]
Subject: Re: [Ocfs2-devel] [PATCH v2 08/14] ocfs2: replace ll_rw_block()



On 9/1/22 9:34 PM, Zhang Yi via Ocfs2-devel wrote:
> ll_rw_block() is not safe for the sync read path because it cannot
> guarantee that read IO is submitted if the buffer has been locked. We
> could get a false positive EIO after wait_on_buffer() if the buffer has
> been locked by others. So stop using ll_rw_block() in ocfs2.
>
> Signed-off-by: Zhang Yi <[email protected]>

Looks good to me.
Reviewed-by: Joseph Qi <[email protected]>

> ---
> fs/ocfs2/aops.c | 2 +-
> fs/ocfs2/super.c | 4 +---
> 2 files changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
> index af4157f61927..1d65f6ef00ca 100644
> --- a/fs/ocfs2/aops.c
> +++ b/fs/ocfs2/aops.c
> @@ -636,7 +636,7 @@ int ocfs2_map_page_blocks(struct page *page, u64 *p_blkno,
> !buffer_new(bh) &&
> ocfs2_should_read_blk(inode, page, block_start) &&
> (block_start < from || block_end > to)) {
> - ll_rw_block(REQ_OP_READ, 1, &bh);
> + bh_read_nowait(bh, 0);
> *wait_bh++=bh;
> }
>
> diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
> index e2cc9eec287c..26b4c2bfee49 100644
> --- a/fs/ocfs2/super.c
> +++ b/fs/ocfs2/super.c
> @@ -1764,9 +1764,7 @@ static int ocfs2_get_sector(struct super_block *sb,
> if (!buffer_dirty(*bh))
> clear_buffer_uptodate(*bh);
> unlock_buffer(*bh);
> - ll_rw_block(REQ_OP_READ, 1, bh);
> - wait_on_buffer(*bh);
> - if (!buffer_uptodate(*bh)) {
> + if (bh_read(*bh, 0) < 0) {
> mlog_errno(-EIO);
> brelse(*bh);
> *bh = NULL;