Hi Jens, Al, Christoph,
Here are patches to provide support for extracting pages from an iov_iter
and to use this in the extraction functions in the block layer bio code.
The patches make the following changes:
(1) Change generic_file_splice_read() to no longer use ITER_PIPE for doing
a read from an O_DIRECT file fd, but rather to load up an ITER_BVEC
iterator with sufficient pages and use that instead. This avoids a
problem[2] whereby __iomap_dio_rw() calls iov_iter_revert() to shorten
an iterator when it races with truncation. The reversion causes the
pipe iterator to prematurely release the pages it was retaining -
despite the read still being in progress - causing memory corruption.
(2) Change generic_file_splice_read() to no longer use ITER_PIPE for doing
a read from a buffered file fd, but rather get pages directly from the
pagecache, using filemap_get_pages() to do all the readahead, reading,
waiting and extraction, and then feed the pages directly into the
pipe.
(3) Alter filemap_get_pages() so that it doesn't take an iterator (which
we don't have in (2)); instead, the count and a flag indicating whether
we can handle partially uptodate pages are passed in and down to its
subsidiary functions.
(4) Remove ITER_PIPE and its paraphernalia as generic_file_splice_read()
was the only user.
(5) Add a function, iov_iter_extract_pages(), to replace
iov_iter_get_pages*(); it gets refs on, pins or just lists the pages as
appropriate to the iterator type.
Add a function, iov_iter_extract_will_pin(), that will indicate from
the iterator type how the cleanup is to be performed, returning true
if the pages will need unpinning and false otherwise (see the usage
sketch after this list).
(6) Make the bio struct carry a pair of flags to indicate the cleanup
mode. BIO_NO_PAGE_REF is replaced with BIO_PAGE_REFFED (indicating
FOLL_GET was used) and BIO_PAGE_PINNED (indicating FOLL_PIN was used)
is added.
BIO_PAGE_REFFED will go away, but at the moment fs/direct-io.c sets it
and this series does not fully address that file.
(7) Add a function, bio_release_page(), to release a page appropriately to
the cleanup mode indicated by the BIO_PAGE_* flags.
(8) Make the iter-to-bio code use iov_iter_extract_pages() to retain the
pages appropriately and clean them up later.
(9) Fix bio_flagged() so that it doesn't prevent a gcc optimisation.
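For illustration, here's a rough sketch of the extraction/cleanup pattern
that (5) and (7) provide (this is not one of the patches; demo_extract() is
a made-up caller and a real user would size the page array to the I/O):

	static ssize_t demo_extract(struct iov_iter *iter, size_t bytes)
	{
		struct page **pages = NULL;
		size_t offset, i, npages;
		ssize_t got;

		/* Get up to 8 pages: pinned for user-backed iterators,
		 * merely listed for kernel-backed ones.  The page array
		 * is allocated for us as *pages is NULL.
		 */
		got = iov_iter_extract_pages(iter, &pages, bytes, 8, 0, &offset);
		if (got <= 0) {
			kvfree(pages);
			return got;
		}
		npages = DIV_ROUND_UP(offset + got, PAGE_SIZE);

		/* ... do I/O to/from pages[0..npages-1] and wait ... */

		/* Clean up according to how the pages were retained. */
		if (iov_iter_extract_will_pin(iter))
			for (i = 0; i < npages; i++)
				unpin_user_page(pages[i]);
		kvfree(pages);
		return got;
	}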
I've pushed the patches here also:
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=iov-extract
David
Changes:
========
ver #14)
- Some changes to generic_file_buffered_splice_read():
- Rename to filemap_splice_read() and move to mm/filemap.c.
- Create a helper, pipe_head_buf().
- Use init_sync_kiocb().
- Some changes to generic_file_direct_splice_read():
- Use alloc_pages_bulk_array() rather than alloc_pages_bulk_list().
- Use release_pages() instead of __free_page() in a loop.
- Rename to direct_splice_read().
- Rearrange the patches to implement filemap_splice_read() and
direct_splice_read() separately from changing generic_file_splice_read().
- Don't call generic_file_splice_read() when there isn't a ->read_folio().
- Insert patches to fix read_folio-less cases:
- Make tty, procfs, kernfs and (u)random use direct_splice_read().
- Make overlayfs and coda call down to a lower layer.
- Give shmem its own splice-read that doesn't insert missing pages.
- Fixed a min() with mixed type args on some arches.
ver #13)
- Only use allocation in advance and ITER_BVEC for DIO read-splice.
- Make buffered read-splice get pages directly from the pagecache.
- Alter filemap_get_pages() & co. so that it doesn't need an iterator.
ver #12)
- Added the missing __bitwise on the iov_iter_extraction_t typedef.
- Rebased on -rc7.
- Don't specify FOLL_PIN to pin_user_pages_fast().
- Inserted patch at front to fix race between DIO read and truncation that
caused memory corruption when iov_iter_revert() got called on an
ITER_PIPE iterator[2].
- Inserted a patch after that to remove the now-unused ITER_PIPE and its
helper functions.
- Removed the ITER_PIPE bits from iov_iter_extract_pages().
ver #11)
- Fix iov_iter_extract_kvec_pages() to include the offset into the page in
the returned starting offset.
- Use __bitwise for the extraction flags.
ver #10)
- Fix use of i->kvec in iov_iter_extract_bvec_pages() to be i->bvec.
- Drop bio_set_cleanup_mode(), open coding it instead.
ver #9)
- It's now not permitted to use FOLL_PIN outside of mm/, so:
- Change iov_iter_extract_mode() into iov_iter_extract_will_pin() and
return true/false instead of FOLL_PIN/0.
- Drop folio_put_unpin() and page_put_unpin() and instead call
unpin_user_page() (and put_page()) directly as necessary.
- Make __bio_release_pages() call bio_release_page() instead of
unpin_user_page() as there's no BIO_* -> FOLL_* translation to do.
- Drop the FOLL_* renumbering patch.
- Change extract_flags to extraction_flags.
ver #8)
- Import Christoph Hellwig's changes.
- Split the conversion-to-extraction patch.
- Drop the extract_flags arg from iov_iter_extract_mode().
- Don't default bios to BIO_PAGE_REFFED, but set explicitly.
- Switch FOLL_PIN and FOLL_GET when renumbering so PIN is at bit 0.
- Switch BIO_PAGE_PINNED and BIO_PAGE_REFFED so PINNED is at bit 0.
- We should always be using FOLL_PIN (not FOLL_GET) for DIO, so adjust the
patches for that.
ver #7)
- For now, drop the parts to pass the I/O direction to iov_iter_*pages*()
as it turned out to be a lot more complicated, with places not setting
IOCB_WRITE when they should, for example.
- Drop all the patches that changed things other than the block layer's
bio handling. The netfslib and cifs changes can go into a separate
patchset.
- Add support for extracting pages from KVEC-type iterators.
- When extracting from BVEC/KVEC, skip over empty vecs at the front.
ver #6)
- Fix write() syscall and co. not setting IOCB_WRITE.
- Added iocb_is_read() and iocb_is_write() to check IOCB_WRITE.
- Use op_is_write() in bio_copy_user_iov().
- Drop the iterator direction checks from smbd_recv().
- Define FOLL_SOURCE_BUF and FOLL_DEST_BUF and pass them in as part of
gup_flags to iov_iter_get/extract_pages*().
- Replace iov_iter_get_pages*2() with iov_iter_get_pages*() and remove.
- Add back the function to indicate the cleanup mode.
- Drop the cleanup_mode return arg to iov_iter_extract_pages().
- Provide a helper to clean up a page.
- Renumbered FOLL_GET and FOLL_PIN and made BIO_PAGE_REFFED/PINNED have
the same numerical values, enforced with an assertion.
- Converted AF_ALG, SCSI vhost, generic DIO, FUSE, splice to pipe, 9P and
NFS.
- Added in the patches to make CIFS do top-to-bottom iterators and use
various of the added extraction functions.
- Added a pair of work-in-progress patches to make sk_buff fragments store
FOLL_GET and FOLL_PIN.
ver #5)
- Replace BIO_NO_PAGE_REF with BIO_PAGE_REFFED and split into own patch.
- Transcribe FOLL_GET/PIN into BIO_PAGE_REFFED/PINNED flags.
- Add patch to allow bio_flagged() to be combined by gcc.
ver #4)
- Drop the patch to move the FOLL_* flags to linux/mm_types.h as they're
no longer referenced by linux/uio.h.
- Add ITER_SOURCE/DEST cleanup patches.
- Make iov_iter/netfslib iter extraction patches use ITER_SOURCE/DEST.
- Allow additional gup_flags to be passed into iov_iter_extract_pages().
- Add struct bio patch.
ver #3)
- Switch to using EXPORT_SYMBOL_GPL to prevent indirect 3rd-party access
to get/pin_user_pages_fast()[1].
ver #2)
- Rolled the extraction cleanup mode query function into the extraction
function, returning the indication through the argument list.
- Fixed patch 4 (extract to scatterlist) to actually use the new
extraction API.
Link: https://lore.kernel.org/r/Y3zFzdWnWlEJ8X8/@infradead.org/ [1]
Link: https://lore.kernel.org/r/[email protected]/ [2]
Link: https://lore.kernel.org/r/166697254399.61150.1256557652599252121.stgit@warthog.procyon.org.uk/ # rfc
Link: https://lore.kernel.org/r/166722777223.2555743.162508599131141451.stgit@warthog.procyon.org.uk/ # rfc
Link: https://lore.kernel.org/r/166732024173.3186319.18204305072070871546.stgit@warthog.procyon.org.uk/ # rfc
Link: https://lore.kernel.org/r/166869687556.3723671.10061142538708346995.stgit@warthog.procyon.org.uk/ # rfc
Link: https://lore.kernel.org/r/166920902005.1461876.2786264600108839814.stgit@warthog.procyon.org.uk/ # v2
Link: https://lore.kernel.org/r/166997419665.9475.15014699817597102032.stgit@warthog.procyon.org.uk/ # v3
Link: https://lore.kernel.org/r/167305160937.1521586.133299343565358971.stgit@warthog.procyon.org.uk/ # v4
Link: https://lore.kernel.org/r/167344725490.2425628.13771289553670112965.stgit@warthog.procyon.org.uk/ # v5
Link: https://lore.kernel.org/r/167391047703.2311931.8115712773222260073.stgit@warthog.procyon.org.uk/ # v6
Link: https://lore.kernel.org/r/[email protected]/ # v7
Link: https://lore.kernel.org/r/[email protected]/ # v8
Link: https://lore.kernel.org/r/[email protected]/ # v9
Link: https://lore.kernel.org/r/[email protected]/ # v10
Link: https://lore.kernel.org/r/[email protected]/ # v11
Link: https://lore.kernel.org/r/[email protected]/ # v12
Link: https://lore.kernel.org/r/[email protected]/ # v13
Additional patches that got folded in:
Link: https://lore.kernel.org/r/[email protected]/ # v1
Link: https://lore.kernel.org/r/[email protected]/ # v2
Link: https://lore.kernel.org/r/[email protected]/ # v3
Christoph Hellwig (1):
block: Replace BIO_NO_PAGE_REF with BIO_PAGE_REFFED with inverted
logic
David Howells (16):
mm: Pass info, not iter, into filemap_get_pages()
splice: Add a func to do a splice from a buffered file without
ITER_PIPE
splice: Add a func to do a splice from an O_DIRECT file without
ITER_PIPE
shmem: Implement splice-read
overlayfs: Implement splice-read
coda: Implement splice-read
tty, proc, kernfs, random: Use direct_splice_read()
splice: Do splice read from a file without using ITER_PIPE
iov_iter: Kill ITER_PIPE
iov_iter: Define flags to qualify page extraction.
iov_iter: Add a function to extract a page list from an iterator
iomap: Don't get a reference on ZERO_PAGE for direct I/O block
zeroing
block: Fix bio_flagged() so that gcc can better optimise it
block: Add BIO_PAGE_PINNED and associated infrastructure
block: Convert bio_iov_iter_get_pages to use iov_iter_extract_pages
block: convert bio_map_user_iov to use iov_iter_extract_pages
block/bio.c | 33 +-
block/blk-map.c | 26 +-
block/blk.h | 12 +
drivers/char/random.c | 4 +-
drivers/tty/tty_io.c | 4 +-
fs/cifs/file.c | 8 +-
fs/coda/file.c | 36 +-
fs/direct-io.c | 2 +
fs/iomap/direct-io.c | 1 -
fs/kernfs/file.c | 2 +-
fs/overlayfs/file.c | 36 +-
fs/proc/inode.c | 4 +-
fs/proc/proc_sysctl.c | 2 +-
fs/proc_namespace.c | 6 +-
fs/splice.c | 114 +++++-
include/linux/bio.h | 5 +-
include/linux/blk_types.h | 3 +-
include/linux/fs.h | 6 +
include/linux/pipe_fs_i.h | 20 ++
include/linux/uio.h | 49 ++-
lib/iov_iter.c | 713 +++++++++++++++-----------------------
mm/filemap.c | 154 +++++++-
mm/internal.h | 6 +
mm/shmem.c | 124 ++++++-
24 files changed, 830 insertions(+), 540 deletions(-)
Provide a function to do splice read from a buffered file, pulling the
folios out of the pagecache directly by calling filemap_get_pages() to do
any required reading and then pasting the returned folios into the pipe.
A helper function is provided to do the actual folio pasting and will
handle multipage folios by splicing as many of the relevant subpages as
will fit into the pipe.
The code is loosely based on filemap_read() and lives in mm/filemap.c
alongside it, as it needs to use filemap_get_pages().
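As a usage sketch (hypothetical, not part of this patch - in this series
the function is reached via generic_file_splice_read()), a filesystem whose
pagecache is fully populated by ->read_folio()/->readahead() could wire its
splice op up to this directly:

	static const struct file_operations example_file_operations = {
		.read_iter	= generic_file_read_iter,
		.splice_read	= filemap_splice_read,
	};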
Signed-off-by: David Howells <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Al Viro <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: John Hubbard <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
Notes:
ver #14)
- Rename to filemap_splice_read().
- Create a helper, pipe_head_buf(), to get the head buffer.
- Use init_sync_kiocb().
- Move to mm/filemap.c.
- Split the implementation of filemap_splice_read() from the patch to
make generic_file_splice_read() use it and direct_splice_read().
include/linux/fs.h | 3 ++
mm/filemap.c | 128 +++++++++++++++++++++++++++++++++++++++++++++
mm/internal.h | 6 +++
3 files changed, 137 insertions(+)
diff --git a/include/linux/fs.h b/include/linux/fs.h
index c1769a2c5d70..28743e38df91 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -3163,6 +3163,9 @@ ssize_t vfs_iocb_iter_write(struct file *file, struct kiocb *iocb,
struct iov_iter *iter);
/* fs/splice.c */
+ssize_t filemap_splice_read(struct file *in, loff_t *ppos,
+ struct pipe_inode_info *pipe,
+ size_t len, unsigned int flags);
extern ssize_t generic_file_splice_read(struct file *, loff_t *,
struct pipe_inode_info *, size_t, unsigned int);
extern ssize_t iter_file_splice_write(struct pipe_inode_info *,
diff --git a/mm/filemap.c b/mm/filemap.c
index 876e77278d2a..8c7b135c8e23 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -42,6 +42,8 @@
#include <linux/ramfs.h>
#include <linux/page_idle.h>
#include <linux/migrate.h>
+#include <linux/pipe_fs_i.h>
+#include <linux/splice.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
#include "internal.h"
@@ -2842,6 +2844,132 @@ generic_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
}
EXPORT_SYMBOL(generic_file_read_iter);
+/*
+ * Splice subpages from a folio into a pipe.
+ */
+size_t splice_folio_into_pipe(struct pipe_inode_info *pipe,
+ struct folio *folio, loff_t fpos, size_t size)
+{
+ struct page *page;
+ size_t spliced = 0, offset = offset_in_folio(folio, fpos);
+
+ page = folio_page(folio, offset / PAGE_SIZE);
+ size = min(size, folio_size(folio) - offset);
+ offset %= PAGE_SIZE;
+
+ while (spliced < size &&
+ !pipe_full(pipe->head, pipe->tail, pipe->max_usage)) {
+ struct pipe_buffer *buf = pipe_head_buf(pipe);
+ size_t part = min_t(size_t, PAGE_SIZE - offset, size - spliced);
+
+ *buf = (struct pipe_buffer) {
+ .ops = &page_cache_pipe_buf_ops,
+ .page = page,
+ .offset = offset,
+ .len = part,
+ };
+ folio_get(folio);
+ pipe->head++;
+ page++;
+ spliced += part;
+ offset = 0;
+ }
+
+ return spliced;
+}
+
+/*
+ * Splice folios from the pagecache of a buffered (ie. non-O_DIRECT) file into
+ * a pipe.
+ */
+ssize_t filemap_splice_read(struct file *in, loff_t *ppos,
+ struct pipe_inode_info *pipe,
+ size_t len, unsigned int flags)
+{
+ struct folio_batch fbatch;
+ struct kiocb iocb;
+ size_t total_spliced = 0, used, npages;
+ loff_t isize, end_offset;
+ bool writably_mapped;
+ int i, error = 0;
+
+ init_sync_kiocb(&iocb, in);
+ iocb.ki_pos = *ppos;
+
+ /* Work out how much data we can actually add into the pipe */
+ used = pipe_occupancy(pipe->head, pipe->tail);
+ npages = max_t(ssize_t, pipe->max_usage - used, 0);
+ len = min_t(size_t, len, npages * PAGE_SIZE);
+
+ folio_batch_init(&fbatch);
+
+ do {
+ cond_resched();
+
+ if (*ppos >= i_size_read(file_inode(in)))
+ break;
+
+ iocb.ki_pos = *ppos;
+ error = filemap_get_pages(&iocb, len, &fbatch, true);
+ if (error < 0)
+ break;
+
+ /*
+ * i_size must be checked after we know the pages are Uptodate.
+ *
+ * Checking i_size after the check allows us to calculate
+ * the correct value for "nr", which means the zero-filled
+ * part of the page is not copied back to userspace (unless
+ * another truncate extends the file - this is desired though).
+ */
+ isize = i_size_read(file_inode(in));
+ if (unlikely(*ppos >= isize))
+ break;
+ end_offset = min_t(loff_t, isize, *ppos + len);
+
+ /*
+ * Once we start copying data, we don't want to be touching any
+ * cachelines that might be contended:
+ */
+ writably_mapped = mapping_writably_mapped(in->f_mapping);
+
+ for (i = 0; i < folio_batch_count(&fbatch); i++) {
+ struct folio *folio = fbatch.folios[i];
+ size_t n;
+
+ if (folio_pos(folio) >= end_offset)
+ goto out;
+ folio_mark_accessed(folio);
+
+ /*
+ * If users can be writing to this folio using arbitrary
+ * virtual addresses, take care of potential aliasing
+ * before reading the folio on the kernel side.
+ */
+ if (writably_mapped)
+ flush_dcache_folio(folio);
+
+ n = min_t(loff_t, len, isize - *ppos);
+ n = splice_folio_into_pipe(pipe, folio, *ppos, n);
+ if (!n)
+ goto out;
+ len -= n;
+ total_spliced += n;
+ *ppos += n;
+ in->f_ra.prev_pos = *ppos;
+ if (pipe_full(pipe->head, pipe->tail, pipe->max_usage))
+ goto out;
+ }
+
+ folio_batch_release(&fbatch);
+ } while (len);
+
+out:
+ folio_batch_release(&fbatch);
+ file_accessed(in);
+
+ return total_spliced ? total_spliced : error;
+}
+
static inline loff_t folio_seek_hole_data(struct xa_state *xas,
struct address_space *mapping, struct folio *folio,
loff_t start, loff_t end, bool seek_data)
diff --git a/mm/internal.h b/mm/internal.h
index bcf75a8b032d..6d4ca98f3844 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -794,6 +794,12 @@ struct migration_target_control {
gfp_t gfp_mask;
};
+/*
+ * mm/filemap.c
+ */
+size_t splice_folio_into_pipe(struct pipe_inode_info *pipe,
+ struct folio *folio, loff_t fpos, size_t size);
+
/*
* mm/vmalloc.c
*/
The new filemap_splice_read() has an implicit expectation via
filemap_get_pages() that ->read_folio() exists if ->readahead() doesn't
fully populate the pagecache of the file it is reading from[1], potentially
leading to a jump to NULL if it doesn't. shmem (and, by extension, tmpfs,
ramfs and rootfs), however, doesn't have ->read_folio().
Work around this by equipping shmem with its own splice-read
implementation, based on filemap_splice_read() but able to paste in the
zero page when there's a page missing.
Signed-off-by: David Howells <[email protected]>
cc: Daniel Golle <[email protected]>
cc: Guenter Roeck <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: John Hubbard <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: Matthew Wilcox <[email protected]>
cc: Hugh Dickins <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]/ [1]
---
mm/shmem.c | 124 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 123 insertions(+), 1 deletion(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 0005ab2c29af..7145a5345f4d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2711,6 +2711,128 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
return retval ? retval : error;
}
+static bool zero_pipe_buf_try_steal(struct pipe_inode_info *pipe,
+ struct pipe_buffer *buf)
+{
+ return false;
+}
+
+static const struct pipe_buf_operations zero_pipe_buf_ops = {
+ .release = generic_pipe_buf_release,
+ .try_steal = zero_pipe_buf_try_steal,
+ .get = generic_pipe_buf_get,
+};
+
+static size_t splice_zeropage_into_pipe(struct pipe_inode_info *pipe,
+ loff_t fpos, size_t size)
+{
+ size_t offset = fpos & ~PAGE_MASK;
+
+ size = min_t(size_t, size, PAGE_SIZE - offset);
+
+ if (!pipe_full(pipe->head, pipe->tail, pipe->max_usage)) {
+ struct pipe_buffer *buf = pipe_head_buf(pipe);
+
+ *buf = (struct pipe_buffer) {
+ .ops = &zero_pipe_buf_ops,
+ .page = ZERO_PAGE(0),
+ .offset = offset,
+ .len = size,
+ };
+ get_page(buf->page);
+ pipe->head++;
+ }
+
+ return size;
+}
+
+static ssize_t shmem_file_splice_read(struct file *in, loff_t *ppos,
+ struct pipe_inode_info *pipe,
+ size_t len, unsigned int flags)
+{
+ struct inode *inode = file_inode(in);
+ struct address_space *mapping = inode->i_mapping;
+ struct folio *folio = NULL;
+ size_t total_spliced = 0, used, npages, n, part;
+ loff_t isize;
+ int error = 0;
+
+ /* Work out how much data we can actually add into the pipe */
+ used = pipe_occupancy(pipe->head, pipe->tail);
+ npages = max_t(ssize_t, pipe->max_usage - used, 0);
+ len = min_t(size_t, len, npages * PAGE_SIZE);
+
+ do {
+ if (*ppos >= i_size_read(inode))
+ break;
+
+ error = shmem_get_folio(inode, *ppos / PAGE_SIZE, &folio, SGP_READ);
+ if (error) {
+ if (error == -EINVAL)
+ error = 0;
+ break;
+ }
+ if (folio) {
+ folio_unlock(folio);
+
+ if (folio_test_hwpoison(folio)) {
+ error = -EIO;
+ break;
+ }
+ }
+
+ /*
+ * i_size must be checked after we know the pages are Uptodate.
+ *
+ * Checking i_size after the check allows us to calculate
+ * the correct value for "nr", which means the zero-filled
+ * part of the page is not copied back to userspace (unless
+ * another truncate extends the file - this is desired though).
+ */
+ isize = i_size_read(inode);
+ if (unlikely(*ppos >= isize))
+ break;
+ part = min_t(loff_t, isize - *ppos, len);
+
+ if (folio) {
+ /*
+ * If users can be writing to this page using arbitrary
+ * virtual addresses, take care about potential aliasing
+ * before reading the page on the kernel side.
+ */
+ if (mapping_writably_mapped(mapping))
+ flush_dcache_folio(folio);
+ folio_mark_accessed(folio);
+ /*
+ * Ok, we have the page, and it's up-to-date, so we can
+ * now splice it into the pipe.
+ */
+ n = splice_folio_into_pipe(pipe, folio, *ppos, part);
+ folio_put(folio);
+ folio = NULL;
+ } else {
+ n = splice_zeropage_into_pipe(pipe, *ppos, part);
+ }
+
+ if (!n)
+ break;
+ len -= n;
+ total_spliced += n;
+ *ppos += n;
+ in->f_ra.prev_pos = *ppos;
+ if (pipe_full(pipe->head, pipe->tail, pipe->max_usage))
+ break;
+
+ cond_resched();
+ } while (len);
+
+ if (folio)
+ folio_put(folio);
+
+ file_accessed(in);
+ return total_spliced ? total_spliced : error;
+}
+
static loff_t shmem_file_llseek(struct file *file, loff_t offset, int whence)
{
struct address_space *mapping = file->f_mapping;
@@ -3929,7 +4051,7 @@ static const struct file_operations shmem_file_operations = {
.read_iter = shmem_file_read_iter,
.write_iter = generic_file_write_iter,
.fsync = noop_fsync,
- .splice_read = generic_file_splice_read,
+ .splice_read = shmem_file_splice_read,
.splice_write = iter_file_splice_write,
.fallocate = shmem_fallocate,
#endif
Implement a function, direct_splice_read(), that handles splice-reading
from an O_DIRECT file by using an ITER_BVEC iterator instead of an
ITER_PIPE iterator, as the former won't free its buffers when reverted.
The function bulk-allocates all the buffers it thinks it is going to use
in advance, does the read synchronously and only then trims the buffer
down. The pages we did use get pushed into the pipe.
This fixes a problem with the upcoming iov_iter_extract_pages() function,
whereby pages extracted from a non-user-backed iterator such as ITER_PIPE
aren't pinned. __iomap_dio_rw(), however, calls iov_iter_revert() to
shorten the iterator to just the bufferage it is going to use - which has
the side-effect of freeing the excess pipe buffers, even though they're
attached to a bio and may get written to by DMA (thanks to Hillf Danton for
spotting this[1]).
This then causes memory corruption that is particularly noticeable when
the syzbot test[2] is run. The test boils down to:

	out = creat(argv[1], 0666);
	ftruncate(out, 0x800);
	lseek(out, 0x200, SEEK_SET);
	in = open(argv[1], O_RDONLY | O_DIRECT | O_NOFOLLOW);
	sendfile(out, in, NULL, 0x1dd00);

run repeatedly in parallel. What I think is happening is that ftruncate()
occasionally shortens the DIO read that's about to be made by sendfile's
splice core by reducing i_size.
This should be more efficient for DIO read by virtue of doing a bulk page
allocation, but slightly less efficient by ignoring any partial page in the
pipe.
Reported-by: [email protected]
Signed-off-by: David Howells <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Al Viro <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: John Hubbard <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]/ [1]
Link: https://lore.kernel.org/r/[email protected]/ [2]
---
Notes:
ver #14)
- Use alloc_pages_bulk_array() rather than alloc_pages_bulk_list().
- Use release_pages() rather than a loop calling __free_page().
- Rename to direct_splice_read().
- Don't call from generic_file_splice_read() yet.
ver #13)
- Don't completely replace generic_file_splice_read(), but rather only use
this if we're doing a splicing from an O_DIRECT file fd.
fs/splice.c | 92 +++++++++++++++++++++++++++++++++++++++
include/linux/fs.h | 3 ++
include/linux/pipe_fs_i.h | 20 +++++++++
lib/iov_iter.c | 6 ---
4 files changed, 115 insertions(+), 6 deletions(-)
diff --git a/fs/splice.c b/fs/splice.c
index 5969b7a1d353..4c6332854b63 100644
--- a/fs/splice.c
+++ b/fs/splice.c
@@ -282,6 +282,98 @@ void splice_shrink_spd(struct splice_pipe_desc *spd)
kfree(spd->partial);
}
+/*
+ * Splice data from an O_DIRECT file into pages and then add them to the output
+ * pipe.
+ */
+ssize_t direct_splice_read(struct file *in, loff_t *ppos,
+ struct pipe_inode_info *pipe,
+ size_t len, unsigned int flags)
+{
+ struct iov_iter to;
+ struct bio_vec *bv;
+ struct kiocb kiocb;
+ struct page **pages;
+ ssize_t ret;
+ size_t used, npages, chunk, remain, reclaim;
+ int i;
+
+ /* Work out how much data we can actually add into the pipe */
+ used = pipe_occupancy(pipe->head, pipe->tail);
+ npages = max_t(ssize_t, pipe->max_usage - used, 0);
+ len = min_t(size_t, len, npages * PAGE_SIZE);
+ npages = DIV_ROUND_UP(len, PAGE_SIZE);
+
+ bv = kzalloc(array_size(npages, sizeof(bv[0])) +
+ array_size(npages, sizeof(struct page *)), GFP_KERNEL);
+ if (!bv)
+ return -ENOMEM;
+
+ pages = (void *)(bv + npages);
+ npages = alloc_pages_bulk_array(GFP_USER, npages, pages);
+ if (!npages) {
+ kfree(bv);
+ return -ENOMEM;
+ }
+
+ remain = len = min_t(size_t, len, npages * PAGE_SIZE);
+
+ for (i = 0; i < npages; i++) {
+ chunk = min_t(size_t, PAGE_SIZE, remain);
+ bv[i].bv_page = pages[i];
+ bv[i].bv_offset = 0;
+ bv[i].bv_len = chunk;
+ remain -= chunk;
+ }
+
+ /* Do the I/O */
+ iov_iter_bvec(&to, ITER_DEST, bv, npages, len);
+ init_sync_kiocb(&kiocb, in);
+ kiocb.ki_pos = *ppos;
+ ret = call_read_iter(in, &kiocb, &to);
+
+ reclaim = npages * PAGE_SIZE;
+ remain = 0;
+ if (ret > 0) {
+ reclaim -= ret;
+ remain = ret;
+ *ppos = kiocb.ki_pos;
+ file_accessed(in);
+ } else if (ret < 0) {
+ /*
+ * callers of ->splice_read() expect -EAGAIN on
+ * "can't put anything in there", rather than -EFAULT.
+ */
+ if (ret == -EFAULT)
+ ret = -EAGAIN;
+ }
+
+ /* Free any pages that didn't get touched at all. */
+ reclaim /= PAGE_SIZE;
+ if (reclaim) {
+ npages -= reclaim;
+ release_pages(pages + npages, reclaim);
+ }
+
+ /* Push the remaining pages into the pipe. */
+ for (i = 0; i < npages; i++) {
+ struct pipe_buffer *buf = pipe_head_buf(pipe);
+
+ chunk = min_t(size_t, remain, PAGE_SIZE);
+ *buf = (struct pipe_buffer) {
+ .ops = &default_pipe_buf_ops,
+ .page = bv[i].bv_page,
+ .offset = 0,
+ .len = chunk,
+ };
+ pipe->head++;
+ remain -= chunk;
+ }
+
+ kfree(bv);
+ return ret;
+}
+
/**
* generic_file_splice_read - splice data from file to a pipe
* @in: file to splice from
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 28743e38df91..551c9403f9b3 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -3166,6 +3166,9 @@ ssize_t vfs_iocb_iter_write(struct file *file, struct kiocb *iocb,
ssize_t filemap_splice_read(struct file *in, loff_t *ppos,
struct pipe_inode_info *pipe,
size_t len, unsigned int flags);
+ssize_t direct_splice_read(struct file *in, loff_t *ppos,
+ struct pipe_inode_info *pipe,
+ size_t len, unsigned int flags);
extern ssize_t generic_file_splice_read(struct file *, loff_t *,
struct pipe_inode_info *, size_t, unsigned int);
extern ssize_t iter_file_splice_write(struct pipe_inode_info *,
diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
index 6cb65df3e3ba..d2c3f16cf6b1 100644
--- a/include/linux/pipe_fs_i.h
+++ b/include/linux/pipe_fs_i.h
@@ -156,6 +156,26 @@ static inline bool pipe_full(unsigned int head, unsigned int tail,
return pipe_occupancy(head, tail) >= limit;
}
+/**
+ * pipe_buf - Return the pipe buffer for the specified slot in the pipe ring
+ * @pipe: The pipe to access
+ * @slot: The slot of interest
+ */
+static inline struct pipe_buffer *pipe_buf(const struct pipe_inode_info *pipe,
+ unsigned int slot)
+{
+ return &pipe->bufs[slot & (pipe->ring_size - 1)];
+}
+
+/**
+ * pipe_head_buf - Return the pipe buffer at the head of the pipe ring
+ * @pipe: The pipe to access
+ */
+static inline struct pipe_buffer *pipe_head_buf(const struct pipe_inode_info *pipe)
+{
+ return pipe_buf(pipe, pipe->head);
+}
+
/**
* pipe_buf_get - get a reference to a pipe_buffer
* @pipe: the pipe that the buffer belongs to
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index f9a3ff37ecd1..47c484551c59 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -186,12 +186,6 @@ static int copyin(void *to, const void __user *from, size_t n)
return res;
}
-static inline struct pipe_buffer *pipe_buf(const struct pipe_inode_info *pipe,
- unsigned int slot)
-{
- return &pipe->bufs[slot & (pipe->ring_size - 1)];
-}
-
#ifdef PIPE_PARANOIA
static bool sanity(const struct iov_iter *i)
{
Implement splice-read for overlayfs by passing the request down a layer
rather than going through generic_file_splice_read(), which is going to be
changed to assume that ->read_folio() is present on buffered files.
Signed-off-by: David Howells <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: John Hubbard <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: Matthew Wilcox <[email protected]>
cc: Miklos Szeredi <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
fs/overlayfs/file.c | 36 +++++++++++++++++++++++++++++++++++-
1 file changed, 35 insertions(+), 1 deletion(-)
diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
index c9d0c362c7ef..267b61df6fcd 100644
--- a/fs/overlayfs/file.c
+++ b/fs/overlayfs/file.c
@@ -419,6 +419,40 @@ static ssize_t ovl_write_iter(struct kiocb *iocb, struct iov_iter *iter)
return ret;
}
+static ssize_t ovl_splice_read(struct file *in, loff_t *ppos,
+ struct pipe_inode_info *pipe, size_t len,
+ unsigned int flags)
+{
+ const struct cred *old_cred;
+ struct fd real;
+ ssize_t ret;
+
+ ret = ovl_real_fdget(in, &real);
+ if (ret)
+ return ret;
+
+ ret = -EINVAL;
+ if (in->f_flags & O_DIRECT &&
+ !(real.file->f_mode & FMODE_CAN_ODIRECT))
+ goto out_fdput;
+ if (!real.file->f_op->splice_read)
+ goto out_fdput;
+
+ ret = rw_verify_area(READ, in, ppos, len);
+ if (unlikely(ret < 0))
+ goto out_fdput;
+
+ old_cred = ovl_override_creds(file_inode(in)->i_sb);
+ ret = real.file->f_op->splice_read(real.file, ppos, pipe, len, flags);
+
+ revert_creds(old_cred);
+ ovl_file_accessed(in);
+out_fdput:
+ fdput(real);
+
+ return ret;
+}
+
/*
* Calling iter_file_splice_write() directly from overlay's f_op may deadlock
* due to lock order inversion between pipe->mutex in iter_file_splice_write()
@@ -695,7 +729,7 @@ const struct file_operations ovl_file_operations = {
.fallocate = ovl_fallocate,
.fadvise = ovl_fadvise,
.flush = ovl_flush,
- .splice_read = generic_file_splice_read,
+ .splice_read = ovl_splice_read,
.splice_write = ovl_splice_write,
.copy_file_range = ovl_copy_file_range,
Use direct_splice_read() for tty, procfs, kernfs and random files rather
than going through generic_file_splice_read(), as they just copy the file
content into the output buffer and don't splice pages. This avoids the
need for them to have a ->read_folio() to satisfy filemap_splice_read().
Signed-off-by: David Howells <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: John Hubbard <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: Matthew Wilcox <[email protected]>
cc: Miklos Szeredi <[email protected]>
cc: Arnd Bergmann <[email protected]>
cc: Greg Kroah-Hartman <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
drivers/char/random.c | 4 ++--
drivers/tty/tty_io.c | 4 ++--
fs/kernfs/file.c | 2 +-
fs/proc/inode.c | 4 ++--
fs/proc/proc_sysctl.c | 2 +-
fs/proc_namespace.c | 6 +++---
6 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index ce3ccd172cc8..792713616ba8 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1546,7 +1546,7 @@ const struct file_operations random_fops = {
.compat_ioctl = compat_ptr_ioctl,
.fasync = random_fasync,
.llseek = noop_llseek,
- .splice_read = generic_file_splice_read,
+ .splice_read = direct_splice_read,
.splice_write = iter_file_splice_write,
};
@@ -1557,7 +1557,7 @@ const struct file_operations urandom_fops = {
.compat_ioctl = compat_ptr_ioctl,
.fasync = random_fasync,
.llseek = noop_llseek,
- .splice_read = generic_file_splice_read,
+ .splice_read = direct_splice_read,
.splice_write = iter_file_splice_write,
};
diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
index 3149114bf130..495678e9b95e 100644
--- a/drivers/tty/tty_io.c
+++ b/drivers/tty/tty_io.c
@@ -466,7 +466,7 @@ static const struct file_operations tty_fops = {
.llseek = no_llseek,
.read_iter = tty_read,
.write_iter = tty_write,
- .splice_read = generic_file_splice_read,
+ .splice_read = direct_splice_read,
.splice_write = iter_file_splice_write,
.poll = tty_poll,
.unlocked_ioctl = tty_ioctl,
@@ -481,7 +481,7 @@ static const struct file_operations console_fops = {
.llseek = no_llseek,
.read_iter = tty_read,
.write_iter = redirected_tty_write,
- .splice_read = generic_file_splice_read,
+ .splice_read = direct_splice_read,
.splice_write = iter_file_splice_write,
.poll = tty_poll,
.unlocked_ioctl = tty_ioctl,
diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
index e4a50e4ff0d2..9d23b8141db7 100644
--- a/fs/kernfs/file.c
+++ b/fs/kernfs/file.c
@@ -1011,7 +1011,7 @@ const struct file_operations kernfs_file_fops = {
.release = kernfs_fop_release,
.poll = kernfs_fop_poll,
.fsync = noop_fsync,
- .splice_read = generic_file_splice_read,
+ .splice_read = direct_splice_read,
.splice_write = iter_file_splice_write,
};
diff --git a/fs/proc/inode.c b/fs/proc/inode.c
index f495fdb39151..711f12706469 100644
--- a/fs/proc/inode.c
+++ b/fs/proc/inode.c
@@ -591,7 +591,7 @@ static const struct file_operations proc_iter_file_ops = {
.llseek = proc_reg_llseek,
.read_iter = proc_reg_read_iter,
.write = proc_reg_write,
- .splice_read = generic_file_splice_read,
+ .splice_read = direct_splice_read,
.poll = proc_reg_poll,
.unlocked_ioctl = proc_reg_unlocked_ioctl,
.mmap = proc_reg_mmap,
@@ -617,7 +617,7 @@ static const struct file_operations proc_reg_file_ops_compat = {
static const struct file_operations proc_iter_file_ops_compat = {
.llseek = proc_reg_llseek,
.read_iter = proc_reg_read_iter,
- .splice_read = generic_file_splice_read,
+ .splice_read = direct_splice_read,
.write = proc_reg_write,
.poll = proc_reg_poll,
.unlocked_ioctl = proc_reg_unlocked_ioctl,
diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
index 48f2d60bd78a..92533bd0e67b 100644
--- a/fs/proc/proc_sysctl.c
+++ b/fs/proc/proc_sysctl.c
@@ -869,7 +869,7 @@ static const struct file_operations proc_sys_file_operations = {
.poll = proc_sys_poll,
.read_iter = proc_sys_read,
.write_iter = proc_sys_write,
- .splice_read = generic_file_splice_read,
+ .splice_read = direct_splice_read,
.splice_write = iter_file_splice_write,
.llseek = default_llseek,
};
diff --git a/fs/proc_namespace.c b/fs/proc_namespace.c
index 846f9455ae22..492abbbeff5e 100644
--- a/fs/proc_namespace.c
+++ b/fs/proc_namespace.c
@@ -324,7 +324,7 @@ static int mountstats_open(struct inode *inode, struct file *file)
const struct file_operations proc_mounts_operations = {
.open = mounts_open,
.read_iter = seq_read_iter,
- .splice_read = generic_file_splice_read,
+ .splice_read = direct_splice_read,
.llseek = seq_lseek,
.release = mounts_release,
.poll = mounts_poll,
@@ -333,7 +333,7 @@ const struct file_operations proc_mounts_operations = {
const struct file_operations proc_mountinfo_operations = {
.open = mountinfo_open,
.read_iter = seq_read_iter,
- .splice_read = generic_file_splice_read,
+ .splice_read = direct_splice_read,
.llseek = seq_lseek,
.release = mounts_release,
.poll = mounts_poll,
@@ -342,7 +342,7 @@ const struct file_operations proc_mountinfo_operations = {
const struct file_operations proc_mountstats_operations = {
.open = mountstats_open,
.read_iter = seq_read_iter,
- .splice_read = generic_file_splice_read,
+ .splice_read = direct_splice_read,
.llseek = seq_lseek,
.release = mounts_release,
};
Make generic_file_splice_read() use filemap_splice_read() and
direct_splice_read() rather than using an ITER_PIPE and call_read_iter().
With this, ITER_PIPE is no longer used.
Signed-off-by: David Howells <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Al Viro <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: John Hubbard <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
Notes:
ver #14)
- Split out filemap_splice_read() into a separate patch.
fs/splice.c | 30 +++++++-----------------------
1 file changed, 7 insertions(+), 23 deletions(-)
diff --git a/fs/splice.c b/fs/splice.c
index 4c6332854b63..a93478338cec 100644
--- a/fs/splice.c
+++ b/fs/splice.c
@@ -391,29 +391,13 @@ ssize_t generic_file_splice_read(struct file *in, loff_t *ppos,
struct pipe_inode_info *pipe, size_t len,
unsigned int flags)
{
- struct iov_iter to;
- struct kiocb kiocb;
- int ret;
-
- iov_iter_pipe(&to, ITER_DEST, pipe, len);
- init_sync_kiocb(&kiocb, in);
- kiocb.ki_pos = *ppos;
- ret = call_read_iter(in, &kiocb, &to);
- if (ret > 0) {
- *ppos = kiocb.ki_pos;
- file_accessed(in);
- } else if (ret < 0) {
- /* free what was emitted */
- pipe_discard_from(pipe, to.start_head);
- /*
- * callers of ->splice_read() expect -EAGAIN on
- * "can't put anything in there", rather than -EFAULT.
- */
- if (ret == -EFAULT)
- ret = -EAGAIN;
- }
-
- return ret;
+ if (unlikely(*ppos >= file_inode(in)->i_sb->s_maxbytes))
+ return 0;
+ if (unlikely(!len))
+ return 0;
+ if (in->f_flags & O_DIRECT)
+ return direct_splice_read(in, ppos, pipe, len, flags);
+ return filemap_splice_read(in, ppos, pipe, len, flags);
}
EXPORT_SYMBOL(generic_file_splice_read);
The ITER_PIPE-type iterator was only used by generic_file_splice_read(),
but that has now been switched either to pull pages directly from the
pagecache for buffered file splice-reads or to use ITER_BVEC for O_DIRECT
file splice-reads. This leaves ITER_PIPE unused - so remove it.
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: John Hubbard <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
fs/cifs/file.c | 8 +-
include/linux/uio.h | 14 --
lib/iov_iter.c | 429 +-------------------------------------------
mm/filemap.c | 3 +-
4 files changed, 5 insertions(+), 449 deletions(-)
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 22dfc1f8b4f1..57ca4eea69dd 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -3806,13 +3806,7 @@ cifs_readdata_to_iov(struct cifs_readdata *rdata, struct iov_iter *iter)
size_t copy = min_t(size_t, remaining, PAGE_SIZE);
size_t written;
- if (unlikely(iov_iter_is_pipe(iter))) {
- void *addr = kmap_atomic(page);
-
- written = copy_to_iter(addr, copy, iter);
- kunmap_atomic(addr);
- } else
- written = copy_page_to_iter(page, 0, copy, iter);
+ written = copy_page_to_iter(page, 0, copy, iter);
remaining -= written;
if (written < copy && iov_iter_count(iter) > 0)
break;
diff --git a/include/linux/uio.h b/include/linux/uio.h
index 9f158238edba..dcc0ca5ef491 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -11,7 +11,6 @@
#include <uapi/linux/uio.h>
struct page;
-struct pipe_inode_info;
struct kvec {
void *iov_base; /* and that should *never* hold a userland pointer */
@@ -23,7 +22,6 @@ enum iter_type {
ITER_IOVEC,
ITER_KVEC,
ITER_BVEC,
- ITER_PIPE,
ITER_XARRAY,
ITER_DISCARD,
ITER_UBUF,
@@ -53,15 +51,10 @@ struct iov_iter {
const struct kvec *kvec;
const struct bio_vec *bvec;
struct xarray *xarray;
- struct pipe_inode_info *pipe;
void __user *ubuf;
};
union {
unsigned long nr_segs;
- struct {
- unsigned int head;
- unsigned int start_head;
- };
loff_t xarray_start;
};
};
@@ -99,11 +92,6 @@ static inline bool iov_iter_is_bvec(const struct iov_iter *i)
return iov_iter_type(i) == ITER_BVEC;
}
-static inline bool iov_iter_is_pipe(const struct iov_iter *i)
-{
- return iov_iter_type(i) == ITER_PIPE;
-}
-
static inline bool iov_iter_is_discard(const struct iov_iter *i)
{
return iov_iter_type(i) == ITER_DISCARD;
@@ -245,8 +233,6 @@ void iov_iter_kvec(struct iov_iter *i, unsigned int direction, const struct kvec
unsigned long nr_segs, size_t count);
void iov_iter_bvec(struct iov_iter *i, unsigned int direction, const struct bio_vec *bvec,
unsigned long nr_segs, size_t count);
-void iov_iter_pipe(struct iov_iter *i, unsigned int direction, struct pipe_inode_info *pipe,
- size_t count);
void iov_iter_discard(struct iov_iter *i, unsigned int direction, size_t count);
void iov_iter_xarray(struct iov_iter *i, unsigned int direction, struct xarray *xarray,
loff_t start, size_t count);
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 47c484551c59..adc5e8aa8ae8 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -14,8 +14,6 @@
#include <linux/scatterlist.h>
#include <linux/instrumented.h>
-#define PIPE_PARANOIA /* for now */
-
/* covers ubuf and kbuf alike */
#define iterate_buf(i, n, base, len, off, __p, STEP) { \
size_t __maybe_unused off = 0; \
@@ -186,150 +184,6 @@ static int copyin(void *to, const void __user *from, size_t n)
return res;
}
-#ifdef PIPE_PARANOIA
-static bool sanity(const struct iov_iter *i)
-{
- struct pipe_inode_info *pipe = i->pipe;
- unsigned int p_head = pipe->head;
- unsigned int p_tail = pipe->tail;
- unsigned int p_occupancy = pipe_occupancy(p_head, p_tail);
- unsigned int i_head = i->head;
- unsigned int idx;
-
- if (i->last_offset) {
- struct pipe_buffer *p;
- if (unlikely(p_occupancy == 0))
- goto Bad; // pipe must be non-empty
- if (unlikely(i_head != p_head - 1))
- goto Bad; // must be at the last buffer...
-
- p = pipe_buf(pipe, i_head);
- if (unlikely(p->offset + p->len != abs(i->last_offset)))
- goto Bad; // ... at the end of segment
- } else {
- if (i_head != p_head)
- goto Bad; // must be right after the last buffer
- }
- return true;
-Bad:
- printk(KERN_ERR "idx = %d, offset = %d\n", i_head, i->last_offset);
- printk(KERN_ERR "head = %d, tail = %d, buffers = %d\n",
- p_head, p_tail, pipe->ring_size);
- for (idx = 0; idx < pipe->ring_size; idx++)
- printk(KERN_ERR "[%p %p %d %d]\n",
- pipe->bufs[idx].ops,
- pipe->bufs[idx].page,
- pipe->bufs[idx].offset,
- pipe->bufs[idx].len);
- WARN_ON(1);
- return false;
-}
-#else
-#define sanity(i) true
-#endif
-
-static struct page *push_anon(struct pipe_inode_info *pipe, unsigned size)
-{
- struct page *page = alloc_page(GFP_USER);
- if (page) {
- struct pipe_buffer *buf = pipe_buf(pipe, pipe->head++);
- *buf = (struct pipe_buffer) {
- .ops = &default_pipe_buf_ops,
- .page = page,
- .offset = 0,
- .len = size
- };
- }
- return page;
-}
-
-static void push_page(struct pipe_inode_info *pipe, struct page *page,
- unsigned int offset, unsigned int size)
-{
- struct pipe_buffer *buf = pipe_buf(pipe, pipe->head++);
- *buf = (struct pipe_buffer) {
- .ops = &page_cache_pipe_buf_ops,
- .page = page,
- .offset = offset,
- .len = size
- };
- get_page(page);
-}
-
-static inline int last_offset(const struct pipe_buffer *buf)
-{
- if (buf->ops == &default_pipe_buf_ops)
- return buf->len; // buf->offset is 0 for those
- else
- return -(buf->offset + buf->len);
-}
-
-static struct page *append_pipe(struct iov_iter *i, size_t size,
- unsigned int *off)
-{
- struct pipe_inode_info *pipe = i->pipe;
- int offset = i->last_offset;
- struct pipe_buffer *buf;
- struct page *page;
-
- if (offset > 0 && offset < PAGE_SIZE) {
- // some space in the last buffer; add to it
- buf = pipe_buf(pipe, pipe->head - 1);
- size = min_t(size_t, size, PAGE_SIZE - offset);
- buf->len += size;
- i->last_offset += size;
- i->count -= size;
- *off = offset;
- return buf->page;
- }
- // OK, we need a new buffer
- *off = 0;
- size = min_t(size_t, size, PAGE_SIZE);
- if (pipe_full(pipe->head, pipe->tail, pipe->max_usage))
- return NULL;
- page = push_anon(pipe, size);
- if (!page)
- return NULL;
- i->head = pipe->head - 1;
- i->last_offset = size;
- i->count -= size;
- return page;
-}
-
-static size_t copy_page_to_iter_pipe(struct page *page, size_t offset, size_t bytes,
- struct iov_iter *i)
-{
- struct pipe_inode_info *pipe = i->pipe;
- unsigned int head = pipe->head;
-
- if (unlikely(bytes > i->count))
- bytes = i->count;
-
- if (unlikely(!bytes))
- return 0;
-
- if (!sanity(i))
- return 0;
-
- if (offset && i->last_offset == -offset) { // could we merge it?
- struct pipe_buffer *buf = pipe_buf(pipe, head - 1);
- if (buf->page == page) {
- buf->len += bytes;
- i->last_offset -= bytes;
- i->count -= bytes;
- return bytes;
- }
- }
- if (pipe_full(pipe->head, pipe->tail, pipe->max_usage))
- return 0;
-
- push_page(pipe, page, offset, bytes);
- i->last_offset = -(offset + bytes);
- i->head = head;
- i->count -= bytes;
- return bytes;
-}
-
/*
* fault_in_iov_iter_readable - fault in iov iterator for reading
* @i: iterator
@@ -433,46 +287,6 @@ void iov_iter_init(struct iov_iter *i, unsigned int direction,
}
EXPORT_SYMBOL(iov_iter_init);
-// returns the offset in partial buffer (if any)
-static inline unsigned int pipe_npages(const struct iov_iter *i, int *npages)
-{
- struct pipe_inode_info *pipe = i->pipe;
- int used = pipe->head - pipe->tail;
- int off = i->last_offset;
-
- *npages = max((int)pipe->max_usage - used, 0);
-
- if (off > 0 && off < PAGE_SIZE) { // anon and not full
- (*npages)++;
- return off;
- }
- return 0;
-}
-
-static size_t copy_pipe_to_iter(const void *addr, size_t bytes,
- struct iov_iter *i)
-{
- unsigned int off, chunk;
-
- if (unlikely(bytes > i->count))
- bytes = i->count;
- if (unlikely(!bytes))
- return 0;
-
- if (!sanity(i))
- return 0;
-
- for (size_t n = bytes; n; n -= chunk) {
- struct page *page = append_pipe(i, n, &off);
- chunk = min_t(size_t, n, PAGE_SIZE - off);
- if (!page)
- return bytes - n;
- memcpy_to_page(page, off, addr, chunk);
- addr += chunk;
- }
- return bytes;
-}
-
static __wsum csum_and_memcpy(void *to, const void *from, size_t len,
__wsum sum, size_t off)
{
@@ -480,44 +294,10 @@ static __wsum csum_and_memcpy(void *to, const void *from, size_t len,
return csum_block_add(sum, next, off);
}
-static size_t csum_and_copy_to_pipe_iter(const void *addr, size_t bytes,
- struct iov_iter *i, __wsum *sump)
-{
- __wsum sum = *sump;
- size_t off = 0;
- unsigned int chunk, r;
-
- if (unlikely(bytes > i->count))
- bytes = i->count;
- if (unlikely(!bytes))
- return 0;
-
- if (!sanity(i))
- return 0;
-
- while (bytes) {
- struct page *page = append_pipe(i, bytes, &r);
- char *p;
-
- if (!page)
- break;
- chunk = min_t(size_t, bytes, PAGE_SIZE - r);
- p = kmap_local_page(page);
- sum = csum_and_memcpy(p + r, addr + off, chunk, sum, off);
- kunmap_local(p);
- off += chunk;
- bytes -= chunk;
- }
- *sump = sum;
- return off;
-}
-
size_t _copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
{
if (WARN_ON_ONCE(i->data_source))
return 0;
- if (unlikely(iov_iter_is_pipe(i)))
- return copy_pipe_to_iter(addr, bytes, i);
if (user_backed_iter(i))
might_fault();
iterate_and_advance(i, bytes, base, len, off,
@@ -539,42 +319,6 @@ static int copyout_mc(void __user *to, const void *from, size_t n)
return n;
}
-static size_t copy_mc_pipe_to_iter(const void *addr, size_t bytes,
- struct iov_iter *i)
-{
- size_t xfer = 0;
- unsigned int off, chunk;
-
- if (unlikely(bytes > i->count))
- bytes = i->count;
- if (unlikely(!bytes))
- return 0;
-
- if (!sanity(i))
- return 0;
-
- while (bytes) {
- struct page *page = append_pipe(i, bytes, &off);
- unsigned long rem;
- char *p;
-
- if (!page)
- break;
- chunk = min_t(size_t, bytes, PAGE_SIZE - off);
- p = kmap_local_page(page);
- rem = copy_mc_to_kernel(p + off, addr + xfer, chunk);
- chunk -= rem;
- kunmap_local(p);
- xfer += chunk;
- bytes -= chunk;
- if (rem) {
- iov_iter_revert(i, rem);
- break;
- }
- }
- return xfer;
-}
-
/**
* _copy_mc_to_iter - copy to iter with source memory error exception handling
* @addr: source kernel address
@@ -594,9 +338,8 @@ static size_t copy_mc_pipe_to_iter(const void *addr, size_t bytes,
* alignment and poison alignment assumptions to avoid re-triggering
* hardware exceptions.
*
- * * ITER_KVEC, ITER_PIPE, and ITER_BVEC can return short copies.
- * Compare to copy_to_iter() where only ITER_IOVEC attempts might return
- * a short copy.
+ * * ITER_KVEC and ITER_BVEC can return short copies. Compare to
+ * copy_to_iter() where only ITER_IOVEC attempts might return a short copy.
*
* Return: number of bytes copied (may be %0)
*/
@@ -604,8 +347,6 @@ size_t _copy_mc_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
{
if (WARN_ON_ONCE(i->data_source))
return 0;
- if (unlikely(iov_iter_is_pipe(i)))
- return copy_mc_pipe_to_iter(addr, bytes, i);
if (user_backed_iter(i))
might_fault();
__iterate_and_advance(i, bytes, base, len, off,
@@ -711,8 +452,6 @@ size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes,
return 0;
if (WARN_ON_ONCE(i->data_source))
return 0;
- if (unlikely(iov_iter_is_pipe(i)))
- return copy_page_to_iter_pipe(page, offset, bytes, i);
page += offset / PAGE_SIZE; // first subpage
offset %= PAGE_SIZE;
while (1) {
@@ -761,36 +500,8 @@ size_t copy_page_from_iter(struct page *page, size_t offset, size_t bytes,
}
EXPORT_SYMBOL(copy_page_from_iter);
-static size_t pipe_zero(size_t bytes, struct iov_iter *i)
-{
- unsigned int chunk, off;
-
- if (unlikely(bytes > i->count))
- bytes = i->count;
- if (unlikely(!bytes))
- return 0;
-
- if (!sanity(i))
- return 0;
-
- for (size_t n = bytes; n; n -= chunk) {
- struct page *page = append_pipe(i, n, &off);
- char *p;
-
- if (!page)
- return bytes - n;
- chunk = min_t(size_t, n, PAGE_SIZE - off);
- p = kmap_local_page(page);
- memset(p + off, 0, chunk);
- kunmap_local(p);
- }
- return bytes;
-}
-
size_t iov_iter_zero(size_t bytes, struct iov_iter *i)
{
- if (unlikely(iov_iter_is_pipe(i)))
- return pipe_zero(bytes, i);
iterate_and_advance(i, bytes, base, len, count,
clear_user(base, len),
memset(base, 0, len)
@@ -821,32 +532,6 @@ size_t copy_page_from_iter_atomic(struct page *page, unsigned offset, size_t byt
}
EXPORT_SYMBOL(copy_page_from_iter_atomic);
-static void pipe_advance(struct iov_iter *i, size_t size)
-{
- struct pipe_inode_info *pipe = i->pipe;
- int off = i->last_offset;
-
- if (!off && !size) {
- pipe_discard_from(pipe, i->start_head); // discard everything
- return;
- }
- i->count -= size;
- while (1) {
- struct pipe_buffer *buf = pipe_buf(pipe, i->head);
- if (off) /* make it relative to the beginning of buffer */
- size += abs(off) - buf->offset;
- if (size <= buf->len) {
- buf->len = size;
- i->last_offset = last_offset(buf);
- break;
- }
- size -= buf->len;
- i->head++;
- off = 0;
- }
- pipe_discard_from(pipe, i->head + 1); // discard everything past this one
-}
-
static void iov_iter_bvec_advance(struct iov_iter *i, size_t size)
{
const struct bio_vec *bvec, *end;
@@ -898,8 +583,6 @@ void iov_iter_advance(struct iov_iter *i, size_t size)
iov_iter_iovec_advance(i, size);
} else if (iov_iter_is_bvec(i)) {
iov_iter_bvec_advance(i, size);
- } else if (iov_iter_is_pipe(i)) {
- pipe_advance(i, size);
} else if (iov_iter_is_discard(i)) {
i->count -= size;
}
@@ -913,26 +596,6 @@ void iov_iter_revert(struct iov_iter *i, size_t unroll)
if (WARN_ON(unroll > MAX_RW_COUNT))
return;
i->count += unroll;
- if (unlikely(iov_iter_is_pipe(i))) {
- struct pipe_inode_info *pipe = i->pipe;
- unsigned int head = pipe->head;
-
- while (head > i->start_head) {
- struct pipe_buffer *b = pipe_buf(pipe, --head);
- if (unroll < b->len) {
- b->len -= unroll;
- i->last_offset = last_offset(b);
- i->head = head;
- return;
- }
- unroll -= b->len;
- pipe_buf_release(pipe, b);
- pipe->head--;
- }
- i->last_offset = 0;
- i->head = head;
- return;
- }
if (unlikely(iov_iter_is_discard(i)))
return;
if (unroll <= i->iov_offset) {
@@ -1020,24 +683,6 @@ void iov_iter_bvec(struct iov_iter *i, unsigned int direction,
}
EXPORT_SYMBOL(iov_iter_bvec);
-void iov_iter_pipe(struct iov_iter *i, unsigned int direction,
- struct pipe_inode_info *pipe,
- size_t count)
-{
- BUG_ON(direction != READ);
- WARN_ON(pipe_full(pipe->head, pipe->tail, pipe->ring_size));
- *i = (struct iov_iter){
- .iter_type = ITER_PIPE,
- .data_source = false,
- .pipe = pipe,
- .head = pipe->head,
- .start_head = pipe->head,
- .last_offset = 0,
- .count = count
- };
-}
-EXPORT_SYMBOL(iov_iter_pipe);
-
/**
* iov_iter_xarray - Initialise an I/O iterator to use the pages in an xarray
* @i: The iterator to initialise.
@@ -1162,19 +807,6 @@ bool iov_iter_is_aligned(const struct iov_iter *i, unsigned addr_mask,
if (iov_iter_is_bvec(i))
return iov_iter_aligned_bvec(i, addr_mask, len_mask);
- if (iov_iter_is_pipe(i)) {
- size_t size = i->count;
-
- if (size & len_mask)
- return false;
- if (size && i->last_offset > 0) {
- if (i->last_offset & addr_mask)
- return false;
- }
-
- return true;
- }
-
if (iov_iter_is_xarray(i)) {
if (i->count & len_mask)
return false;
@@ -1244,14 +876,6 @@ unsigned long iov_iter_alignment(const struct iov_iter *i)
if (iov_iter_is_bvec(i))
return iov_iter_alignment_bvec(i);
- if (iov_iter_is_pipe(i)) {
- size_t size = i->count;
-
- if (size && i->last_offset > 0)
- return size | i->last_offset;
- return size;
- }
-
if (iov_iter_is_xarray(i))
return (i->xarray_start + i->iov_offset) | i->count;
@@ -1303,36 +927,6 @@ static int want_pages_array(struct page ***res, size_t size,
return count;
}
-static ssize_t pipe_get_pages(struct iov_iter *i,
- struct page ***pages, size_t maxsize, unsigned maxpages,
- size_t *start)
-{
- unsigned int npages, count, off, chunk;
- struct page **p;
- size_t left;
-
- if (!sanity(i))
- return -EFAULT;
-
- *start = off = pipe_npages(i, &npages);
- if (!npages)
- return -EFAULT;
- count = want_pages_array(pages, maxsize, off, min(npages, maxpages));
- if (!count)
- return -ENOMEM;
- p = *pages;
- for (npages = 0, left = maxsize ; npages < count; npages++, left -= chunk) {
- struct page *page = append_pipe(i, left, &off);
- if (!page)
- break;
- chunk = min_t(size_t, left, PAGE_SIZE - off);
- get_page(*p++ = page);
- }
- if (!npages)
- return -EFAULT;
- return maxsize - left;
-}
-
static ssize_t iter_xarray_populate_pages(struct page **pages, struct xarray *xa,
pgoff_t index, unsigned int nr_pages)
{
@@ -1480,8 +1074,6 @@ static ssize_t __iov_iter_get_pages_alloc(struct iov_iter *i,
}
return maxsize;
}
- if (iov_iter_is_pipe(i))
- return pipe_get_pages(i, pages, maxsize, maxpages, start);
if (iov_iter_is_xarray(i))
return iter_xarray_get_pages(i, pages, maxsize, maxpages, start);
return -EFAULT;
@@ -1571,9 +1163,7 @@ size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *_csstate,
}
sum = csum_shift(csstate->csum, csstate->off);
- if (unlikely(iov_iter_is_pipe(i)))
- bytes = csum_and_copy_to_pipe_iter(addr, bytes, i, &sum);
- else iterate_and_advance(i, bytes, base, len, off, ({
+ iterate_and_advance(i, bytes, base, len, off, ({
next = csum_and_copy_to_user(addr + off, base, len);
sum = csum_block_add(sum, next, off);
next ? 0 : len;
@@ -1658,15 +1248,6 @@ int iov_iter_npages(const struct iov_iter *i, int maxpages)
return iov_npages(i, maxpages);
if (iov_iter_is_bvec(i))
return bvec_npages(i, maxpages);
- if (iov_iter_is_pipe(i)) {
- int npages;
-
- if (!sanity(i))
- return 0;
-
- pipe_npages(i, &npages);
- return min(npages, maxpages);
- }
if (iov_iter_is_xarray(i)) {
unsigned offset = (i->xarray_start + i->iov_offset) % PAGE_SIZE;
int npages = DIV_ROUND_UP(offset + i->count, PAGE_SIZE);
@@ -1679,10 +1260,6 @@ EXPORT_SYMBOL(iov_iter_npages);
const void *dup_iter(struct iov_iter *new, struct iov_iter *old, gfp_t flags)
{
*new = *old;
- if (unlikely(iov_iter_is_pipe(new))) {
- WARN_ON(1);
- return NULL;
- }
if (iov_iter_is_bvec(new))
return new->bvec = kmemdup(new->bvec,
new->nr_segs * sizeof(struct bio_vec),
diff --git a/mm/filemap.c b/mm/filemap.c
index 8c7b135c8e23..c01bbcb9fa92 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2693,8 +2693,7 @@ ssize_t filemap_read(struct kiocb *iocb, struct iov_iter *iter,
if (unlikely(iocb->ki_pos >= i_size_read(inode)))
break;
- error = filemap_get_pages(iocb, iter->count, &fbatch,
- iov_iter_is_pipe(iter));
+ error = filemap_get_pages(iocb, iter->count, &fbatch, false);
if (error < 0)
break;
Implement splice-read for coda by passing the request down a layer rather
than going through generic_file_splice_read(), which is going to be changed
to assume that ->read_folio() is present on buffered files.
Signed-off-by: David Howells <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: John Hubbard <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: Matthew Wilcox <[email protected]>
cc: Jan Harkes <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
fs/coda/file.c | 36 +++++++++++++++++++++++++++++++++++-
1 file changed, 35 insertions(+), 1 deletion(-)
diff --git a/fs/coda/file.c b/fs/coda/file.c
index 3f3c81e6b1ab..33cd7880d30e 100644
--- a/fs/coda/file.c
+++ b/fs/coda/file.c
@@ -23,6 +23,7 @@
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/uio.h>
+#include <linux/splice.h>
#include <linux/coda.h>
#include "coda_psdev.h"
@@ -94,6 +95,39 @@ coda_file_write_iter(struct kiocb *iocb, struct iov_iter *to)
return ret;
}
+static ssize_t
+coda_file_splice_read(struct file *coda_file, loff_t *ppos,
+ struct pipe_inode_info *pipe,
+ size_t len, unsigned int flags)
+{
+ struct inode *coda_inode = file_inode(coda_file);
+ struct coda_file_info *cfi = coda_ftoc(coda_file);
+ struct file *in = cfi->cfi_container;
+ loff_t ki_pos = *ppos;
+ ssize_t ret;
+
+ if (!in->f_op->splice_read)
+ return -EINVAL;
+
+ ret = rw_verify_area(READ, in, ppos, len);
+ if (unlikely(ret < 0))
+ return ret;
+
+ ret = venus_access_intent(coda_inode->i_sb, coda_i2f(coda_inode),
+ &cfi->cfi_access_intent,
+ len, ki_pos, CODA_ACCESS_TYPE_READ);
+ if (ret)
+ goto finish_read;
+
+ ret = in->f_op->splice_read(in, ppos, pipe, len, flags);
+
+finish_read:
+ venus_access_intent(coda_inode->i_sb, coda_i2f(coda_inode),
+ &cfi->cfi_access_intent,
+ len, ki_pos, CODA_ACCESS_TYPE_READ_FINISH);
+ return ret;
+}
+
static void
coda_vm_open(struct vm_area_struct *vma)
{
@@ -302,5 +336,5 @@ const struct file_operations coda_file_operations = {
.open = coda_open,
.release = coda_release,
.fsync = coda_fsync,
- .splice_read = generic_file_splice_read,
+ .splice_read = coda_file_splice_read,
};
Add a function, iov_iter_extract_pages(), to extract a list of pages from
an iterator. The pages may be returned with a pin added or nothing,
depending on the type of iterator.
Add a second function, iov_iter_extract_will_pin(), to determine how the
cleanup should be done.
There are two cases:
(1) ITER_IOVEC or ITER_UBUF iterator.
Extracted pages will have pins (FOLL_PIN) obtained on them so that a
concurrent fork() will forcibly copy the page so that DMA is done
to/from the parent's buffer and is unavailable to/unaffected by the
child process.
iov_iter_extract_will_pin() will return true for this case. The
caller should use something like unpin_user_page() to dispose of the
page.
(2) Any other sort of iterator.
No refs or pins are obtained on the pages; the assumption is made that
the caller will manage page retention.
iov_iter_extract_will_pin() will return false. The pages don't need
additional disposal.
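To illustrate the calling convention, a caller might use the pair roughly
like this (a hypothetical sketch, not part of the patch; the size and page
limits are made up):

	struct page **pages = NULL;	/* let the extractor allocate the array */
	size_t offset;
	ssize_t len;

	len = iov_iter_extract_pages(iter, &pages, 256 * 1024, 16, 0, &offset);
	if (len <= 0) {
		kvfree(pages);	/* kvfree(NULL) is a no-op */
		return len;
	}

	/* ... do DMA to/from the pages, starting at offset into pages[0] ... */

	if (iov_iter_extract_will_pin(iter)) {
		size_t i, n = DIV_ROUND_UP(offset + len, PAGE_SIZE);

		for (i = 0; i < n; i++)
			unpin_user_page(pages[i]);
	}
	kvfree(pages);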
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
cc: Al Viro <[email protected]>
cc: John Hubbard <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: Matthew Wilcox <[email protected]>
cc: [email protected]
cc: [email protected]
---
Notes:
ver #12)
- ITER_PIPE is gone, so drop related bits.
- Don't specify FOLL_PIN as that's implied by pin_user_pages_fast().
ver #11)
- Fix iov_iter_extract_kvec_pages() to include the offset into the page in
the returned starting offset.
- Use __bitwise for the extraction flags
ver #10)
- Fix use of i->kvec in iov_iter_extract_bvec_pages() to be i->bvec.
ver #9)
- Rename iov_iter_extract_mode() to iov_iter_extract_will_pin() and make
it return true/false not FOLL_PIN/0 as FOLL_PIN is going to be made
private to mm/.
- Change extract_flags to extraction_flags.
ver #8)
- It seems that all DIO is supposed to be done under FOLL_PIN now, and not
FOLL_GET, so switch to only using pin_user_pages() for user-backed
iters.
- Wrap an argument in brackets in the iov_iter_extract_mode() macro.
- Drop the extract_flags argument to iov_iter_extract_mode() for now
[hch].
ver #7)
- Switch to passing in iter-specific flags rather than FOLL_* flags.
- Drop the direction flags for now.
- Use ITER_ALLOW_P2PDMA to request FOLL_PCI_P2PDMA.
- Disallow use of ITER_ALLOW_P2PDMA with non-user-backed iter.
- Add support for extraction from KVEC-type iters.
- Use iov_iter_advance() rather than open-coding it.
- Make BVEC- and KVEC-type skip over initial empty vectors.
ver #6)
- Add back the function to indicate the cleanup mode.
- Drop the cleanup_mode return arg to iov_iter_extract_pages().
- Pass FOLL_SOURCE/DEST_BUF in gup_flags. Check this against the iter
data_source.
ver #4)
- Use ITER_SOURCE/DEST instead of WRITE/READ.
- Allow additional FOLL_* flags, such as FOLL_PCI_P2PDMA to be passed in.
ver #3)
- Switch to using EXPORT_SYMBOL_GPL to prevent indirect 3rd-party access
to get/pin_user_pages_fast()[1].
include/linux/uio.h | 27 ++++-
lib/iov_iter.c | 264 ++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 290 insertions(+), 1 deletion(-)
diff --git a/include/linux/uio.h b/include/linux/uio.h
index af70e4c9ea27..cf6658066736 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -347,9 +347,34 @@ static inline void iov_iter_ubuf(struct iov_iter *i, unsigned int direction,
.count = count
};
}
-
/* Flags for iov_iter_get/extract_pages*() */
/* Allow P2PDMA on the extracted pages */
#define ITER_ALLOW_P2PDMA ((__force iov_iter_extraction_t)0x01)
+ssize_t iov_iter_extract_pages(struct iov_iter *i, struct page ***pages,
+ size_t maxsize, unsigned int maxpages,
+ iov_iter_extraction_t extraction_flags,
+ size_t *offset0);
+
+/**
+ * iov_iter_extract_will_pin - Indicate how pages from the iterator will be retained
+ * @iter: The iterator
+ *
+ * Examine the iterator and indicate by returning true or false as to how, if
+ * at all, pages extracted from the iterator will be retained by the extraction
+ * function.
+ *
+ * %true indicates that the pages will have a pin placed in them that the
+ * caller must unpin.  This must be done for DMA/async DIO to force fork()
+ * to copy the page for the child (the parent must retain the original
+ * page).
+ *
+ * %false indicates that no measures are taken and that it's up to the caller
+ * to retain the pages.
+ */
+static inline bool iov_iter_extract_will_pin(const struct iov_iter *iter)
+{
+ return user_backed_iter(iter);
+}
+
#endif
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 34ee3764d0fa..8d34b6552179 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -1487,3 +1487,267 @@ void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state)
i->iov -= state->nr_segs - i->nr_segs;
i->nr_segs = state->nr_segs;
}
+
+/*
+ * Extract a list of contiguous pages from an ITER_XARRAY iterator. This does not
+ * get references on the pages, nor does it get a pin on them.
+ */
+static ssize_t iov_iter_extract_xarray_pages(struct iov_iter *i,
+ struct page ***pages, size_t maxsize,
+ unsigned int maxpages,
+ iov_iter_extraction_t extraction_flags,
+ size_t *offset0)
+{
+ struct page *page, **p;
+ unsigned int nr = 0, offset;
+ loff_t pos = i->xarray_start + i->iov_offset;
+ pgoff_t index = pos >> PAGE_SHIFT;
+ XA_STATE(xas, i->xarray, index);
+
+ offset = pos & ~PAGE_MASK;
+ *offset0 = offset;
+
+ maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+ if (!maxpages)
+ return -ENOMEM;
+ p = *pages;
+
+ rcu_read_lock();
+ for (page = xas_load(&xas); page; page = xas_next(&xas)) {
+ if (xas_retry(&xas, page))
+ continue;
+
+ /* Has the page moved or been split? */
+ if (unlikely(page != xas_reload(&xas))) {
+ xas_reset(&xas);
+ continue;
+ }
+
+ p[nr++] = find_subpage(page, xas.xa_index);
+ if (nr == maxpages)
+ break;
+ }
+ rcu_read_unlock();
+
+ maxsize = min_t(size_t, nr * PAGE_SIZE - offset, maxsize);
+ iov_iter_advance(i, maxsize);
+ return maxsize;
+}
+
+/*
+ * Extract a list of contiguous pages from an ITER_BVEC iterator. This does
+ * not get references on the pages, nor does it get a pin on them.
+ */
+static ssize_t iov_iter_extract_bvec_pages(struct iov_iter *i,
+ struct page ***pages, size_t maxsize,
+ unsigned int maxpages,
+ iov_iter_extraction_t extraction_flags,
+ size_t *offset0)
+{
+ struct page **p, *page;
+ size_t skip = i->iov_offset, offset;
+ int k;
+
+ for (;;) {
+ if (i->nr_segs == 0)
+ return 0;
+ maxsize = min(maxsize, i->bvec->bv_len - skip);
+ if (maxsize)
+ break;
+ i->iov_offset = 0;
+ i->nr_segs--;
+ i->bvec++;
+ skip = 0;
+ }
+
+ skip += i->bvec->bv_offset;
+ page = i->bvec->bv_page + skip / PAGE_SIZE;
+ offset = skip % PAGE_SIZE;
+ *offset0 = offset;
+
+ maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+ if (!maxpages)
+ return -ENOMEM;
+ p = *pages;
+ for (k = 0; k < maxpages; k++)
+ p[k] = page + k;
+
+ maxsize = min_t(size_t, maxsize, maxpages * PAGE_SIZE - offset);
+ iov_iter_advance(i, maxsize);
+ return maxsize;
+}
+
+/*
+ * Extract a list of virtually contiguous pages from an ITER_KVEC iterator.
+ * This does not get references on the pages, nor does it get a pin on them.
+ */
+static ssize_t iov_iter_extract_kvec_pages(struct iov_iter *i,
+ struct page ***pages, size_t maxsize,
+ unsigned int maxpages,
+ iov_iter_extraction_t extraction_flags,
+ size_t *offset0)
+{
+ struct page **p, *page;
+ const void *kaddr;
+ size_t skip = i->iov_offset, offset, len;
+ int k;
+
+ for (;;) {
+ if (i->nr_segs == 0)
+ return 0;
+ maxsize = min(maxsize, i->kvec->iov_len - skip);
+ if (maxsize)
+ break;
+ i->iov_offset = 0;
+ i->nr_segs--;
+ i->kvec++;
+ skip = 0;
+ }
+
+ kaddr = i->kvec->iov_base + skip;
+ offset = (unsigned long)kaddr & ~PAGE_MASK;
+ *offset0 = offset;
+
+ maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+ if (!maxpages)
+ return -ENOMEM;
+ p = *pages;
+
+ kaddr -= offset;
+ len = offset + maxsize;
+ for (k = 0; k < maxpages; k++) {
+ size_t seg = min_t(size_t, len, PAGE_SIZE);
+
+ if (is_vmalloc_or_module_addr(kaddr))
+ page = vmalloc_to_page(kaddr);
+ else
+ page = virt_to_page(kaddr);
+
+ p[k] = page;
+ len -= seg;
+ kaddr += PAGE_SIZE;
+ }
+
+ maxsize = min_t(size_t, maxsize, maxpages * PAGE_SIZE - offset);
+ iov_iter_advance(i, maxsize);
+ return maxsize;
+}
+
+/*
+ * Extract a list of contiguous pages from a user iterator and get a pin on
+ * each of them. This should only be used if the iterator is user-backed
+ * (IOBUF/UBUF).
+ *
+ * It does not get refs on the pages, but the pages must be unpinned by the
+ * caller once the transfer is complete.
+ *
+ * This is safe to be used where background IO/DMA *is* going to be modifying
+ * the buffer; using a pin rather than a ref forces fork() to give the
+ * child a copy of the page.
+ */
+static ssize_t iov_iter_extract_user_pages(struct iov_iter *i,
+ struct page ***pages,
+ size_t maxsize,
+ unsigned int maxpages,
+ iov_iter_extraction_t extraction_flags,
+ size_t *offset0)
+{
+ unsigned long addr;
+ unsigned int gup_flags = 0;
+ size_t offset;
+ int res;
+
+ if (i->data_source == ITER_DEST)
+ gup_flags |= FOLL_WRITE;
+ if (extraction_flags & ITER_ALLOW_P2PDMA)
+ gup_flags |= FOLL_PCI_P2PDMA;
+ if (i->nofault)
+ gup_flags |= FOLL_NOFAULT;
+
+ addr = first_iovec_segment(i, &maxsize);
+ *offset0 = offset = addr % PAGE_SIZE;
+ addr &= PAGE_MASK;
+ maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+ if (!maxpages)
+ return -ENOMEM;
+ res = pin_user_pages_fast(addr, maxpages, gup_flags, *pages);
+ if (unlikely(res <= 0))
+ return res;
+ maxsize = min_t(size_t, maxsize, res * PAGE_SIZE - offset);
+ iov_iter_advance(i, maxsize);
+ return maxsize;
+}
+
+/**
+ * iov_iter_extract_pages - Extract a list of contiguous pages from an iterator
+ * @i: The iterator to extract from
+ * @pages: Where to return the list of pages
+ * @maxsize: The maximum amount of iterator to extract
+ * @maxpages: The maximum size of the list of pages
+ * @extraction_flags: Flags to qualify request
+ * @offset0: Where to return the starting offset into (*@pages)[0]
+ *
+ * Extract a list of contiguous pages from the current point of the iterator,
+ * advancing the iterator. The maximum number of pages and the maximum amount
+ * of page contents can be set.
+ *
+ * If *@pages is NULL, a page list will be allocated to the required size and
+ * *@pages will be set to its base. If *@pages is not NULL, it will be assumed
+ * that the caller allocated a page list at least @maxpages in size and this
+ * will be filled in.
+ *
+ * @extraction_flags can have ITER_ALLOW_P2PDMA set to request peer-to-peer DMA
+ * be allowed on the pages extracted.
+ *
+ * The iov_iter_extract_will_pin() function can be used to query how cleanup
+ * should be performed.
+ *
+ * Extra refs or pins on the pages may be obtained as follows:
+ *
+ * (*) If the iterator is user-backed (ITER_IOVEC/ITER_UBUF), pins will be
+ * added to the pages, but refs will not be taken.
+ * iov_iter_extract_will_pin() will return true.
+ *
+ * (*) If the iterator is ITER_KVEC, ITER_BVEC or ITER_XARRAY, the pages are
+ * merely listed; no extra refs or pins are obtained.
+ *     iov_iter_extract_will_pin() will return false.
+ *
+ * Note also:
+ *
+ * (*) Use with ITER_DISCARD is not supported as that has no content.
+ *
+ * On success, the function sets *@pages to the new pagelist, if allocated, and
+ * sets *offset0 to the offset into the first page.
+ *
+ * It may also return -ENOMEM and -EFAULT.
+ */
+ssize_t iov_iter_extract_pages(struct iov_iter *i,
+ struct page ***pages,
+ size_t maxsize,
+ unsigned int maxpages,
+ iov_iter_extraction_t extraction_flags,
+ size_t *offset0)
+{
+ maxsize = min_t(size_t, min_t(size_t, maxsize, i->count), MAX_RW_COUNT);
+ if (!maxsize)
+ return 0;
+
+ if (likely(user_backed_iter(i)))
+ return iov_iter_extract_user_pages(i, pages, maxsize,
+ maxpages, extraction_flags,
+ offset0);
+ if (iov_iter_is_kvec(i))
+ return iov_iter_extract_kvec_pages(i, pages, maxsize,
+ maxpages, extraction_flags,
+ offset0);
+ if (iov_iter_is_bvec(i))
+ return iov_iter_extract_bvec_pages(i, pages, maxsize,
+ maxpages, extraction_flags,
+ offset0);
+ if (iov_iter_is_xarray(i))
+ return iov_iter_extract_xarray_pages(i, pages, maxsize,
+ maxpages, extraction_flags,
+ offset0);
+ return -EFAULT;
+}
+EXPORT_SYMBOL_GPL(iov_iter_extract_pages);
Fix bio_flagged() so that multiple instances of it, such as:
if (bio_flagged(bio, BIO_PAGE_REFFED) ||
bio_flagged(bio, BIO_PAGE_PINNED))
can be combined by the gcc optimiser into a single test in assembly
(arguably, this is a compiler optimisation issue[1]).
The missed optimisation stems from bio_flagged() comparing the result of
the bitwise-AND to zero. This results in an out-of-line bio_release_pages()
being compiled to something like:
<+0>: mov 0x14(%rdi),%eax
<+3>: test $0x1,%al
<+5>: jne 0xffffffff816dac53 <bio_release_pages+11>
<+7>: test $0x2,%al
<+9>: je 0xffffffff816dac5c <bio_release_pages+20>
<+11>: movzbl %sil,%esi
<+15>: jmp 0xffffffff816daba1 <__bio_release_pages>
<+20>: jmp 0xffffffff81d0b800 <__x86_return_thunk>
However, the test is superfluous as the return type is bool. Removing it
results in:
<+0>: testb $0x3,0x14(%rdi)
<+4>: je 0xffffffff816e4af4 <bio_release_pages+15>
<+6>: movzbl %sil,%esi
<+10>: jmp 0xffffffff816dab7c <__bio_release_pages>
<+15>: jmp 0xffffffff81d0b7c0 <__x86_return_thunk>
instead.
Also, the MOVZBL instruction looks unnecessary[2] - I think it's just
're-booling' the mark_dirty parameter.
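The effect can be reproduced outside the kernel. Here's a minimal
standalone sketch of the two forms (a hypothetical test case, not from the
patch; whether the tests get merged depends on the compiler version):

	#include <stdbool.h>

	struct b { unsigned int flags; };

	static inline bool flagged_cmp(const struct b *b, unsigned int bit)
	{
		return (b->flags & (1U << bit)) != 0;	/* old bio_flagged() */
	}

	static inline bool flagged_plain(const struct b *b, unsigned int bit)
	{
		return b->flags & (1U << bit);		/* new bio_flagged() */
	}

	/* typically compiled as two separate single-bit tests */
	bool either_cmp(const struct b *b)
	{
		return flagged_cmp(b, 0) || flagged_cmp(b, 1);
	}

	/* typically compiled as a single testb $0x3 */
	bool either_plain(const struct b *b)
	{
		return flagged_plain(b, 0) || flagged_plain(b, 1);
	}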
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: John Hubbard <[email protected]>
cc: Jens Axboe <[email protected]>
cc: [email protected]
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108370 [1]
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108371 [2]
Link: https://lore.kernel.org/r/167391056756.2311931.356007731815807265.stgit@warthog.procyon.org.uk/ # v6
---
include/linux/bio.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/bio.h b/include/linux/bio.h
index c1da63f6c808..10366b8bdb13 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -227,7 +227,7 @@ static inline void bio_cnt_set(struct bio *bio, unsigned int count)
static inline bool bio_flagged(struct bio *bio, unsigned int bit)
{
- return (bio->bi_flags & (1U << bit)) != 0;
+ return bio->bi_flags & (1U << bit);
}
static inline void bio_set_flag(struct bio *bio, unsigned int bit)
Define flags to qualify page extraction to pass into iov_iter_*_pages*()
rather than passing in FOLL_* flags.
For now only a flag to allow peer-to-peer DMA is supported.
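Because iov_iter_extraction_t is __bitwise, sparse will now flag callers
that still pass raw FOLL_* values where extraction flags are expected, e.g.
(hypothetical call sites):

	/* sparse: incorrect type in argument 6 (different base types) */
	iov_iter_get_pages(i, pages, maxsize, maxpages, &start, FOLL_PCI_P2PDMA);

	/* OK */
	iov_iter_get_pages(i, pages, maxsize, maxpages, &start, ITER_ALLOW_P2PDMA);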
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: John Hubbard <[email protected]>
cc: Al Viro <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Logan Gunthorpe <[email protected]>
cc: [email protected]
cc: [email protected]
---
Notes:
ver #12)
- Use __bitwise for the extraction flags typedef.
ver #11)
- Use __bitwise for the extraction flags.
ver #9)
- Change extract_flags to extraction_flags.
ver #7)
- Don't use FOLL_* as a parameter, but rather define constants
specifically to use with iov_iter_*_pages*().
- Drop the I/O direction constants for now.
block/bio.c | 6 +++---
block/blk-map.c | 8 ++++----
include/linux/uio.h | 10 ++++++++--
lib/iov_iter.c | 14 ++++++++------
4 files changed, 23 insertions(+), 15 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index ab59a491a883..b97f3991c904 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1245,11 +1245,11 @@ static int bio_iov_add_zone_append_page(struct bio *bio, struct page *page,
*/
static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
{
+ iov_iter_extraction_t extraction_flags = 0;
unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
struct page **pages = (struct page **)bv;
- unsigned int gup_flags = 0;
ssize_t size, left;
unsigned len, i = 0;
size_t offset, trim;
@@ -1264,7 +1264,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
if (bio->bi_bdev && blk_queue_pci_p2pdma(bio->bi_bdev->bd_disk->queue))
- gup_flags |= FOLL_PCI_P2PDMA;
+ extraction_flags |= ITER_ALLOW_P2PDMA;
/*
* Each segment in the iov is required to be a block size multiple.
@@ -1275,7 +1275,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
*/
size = iov_iter_get_pages(iter, pages,
UINT_MAX - bio->bi_iter.bi_size,
- nr_pages, &offset, gup_flags);
+ nr_pages, &offset, extraction_flags);
if (unlikely(size <= 0))
return size ? size : -EFAULT;
diff --git a/block/blk-map.c b/block/blk-map.c
index 19940c978c73..080dd60485be 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -265,9 +265,9 @@ static struct bio *blk_rq_map_bio_alloc(struct request *rq,
static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
gfp_t gfp_mask)
{
+ iov_iter_extraction_t extraction_flags = 0;
unsigned int max_sectors = queue_max_hw_sectors(rq->q);
unsigned int nr_vecs = iov_iter_npages(iter, BIO_MAX_VECS);
- unsigned int gup_flags = 0;
struct bio *bio;
int ret;
int j;
@@ -280,7 +280,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
return -ENOMEM;
if (blk_queue_pci_p2pdma(rq->q))
- gup_flags |= FOLL_PCI_P2PDMA;
+ extraction_flags |= ITER_ALLOW_P2PDMA;
while (iov_iter_count(iter)) {
struct page **pages, *stack_pages[UIO_FASTIOV];
@@ -291,10 +291,10 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
if (nr_vecs <= ARRAY_SIZE(stack_pages)) {
pages = stack_pages;
bytes = iov_iter_get_pages(iter, pages, LONG_MAX,
- nr_vecs, &offs, gup_flags);
+ nr_vecs, &offs, extraction_flags);
} else {
bytes = iov_iter_get_pages_alloc(iter, &pages,
- LONG_MAX, &offs, gup_flags);
+ LONG_MAX, &offs, extraction_flags);
}
if (unlikely(bytes <= 0)) {
ret = bytes ? bytes : -EFAULT;
diff --git a/include/linux/uio.h b/include/linux/uio.h
index dcc0ca5ef491..af70e4c9ea27 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -12,6 +12,8 @@
struct page;
+typedef unsigned int __bitwise iov_iter_extraction_t;
+
struct kvec {
void *iov_base; /* and that should *never* hold a userland pointer */
size_t iov_len;
@@ -238,12 +240,12 @@ void iov_iter_xarray(struct iov_iter *i, unsigned int direction, struct xarray *
loff_t start, size_t count);
ssize_t iov_iter_get_pages(struct iov_iter *i, struct page **pages,
size_t maxsize, unsigned maxpages, size_t *start,
- unsigned gup_flags);
+ iov_iter_extraction_t extraction_flags);
ssize_t iov_iter_get_pages2(struct iov_iter *i, struct page **pages,
size_t maxsize, unsigned maxpages, size_t *start);
ssize_t iov_iter_get_pages_alloc(struct iov_iter *i,
struct page ***pages, size_t maxsize, size_t *start,
- unsigned gup_flags);
+ iov_iter_extraction_t extraction_flags);
ssize_t iov_iter_get_pages_alloc2(struct iov_iter *i, struct page ***pages,
size_t maxsize, size_t *start);
int iov_iter_npages(const struct iov_iter *i, int maxpages);
@@ -346,4 +348,8 @@ static inline void iov_iter_ubuf(struct iov_iter *i, unsigned int direction,
};
}
+/* Flags for iov_iter_get/extract_pages*() */
+/* Allow P2PDMA on the extracted pages */
+#define ITER_ALLOW_P2PDMA ((__force iov_iter_extraction_t)0x01)
+
#endif
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index adc5e8aa8ae8..34ee3764d0fa 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -1020,9 +1020,9 @@ static struct page *first_bvec_segment(const struct iov_iter *i,
static ssize_t __iov_iter_get_pages_alloc(struct iov_iter *i,
struct page ***pages, size_t maxsize,
unsigned int maxpages, size_t *start,
- unsigned int gup_flags)
+ iov_iter_extraction_t extraction_flags)
{
- unsigned int n;
+ unsigned int n, gup_flags = 0;
if (maxsize > i->count)
maxsize = i->count;
@@ -1030,6 +1030,8 @@ static ssize_t __iov_iter_get_pages_alloc(struct iov_iter *i,
return 0;
if (maxsize > MAX_RW_COUNT)
maxsize = MAX_RW_COUNT;
+ if (extraction_flags & ITER_ALLOW_P2PDMA)
+ gup_flags |= FOLL_PCI_P2PDMA;
if (likely(user_backed_iter(i))) {
unsigned long addr;
@@ -1081,14 +1083,14 @@ static ssize_t __iov_iter_get_pages_alloc(struct iov_iter *i,
ssize_t iov_iter_get_pages(struct iov_iter *i,
struct page **pages, size_t maxsize, unsigned maxpages,
- size_t *start, unsigned gup_flags)
+ size_t *start, iov_iter_extraction_t extraction_flags)
{
if (!maxpages)
return 0;
BUG_ON(!pages);
return __iov_iter_get_pages_alloc(i, &pages, maxsize, maxpages,
- start, gup_flags);
+ start, extraction_flags);
}
EXPORT_SYMBOL_GPL(iov_iter_get_pages);
@@ -1101,14 +1103,14 @@ EXPORT_SYMBOL(iov_iter_get_pages2);
ssize_t iov_iter_get_pages_alloc(struct iov_iter *i,
struct page ***pages, size_t maxsize,
- size_t *start, unsigned gup_flags)
+ size_t *start, iov_iter_extraction_t extraction_flags)
{
ssize_t len;
*pages = NULL;
len = __iov_iter_get_pages_alloc(i, pages, maxsize, ~0U, start,
- gup_flags);
+ extraction_flags);
if (len <= 0) {
kvfree(*pages);
*pages = NULL;
ZERO_PAGE can't go away, no need to hold an extra reference.
Signed-off-by: David Howells <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Reviewed-by: John Hubbard <[email protected]>
cc: Al Viro <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: [email protected]
---
fs/iomap/direct-io.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 9804714b1751..47db4ead1e74 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -202,7 +202,7 @@ static void iomap_dio_zero(const struct iomap_iter *iter, struct iomap_dio *dio,
bio->bi_private = dio;
bio->bi_end_io = iomap_dio_bio_end_io;
- get_page(page);
+ bio_set_flag(bio, BIO_NO_PAGE_REF);
__bio_add_page(bio, page, len, 0);
iomap_dio_submit_bio(iter, dio, bio, pos);
}
From: Christoph Hellwig <[email protected]>
Replace BIO_NO_PAGE_REF with a BIO_PAGE_REFFED flag that has the inverted
meaning and is only set when a page reference has been acquired that needs
to be released by bio_release_pages().
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: David Howells <[email protected]>
Reviewed-by: John Hubbard <[email protected]>
cc: Al Viro <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Jan Kara <[email protected]>
cc: Matthew Wilcox <[email protected]>
cc: Logan Gunthorpe <[email protected]>
cc: [email protected]
---
Notes:
ver #8)
- Split out from another patch [hch].
- Don't default to BIO_PAGE_REFFED [hch].
ver #5)
- Split from patch that uses iov_iter_extract_pages().
block/bio.c | 2 +-
block/blk-map.c | 1 +
fs/direct-io.c | 2 ++
fs/iomap/direct-io.c | 1 -
include/linux/bio.h | 2 +-
include/linux/blk_types.h | 2 +-
6 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index b97f3991c904..bf9bf53232be 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1198,7 +1198,6 @@ void bio_iov_bvec_set(struct bio *bio, struct iov_iter *iter)
bio->bi_io_vec = (struct bio_vec *)iter->bvec;
bio->bi_iter.bi_bvec_done = iter->iov_offset;
bio->bi_iter.bi_size = size;
- bio_set_flag(bio, BIO_NO_PAGE_REF);
bio_set_flag(bio, BIO_CLONED);
}
@@ -1343,6 +1342,7 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
return 0;
}
+ bio_set_flag(bio, BIO_PAGE_REFFED);
do {
ret = __bio_iov_iter_get_pages(bio, iter);
} while (!ret && iov_iter_count(iter) && !bio_full(bio, 0));
diff --git a/block/blk-map.c b/block/blk-map.c
index 080dd60485be..f1f70b50388d 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -282,6 +282,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
if (blk_queue_pci_p2pdma(rq->q))
extraction_flags |= ITER_ALLOW_P2PDMA;
+ bio_set_flag(bio, BIO_PAGE_REFFED);
while (iov_iter_count(iter)) {
struct page **pages, *stack_pages[UIO_FASTIOV];
ssize_t bytes;
diff --git a/fs/direct-io.c b/fs/direct-io.c
index 03d381377ae1..07810465fc9d 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -403,6 +403,8 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio,
bio->bi_end_io = dio_bio_end_aio;
else
bio->bi_end_io = dio_bio_end_io;
+ /* for now require references for all pages */
+ bio_set_flag(bio, BIO_PAGE_REFFED);
sdio->bio = bio;
sdio->logical_offset_in_bio = sdio->cur_page_fs_offset;
}
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 47db4ead1e74..c0e75900e754 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -202,7 +202,6 @@ static void iomap_dio_zero(const struct iomap_iter *iter, struct iomap_dio *dio,
bio->bi_private = dio;
bio->bi_end_io = iomap_dio_bio_end_io;
- bio_set_flag(bio, BIO_NO_PAGE_REF);
__bio_add_page(bio, page, len, 0);
iomap_dio_submit_bio(iter, dio, bio, pos);
}
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 10366b8bdb13..805957c99147 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -484,7 +484,7 @@ void zero_fill_bio(struct bio *bio);
static inline void bio_release_pages(struct bio *bio, bool mark_dirty)
{
- if (!bio_flagged(bio, BIO_NO_PAGE_REF))
+ if (bio_flagged(bio, BIO_PAGE_REFFED))
__bio_release_pages(bio, mark_dirty);
}
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 99be590f952f..7daa261f4f98 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -318,7 +318,7 @@ struct bio {
* bio flags
*/
enum {
- BIO_NO_PAGE_REF, /* don't put release vec pages */
+ BIO_PAGE_REFFED, /* put pages in bio_release_pages() */
BIO_CLONED, /* doesn't own data */
BIO_BOUNCED, /* bio is a bounce bio */
BIO_QUIET, /* Make BIO Quiet */
This will pin pages or leave them unaltered, rather than getting a ref on
them, as appropriate to the iterator type.
The pages need to be pinned for DIO rather than having refs taken on them
to prevent VM copy-on-write from malfunctioning during a concurrent fork()
(the result of the I/O could otherwise end up being visible to/affected by
the child process).
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: John Hubbard <[email protected]>
cc: Al Viro <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Jan Kara <[email protected]>
cc: Matthew Wilcox <[email protected]>
cc: Logan Gunthorpe <[email protected]>
cc: [email protected]
---
Notes:
ver #10)
- Drop bio_set_cleanup_mode(), open coding it instead.
ver #8)
- Split the patch up a bit [hch].
- We should only be using pinned/non-pinned pages and not ref'd pages,
so adjust the comments appropriately.
ver #7)
- Don't treat BIO_PAGE_REFFED/PINNED as being the same as FOLL_GET/PIN.
ver #5)
- Transcribe the FOLL_* flags returned by iov_iter_extract_pages() to
BIO_* flags and got rid of bi_cleanup_mode.
- Replaced BIO_NO_PAGE_REF with BIO_PAGE_REFFED in the preceding patch.
block/blk-map.c | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/block/blk-map.c b/block/blk-map.c
index f1f70b50388d..0f1593e144da 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -281,22 +281,21 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
if (blk_queue_pci_p2pdma(rq->q))
extraction_flags |= ITER_ALLOW_P2PDMA;
+ if (iov_iter_extract_will_pin(iter))
+ bio_set_flag(bio, BIO_PAGE_PINNED);
- bio_set_flag(bio, BIO_PAGE_REFFED);
while (iov_iter_count(iter)) {
- struct page **pages, *stack_pages[UIO_FASTIOV];
+ struct page *stack_pages[UIO_FASTIOV];
+ struct page **pages = stack_pages;
ssize_t bytes;
size_t offs;
int npages;
- if (nr_vecs <= ARRAY_SIZE(stack_pages)) {
- pages = stack_pages;
- bytes = iov_iter_get_pages(iter, pages, LONG_MAX,
- nr_vecs, &offs, extraction_flags);
- } else {
- bytes = iov_iter_get_pages_alloc(iter, &pages,
- LONG_MAX, &offs, extraction_flags);
- }
+ if (nr_vecs > ARRAY_SIZE(stack_pages))
+ pages = NULL;
+
+ bytes = iov_iter_extract_pages(iter, &pages, LONG_MAX,
+ nr_vecs, extraction_flags, &offs);
if (unlikely(bytes <= 0)) {
ret = bytes ? bytes : -EFAULT;
goto out_unmap;
@@ -318,7 +317,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
if (!bio_add_hw_page(rq->q, bio, page, n, offs,
max_sectors, &same_page)) {
if (same_page)
- put_page(page);
+ bio_release_page(bio, page);
break;
}
@@ -330,7 +329,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
* release the pages we didn't map into the bio, if any
*/
while (j < npages)
- put_page(pages[j++]);
+ bio_release_page(bio, pages[j++]);
if (pages != stack_pages)
kvfree(pages);
/* couldn't stuff something into bio? */
Add BIO_PAGE_PINNED to indicate that the pages in a bio are pinned
(FOLL_PIN) and that the pin will need removing.
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: John Hubbard <[email protected]>
cc: Al Viro <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Jan Kara <[email protected]>
cc: Matthew Wilcox <[email protected]>
cc: Logan Gunthorpe <[email protected]>
cc: [email protected]
---
Notes:
ver #10)
- Drop bio_set_cleanup_mode(), open coding it instead.
ver #9)
- Only consider pinning in bio_set_cleanup_mode(). Ref'ing pages in
struct bio is going away.
- page_put_unpin() is removed; call unpin_user_page() and put_page()
directly.
- Use bio_release_page() in __bio_release_pages().
- BIO_PAGE_PINNED and BIO_PAGE_REFFED can't both be set, so use if-else
when testing both of them.
ver #8)
- Move the infrastructure to clean up pinned pages to this patch [hch].
- Put BIO_PAGE_PINNED before BIO_PAGE_REFFED as the latter should
probably be removed at some point. FOLL_PIN can then be renumbered
first.
block/bio.c | 6 +++---
block/blk.h | 12 ++++++++++++
include/linux/bio.h | 3 ++-
include/linux/blk_types.h | 1 +
4 files changed, 18 insertions(+), 4 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index bf9bf53232be..547e38883934 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1176,7 +1176,7 @@ void __bio_release_pages(struct bio *bio, bool mark_dirty)
bio_for_each_segment_all(bvec, bio, iter_all) {
if (mark_dirty && !PageCompound(bvec->bv_page))
set_page_dirty_lock(bvec->bv_page);
- put_page(bvec->bv_page);
+ bio_release_page(bio, bvec->bv_page);
}
}
EXPORT_SYMBOL_GPL(__bio_release_pages);
@@ -1496,8 +1496,8 @@ void bio_set_pages_dirty(struct bio *bio)
* the BIO and re-dirty the pages in process context.
*
* It is expected that bio_check_pages_dirty() will wholly own the BIO from
- * here on. It will run one put_page() against each page and will run one
- * bio_put() against the BIO.
+ * here on. It will unpin each page and will run one bio_put() against the
+ * BIO.
*/
static void bio_dirty_fn(struct work_struct *work);
diff --git a/block/blk.h b/block/blk.h
index 4c3b3325219a..f02381405311 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -425,6 +425,18 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
struct page *page, unsigned int len, unsigned int offset,
unsigned int max_sectors, bool *same_page);
+/*
+ * Clean up a page appropriately, where the page may be pinned, may have a
+ * ref taken on it or neither.
+ */
+static inline void bio_release_page(struct bio *bio, struct page *page)
+{
+ if (bio_flagged(bio, BIO_PAGE_PINNED))
+ unpin_user_page(page);
+ else if (bio_flagged(bio, BIO_PAGE_REFFED))
+ put_page(page);
+}
+
struct request_queue *blk_alloc_queue(int node_id);
int disk_scan_partitions(struct gendisk *disk, fmode_t mode, void *owner);
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 805957c99147..b2c09997d79c 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -484,7 +484,8 @@ void zero_fill_bio(struct bio *bio);
static inline void bio_release_pages(struct bio *bio, bool mark_dirty)
{
- if (bio_flagged(bio, BIO_PAGE_REFFED))
+ if (bio_flagged(bio, BIO_PAGE_REFFED) ||
+ bio_flagged(bio, BIO_PAGE_PINNED))
__bio_release_pages(bio, mark_dirty);
}
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 7daa261f4f98..a0e339ff3d09 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -318,6 +318,7 @@ struct bio {
* bio flags
*/
enum {
+ BIO_PAGE_PINNED, /* Unpin pages in bio_release_pages() */
BIO_PAGE_REFFED, /* put pages in bio_release_pages() */
BIO_CLONED, /* doesn't own data */
BIO_BOUNCED, /* bio is a bounce bio */
This will pin pages or leave them unaltered, rather than getting a ref on
them, as appropriate to the iterator type.
The pages need to be pinned for DIO rather than having refs taken on them to
prevent VM copy-on-write from malfunctioning during a concurrent fork() (the
result of the I/O could otherwise end up being affected by/visible to the
child process).
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: John Hubbard <[email protected]>
cc: Al Viro <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Jan Kara <[email protected]>
cc: Matthew Wilcox <[email protected]>
cc: Logan Gunthorpe <[email protected]>
cc: [email protected]
---
Notes:
ver #10)
- Drop bio_set_cleanup_mode(), open coding it instead.
ver #8)
- Split the patch up a bit [hch].
- We should only be using pinned/non-pinned pages and not ref'd pages,
so adjust the comments appropriately.
ver #7)
- Don't treat BIO_PAGE_REFFED/PINNED as being the same as FOLL_GET/PIN.
ver #5)
- Transcribe the FOLL_* flags returned by iov_iter_extract_pages() to
BIO_* flags and got rid of bi_cleanup_mode.
- Replaced BIO_NO_PAGE_REF with BIO_PAGE_REFFED in the preceding patch.
block/bio.c | 23 ++++++++++++-----------
1 file changed, 12 insertions(+), 11 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index 547e38883934..fc57f0aa098e 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1212,7 +1212,7 @@ static int bio_iov_add_page(struct bio *bio, struct page *page,
}
if (same_page)
- put_page(page);
+ bio_release_page(bio, page);
return 0;
}
@@ -1226,7 +1226,7 @@ static int bio_iov_add_zone_append_page(struct bio *bio, struct page *page,
queue_max_zone_append_sectors(q), &same_page) != len)
return -EINVAL;
if (same_page)
- put_page(page);
+ bio_release_page(bio, page);
return 0;
}
@@ -1237,10 +1237,10 @@ static int bio_iov_add_zone_append_page(struct bio *bio, struct page *page,
* @bio: bio to add pages to
* @iter: iov iterator describing the region to be mapped
*
- * Pins pages from *iter and appends them to @bio's bvec array. The
- * pages will have to be released using put_page() when done.
- * For multi-segment *iter, this function only adds pages from the
- * next non-empty segment of the iov iterator.
+ * Extracts pages from *iter and appends them to @bio's bvec array. The pages
+ * will have to be cleaned up in the way indicated by the BIO_PAGE_PINNED flag.
+ * For a multi-segment *iter, this function only adds pages from the next
+ * non-empty segment of the iov iterator.
*/
static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
{
@@ -1272,9 +1272,9 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
* result to ensure the bio's total size is correct. The remainder of
* the iov data will be picked up in the next bio iteration.
*/
- size = iov_iter_get_pages(iter, pages,
- UINT_MAX - bio->bi_iter.bi_size,
- nr_pages, &offset, extraction_flags);
+ size = iov_iter_extract_pages(iter, &pages,
+ UINT_MAX - bio->bi_iter.bi_size,
+ nr_pages, extraction_flags, &offset);
if (unlikely(size <= 0))
return size ? size : -EFAULT;
@@ -1307,7 +1307,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
iov_iter_revert(iter, left);
out:
while (i < nr_pages)
- put_page(pages[i++]);
+ bio_release_page(bio, pages[i++]);
return ret;
}
@@ -1342,7 +1342,8 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
return 0;
}
- bio_set_flag(bio, BIO_PAGE_REFFED);
+ if (iov_iter_extract_will_pin(iter))
+ bio_set_flag(bio, BIO_PAGE_PINNED);
do {
ret = __bio_iov_iter_get_pages(bio, iter);
} while (!ret && iov_iter_count(iter) && !bio_full(bio, 0));
On Tue, Feb 14, 2023 at 05:13:19PM +0000, David Howells wrote:
> Implement splice-read for coda by passing the request down a layer rather
> than going through generic_file_splice_read() which is going to be changed
> to assume that ->read_folio() is present on buffered files.
>
> Signed-off-by: David Howells <[email protected]>
> cc: Christoph Hellwig <[email protected]>
> cc: Jens Axboe <[email protected]>
> cc: Al Viro <[email protected]>
> cc: John Hubbard <[email protected]>
> cc: David Hildenbrand <[email protected]>
> cc: Matthew Wilcox <[email protected]>
> cc: Jan Harkes <[email protected]>
> cc: [email protected]
> cc: [email protected]
> cc: [email protected]
> cc: [email protected]
> cc: [email protected]
> cc: [email protected]
> ---
Acked-by: Jan Harkes <[email protected]>
Hi Jens,
If you decide not to take my patches in this merge window, would you have any
objection to my patches 1-3 and 10-11 in this series going through Steve
French's cifs tree so that he can take my cifs iteratorisation patches?
Patches 1-3 would add filemap_splice_read() and direct_splice_read(), but not
connect them up to anything and 10-11 would add iov_iter_extract_pages(). I
can then give Steve a patch to make cifs use them as part of my patches for
that.
This would only affect cifs. See my iov-cifs branch:
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=iov-cifs
for an example of how this would look.
David
David Howells <[email protected]> wrote:
> This would only affect cifs.
Actually, that's not quite true: it changes the flags argument to
iov_iter_get_pages*().
David
On 2/14/23 3:56 PM, David Howells wrote:
> Hi Jens,
>
> If you decide not to take my patches in this merge window, would you have any
> objection to my patches 1-3 and 10-11 in this series going through Steve
> French's cifs tree so that he can take my cifs iteratorisation patches?
>
> Patches 1-3 would add filemap_splice_read() and direct_splice_read(), but not
> connect them up to anything and 10-11 would add iov_iter_extract_pages(). I
> can then give Steve a patch to make cifs use them as part of my patches for
> that.
>
> This would only affect cifs. See my iov-cifs branch:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=iov-cifs
>
> for an example of how this would look.
Let's update the branch and see how it goes... If there's more fallout, then
let's make a fallback plan for the first few.
--
Jens Axboe
Jens Axboe <[email protected]> wrote:
> Let's update the branch and see how it goes... If there's more fallout, then
> let's make a fallback plan for the first few.
I forgot to export the new functions, as Steve found out. Fix attached.
David
---
splice: Export filemap/direct_splice_read()
filemap_splice_read() and direct_splice_read() should be exported.
Signed-off-by: David Howells <[email protected]>
cc: Steve French <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Al Viro <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: John Hubbard <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
fs/splice.c | 1 +
mm/filemap.c | 1 +
2 files changed, 2 insertions(+)
diff --git a/fs/splice.c b/fs/splice.c
index 4c6332854b63..928c7be2f318 100644
--- a/fs/splice.c
+++ b/fs/splice.c
@@ -373,6 +373,7 @@ ssize_t direct_splice_read(struct file *in, loff_t *ppos,
kfree(bv);
return ret;
}
+EXPORT_SYMBOL(direct_splice_read);
/**
* generic_file_splice_read - splice data from file to a pipe
diff --git a/mm/filemap.c b/mm/filemap.c
index 8c7b135c8e23..570f86578f7c 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2969,6 +2969,7 @@ ssize_t filemap_splice_read(struct file *in, loff_t *ppos,
return total_spliced ? total_spliced : error;
}
+EXPORT_SYMBOL(filemap_splice_read);
static inline loff_t folio_seek_hole_data(struct xa_state *xas,
struct address_space *mapping, struct folio *folio,
David Howells <[email protected]> wrote:
> Jens Axboe <[email protected]> wrote:
>
> > Let's update the branch and see how it goes... If there's more fallout, then
> > let's make a fallback plan for the first few.
>
> I forgot to export the new functions, as Steve found out. Fix attached.
That said, nothing in your tree that calls these functions directly can be
built as a module, so this can be left to the cifs tree for now.
David
On Tue, 14 Feb 2023 at 18:14, David Howells <[email protected]> wrote:
>
> Implement splice-read for overlayfs by passing the request down a layer
> rather than going through generic_file_splice_read() which is going to be
> changed to assume that ->read_folio() is present on buffered files.
>
> Signed-off-by: David Howells <[email protected]>
> cc: Christoph Hellwig <[email protected]>
> cc: Jens Axboe <[email protected]>
> cc: Al Viro <[email protected]>
> cc: John Hubbard <[email protected]>
> cc: David Hildenbrand <[email protected]>
> cc: Matthew Wilcox <[email protected]>
> cc: Miklos Szeredi <[email protected]>
> cc: [email protected]
> cc: [email protected]
> cc: [email protected]
> cc: [email protected]
> ---
> fs/overlayfs/file.c | 36 +++++++++++++++++++++++++++++++++++-
> 1 file changed, 35 insertions(+), 1 deletion(-)
>
> diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
> index c9d0c362c7ef..267b61df6fcd 100644
> --- a/fs/overlayfs/file.c
> +++ b/fs/overlayfs/file.c
> @@ -419,6 +419,40 @@ static ssize_t ovl_write_iter(struct kiocb *iocb, struct iov_iter *iter)
> return ret;
> }
>
> +static ssize_t ovl_splice_read(struct file *in, loff_t *ppos,
> + struct pipe_inode_info *pipe, size_t len,
> + unsigned int flags)
> +{
> + const struct cred *old_cred;
> + struct fd real;
> + ssize_t ret;
> +
> + ret = ovl_real_fdget(in, &real);
> + if (ret)
> + return ret;
> +
> + ret = -EINVAL;
> + if (in->f_flags & O_DIRECT &&
> + !(real.file->f_mode & FMODE_CAN_ODIRECT))
> + goto out_fdput;
This is unnecessary, as it was already done in ovl_real_fdget() ->
ovl_real_fdget_meta() -> ovl_change_flags().
> + if (!real.file->f_op->splice_read)
> + goto out_fdput;
> +
> + ret = rw_verify_area(READ, in, ppos, len);
Should be on real.file.
> + if (unlikely(ret < 0))
> + return ret;
Leaks fd.
> +
> + old_cred = ovl_override_creds(file_inode(in)->i_sb);
> + ret = real.file->f_op->splice_read(real.file, ppos, pipe, len, flags);
> +
> + revert_creds(old_cred);
> + ovl_file_accessed(in);
> +out_fdput:
> + fdput(real);
> +
> + return ret;
> +}
> +
> /*
> * Calling iter_file_splice_write() directly from overlay's f_op may deadlock
> * due to lock order inversion between pipe->mutex in iter_file_splice_write()
> @@ -695,7 +729,7 @@ const struct file_operations ovl_file_operations = {
> .fallocate = ovl_fallocate,
> .fadvise = ovl_fadvise,
> .flush = ovl_flush,
> - .splice_read = generic_file_splice_read,
> + .splice_read = ovl_splice_read,
> .splice_write = ovl_splice_write,
>
> .copy_file_range = ovl_copy_file_range,
>
On Wed, Feb 15, 2023 at 08:07:58AM +0000, David Howells wrote:
> return total_spliced ? total_spliced : error;
> }
> +EXPORT_SYMBOL(filemap_splice_read);
EXPORT_SYMBOL_GPL for filemap internals, please.
And who is using filemap_splice_read directly anyway? I can't find a
modular user in any of the branches.
Miklos Szeredi <[email protected]> wrote:
> > + ret = -EINVAL;
> > + if (in->f_flags & O_DIRECT &&
> > + !(real.file->f_mode & FMODE_CAN_ODIRECT))
> > + goto out_fdput;
>
> This is unnecessary, as it was already done in ovl_real_fdget() ->
> ovl_real_fdget_meta() -> ovl_change_flags().
Does that mean ovl_read_iter() and ovl_write_iter() shouldn't be doing it,
then?
David
On Wed, 15 Feb 2023 at 16:04, David Howells <[email protected]> wrote:
>
> Miklos Szeredi <[email protected]> wrote:
>
> > > + ret = -EINVAL;
> > > + if (in->f_flags & O_DIRECT &&
> > > + !(real.file->f_mode & FMODE_CAN_ODIRECT))
> > > + goto out_fdput;
> >
> > This is unnecessary, as it was already done in ovl_real_fdget() ->
> > ovl_real_fdget_meta() -> ovl_change_flags().
>
> Does that mean ovl_read_iter() and ovl_write_iter() shouldn't be doing it,
> then?
That's a different thing, because ovl_*_iter() are checking on
ki->flags, not f_flags.
Thanks,
Miklos
How about the attached then?
David
---
overlayfs: Implement splice-read
Implement splice-read for overlayfs by passing the request down a layer
rather than going through generic_file_splice_read() which is going to be
changed to assume that ->read_folio() is present on buffered files.
Signed-off-by: David Howells <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: John Hubbard <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: Matthew Wilcox <[email protected]>
cc: Miklos Szeredi <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
Notes:
ver #15)
- Remove redundant FMODE_CAN_ODIRECT check on real file.
- Do rw_verify_area() on the real file, not the overlay file.
- Fix a file leak.
fs/overlayfs/file.c | 33 ++++++++++++++++++++++++++++++++-
1 file changed, 32 insertions(+), 1 deletion(-)
diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
index c9d0c362c7ef..72a545da51a2 100644
--- a/fs/overlayfs/file.c
+++ b/fs/overlayfs/file.c
@@ -419,6 +419,37 @@ static ssize_t ovl_write_iter(struct kiocb *iocb, struct iov_iter *iter)
return ret;
}
+static ssize_t ovl_splice_read(struct file *in, loff_t *ppos,
+ struct pipe_inode_info *pipe, size_t len,
+ unsigned int flags)
+{
+ const struct cred *old_cred;
+ struct fd real;
+ ssize_t ret;
+
+ ret = ovl_real_fdget(in, &real);
+ if (ret)
+ return ret;
+
+ ret = -EINVAL;
+ if (!real.file->f_op->splice_read)
+ goto out_fdput;
+
+ ret = rw_verify_area(READ, real.file, ppos, len);
+ if (unlikely(ret < 0))
+ goto out_fdput;
+
+ old_cred = ovl_override_creds(file_inode(in)->i_sb);
+ ret = real.file->f_op->splice_read(real.file, ppos, pipe, len, flags);
+
+ revert_creds(old_cred);
+ ovl_file_accessed(in);
+out_fdput:
+ fdput(real);
+
+ return ret;
+}
+
/*
* Calling iter_file_splice_write() directly from overlay's f_op may deadlock
* due to lock order inversion between pipe->mutex in iter_file_splice_write()
@@ -695,7 +726,7 @@ const struct file_operations ovl_file_operations = {
.fallocate = ovl_fallocate,
.fadvise = ovl_fadvise,
.flush = ovl_flush,
- .splice_read = generic_file_splice_read,
+ .splice_read = ovl_splice_read,
.splice_write = ovl_splice_write,
.copy_file_range = ovl_copy_file_range,
Christoph Hellwig <[email protected]> wrote:
> And who is using filemap_splice_read directly anyway? I can't find a
> modular user in any of the branches.
Fair point. I have a subset of the patches on my iov-cifs branch that doesn't
make the change to generic_file_splice_read(), but rather does that bit in
cifs.ko - that does require access to filemap_splice_read().
David
On Wed, 15 Feb 2023 at 16:41, David Howells <[email protected]> wrote:
>
> How about the attached then?
>
> David
> ---
> overlayfs: Implement splice-read
>
> Implement splice-read for overlayfs by passing the request down a layer
> rather than going through generic_file_splice_read() which is going to be
> changed to assume that ->read_folio() is present on buffered files.
Looks good. One more suggestion: add a vfs_splice() helper and use
that from do_splice_to() as well.
Thanks,
Miklos
On Wed, Feb 15, 2023 at 04:50:04PM +0100, Miklos Szeredi wrote:
> Looks good. One more suggestion: add a vfs_splice() helper and use
> that from do_splice_to() as well.
I really hate call_read_iter() etc. Please don't perpetuate that
pattern.
Miklos Szeredi <[email protected]> wrote:
> Looks good.
Can I put that down as an Acked-by?
Thanks,
David
On Wed, Feb 15, 2023 at 03:53:23PM +0000, Matthew Wilcox wrote:
> On Wed, Feb 15, 2023 at 04:50:04PM +0100, Miklos Szeredi wrote:
> > Looks good. One more suggestion: add a vfs_splice() helper and use
> > that from do_splice_to() as well.
>
> I really hate call_read_iter() etc. Please don't perpetuate that
> pattern.
I think it's time to kill it. I'll prepare a patch for it.
On Wed, 15 Feb 2023 at 16:53, Matthew Wilcox <[email protected]> wrote:
>
> On Wed, Feb 15, 2023 at 04:50:04PM +0100, Miklos Szeredi wrote:
> > Looks good. One more suggestion: add a vfs_splice() helper and use
> > that from do_splice_to() as well.
>
> I really hate call_read_iter() etc. Please don't perpetuate that
> pattern.
I didn't suggest call_splice_read(). vfs_splice_read() would have the
rw_verify_area() as well as the check for non-null ->splice_read().
Doing it that way from the start would have prevented two of the bugs
that David introduced in the first version.
Thanks,
Miklos
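For concreteness, the helper being suggested might look something like this
(a hypothetical sketch; no vfs_splice_read() exists at this point in the
thread):

	ssize_t vfs_splice_read(struct file *in, loff_t *ppos,
				struct pipe_inode_info *pipe, size_t len,
				unsigned int flags)
	{
		ssize_t ret;

		if (unlikely(!(in->f_mode & FMODE_READ)))
			return -EBADF;
		if (!in->f_op->splice_read)
			return -EINVAL;

		ret = rw_verify_area(READ, in, ppos, len);
		if (unlikely(ret < 0))
			return ret;

		return in->f_op->splice_read(in, ppos, pipe, len, flags);
	}

do_splice_to() and the overlayfs/coda wrappers could then call it instead
of open-coding the checks.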
On Tue, Feb 14, 2023 at 05:13:21PM +0000, David Howells wrote:
> Make generic_file_splice_read() use filemap_splice_read() and
> direct_splice_read() rather than using an ITER_PIPE and call_read_iter().
>
> With this, ITER_PIPE is no longer used.
>
> Signed-off-by: David Howells <[email protected]>
> cc: Jens Axboe <[email protected]>
> cc: Christoph Hellwig <[email protected]>
> cc: Al Viro <[email protected]>
> cc: David Hildenbrand <[email protected]>
> cc: John Hubbard <[email protected]>
> cc: [email protected]
> cc: [email protected]
> cc: [email protected]
> ---
>
> Notes:
> ver #14)
> - Split out filemap_splice_read() into a separate patch.
>
> fs/splice.c | 30 +++++++-----------------------
> 1 file changed, 7 insertions(+), 23 deletions(-)
>
> diff --git a/fs/splice.c b/fs/splice.c
> index 4c6332854b63..a93478338cec 100644
> --- a/fs/splice.c
> +++ b/fs/splice.c
> @@ -391,29 +391,13 @@ ssize_t generic_file_splice_read(struct file *in, loff_t *ppos,
> struct pipe_inode_info *pipe, size_t len,
> unsigned int flags)
> {
> - struct iov_iter to;
> - struct kiocb kiocb;
> - int ret;
> -
> - iov_iter_pipe(&to, ITER_DEST, pipe, len);
> - init_sync_kiocb(&kiocb, in);
> - kiocb.ki_pos = *ppos;
> - ret = call_read_iter(in, &kiocb, &to);
> - if (ret > 0) {
> - *ppos = kiocb.ki_pos;
> - file_accessed(in);
> - } else if (ret < 0) {
> - /* free what was emitted */
> - pipe_discard_from(pipe, to.start_head);
> - /*
> - * callers of ->splice_read() expect -EAGAIN on
> - * "can't put anything in there", rather than -EFAULT.
> - */
> - if (ret == -EFAULT)
> - ret = -EAGAIN;
> - }
> -
> - return ret;
> + if (unlikely(*ppos >= file_inode(in)->i_sb->s_maxbytes))
> + return 0;
> + if (unlikely(!len))
> + return 0;
> + if (in->f_flags & O_DIRECT)
> + return direct_splice_read(in, ppos, pipe, len, flags);
Hello David,
I have one question: for DIO, pages need to be mapped to userspace
memory, but direct_splice_read() just allocates pages and I don't see
where the user mapping is set up. Can you give a hint?
Thanks,
Ming
On Fri, Feb 17, 2023 at 4:22 PM Ming Lei <[email protected]> wrote:
>
> On Tue, Feb 14, 2023 at 05:13:21PM +0000, David Howells wrote:
> > Make generic_file_splice_read() use filemap_splice_read() and
> > direct_splice_read() rather than using an ITER_PIPE and call_read_iter().
> >
> > With this, ITER_PIPE is no longer used.
> >
> > Signed-off-by: David Howells <[email protected]>
> > cc: Jens Axboe <[email protected]>
> > cc: Christoph Hellwig <[email protected]>
> > cc: Al Viro <[email protected]>
> > cc: David Hildenbrand <[email protected]>
> > cc: John Hubbard <[email protected]>
> > cc: [email protected]
> > cc: [email protected]
> > cc: [email protected]
> > ---
> >
> > Notes:
> > ver #14)
> > - Split out filemap_splice_read() into a separate patch.
> >
> > fs/splice.c | 30 +++++++-----------------------
> > 1 file changed, 7 insertions(+), 23 deletions(-)
> >
> > diff --git a/fs/splice.c b/fs/splice.c
> > index 4c6332854b63..a93478338cec 100644
> > --- a/fs/splice.c
> > +++ b/fs/splice.c
> > @@ -391,29 +391,13 @@ ssize_t generic_file_splice_read(struct file *in, loff_t *ppos,
> > struct pipe_inode_info *pipe, size_t len,
> > unsigned int flags)
> > {
> > - struct iov_iter to;
> > - struct kiocb kiocb;
> > - int ret;
> > -
> > - iov_iter_pipe(&to, ITER_DEST, pipe, len);
> > - init_sync_kiocb(&kiocb, in);
> > - kiocb.ki_pos = *ppos;
> > - ret = call_read_iter(in, &kiocb, &to);
> > - if (ret > 0) {
> > - *ppos = kiocb.ki_pos;
> > - file_accessed(in);
> > - } else if (ret < 0) {
> > - /* free what was emitted */
> > - pipe_discard_from(pipe, to.start_head);
> > - /*
> > - * callers of ->splice_read() expect -EAGAIN on
> > - * "can't put anything in there", rather than -EFAULT.
> > - */
> > - if (ret == -EFAULT)
> > - ret = -EAGAIN;
> > - }
> > -
> > - return ret;
> > + if (unlikely(*ppos >= file_inode(in)->i_sb->s_maxbytes))
> > + return 0;
> > + if (unlikely(!len))
> > + return 0;
> > + if (in->f_flags & O_DIRECT)
> > + return direct_splice_read(in, ppos, pipe, len, flags);
>
> Hello David,
>
> I have one question: for DIO, pages need to be mapped to userspace
> memory, but direct_splice_read() just allocates pages and I don't see
> where the user mapping is set up. Can you give a hint?
oops, it is ->splice_read, not ->read_iter, and pipe buffer isn't
user-backed, sorry for the noise.
Thanks,
Ming
Hello,
this commit breaks our s390x tests on linux-next, which use Python 3
among other things.
We are using the Python 3 tox module, and for some reason the above
commit makes Python create files padded with zeroes.
--- a/tox/distro/lib/python3.11/site-packages/_distutils_hack/__init__.py
+++ b/tox/distro/lib/python3.11/site-packages/_distutils_hack/__init__.py
@@ -381,133 +381,4 @@
000017c0 49 4c 53 5f 46 49 4e 44 45 52 29 0a 20 20 20 20 |ILS_FINDER). |
000017d0 65 78 63 65 70 74 20 56 61 6c 75 65 45 72 72 6f |except ValueErro|
000017e0 72 3a 0a 20 20 20 20 20 20 20 20 70 61 73 73 0a |r:. pass.|
-000017f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
-00001800 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
-00001810 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
-00001820 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
-00001830 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
-00001840 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
-00001850 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
-00001860 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
...
How to reproduce on Fedora
--------------------------
# dnf install -y python3-tox
# cat >tox.ini <<EOF
[tox]
envlist = dev,distro,doc,lint
skipsdist = True
[testenv:distro]
sitepackages = true
EOF
# python3 -m tox -v --notest -e distro
Error processing line 1 of /mnt/test/.tox/distro/lib/python3.11/site-packages/distutils-precedence.pth:
Traceback (most recent call last):
File "<frozen site>", line 186, in addpackage
File "<string>", line 1, in <module>
ValueError: source code string cannot contain null bytes
Remainder of file ignored
# The above error message disappears if one reverts the bad commit.
Regards
Alex
I forgot to mention another related problem we had recently.
We saw the LTP test suite failing in the 'sendfile09' test case on 2023-02-15:
[ 5157.233192] CPU: 0 PID: 2435571 Comm: sendfile09 Tainted: G OE K N 6.2.0-20230214.rc8.git112.3ebb0ac55efa.300.fc37.s390x+next #1
[ 5157.233197] Hardware name: IBM 8561 T01 701 (z/VM 7.3.0)
[ 5157.233199] Krnl PSW : 0704d00180000000 0000000000000002 (0x2)
[ 5157.233207] R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:1 PM:0 RI:0 EA:3
[ 5157.233210] Krnl GPRS: 0000000000000008 0000000000000000 00000000893d1800 00000372088bd040
[ 5157.233213] 00000372088bd040 000000018c836000 0000000000000000 0000038000a4bb80
[ 5157.233215] 00000372088bd040 0000000000000000 00000000893d1800 00000372088bd040
[ 5157.233218] 0000000081182100 00000000893d1800 000000001f691596 0000038000a4b9b0
[ 5157.233222] Krnl Code:#0000000000000000: 0000 illegal
>0000000000000002: 0000 illegal
0000000000000004: 0000 illegal
0000000000000006: 0000 illegal
0000000000000008: 0000 illegal
000000000000000a: 0000 illegal
000000000000000c: 0000 illegal
000000000000000e: 0000 illegal
[ 5157.233281] Call Trace:
[ 5157.233284] [<0000000000000002>] 0x2
[ 5157.233288] [<000000001f694e26>] filemap_get_pages+0x276/0x3b0
[ 5157.233296] [<000000001f7c7256>] generic_file_buffered_splice_read.constprop.0+0xd6/0x370
[ 5157.233301] [<000000001f7c6aa0>] do_splice_to+0x98/0xc8
[ 5157.233304] [<000000001f7c6eee>] splice_direct_to_actor+0xb6/0x270
[ 5157.233306] [<000000001f7c714a>] do_splice_direct+0xa2/0xd8
[ 5157.233309] [<000000001f77c79c>] do_sendfile+0x39c/0x4e8
[ 5157.233314] [<000000001f77ca00>] __do_sys_sendfile64+0x68/0xc0
[ 5157.233317] [<00000000200b4bb4>] __do_syscall+0x1d4/0x200
[ 5157.233322] [<00000000200c42f2>] system_call+0x82/0xb0
[ 5157.233328] Last Breaking-Event-Address:
[ 5157.233329] [<000000001f691594>] filemap_read_folio+0x3c/0xd0
[ 5157.233338] Kernel panic - not syncing: Fatal exception: panic_on_oops
But it was apparently fixed the next day.
Regards
Alex
Alexander Egorenkov <[email protected]> wrote:
> Subject: Re: [PATCH v14 08/17] splice: Do splice read from a file without
> using ITER_PIPE
Well, I can reproduce it. Putting printks in generic_file_splice_read(),
direct_splice_read() and filemap_splice_read(), however, seems to show that it
isn't using any of those functions; nor can I see any sign of a splice syscall
in an strace. :-/
David
[email protected] wrote:
> [ 5157.233284] [<0000000000000002>] 0x2
> [ 5157.233288] [<000000001f694e26>] filemap_get_pages+0x276/0x3b0
Yeah. I think this was fixed by the provision of a shmem-specific splice read
(patch 04/17 in this series).
David
Does the attached fix the problem for you? The data being written into the
pipe needs to be limited to the size of the file.
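To illustrate with assumed numbers matching your hexdump: if the file's
real content ends at 0x17f0 and the tail folio is a full 4KiB page, then
without a clamp the loop offers splice_folio_into_pipe() more than
isize - *ppos bytes, and the zero-filled remainder of the folio
(0x17f0-0x1fff) gets spliced into the pipe. Clamping n to isize - *ppos
cuts the splice off at EOF.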
David
diff --git a/mm/filemap.c b/mm/filemap.c
index c01bbcb9fa92..6362ac697a70 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2948,7 +2948,8 @@ ssize_t filemap_splice_read(struct file *in, loff_t *ppos,
if (writably_mapped)
flush_dcache_folio(folio);
- n = splice_folio_into_pipe(pipe, folio, *ppos, len);
+ n = min_t(loff_t, len, isize - *ppos);
+ n = splice_folio_into_pipe(pipe, folio, *ppos, n);
if (!n)
goto out;
len -= n;
On Tue, Feb 14, 2023 at 05:13:15PM +0000, David Howells wrote:
> Provide a function to do splice read from a buffered file, pulling the
> folios out of the pagecache directly by calling filemap_get_pages() to do
> any required reading and then pasting the returned folios into the pipe.
>
> A helper function is provided to do the actual folio pasting and will
> handle multipage folios by splicing as many of the relevant subpages as
> will fit into the pipe.
>
> The code is loosely based on filemap_read() and, as it needs to use
> filemap_get_pages(), might belong alongside that in mm/filemap.c.
>
> Signed-off-by: David Howells <[email protected]>
> cc: Jens Axboe <[email protected]>
> cc: Christoph Hellwig <[email protected]>
> cc: Al Viro <[email protected]>
> cc: David Hildenbrand <[email protected]>
> cc: John Hubbard <[email protected]>
> cc: [email protected]
> cc: [email protected]
> cc: [email protected]
> ---
>
> Notes:
> ver #14)
> - Rename to filemap_splice_read().
> - Create a helper, pipe_head_buf(), to get the head buffer.
> - Use init_sync_kiocb().
> - Move to mm/filemap.c.
> - Split the implementation of filemap_splice_read() from the patch to
> make generic_file_splice_read() use it and direct_splice_read().
>
> include/linux/fs.h | 3 ++
> mm/filemap.c | 128 +++++++++++++++++++++++++++++++++++++++++++++
> mm/internal.h | 6 +++
> 3 files changed, 137 insertions(+)
>
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index c1769a2c5d70..28743e38df91 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -3163,6 +3163,9 @@ ssize_t vfs_iocb_iter_write(struct file *file, struct kiocb *iocb,
> struct iov_iter *iter);
>
> /* fs/splice.c */
> +ssize_t filemap_splice_read(struct file *in, loff_t *ppos,
> + struct pipe_inode_info *pipe,
> + size_t len, unsigned int flags);
> extern ssize_t generic_file_splice_read(struct file *, loff_t *,
> struct pipe_inode_info *, size_t, unsigned int);
> extern ssize_t iter_file_splice_write(struct pipe_inode_info *,
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 876e77278d2a..8c7b135c8e23 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -42,6 +42,8 @@
> #include <linux/ramfs.h>
> #include <linux/page_idle.h>
> #include <linux/migrate.h>
> +#include <linux/pipe_fs_i.h>
> +#include <linux/splice.h>
> #include <asm/pgalloc.h>
> #include <asm/tlbflush.h>
> #include "internal.h"
> @@ -2842,6 +2844,132 @@ generic_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
> }
> EXPORT_SYMBOL(generic_file_read_iter);
>
> +/*
> + * Splice subpages from a folio into a pipe.
> + */
> +size_t splice_folio_into_pipe(struct pipe_inode_info *pipe,
> + struct folio *folio, loff_t fpos, size_t size)
> +{
> + struct page *page;
> + size_t spliced = 0, offset = offset_in_folio(folio, fpos);
> +
> + page = folio_page(folio, offset / PAGE_SIZE);
> + size = min(size, folio_size(folio) - offset);
> + offset %= PAGE_SIZE;
> +
> + while (spliced < size &&
> + !pipe_full(pipe->head, pipe->tail, pipe->max_usage)) {
> + struct pipe_buffer *buf = pipe_head_buf(pipe);
> + size_t part = min_t(size_t, PAGE_SIZE - offset, size - spliced);
> +
> + *buf = (struct pipe_buffer) {
> + .ops = &page_cache_pipe_buf_ops,
> + .page = page,
> + .offset = offset,
> + .len = part,
> + };
> + folio_get(folio);
> + pipe->head++;
> + page++;
> + spliced += part;
> + offset = 0;
It might be better to replace the above with add_to_pipe() - a rough sketch follows.
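Untested sketch, assuming add_to_pipe() keeps its current behaviour of
doing the pipe_full check and releasing the buffer itself on failure:

	struct pipe_buffer buf = (struct pipe_buffer) {
		.ops	= &page_cache_pipe_buf_ops,
		.page	= page,
		.offset	= offset,
		.len	= part,
	};
	ssize_t ret;

	folio_get(folio);
	ret = add_to_pipe(pipe, &buf);	/* returns buf.len, -EPIPE or -EAGAIN */
	if (ret <= 0)
		break;			/* the folio ref was already dropped */
	page++;
	spliced += ret;
	offset = 0;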
> + }
> +
> + return spliced;
> +}
> +
> +/*
> + * Splice folios from the pagecache of a buffered (ie. non-O_DIRECT) file into
> + * a pipe.
> + */
> +ssize_t filemap_splice_read(struct file *in, loff_t *ppos,
> + struct pipe_inode_info *pipe,
> + size_t len, unsigned int flags)
> +{
> + struct folio_batch fbatch;
> + struct kiocb iocb;
> + size_t total_spliced = 0, used, npages;
> + loff_t isize, end_offset;
> + bool writably_mapped;
> + int i, error = 0;
> +
> + init_sync_kiocb(&iocb, in);
> + iocb.ki_pos = *ppos;
> +
> + /* Work out how much data we can actually add into the pipe */
> + used = pipe_occupancy(pipe->head, pipe->tail);
> + npages = max_t(ssize_t, pipe->max_usage - used, 0);
> + len = min_t(size_t, len, npages * PAGE_SIZE);
Do we need to consider the offset into the first page here?
thanks,
Ming
Hi David,
David Howells <[email protected]> writes:
> Does the attached fix the problem for you? The data being written into the
> pipe needs to be limited to the size of the file.
>
> David
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index c01bbcb9fa92..6362ac697a70 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -2948,7 +2948,8 @@ ssize_t filemap_splice_read(struct file *in, loff_t *ppos,
> if (writably_mapped)
> flush_dcache_folio(folio);
>
> - n = splice_folio_into_pipe(pipe, folio, *ppos, len);
> + n = min_t(loff_t, len, isize - *ppos);
> + n = splice_folio_into_pipe(pipe, folio, *ppos, n);
> if (!n)
> goto out;
> len -= n;
Yes, this change fixed the problem.
Thanks
Regards
Alex
Alexander Egorenkov <[email protected]> wrote:
> > + n = min_t(loff_t, len, isize - *ppos);
> > + n = splice_folio_into_pipe(pipe, folio, *ppos, n);
> > if (!n)
> > goto out;
> > len -= n;
>
> Yes, this change fixed the problem.
Thanks!
David
Ming Lei <[email protected]> wrote:
> > + /* Work out how much data we can actually add into the pipe */
> > + used = pipe_occupancy(pipe->head, pipe->tail);
> > + npages = max_t(ssize_t, pipe->max_usage - used, 0);
> > + len = min_t(size_t, len, npages * PAGE_SIZE);
>
> Do we need to consider the offset into the first page here?
Well, it won't break, since we check further on that we don't overrun the
ring, but it would probably be a bit more efficient to subtract the offset
into the page at this point, as in the sketch below.
That said, we don't know how big the first folio is yet, though I'm not sure
that matters.
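i.e. something like this (untested, and it assumes the pipe slots get
filled at page granularity, ignoring the folio-size question):

	used = pipe_occupancy(pipe->head, pipe->tail);
	npages = max_t(ssize_t, pipe->max_usage - used, 0);
	if (!npages)
		return 0;
	len = min_t(size_t, len,
		    npages * PAGE_SIZE - offset_in_page(*ppos));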
David