This is the current version of the patchset I presented at the LSF-MM
Summit in San Francisco in April. I apologize for letting it go so
long before re-submitting.
This patchset was begun by Zach Brown and was originally submitted for
review in October 2009. Feedback was positive, and I have picked up
where he left off, porting his patches to the latest mainline kernel
and adding support for more file systems.
This patch series adds a kernel interface to fs/aio.c so that kernel
code can issue concurrent asynchronous IO to file systems. It adds an
aio command and file system methods which specify IO memory with pages
instead of userspace addresses.
This series was written to reduce the overhead that loop currently
imposes by performing synchronous buffered file system IO from a kernel
thread. These patches turn loop into a lightweight layer that
translates bios into iocbs. The downside is that, in the current
implementation, performance takes a big hit for non-synchronous IO,
since the underlying page cache is bypassed. The tradeoff is that all
writes to the loop device make it to the underlying media, making
loop-mounted file systems recoverable.
These patches apply to 3.7-rc1 and are also available at:
git://github.com/kleikamp/linux-shaggy.git loop_2012_10_18
Asias He (1):
block_dev: add support for read_iter, write_iter
Dave Kleikamp (8):
iov_iter: iov_iter_copy_from_user() should use non-atomic copy
fuse: convert fuse to use iov_iter_copy_[to|from]_user
dio: Convert direct_IO to use iov_iter
dio: add bio_vec support to __blockdev_direct_IO()
fs: add read_iter and write_iter to several file systems
ext4: add support for read_iter and write_iter
nfs: add support for read_iter, write_iter
btrfs: add support for read_iter and write_iter
Zach Brown (2):
aio: add aio support for iov_iter arguments
loop: use aio to perform io on the underlying file
Zach Brown (11):
iov_iter: move into its own file
iov_iter: add copy_to_user support
iov_iter: hide iovec details behind ops function pointers
iov_iter: add bvec support
iov_iter: add a shorten call
iov_iter: let callers extract iovecs and bio_vecs
dio: create a dio_aligned() helper function
fs: pull iov_iter use higher up the stack
aio: add aio_kernel_() interface
bio: add bvec_length(), like iov_length()
ocfs2: add support for read_iter, write_iter, and direct_IO_bvec
Documentation/filesystems/Locking | 4 +-
Documentation/filesystems/vfs.txt | 4 +-
drivers/block/loop.c | 129 +++++++++----
fs/9p/vfs_addr.c | 8 +-
fs/9p/vfs_file.c | 4 +
fs/aio.c | 156 +++++++++++++++
fs/block_dev.c | 44 ++++-
fs/btrfs/file.c | 55 +++---
fs/btrfs/inode.c | 61 +++---
fs/ceph/addr.c | 3 +-
fs/cifs/file.c | 4 +-
fs/direct-io.c | 253 ++++++++++++++++--------
fs/ext2/file.c | 2 +
fs/ext2/inode.c | 8 +-
fs/ext3/file.c | 2 +
fs/ext3/inode.c | 15 +-
fs/ext4/ext4.h | 3 +-
fs/ext4/file.c | 49 +++--
fs/ext4/indirect.c | 16 +-
fs/ext4/inode.c | 27 ++-
fs/fat/file.c | 2 +
fs/fat/inode.c | 10 +-
fs/fuse/file.c | 40 ++--
fs/gfs2/aops.c | 7 +-
fs/hfs/inode.c | 9 +-
fs/hfsplus/inode.c | 8 +-
fs/jfs/file.c | 2 +
fs/jfs/inode.c | 7 +-
fs/nfs/direct.c | 176 +++++++++++------
fs/nfs/file.c | 48 +++--
fs/nfs/internal.h | 2 +
fs/nfs/nfs4file.c | 4 +
fs/nilfs2/file.c | 2 +
fs/nilfs2/inode.c | 8 +-
fs/ocfs2/aops.c | 8 +-
fs/ocfs2/file.c | 82 +++++---
fs/ocfs2/ocfs2_trace.h | 6 +-
fs/reiserfs/file.c | 2 +
fs/reiserfs/inode.c | 7 +-
fs/udf/file.c | 3 +-
fs/udf/inode.c | 10 +-
fs/xfs/xfs_aops.c | 13 +-
include/linux/aio.h | 18 ++
include/linux/bio.h | 8 +
include/linux/fs.h | 135 +++++++++++--
include/linux/nfs_fs.h | 9 +-
include/uapi/linux/aio_abi.h | 2 +
include/uapi/linux/loop.h | 1 +
mm/Makefile | 2 +-
mm/filemap.c | 395 ++++++++++++++++----------------------
mm/iov-iter.c | 383 ++++++++++++++++++++++++++++++++++++
mm/page_io.c | 8 +-
52 files changed, 1609 insertions(+), 655 deletions(-)
create mode 100644 mm/iov-iter.c
--
1.7.12.3
From: Zach Brown <[email protected]>
This adds a set of iov_iter_ops calls which work with memory specified
by an array of bio_vec structs instead of an array of iovec structs.
The big difference is that the pages referenced by the bio_vec elements
are pinned. They don't need to be faulted in, and we can always use
kmap_atomic() to map them one at a time.
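The copy loop this patch adds can be modelled in plain userspace C. Pinned kernel pages become ordinary buffers, so kmap_atomic() drops out, but the walk across bio_vec segments has the same shape; all names below are illustrative stand-ins, not the kernel functions:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct bio_vec { char *bv_page; size_t bv_len; size_t bv_offset; };

/* Copy `bytes` between a page buffer and a bio_vec array, crossing
 * segment boundaries as needed. topage != 0 copies into the page,
 * otherwise out of it, mirroring bvec_copy_tofrom_page(). */
static size_t bvec_copy(struct bio_vec *bvec, size_t bvec_offset,
			char *page, size_t page_offset,
			size_t bytes, int topage)
{
	size_t remaining = bytes;

	while (remaining) {
		size_t room = bvec->bv_len - bvec_offset;
		size_t copy = remaining < room ? remaining : room;
		char *seg = bvec->bv_page + bvec->bv_offset + bvec_offset;

		if (topage)
			memcpy(page + page_offset, seg, copy);
		else
			memcpy(seg, page + page_offset, copy);
		remaining -= copy;
		bvec_offset += copy;
		page_offset += copy;
		if (bvec_offset == bvec->bv_len) {	/* next segment */
			bvec_offset = 0;
			bvec++;
		}
	}
	return bytes;
}
```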
Signed-off-by: Dave Kleikamp <[email protected]>
Cc: Zach Brown <[email protected]>
---
include/linux/fs.h | 17 ++++++++
mm/iov-iter.c | 126 +++++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 143 insertions(+)
diff --git a/include/linux/fs.h b/include/linux/fs.h
index f7ee6d4..afb1343 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -275,6 +275,23 @@ static inline size_t iov_iter_single_seg_count(struct iov_iter *i)
return i->ops->ii_single_seg_count(i);
}
+extern struct iov_iter_ops ii_bvec_ops;
+
+struct bio_vec;
+static inline void iov_iter_init_bvec(struct iov_iter *i,
+ struct bio_vec *bvec,
+ unsigned long nr_segs,
+ size_t count, size_t written)
+{
+ i->ops = &ii_bvec_ops;
+ i->data = (unsigned long)bvec;
+ i->nr_segs = nr_segs;
+ i->iov_offset = 0;
+ i->count = count + written;
+
+ iov_iter_advance(i, written);
+}
+
extern struct iov_iter_ops ii_iovec_ops;
static inline void iov_iter_init(struct iov_iter *i,
diff --git a/mm/iov-iter.c b/mm/iov-iter.c
index bae1553..c5d0a9e 100644
--- a/mm/iov-iter.c
+++ b/mm/iov-iter.c
@@ -5,6 +5,7 @@
#include <linux/hardirq.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>
+#include <linux/bio.h>
static size_t __iovec_copy_to_user(char *vaddr, const struct iovec *iov,
size_t base, size_t bytes, int atomic)
@@ -86,6 +87,131 @@ static size_t ii_iovec_copy_to_user(struct page *page,
return copied;
}
+/*
+ * As an easily verifiable first pass, we implement all the methods that
+ * copy data to and from bvec pages with one function. We implement it
+ * all with kmap_atomic().
+ */
+static size_t bvec_copy_tofrom_page(struct iov_iter *iter, struct page *page,
+ unsigned long page_offset, size_t bytes,
+ int topage)
+{
+ struct bio_vec *bvec = (struct bio_vec *)iter->data;
+ size_t bvec_offset = iter->iov_offset;
+ size_t remaining = bytes;
+ void *bvec_map;
+ void *page_map;
+ size_t copy;
+
+ page_map = kmap_atomic(page);
+
+ BUG_ON(bytes > iter->count);
+ while (remaining) {
+ BUG_ON(bvec->bv_len == 0);
+ BUG_ON(bvec_offset >= bvec->bv_len);
+ copy = min(remaining, bvec->bv_len - bvec_offset);
+ bvec_map = kmap_atomic(bvec->bv_page);
+ if (topage)
+ memcpy(page_map + page_offset,
+ bvec_map + bvec->bv_offset + bvec_offset,
+ copy);
+ else
+ memcpy(bvec_map + bvec->bv_offset + bvec_offset,
+ page_map + page_offset,
+ copy);
+ kunmap_atomic(bvec_map);
+ remaining -= copy;
+ bvec_offset += copy;
+ page_offset += copy;
+ if (bvec_offset == bvec->bv_len) {
+ bvec_offset = 0;
+ bvec++;
+ }
+ }
+
+ kunmap_atomic(page_map);
+
+ return bytes;
+}
+
+static size_t ii_bvec_copy_to_user_atomic(struct page *page, struct iov_iter *i,
+ unsigned long offset, size_t bytes)
+{
+ return bvec_copy_tofrom_page(i, page, offset, bytes, 0);
+}
+static size_t ii_bvec_copy_to_user(struct page *page, struct iov_iter *i,
+ unsigned long offset, size_t bytes)
+{
+ return bvec_copy_tofrom_page(i, page, offset, bytes, 0);
+}
+static size_t ii_bvec_copy_from_user_atomic(struct page *page,
+ struct iov_iter *i,
+ unsigned long offset, size_t bytes)
+{
+ return bvec_copy_tofrom_page(i, page, offset, bytes, 1);
+}
+static size_t ii_bvec_copy_from_user(struct page *page, struct iov_iter *i,
+ unsigned long offset, size_t bytes)
+{
+ return bvec_copy_tofrom_page(i, page, offset, bytes, 1);
+}
+
+/*
+ * bio_vecs have a stricter structure than iovecs that might have
+ * come from userspace. There are no zero length bio_vec elements.
+ */
+static void ii_bvec_advance(struct iov_iter *i, size_t bytes)
+{
+ struct bio_vec *bvec = (struct bio_vec *)i->data;
+ size_t offset = i->iov_offset;
+ size_t delta;
+
+ BUG_ON(i->count < bytes);
+ while (bytes) {
+ BUG_ON(bvec->bv_len == 0);
+ BUG_ON(bvec->bv_len <= offset);
+ delta = min(bytes, bvec->bv_len - offset);
+ offset += delta;
+ i->count -= delta;
+ bytes -= delta;
+ if (offset == bvec->bv_len) {
+ bvec++;
+ offset = 0;
+ }
+ }
+
+ i->data = (unsigned long)bvec;
+ i->iov_offset = offset;
+}
+
+/*
+ * pages pointed to by bio_vecs are always pinned.
+ */
+static int ii_bvec_fault_in_readable(struct iov_iter *i, size_t bytes)
+{
+ return 0;
+}
+
+static size_t ii_bvec_single_seg_count(struct iov_iter *i)
+{
+ const struct bio_vec *bvec = (struct bio_vec *)i->data;
+ if (i->nr_segs == 1)
+ return i->count;
+ else
+ return min(i->count, bvec->bv_len - i->iov_offset);
+}
+
+struct iov_iter_ops ii_bvec_ops = {
+ .ii_copy_to_user_atomic = ii_bvec_copy_to_user_atomic,
+ .ii_copy_to_user = ii_bvec_copy_to_user,
+ .ii_copy_from_user_atomic = ii_bvec_copy_from_user_atomic,
+ .ii_copy_from_user = ii_bvec_copy_from_user,
+ .ii_advance = ii_bvec_advance,
+ .ii_fault_in_readable = ii_bvec_fault_in_readable,
+ .ii_single_seg_count = ii_bvec_single_seg_count,
+};
+EXPORT_SYMBOL(ii_bvec_ops);
+
static size_t __iovec_copy_from_user(char *vaddr, const struct iovec *iov,
size_t base, size_t bytes, int atomic)
{
--
1.7.12.3
From: Zach Brown <[email protected]>
This moves the iov_iter functions into their own file. We're going to
be working on them in upcoming patches. They have become sufficiently
large, and remain self-contained, to justify separating them from the
rest of the huge mm/filemap.c.
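For reference, the multi-segment advance being moved here can be exercised in isolation with a userspace model: simplified types, but the same walk, including the skip over zero-length iovec segments that the kernel code comments on:

```c
#include <assert.h>
#include <stddef.h>

struct iovec { void *iov_base; size_t iov_len; };
struct iter  { const struct iovec *iov; size_t iov_offset;
	       size_t count; unsigned long nr_segs; };

/* Walk the iterator forward `bytes` bytes, advancing to the next
 * iovec segment on each boundary and stepping over zero-length
 * segments without overrunning the array. */
static void iter_advance(struct iter *i, size_t bytes)
{
	const struct iovec *iov = i->iov;
	size_t base = i->iov_offset;
	unsigned long nr_segs = i->nr_segs;

	assert(i->count >= bytes);
	while (bytes || (i->count && !iov->iov_len)) {
		size_t room = iov->iov_len - base;
		size_t copy = bytes < room ? bytes : room;

		i->count -= copy;
		bytes -= copy;
		base += copy;
		if (iov->iov_len == base) {	/* finished this segment */
			iov++;
			nr_segs--;
			base = 0;
		}
	}
	i->iov = iov;
	i->iov_offset = base;
	i->nr_segs = nr_segs;
}
```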
Signed-off-by: Dave Kleikamp <[email protected]>
Acked-by: Jeff Moyer <[email protected]>
Cc: Zach Brown <[email protected]>
---
mm/Makefile | 2 +-
mm/filemap.c | 144 -------------------------------------------------------
mm/iov-iter.c | 151 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 152 insertions(+), 145 deletions(-)
create mode 100644 mm/iov-iter.c
diff --git a/mm/Makefile b/mm/Makefile
index 6b025f8..1332d60 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -16,7 +16,7 @@ obj-y := filemap.o mempool.o oom_kill.o fadvise.o \
readahead.o swap.o truncate.o vmscan.o shmem.o \
util.o mmzone.o vmstat.o backing-dev.o \
mm_init.o mmu_context.o percpu.o slab_common.o \
- compaction.o interval_tree.o $(mmu-y)
+ compaction.o interval_tree.o iov-iter.o $(mmu-y)
obj-y += init-mm.o
diff --git a/mm/filemap.c b/mm/filemap.c
index 83efee7..753ec48 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1922,150 +1922,6 @@ struct page *read_cache_page(struct address_space *mapping,
}
EXPORT_SYMBOL(read_cache_page);
-static size_t __iovec_copy_from_user_inatomic(char *vaddr,
- const struct iovec *iov, size_t base, size_t bytes)
-{
- size_t copied = 0, left = 0;
-
- while (bytes) {
- char __user *buf = iov->iov_base + base;
- int copy = min(bytes, iov->iov_len - base);
-
- base = 0;
- left = __copy_from_user_inatomic(vaddr, buf, copy);
- copied += copy;
- bytes -= copy;
- vaddr += copy;
- iov++;
-
- if (unlikely(left))
- break;
- }
- return copied - left;
-}
-
-/*
- * Copy as much as we can into the page and return the number of bytes which
- * were successfully copied. If a fault is encountered then return the number of
- * bytes which were copied.
- */
-size_t iov_iter_copy_from_user_atomic(struct page *page,
- struct iov_iter *i, unsigned long offset, size_t bytes)
-{
- char *kaddr;
- size_t copied;
-
- BUG_ON(!in_atomic());
- kaddr = kmap_atomic(page);
- if (likely(i->nr_segs == 1)) {
- int left;
- char __user *buf = i->iov->iov_base + i->iov_offset;
- left = __copy_from_user_inatomic(kaddr + offset, buf, bytes);
- copied = bytes - left;
- } else {
- copied = __iovec_copy_from_user_inatomic(kaddr + offset,
- i->iov, i->iov_offset, bytes);
- }
- kunmap_atomic(kaddr);
-
- return copied;
-}
-EXPORT_SYMBOL(iov_iter_copy_from_user_atomic);
-
-/*
- * This has the same sideeffects and return value as
- * iov_iter_copy_from_user_atomic().
- * The difference is that it attempts to resolve faults.
- * Page must not be locked.
- */
-size_t iov_iter_copy_from_user(struct page *page,
- struct iov_iter *i, unsigned long offset, size_t bytes)
-{
- char *kaddr;
- size_t copied;
-
- kaddr = kmap(page);
- if (likely(i->nr_segs == 1)) {
- int left;
- char __user *buf = i->iov->iov_base + i->iov_offset;
- left = __copy_from_user(kaddr + offset, buf, bytes);
- copied = bytes - left;
- } else {
- copied = __iovec_copy_from_user_inatomic(kaddr + offset,
- i->iov, i->iov_offset, bytes);
- }
- kunmap(page);
- return copied;
-}
-EXPORT_SYMBOL(iov_iter_copy_from_user);
-
-void iov_iter_advance(struct iov_iter *i, size_t bytes)
-{
- BUG_ON(i->count < bytes);
-
- if (likely(i->nr_segs == 1)) {
- i->iov_offset += bytes;
- i->count -= bytes;
- } else {
- const struct iovec *iov = i->iov;
- size_t base = i->iov_offset;
- unsigned long nr_segs = i->nr_segs;
-
- /*
- * The !iov->iov_len check ensures we skip over unlikely
- * zero-length segments (without overruning the iovec).
- */
- while (bytes || unlikely(i->count && !iov->iov_len)) {
- int copy;
-
- copy = min(bytes, iov->iov_len - base);
- BUG_ON(!i->count || i->count < copy);
- i->count -= copy;
- bytes -= copy;
- base += copy;
- if (iov->iov_len == base) {
- iov++;
- nr_segs--;
- base = 0;
- }
- }
- i->iov = iov;
- i->iov_offset = base;
- i->nr_segs = nr_segs;
- }
-}
-EXPORT_SYMBOL(iov_iter_advance);
-
-/*
- * Fault in the first iovec of the given iov_iter, to a maximum length
- * of bytes. Returns 0 on success, or non-zero if the memory could not be
- * accessed (ie. because it is an invalid address).
- *
- * writev-intensive code may want this to prefault several iovecs -- that
- * would be possible (callers must not rely on the fact that _only_ the
- * first iovec will be faulted with the current implementation).
- */
-int iov_iter_fault_in_readable(struct iov_iter *i, size_t bytes)
-{
- char __user *buf = i->iov->iov_base + i->iov_offset;
- bytes = min(bytes, i->iov->iov_len - i->iov_offset);
- return fault_in_pages_readable(buf, bytes);
-}
-EXPORT_SYMBOL(iov_iter_fault_in_readable);
-
-/*
- * Return the count of just the current iov_iter segment.
- */
-size_t iov_iter_single_seg_count(struct iov_iter *i)
-{
- const struct iovec *iov = i->iov;
- if (i->nr_segs == 1)
- return i->count;
- else
- return min(i->count, iov->iov_len - i->iov_offset);
-}
-EXPORT_SYMBOL(iov_iter_single_seg_count);
-
/*
* Performs necessary checks before doing a write
*
diff --git a/mm/iov-iter.c b/mm/iov-iter.c
new file mode 100644
index 0000000..83f7594
--- /dev/null
+++ b/mm/iov-iter.c
@@ -0,0 +1,151 @@
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/uaccess.h>
+#include <linux/uio.h>
+#include <linux/hardirq.h>
+#include <linux/highmem.h>
+#include <linux/pagemap.h>
+
+static size_t __iovec_copy_from_user_inatomic(char *vaddr,
+ const struct iovec *iov, size_t base, size_t bytes)
+{
+ size_t copied = 0, left = 0;
+
+ while (bytes) {
+ char __user *buf = iov->iov_base + base;
+ int copy = min(bytes, iov->iov_len - base);
+
+ base = 0;
+ left = __copy_from_user_inatomic(vaddr, buf, copy);
+ copied += copy;
+ bytes -= copy;
+ vaddr += copy;
+ iov++;
+
+ if (unlikely(left))
+ break;
+ }
+ return copied - left;
+}
+
+/*
+ * Copy as much as we can into the page and return the number of bytes which
+ * were successfully copied. If a fault is encountered then return the number
+ * of bytes which were copied.
+ */
+size_t iov_iter_copy_from_user_atomic(struct page *page,
+ struct iov_iter *i, unsigned long offset, size_t bytes)
+{
+ char *kaddr;
+ size_t copied;
+
+ BUG_ON(!in_atomic());
+ kaddr = kmap_atomic(page);
+ if (likely(i->nr_segs == 1)) {
+ int left;
+ char __user *buf = i->iov->iov_base + i->iov_offset;
+ left = __copy_from_user_inatomic(kaddr + offset, buf, bytes);
+ copied = bytes - left;
+ } else {
+ copied = __iovec_copy_from_user_inatomic(kaddr + offset,
+ i->iov, i->iov_offset, bytes);
+ }
+ kunmap_atomic(kaddr);
+
+ return copied;
+}
+EXPORT_SYMBOL(iov_iter_copy_from_user_atomic);
+
+/*
+ * This has the same sideeffects and return value as
+ * iov_iter_copy_from_user_atomic().
+ * The difference is that it attempts to resolve faults.
+ * Page must not be locked.
+ */
+size_t iov_iter_copy_from_user(struct page *page,
+ struct iov_iter *i, unsigned long offset, size_t bytes)
+{
+ char *kaddr;
+ size_t copied;
+
+ kaddr = kmap(page);
+ if (likely(i->nr_segs == 1)) {
+ int left;
+ char __user *buf = i->iov->iov_base + i->iov_offset;
+ left = __copy_from_user(kaddr + offset, buf, bytes);
+ copied = bytes - left;
+ } else {
+ copied = __iovec_copy_from_user_inatomic(kaddr + offset,
+ i->iov, i->iov_offset, bytes);
+ }
+ kunmap(page);
+ return copied;
+}
+EXPORT_SYMBOL(iov_iter_copy_from_user);
+
+void iov_iter_advance(struct iov_iter *i, size_t bytes)
+{
+ BUG_ON(i->count < bytes);
+
+ if (likely(i->nr_segs == 1)) {
+ i->iov_offset += bytes;
+ i->count -= bytes;
+ } else {
+ const struct iovec *iov = i->iov;
+ size_t base = i->iov_offset;
+ unsigned long nr_segs = i->nr_segs;
+
+ /*
+ * The !iov->iov_len check ensures we skip over unlikely
+ * zero-length segments (without overruning the iovec).
+ */
+ while (bytes || unlikely(i->count && !iov->iov_len)) {
+ int copy;
+
+ copy = min(bytes, iov->iov_len - base);
+ BUG_ON(!i->count || i->count < copy);
+ i->count -= copy;
+ bytes -= copy;
+ base += copy;
+ if (iov->iov_len == base) {
+ iov++;
+ nr_segs--;
+ base = 0;
+ }
+ }
+ i->iov = iov;
+ i->iov_offset = base;
+ i->nr_segs = nr_segs;
+ }
+}
+EXPORT_SYMBOL(iov_iter_advance);
+
+/*
+ * Fault in the first iovec of the given iov_iter, to a maximum length
+ * of bytes. Returns 0 on success, or non-zero if the memory could not be
+ * accessed (ie. because it is an invalid address).
+ *
+ * writev-intensive code may want this to prefault several iovecs -- that
+ * would be possible (callers must not rely on the fact that _only_ the
+ * first iovec will be faulted with the current implementation).
+ */
+int iov_iter_fault_in_readable(struct iov_iter *i, size_t bytes)
+{
+ char __user *buf = i->iov->iov_base + i->iov_offset;
+ bytes = min(bytes, i->iov->iov_len - i->iov_offset);
+ return fault_in_pages_readable(buf, bytes);
+}
+EXPORT_SYMBOL(iov_iter_fault_in_readable);
+
+/*
+ * Return the count of just the current iov_iter segment.
+ */
+size_t iov_iter_single_seg_count(struct iov_iter *i)
+{
+ const struct iovec *iov = i->iov;
+ if (i->nr_segs == 1)
+ return i->count;
+ else
+ return min(i->count, iov->iov_len - i->iov_offset);
+}
+EXPORT_SYMBOL(iov_iter_single_seg_count);
--
1.7.12.3
From: Zach Brown <[email protected]>
ocfs2's .aio_read and .aio_write methods are changed to take an
iov_iter and pass it to generic functions. Wrappers are made to pack
the iovecs into iters and call these new functions.
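The wrapper pattern used here, packing the iovec array into an iov_iter and delegating to the new _iter method, can be sketched with simplified stand-in types (none of this is the real ocfs2 code; the length summing stands in for generic_segment_checks()):

```c
#include <assert.h>
#include <stddef.h>

struct iovec    { void *iov_base; size_t iov_len; };
struct iov_iter { const struct iovec *iov; unsigned long nr_segs;
		  size_t iov_offset; size_t count; };

/* The new method consumes an iterator; here it just reports how
 * much it was asked to write. */
static long file_write_iter(struct iov_iter *iter, long long pos)
{
	(void)pos;
	return (long)iter->count;
}

/* The legacy aio entry point packs the iovecs into an iter and
 * delegates, so both entry points share one implementation. */
static long file_aio_write(const struct iovec *iov, unsigned long nr_segs,
			   long long pos)
{
	struct iov_iter iter = { iov, nr_segs, 0, 0 };
	unsigned long s;

	for (s = 0; s < nr_segs; s++)
		iter.count += iov[s].iov_len;
	return file_write_iter(&iter, pos);
}
```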
Signed-off-by: Dave Kleikamp <[email protected]>
Cc: Zach Brown <[email protected]>
Cc: Mark Fasheh <[email protected]>
Cc: Joel Becker <[email protected]>
Cc: [email protected]
---
fs/ocfs2/file.c | 82 +++++++++++++++++++++++++++++++++++---------------
fs/ocfs2/ocfs2_trace.h | 6 +++-
2 files changed, 63 insertions(+), 25 deletions(-)
diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
index 5a4ee77..457b9fb 100644
--- a/fs/ocfs2/file.c
+++ b/fs/ocfs2/file.c
@@ -2237,15 +2237,13 @@ out:
return ret;
}
-static ssize_t ocfs2_file_aio_write(struct kiocb *iocb,
- const struct iovec *iov,
- unsigned long nr_segs,
- loff_t pos)
+static ssize_t ocfs2_file_write_iter(struct kiocb *iocb,
+ struct iov_iter *iter,
+ loff_t pos)
{
int ret, direct_io, appending, rw_level, have_alloc_sem = 0;
int can_do_direct, has_refcount = 0;
ssize_t written = 0;
- size_t ocount; /* original count */
size_t count; /* after file limit checks */
loff_t old_size, *ppos = &iocb->ki_pos;
u32 old_clusters;
@@ -2256,11 +2254,11 @@ static ssize_t ocfs2_file_aio_write(struct kiocb *iocb,
OCFS2_MOUNT_COHERENCY_BUFFERED);
int unaligned_dio = 0;
- trace_ocfs2_file_aio_write(inode, file, file->f_path.dentry,
+ trace_ocfs2_file_write_iter(inode, file, file->f_path.dentry,
(unsigned long long)OCFS2_I(inode)->ip_blkno,
file->f_path.dentry->d_name.len,
file->f_path.dentry->d_name.name,
- (unsigned int)nr_segs);
+ (unsigned long long)pos);
if (iocb->ki_left == 0)
return 0;
@@ -2362,28 +2360,24 @@ relock:
/* communicate with ocfs2_dio_end_io */
ocfs2_iocb_set_rw_locked(iocb, rw_level);
- ret = generic_segment_checks(iov, &nr_segs, &ocount,
- VERIFY_READ);
- if (ret)
- goto out_dio;
- count = ocount;
+ count = iov_iter_count(iter);
ret = generic_write_checks(file, ppos, &count,
S_ISBLK(inode->i_mode));
if (ret)
goto out_dio;
if (direct_io) {
- written = generic_file_direct_write(iocb, iov, &nr_segs, *ppos,
- ppos, count, ocount);
+ written = generic_file_direct_write_iter(iocb, iter, *ppos,
+ ppos, count);
if (written < 0) {
ret = written;
goto out_dio;
}
} else {
current->backing_dev_info = file->f_mapping->backing_dev_info;
- written = generic_file_buffered_write(iocb, iov, nr_segs, *ppos,
- ppos, count, 0);
+ written = generic_file_buffered_write_iter(iocb, iter, *ppos,
+ ppos, count, 0);
current->backing_dev_info = NULL;
}
@@ -2447,6 +2441,25 @@ out_sems:
return ret;
}
+static ssize_t ocfs2_file_aio_write(struct kiocb *iocb,
+ const struct iovec *iov,
+ unsigned long nr_segs,
+ loff_t pos)
+{
+ struct iov_iter iter;
+ size_t count;
+ int ret;
+
+ count = 0;
+ ret = generic_segment_checks(iov, &nr_segs, &count, VERIFY_READ);
+ if (ret)
+ return ret;
+
+ iov_iter_init(&iter, iov, nr_segs, count, 0);
+
+ return ocfs2_file_write_iter(iocb, &iter, pos);
+}
+
static int ocfs2_splice_to_file(struct pipe_inode_info *pipe,
struct file *out,
struct splice_desc *sd)
@@ -2560,19 +2573,18 @@ bail:
return ret;
}
-static ssize_t ocfs2_file_aio_read(struct kiocb *iocb,
- const struct iovec *iov,
- unsigned long nr_segs,
+static ssize_t ocfs2_file_read_iter(struct kiocb *iocb,
+ struct iov_iter *iter,
loff_t pos)
{
int ret = 0, rw_level = -1, have_alloc_sem = 0, lock_level = 0;
struct file *filp = iocb->ki_filp;
struct inode *inode = filp->f_path.dentry->d_inode;
- trace_ocfs2_file_aio_read(inode, filp, filp->f_path.dentry,
+ trace_ocfs2_file_read_iter(inode, filp, filp->f_path.dentry,
(unsigned long long)OCFS2_I(inode)->ip_blkno,
filp->f_path.dentry->d_name.len,
- filp->f_path.dentry->d_name.name, nr_segs);
+ filp->f_path.dentry->d_name.name, pos);
if (!inode) {
@@ -2608,7 +2620,7 @@ static ssize_t ocfs2_file_aio_read(struct kiocb *iocb,
*
* Take and drop the meta data lock to update inode fields
* like i_size. This allows the checks down below
- * generic_file_aio_read() a chance of actually working.
+ * generic_file_read_iter() a chance of actually working.
*/
ret = ocfs2_inode_lock_atime(inode, filp->f_vfsmnt, &lock_level);
if (ret < 0) {
@@ -2617,8 +2629,8 @@ static ssize_t ocfs2_file_aio_read(struct kiocb *iocb,
}
ocfs2_inode_unlock(inode, lock_level);
- ret = generic_file_aio_read(iocb, iov, nr_segs, iocb->ki_pos);
- trace_generic_file_aio_read_ret(ret);
+ ret = generic_file_read_iter(iocb, iter, iocb->ki_pos);
+ trace_generic_file_read_iter_ret(ret);
/* buffered aio wouldn't have proper lock coverage today */
BUG_ON(ret == -EIOCBQUEUED && !(filp->f_flags & O_DIRECT));
@@ -2690,6 +2702,24 @@ out:
return offset;
}
+static ssize_t ocfs2_file_aio_read(struct kiocb *iocb,
+ const struct iovec *iov,
+ unsigned long nr_segs,
+ loff_t pos)
+{
+ struct iov_iter iter;
+ size_t count;
+ int ret;
+
+ ret = generic_segment_checks(iov, &nr_segs, &count, VERIFY_WRITE);
+ if (ret)
+ return ret;
+
+ iov_iter_init(&iter, iov, nr_segs, count, 0);
+
+ return ocfs2_file_read_iter(iocb, &iter, pos);
+}
+
const struct inode_operations ocfs2_file_iops = {
.setattr = ocfs2_setattr,
.getattr = ocfs2_getattr,
@@ -2723,6 +2753,8 @@ const struct file_operations ocfs2_fops = {
.open = ocfs2_file_open,
.aio_read = ocfs2_file_aio_read,
.aio_write = ocfs2_file_aio_write,
+ .read_iter = ocfs2_file_read_iter,
+ .write_iter = ocfs2_file_write_iter,
.unlocked_ioctl = ocfs2_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = ocfs2_compat_ioctl,
@@ -2771,6 +2803,8 @@ const struct file_operations ocfs2_fops_no_plocks = {
.open = ocfs2_file_open,
.aio_read = ocfs2_file_aio_read,
.aio_write = ocfs2_file_aio_write,
+ .read_iter = ocfs2_file_read_iter,
+ .write_iter = ocfs2_file_write_iter,
.unlocked_ioctl = ocfs2_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = ocfs2_compat_ioctl,
diff --git a/fs/ocfs2/ocfs2_trace.h b/fs/ocfs2/ocfs2_trace.h
index 3b481f4..8409f00 100644
--- a/fs/ocfs2/ocfs2_trace.h
+++ b/fs/ocfs2/ocfs2_trace.h
@@ -1312,12 +1312,16 @@ DEFINE_OCFS2_FILE_OPS(ocfs2_sync_file);
DEFINE_OCFS2_FILE_OPS(ocfs2_file_aio_write);
+DEFINE_OCFS2_FILE_OPS(ocfs2_file_write_iter);
+
DEFINE_OCFS2_FILE_OPS(ocfs2_file_splice_write);
DEFINE_OCFS2_FILE_OPS(ocfs2_file_splice_read);
DEFINE_OCFS2_FILE_OPS(ocfs2_file_aio_read);
+DEFINE_OCFS2_FILE_OPS(ocfs2_file_read_iter);
+
DEFINE_OCFS2_ULL_ULL_ULL_EVENT(ocfs2_truncate_file);
DEFINE_OCFS2_ULL_ULL_EVENT(ocfs2_truncate_file_error);
@@ -1474,7 +1478,7 @@ TRACE_EVENT(ocfs2_prepare_inode_for_write,
__entry->direct_io, __entry->has_refcount)
);
-DEFINE_OCFS2_INT_EVENT(generic_file_aio_read_ret);
+DEFINE_OCFS2_INT_EVENT(generic_file_read_iter_ret);
/* End of trace events for fs/ocfs2/file.c. */
--
1.7.12.3
btrfs can use generic_file_read_iter(). Base btrfs_file_write_iter()
on btrfs_file_aio_write(), then have the latter call the former.
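The control flow of __btrfs_direct_write(), trying the direct path and pushing any remainder through the buffered path, can be sketched generically. The helpers below are hypothetical stand-ins, with direct IO modelled as only completing a 4096-byte-aligned prefix:

```c
#include <assert.h>
#include <stddef.h>

struct iov_iter { size_t count; size_t consumed; };

/* Pretend direct IO that only handles the block-aligned prefix. */
static long direct_write(struct iov_iter *it, size_t n)
{
	size_t done = n - (n % 4096);

	it->consumed += done;
	it->count -= done;
	return (long)done;
}

/* Pretend buffered IO that completes everything it is given. */
static long buffered_write(struct iov_iter *it, size_t n)
{
	it->consumed += n;
	it->count -= n;
	return (long)n;
}

static long do_write(struct iov_iter *it, size_t count)
{
	long written = direct_write(it, count);
	long more;

	if (written < 0 || (size_t)written == count)
		return written;
	/* Direct path came up short: finish via the page cache. */
	more = buffered_write(it, count - (size_t)written);
	return more < 0 ? (written ? written : more) : written + more;
}
```

Because the iterator tracks its own position, the buffered fallback picks up exactly where the direct write stopped, with no re-initialization of the segment array.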
Signed-off-by: Dave Kleikamp <[email protected]>
Cc: Zach Brown <[email protected]>
Cc: Chris Mason <[email protected]>
Cc: [email protected]
---
fs/btrfs/file.c | 55 ++++++++++++++++++++++++++++++-------------------------
1 file changed, 30 insertions(+), 25 deletions(-)
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 9ab1bed..576d2f0 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1361,27 +1361,23 @@ static noinline ssize_t __btrfs_buffered_write(struct file *file,
}
static ssize_t __btrfs_direct_write(struct kiocb *iocb,
- const struct iovec *iov,
- unsigned long nr_segs, loff_t pos,
- loff_t *ppos, size_t count, size_t ocount)
+ struct iov_iter *iter, loff_t pos,
+ loff_t *ppos, size_t count)
{
struct file *file = iocb->ki_filp;
- struct iov_iter i;
ssize_t written;
ssize_t written_buffered;
loff_t endbyte;
int err;
- written = generic_file_direct_write(iocb, iov, &nr_segs, pos, ppos,
- count, ocount);
+ written = generic_file_direct_write_iter(iocb, iter, pos, ppos, count);
if (written < 0 || written == count)
return written;
pos += written;
count -= written;
- iov_iter_init(&i, iov, nr_segs, count, written);
- written_buffered = __btrfs_buffered_write(file, &i, pos);
+ written_buffered = __btrfs_buffered_write(file, iter, pos);
if (written_buffered < 0) {
err = written_buffered;
goto out;
@@ -1398,9 +1394,8 @@ out:
return written ? written : err;
}
-static ssize_t btrfs_file_aio_write(struct kiocb *iocb,
- const struct iovec *iov,
- unsigned long nr_segs, loff_t pos)
+static ssize_t btrfs_file_write_iter(struct kiocb *iocb,
+ struct iov_iter *iter, loff_t pos)
{
struct file *file = iocb->ki_filp;
struct inode *inode = fdentry(file)->d_inode;
@@ -1409,18 +1404,13 @@ static ssize_t btrfs_file_aio_write(struct kiocb *iocb,
u64 start_pos;
ssize_t num_written = 0;
ssize_t err = 0;
- size_t count, ocount;
+ size_t count;
sb_start_write(inode->i_sb);
mutex_lock(&inode->i_mutex);
- err = generic_segment_checks(iov, &nr_segs, &ocount, VERIFY_READ);
- if (err) {
- mutex_unlock(&inode->i_mutex);
- goto out;
- }
- count = ocount;
+ count = iov_iter_count(iter);
current->backing_dev_info = inode->i_mapping->backing_dev_info;
err = generic_write_checks(file, &pos, &count, S_ISBLK(inode->i_mode));
@@ -1468,14 +1458,10 @@ static ssize_t btrfs_file_aio_write(struct kiocb *iocb,
}
if (unlikely(file->f_flags & O_DIRECT)) {
- num_written = __btrfs_direct_write(iocb, iov, nr_segs,
- pos, ppos, count, ocount);
+ num_written = __btrfs_direct_write(iocb, iter, pos, ppos,
+ count);
} else {
- struct iov_iter i;
-
- iov_iter_init(&i, iov, nr_segs, count, num_written);
-
- num_written = __btrfs_buffered_write(file, &i, pos);
+ num_written = __btrfs_buffered_write(file, iter, pos);
if (num_written > 0)
*ppos = pos + num_written;
}
@@ -1506,6 +1492,23 @@ out:
return num_written ? num_written : err;
}
+static ssize_t btrfs_file_aio_write(struct kiocb *iocb,
+ const struct iovec *iov,
+ unsigned long nr_segs, loff_t pos)
+{
+ struct iov_iter i;
+ int ret;
+ size_t count;
+
+	ret = generic_segment_checks(iov, &nr_segs, &count, VERIFY_READ);
+ if (ret)
+ return ret;
+
+ iov_iter_init(&i, iov, nr_segs, count, 0);
+
+ return btrfs_file_write_iter(iocb, &i, pos);
+}
+
int btrfs_release_file(struct inode *inode, struct file *filp)
{
/*
@@ -2282,7 +2285,9 @@ const struct file_operations btrfs_file_operations = {
.write = do_sync_write,
.aio_read = generic_file_aio_read,
.splice_read = generic_file_splice_read,
+ .read_iter = generic_file_read_iter,
.aio_write = btrfs_file_aio_write,
+ .write_iter = btrfs_file_write_iter,
.mmap = btrfs_file_mmap,
.open = generic_file_open,
.release = btrfs_release_file,
--
1.7.12.3
These are the simple ones.
File systems that use generic_file_aio_read() and generic_file_aio_write()
can trivially support generic_file_read_iter() and generic_file_write_iter().
This patch adds those file_operations for 9p, ext2, ext3, fat, hfs, hfsplus,
jfs, nilfs2, and reiserfs.
Signed-off-by: Dave Kleikamp <[email protected]>
Cc: Zach Brown <[email protected]>
Cc: [email protected]
Cc: Jan Kara <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Andreas Dilger <[email protected]>
Cc: [email protected]
Cc: OGAWA Hirofumi <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
---
fs/9p/vfs_file.c | 4 ++++
fs/ext2/file.c | 2 ++
fs/ext3/file.c | 2 ++
fs/fat/file.c | 2 ++
fs/hfs/inode.c | 2 ++
fs/hfsplus/inode.c | 2 ++
fs/jfs/file.c | 2 ++
fs/nilfs2/file.c | 2 ++
fs/reiserfs/file.c | 2 ++
9 files changed, 20 insertions(+)
diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
index c2483e9..d19458b 100644
--- a/fs/9p/vfs_file.c
+++ b/fs/9p/vfs_file.c
@@ -748,6 +748,8 @@ const struct file_operations v9fs_cached_file_operations = {
.write = v9fs_cached_file_write,
.aio_read = generic_file_aio_read,
.aio_write = generic_file_aio_write,
+ .read_iter = generic_file_read_iter,
+ .write_iter = generic_file_write_iter,
.open = v9fs_file_open,
.release = v9fs_dir_release,
.lock = v9fs_file_lock,
@@ -761,6 +763,8 @@ const struct file_operations v9fs_cached_file_operations_dotl = {
.write = v9fs_cached_file_write,
.aio_read = generic_file_aio_read,
.aio_write = generic_file_aio_write,
+ .read_iter = generic_file_read_iter,
+ .write_iter = generic_file_write_iter,
.open = v9fs_file_open,
.release = v9fs_dir_release,
.lock = v9fs_file_lock_dotl,
diff --git a/fs/ext2/file.c b/fs/ext2/file.c
index a5b3a5d..eee8f86 100644
--- a/fs/ext2/file.c
+++ b/fs/ext2/file.c
@@ -66,6 +66,8 @@ const struct file_operations ext2_file_operations = {
.write = do_sync_write,
.aio_read = generic_file_aio_read,
.aio_write = generic_file_aio_write,
+ .read_iter = generic_file_read_iter,
+ .write_iter = generic_file_write_iter,
.unlocked_ioctl = ext2_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = ext2_compat_ioctl,
diff --git a/fs/ext3/file.c b/fs/ext3/file.c
index 25cb413..86828f7 100644
--- a/fs/ext3/file.c
+++ b/fs/ext3/file.c
@@ -54,6 +54,8 @@ const struct file_operations ext3_file_operations = {
.write = do_sync_write,
.aio_read = generic_file_aio_read,
.aio_write = generic_file_aio_write,
+ .read_iter = generic_file_read_iter,
+ .write_iter = generic_file_write_iter,
.unlocked_ioctl = ext3_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = ext3_compat_ioctl,
diff --git a/fs/fat/file.c b/fs/fat/file.c
index a62e0ec..250d1b7 100644
--- a/fs/fat/file.c
+++ b/fs/fat/file.c
@@ -166,6 +166,8 @@ const struct file_operations fat_file_operations = {
.write = do_sync_write,
.aio_read = generic_file_aio_read,
.aio_write = generic_file_aio_write,
+ .read_iter = generic_file_read_iter,
+ .write_iter = generic_file_write_iter,
.mmap = generic_file_mmap,
.release = fat_file_release,
.unlocked_ioctl = fat_generic_ioctl,
diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
index 986cb91..f750dbe 100644
--- a/fs/hfs/inode.c
+++ b/fs/hfs/inode.c
@@ -656,8 +656,10 @@ static const struct file_operations hfs_file_operations = {
.llseek = generic_file_llseek,
.read = do_sync_read,
.aio_read = generic_file_aio_read,
+ .read_iter = generic_file_read_iter,
.write = do_sync_write,
.aio_write = generic_file_aio_write,
+ .write_iter = generic_file_write_iter,
.mmap = generic_file_mmap,
.splice_read = generic_file_splice_read,
.fsync = hfs_file_fsync,
diff --git a/fs/hfsplus/inode.c b/fs/hfsplus/inode.c
index 94fc092..0191a0d 100644
--- a/fs/hfsplus/inode.c
+++ b/fs/hfsplus/inode.c
@@ -369,8 +369,10 @@ static const struct file_operations hfsplus_file_operations = {
.llseek = generic_file_llseek,
.read = do_sync_read,
.aio_read = generic_file_aio_read,
+ .read_iter = generic_file_read_iter,
.write = do_sync_write,
.aio_write = generic_file_aio_write,
+ .write_iter = generic_file_write_iter,
.mmap = generic_file_mmap,
.splice_read = generic_file_splice_read,
.fsync = hfsplus_file_fsync,
diff --git a/fs/jfs/file.c b/fs/jfs/file.c
index 9d3afd1..a49ede70 100644
--- a/fs/jfs/file.c
+++ b/fs/jfs/file.c
@@ -151,6 +151,8 @@ const struct file_operations jfs_file_operations = {
.read = do_sync_read,
.aio_read = generic_file_aio_read,
.aio_write = generic_file_aio_write,
+ .read_iter = generic_file_read_iter,
+ .write_iter = generic_file_write_iter,
.mmap = generic_file_mmap,
.splice_read = generic_file_splice_read,
.splice_write = generic_file_splice_write,
diff --git a/fs/nilfs2/file.c b/fs/nilfs2/file.c
index 16f35f7..f4c58c9 100644
--- a/fs/nilfs2/file.c
+++ b/fs/nilfs2/file.c
@@ -155,6 +155,8 @@ const struct file_operations nilfs_file_operations = {
.write = do_sync_write,
.aio_read = generic_file_aio_read,
.aio_write = generic_file_aio_write,
+ .read_iter = generic_file_read_iter,
+ .write_iter = generic_file_write_iter,
.unlocked_ioctl = nilfs_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = nilfs_compat_ioctl,
diff --git a/fs/reiserfs/file.c b/fs/reiserfs/file.c
index 8375c92..8f86c2b 100644
--- a/fs/reiserfs/file.c
+++ b/fs/reiserfs/file.c
@@ -306,6 +306,8 @@ const struct file_operations reiserfs_file_operations = {
.fsync = reiserfs_sync_file,
.aio_read = generic_file_aio_read,
.aio_write = generic_file_aio_write,
+ .read_iter = generic_file_read_iter,
+ .write_iter = generic_file_write_iter,
.splice_read = generic_file_splice_read,
.splice_write = generic_file_splice_write,
.llseek = generic_file_llseek,
--
1.7.12.3
A future patch hides the internals of struct iov_iter, so fuse should
use the supported interface.
Signed-off-by: Dave Kleikamp <[email protected]>
Acked-by: Miklos Szeredi <[email protected]>
Cc: [email protected]
---
fs/fuse/file.c | 29 ++++++++---------------------
1 file changed, 8 insertions(+), 21 deletions(-)
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 78d2837..4e42e95 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -1631,30 +1631,17 @@ static int fuse_ioctl_copy_user(struct page **pages, struct iovec *iov,
while (iov_iter_count(&ii)) {
struct page *page = pages[page_idx++];
size_t todo = min_t(size_t, PAGE_SIZE, iov_iter_count(&ii));
- void *kaddr;
+ size_t left;
- kaddr = kmap(page);
-
- while (todo) {
- char __user *uaddr = ii.iov->iov_base + ii.iov_offset;
- size_t iov_len = ii.iov->iov_len - ii.iov_offset;
- size_t copy = min(todo, iov_len);
- size_t left;
-
- if (!to_user)
- left = copy_from_user(kaddr, uaddr, copy);
- else
- left = copy_to_user(uaddr, kaddr, copy);
-
- if (unlikely(left))
- return -EFAULT;
+ if (!to_user)
+ left = iov_iter_copy_from_user(page, &ii, 0, todo);
+ else
+ left = iov_iter_copy_to_user(page, &ii, 0, todo);
- iov_iter_advance(&ii, copy);
- todo -= copy;
- kaddr += copy;
- }
+ if (unlikely(left))
+ return -EFAULT;
- kunmap(page);
+ iov_iter_advance(&ii, todo);
}
return 0;
--
1.7.12.3
From: Zach Brown <[email protected]>
This adds iov_iter wrappers around copy_to_user() to match the existing
wrappers around copy_from_user().
This will be used by the generic file system buffered read path.
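As a sketch of the partial-copy accounting these wrappers follow, here is a hypothetical user-space model of the multi-segment copy loop: it walks an array of (base, len) segments, copying from a flat source buffer, and returns the number of bytes that actually landed. The `struct seg` and `model_copy_to_segs` names are illustrative only, not the kernel API.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative stand-in for the kernel's iovec segment. */
struct seg {
	char *base;
	size_t len;
};

/*
 * Walk the segment array, filling each segment in turn from src.
 * 'base' is the offset into the first segment (iov_offset in the
 * kernel code); it resets to 0 for every subsequent segment.
 * Returns the number of bytes copied.
 */
static size_t model_copy_to_segs(struct seg *segs, size_t base,
				 const char *src, size_t bytes)
{
	size_t copied = 0;

	while (bytes) {
		size_t room = segs->len - base;
		size_t copy = bytes < room ? bytes : room;

		memcpy(segs->base + base, src, copy);
		base = 0;		/* only the first segment has an offset */
		copied += copy;
		bytes -= copy;
		src += copy;
		segs++;
	}
	return copied;
}
```

The kernel version additionally stops early and subtracts the uncopied remainder when `copy_to_user()` faults; this model omits that fault path.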
Signed-off-by: Dave Kleikamp <[email protected]>
Cc: Zach Brown <[email protected]>
---
include/linux/fs.h | 4 +++
mm/iov-iter.c | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 84 insertions(+)
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 001c7cf..36f9291 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -227,6 +227,10 @@ struct iov_iter {
size_t count;
};
+size_t iov_iter_copy_to_user_atomic(struct page *page,
+ struct iov_iter *i, unsigned long offset, size_t bytes);
+size_t iov_iter_copy_to_user(struct page *page,
+ struct iov_iter *i, unsigned long offset, size_t bytes);
size_t iov_iter_copy_from_user_atomic(struct page *page,
struct iov_iter *i, unsigned long offset, size_t bytes);
size_t iov_iter_copy_from_user(struct page *page,
diff --git a/mm/iov-iter.c b/mm/iov-iter.c
index 5c4f3a5..d68b67f 100644
--- a/mm/iov-iter.c
+++ b/mm/iov-iter.c
@@ -6,6 +6,86 @@
#include <linux/highmem.h>
#include <linux/pagemap.h>
+static size_t __iovec_copy_to_user(char *vaddr, const struct iovec *iov,
+ size_t base, size_t bytes, int atomic)
+{
+ size_t copied = 0, left = 0;
+
+ while (bytes) {
+ char __user *buf = iov->iov_base + base;
+ int copy = min(bytes, iov->iov_len - base);
+
+ base = 0;
+ if (atomic)
+ left = __copy_to_user_inatomic(buf, vaddr, copy);
+ else
+ left = copy_to_user(buf, vaddr, copy);
+ copied += copy;
+ bytes -= copy;
+ vaddr += copy;
+ iov++;
+
+ if (unlikely(left))
+ break;
+ }
+ return copied - left;
+}
+
+/*
+ * Copy as much as we can into the page and return the number of bytes which
+ * were successfully copied. If a fault is encountered then return the number of
+ * bytes which were copied.
+ */
+size_t iov_iter_copy_to_user_atomic(struct page *page,
+ struct iov_iter *i, unsigned long offset, size_t bytes)
+{
+ char *kaddr;
+ size_t copied;
+
+ BUG_ON(!in_atomic());
+ kaddr = kmap_atomic(page);
+ if (likely(i->nr_segs == 1)) {
+ int left;
+ char __user *buf = i->iov->iov_base + i->iov_offset;
+ left = __copy_to_user_inatomic(buf, kaddr + offset, bytes);
+ copied = bytes - left;
+ } else {
+ copied = __iovec_copy_to_user(kaddr + offset, i->iov,
+ i->iov_offset, bytes, 1);
+ }
+ kunmap_atomic(kaddr);
+
+ return copied;
+}
+EXPORT_SYMBOL(iov_iter_copy_to_user_atomic);
+
+/*
+ * This has the same side effects and return value as
+ * iov_iter_copy_to_user_atomic().
+ * The difference is that it attempts to resolve faults.
+ * Page must not be locked.
+ */
+size_t iov_iter_copy_to_user(struct page *page,
+ struct iov_iter *i, unsigned long offset, size_t bytes)
+{
+ char *kaddr;
+ size_t copied;
+
+ kaddr = kmap(page);
+ if (likely(i->nr_segs == 1)) {
+ int left;
+ char __user *buf = i->iov->iov_base + i->iov_offset;
+ left = copy_to_user(buf, kaddr + offset, bytes);
+ copied = bytes - left;
+ } else {
+ copied = __iovec_copy_to_user(kaddr + offset, i->iov,
+ i->iov_offset, bytes, 0);
+ }
+ kunmap(page);
+ return copied;
+}
+EXPORT_SYMBOL(iov_iter_copy_to_user);
+
static size_t __iovec_copy_from_user(char *vaddr, const struct iovec *iov,
size_t base, size_t bytes, int atomic)
{
--
1.7.12.3
Signed-off-by: Dave Kleikamp <[email protected]>
---
mm/iov-iter.c | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/mm/iov-iter.c b/mm/iov-iter.c
index 83f7594..5c4f3a5 100644
--- a/mm/iov-iter.c
+++ b/mm/iov-iter.c
@@ -6,8 +6,8 @@
#include <linux/highmem.h>
#include <linux/pagemap.h>
-static size_t __iovec_copy_from_user_inatomic(char *vaddr,
- const struct iovec *iov, size_t base, size_t bytes)
+static size_t __iovec_copy_from_user(char *vaddr, const struct iovec *iov,
+ size_t base, size_t bytes, int atomic)
{
size_t copied = 0, left = 0;
@@ -16,7 +16,10 @@ static size_t __iovec_copy_from_user_inatomic(char *vaddr,
int copy = min(bytes, iov->iov_len - base);
base = 0;
- left = __copy_from_user_inatomic(vaddr, buf, copy);
+ if (atomic)
+ left = __copy_from_user_inatomic(vaddr, buf, copy);
+ else
+ left = __copy_from_user(vaddr, buf, copy);
copied += copy;
bytes -= copy;
vaddr += copy;
@@ -47,8 +50,8 @@ size_t iov_iter_copy_from_user_atomic(struct page *page,
left = __copy_from_user_inatomic(kaddr + offset, buf, bytes);
copied = bytes - left;
} else {
- copied = __iovec_copy_from_user_inatomic(kaddr + offset,
- i->iov, i->iov_offset, bytes);
+ copied = __iovec_copy_from_user(kaddr + offset, i->iov,
+ i->iov_offset, bytes, 1);
}
kunmap_atomic(kaddr);
@@ -75,8 +78,8 @@ size_t iov_iter_copy_from_user(struct page *page,
left = __copy_from_user(kaddr + offset, buf, bytes);
copied = bytes - left;
} else {
- copied = __iovec_copy_from_user_inatomic(kaddr + offset,
- i->iov, i->iov_offset, bytes);
+ copied = __iovec_copy_from_user(kaddr + offset, i->iov,
+ i->iov_offset, bytes, 0);
}
kunmap(page);
return copied;
--
1.7.12.3
From: Zach Brown <[email protected]>
This uses the new kernel aio interface to process loopback IO by
submitting concurrent direct aio. Previously loop's IO was serialized
by synchronous processing in a thread.
The aio operations specify the memory for the IO with the bio_vec arrays
directly instead of mappings of the pages.
The use of aio operations is enabled when the backing file supports the
read_iter and write_iter methods. These methods must only be added when
O_DIRECT on bio_vecs is supported.
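To illustrate how the loop driver sizes an iov_iter directly from a bio's segment array, here is a hypothetical user-space model of the length computation (`bvec_length()` in the patch): the total I/O length is simply the sum of the per-segment lengths. The `struct bvec` and `bvec_total_len` names are illustrative, not the kernel's.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for the kernel's bio_vec. */
struct bvec {
	unsigned int len;	/* bytes in this segment */
	unsigned int offset;	/* offset into the backing page */
};

/* Sum the segment lengths to get the total I/O size. */
static size_t bvec_total_len(const struct bvec *bv, size_t nr_segs)
{
	size_t total = 0;
	size_t i;

	for (i = 0; i < nr_segs; i++)
		total += bv[i].len;
	return total;
}
```

In the patch, this total becomes the `count` passed to `iov_iter_init_bvec()`, so the iterator knows the full extent of the bio without mapping any pages.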
Signed-off-by: Dave Kleikamp <[email protected]>
Cc: Zach Brown <[email protected]>
---
drivers/block/loop.c | 135 ++++++++++++++++++++++++++++++++++------------
include/uapi/linux/loop.h | 1 +
2 files changed, 103 insertions(+), 33 deletions(-)
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index e9d594f..a221222 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -76,6 +76,7 @@
#include <linux/sysfs.h>
#include <linux/miscdevice.h>
#include <linux/falloc.h>
+#include <linux/aio.h>
#include <asm/uaccess.h>
@@ -213,6 +214,48 @@ lo_do_transfer(struct loop_device *lo, int cmd,
return lo->transfer(lo, cmd, rpage, roffs, lpage, loffs, size, rblock);
}
+#ifdef CONFIG_AIO
+static void lo_rw_aio_complete(u64 data, long res)
+{
+ struct bio *bio = (struct bio *)(uintptr_t)data;
+
+ if (res > 0)
+ res = 0;
+ else if (res < 0)
+ res = -EIO;
+
+ bio_endio(bio, res);
+}
+
+static int lo_rw_aio(struct loop_device *lo, struct bio *bio)
+{
+ struct file *file = lo->lo_backing_file;
+ struct kiocb *iocb;
+ unsigned short op;
+ struct iov_iter iter;
+ struct bio_vec *bvec;
+ size_t nr_segs;
+ loff_t pos = ((loff_t) bio->bi_sector << 9) + lo->lo_offset;
+
+ iocb = aio_kernel_alloc(GFP_NOIO);
+ if (!iocb)
+ return -ENOMEM;
+
+ if (bio_rw(bio) & WRITE)
+ op = IOCB_CMD_WRITE_ITER;
+ else
+ op = IOCB_CMD_READ_ITER;
+
+ bvec = bio_iovec_idx(bio, bio->bi_idx);
+ nr_segs = bio_segments(bio);
+ iov_iter_init_bvec(&iter, bvec, nr_segs, bvec_length(bvec, nr_segs), 0);
+ aio_kernel_init_iter(iocb, file, op, &iter, pos);
+ aio_kernel_init_callback(iocb, lo_rw_aio_complete, (u64)(uintptr_t)bio);
+
+ return aio_kernel_submit(iocb);
+}
+#endif /* CONFIG_AIO */
+
/**
* __do_lo_send_write - helper for writing data to a loop device
*
@@ -413,37 +456,6 @@ static int do_bio_filebacked(struct loop_device *lo, struct bio *bio)
if (bio_rw(bio) == WRITE) {
struct file *file = lo->lo_backing_file;
- if (bio->bi_rw & REQ_FLUSH) {
- ret = vfs_fsync(file, 0);
- if (unlikely(ret && ret != -EINVAL)) {
- ret = -EIO;
- goto out;
- }
- }
-
- /*
- * We use punch hole to reclaim the free space used by the
- * image a.k.a. discard. However we do not support discard if
- * encryption is enabled, because it may give an attacker
- * useful information.
- */
- if (bio->bi_rw & REQ_DISCARD) {
- struct file *file = lo->lo_backing_file;
- int mode = FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE;
-
- if ((!file->f_op->fallocate) ||
- lo->lo_encrypt_key_size) {
- ret = -EOPNOTSUPP;
- goto out;
- }
- ret = file->f_op->fallocate(file, mode, pos,
- bio->bi_size);
- if (unlikely(ret && ret != -EINVAL &&
- ret != -EOPNOTSUPP))
- ret = -EIO;
- goto out;
- }
-
ret = lo_send(lo, bio, pos);
if ((bio->bi_rw & REQ_FUA) && !ret) {
@@ -454,7 +466,29 @@ static int do_bio_filebacked(struct loop_device *lo, struct bio *bio)
} else
ret = lo_receive(lo, bio, lo->lo_blocksize, pos);
-out:
+ return ret;
+}
+
+static int lo_discard(struct loop_device *lo, struct bio *bio)
+{
+ struct file *file = lo->lo_backing_file;
+ int mode = FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE;
+ loff_t pos = ((loff_t) bio->bi_sector << 9) + lo->lo_offset;
+ int ret;
+
+ /*
+ * We use punch hole to reclaim the free space used by the
+ * image a.k.a. discard. However we do not support discard if
+ * encryption is enabled, because it may give an attacker
+ * useful information.
+ */
+
+ if ((!file->f_op->fallocate) || lo->lo_encrypt_key_size)
+ return -EOPNOTSUPP;
+
+ ret = file->f_op->fallocate(file, mode, pos, bio->bi_size);
+ if (unlikely(ret && ret != -EINVAL && ret != -EOPNOTSUPP))
+ ret = -EIO;
return ret;
}
@@ -512,7 +546,29 @@ static inline void loop_handle_bio(struct loop_device *lo, struct bio *bio)
do_loop_switch(lo, bio->bi_private);
bio_put(bio);
} else {
- int ret = do_bio_filebacked(lo, bio);
+ int ret;
+
+ if (bio_rw(bio) == WRITE) {
+ if (bio->bi_rw & REQ_FLUSH) {
+ ret = vfs_fsync(lo->lo_backing_file, 1);
+ if (unlikely(ret && ret != -EINVAL))
+ goto out;
+ }
+ if (bio->bi_rw & REQ_DISCARD) {
+ ret = lo_discard(lo, bio);
+ goto out;
+ }
+ }
+#ifdef CONFIG_AIO
+ if (lo->lo_flags & LO_FLAGS_USE_AIO &&
+ lo->transfer == transfer_none) {
+ ret = lo_rw_aio(lo, bio);
+ if (ret == 0)
+ return;
+ } else
+#endif
+ ret = do_bio_filebacked(lo, bio);
+out:
bio_endio(bio, ret);
}
}
@@ -534,6 +590,12 @@ static int loop_thread(void *data)
struct loop_device *lo = data;
struct bio *bio;
+ /*
+ * In cases where the underlying filesystem calls balance_dirty_pages()
+ * we want less throttling to avoid lock ups trying to write dirty
+ * pages through the loop device
+ */
+ current->flags |= PF_LESS_THROTTLE;
set_user_nice(current, -20);
while (!kthread_should_stop() || !bio_list_empty(&lo->lo_bio_list)) {
@@ -854,6 +916,13 @@ static int loop_set_fd(struct loop_device *lo, fmode_t mode,
!file->f_op->write)
lo_flags |= LO_FLAGS_READ_ONLY;
+#ifdef CONFIG_AIO
+ if (file->f_op->write_iter && file->f_op->read_iter) {
+ file->f_flags |= O_DIRECT;
+ lo_flags |= LO_FLAGS_USE_AIO;
+ }
+#endif
+
lo_blocksize = S_ISBLK(inode->i_mode) ?
inode->i_bdev->bd_block_size : PAGE_SIZE;
diff --git a/include/uapi/linux/loop.h b/include/uapi/linux/loop.h
index e0cecd2..6edc6b6 100644
--- a/include/uapi/linux/loop.h
+++ b/include/uapi/linux/loop.h
@@ -21,6 +21,7 @@ enum {
LO_FLAGS_READ_ONLY = 1,
LO_FLAGS_AUTOCLEAR = 4,
LO_FLAGS_PARTSCAN = 8,
+ LO_FLAGS_USE_AIO = 16,
};
#include <asm/posix_types.h> /* for __kernel_old_dev_t */
--
1.7.12.3
From: Zach Brown <[email protected]>
Direct IO treats memory from user iovecs and memory from arrays of
kernel pages very differently. User memory is pinned and processed in
batches, while kernel pages are already pinned and require no
additional processing.
Rather than try and provide an abstraction that includes these
different behaviours we let direct IO extract the memory structs and
hand them to the existing code.
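The extraction pattern can be sketched in user space: the iterator carries an ops pointer, and callers compare it against the known ops tables to decide whether the opaque data pointer holds iovecs or bio_vecs before casting. This is a simplified model assuming two ops tables; the names mirror the patch's `iov_iter_has_bvec()`/`iov_iter_has_iovec()` but are not the kernel API.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for the two iov_iter ops tables. */
struct iter_ops { int id; };
static const struct iter_ops iovec_ops = { 1 };
static const struct iter_ops bvec_ops  = { 2 };

struct iter {
	const struct iter_ops *ops;	/* identifies the memory type */
	void *data;			/* iovec array or bio_vec array */
};

/* Type checks are just pointer comparisons against the ops tables. */
static int iter_has_bvec(const struct iter *i)
{
	return i->ops == &bvec_ops;
}

static int iter_has_iovec(const struct iter *i)
{
	return i->ops == &iovec_ops;
}
```

The patch's `iov_iter_bvec()` and `iov_iter_iovec()` accessors then `BUG_ON()` a mismatched type before returning the cast pointer, so a caller can never silently misinterpret the memory.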
Signed-off-by: Dave Kleikamp <[email protected]>
Cc: Zach Brown <[email protected]>
---
include/linux/fs.h | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 1a986ed..6ece092 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -297,6 +297,17 @@ static inline void iov_iter_init_bvec(struct iov_iter *i,
iov_iter_advance(i, written);
}
+static inline int iov_iter_has_bvec(struct iov_iter *i)
+{
+ return i->ops == &ii_bvec_ops;
+}
+
+static inline struct bio_vec *iov_iter_bvec(struct iov_iter *i)
+{
+ BUG_ON(!iov_iter_has_bvec(i));
+ return (struct bio_vec *)i->data;
+}
+
extern struct iov_iter_ops ii_iovec_ops;
static inline void iov_iter_init(struct iov_iter *i,
@@ -312,8 +323,14 @@ static inline void iov_iter_init(struct iov_iter *i,
iov_iter_advance(i, written);
}
+static inline int iov_iter_has_iovec(struct iov_iter *i)
+{
+ return i->ops == &ii_iovec_ops;
+}
+
static inline struct iovec *iov_iter_iovec(struct iov_iter *i)
{
+ BUG_ON(!iov_iter_has_iovec(i));
return (struct iovec *)i->data;
}
--
1.7.12.3
From: Zach Brown <[email protected]>
The generic direct write path wants to shorten its memory vector. It
does this when it finds that it has to perform a partial write due to
RLIMIT_FSIZE. .direct_IO() always performs IO on all of the referenced
memory because it doesn't have an argument to specify the length of the
IO.
We add an iov_iter operation for this so that the generic path can ask
to shorten the memory vector without having to know what kind it is.
We're happy to shorten the kernel copy of the iovec array, but we refuse
to shorten the bio_vec array and return an error in this case.
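The iovec-side shortening works like the kernel's existing `iov_shorten()` helper: walk the segments until the requested byte count is reached, clip the final segment, and return the new segment count. Here is a hypothetical user-space model of that logic; `struct vec` and `shorten` are illustrative names.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative segment with only the length field we need. */
struct vec {
	size_t len;
};

/*
 * Truncate the segment array so it covers exactly 'to' bytes.
 * The last segment that crosses the limit is clipped in place.
 * Returns the new number of segments.
 */
static size_t shorten(struct vec *v, size_t nr_segs, size_t to)
{
	size_t seg = 0, len = 0;

	while (seg < nr_segs && len < to) {
		len += v[seg].len;
		if (len > to)
			v[seg].len -= len - to;	/* clip the last segment */
		seg++;
	}
	return seg;
}
```

For bio_vec arrays the patch instead returns `-EINVAL`, since clipping a page-based segment mid-I/O would misrepresent the bio it came from.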
Signed-off-by: Dave Kleikamp <[email protected]>
Cc: Zach Brown <[email protected]>
---
include/linux/fs.h | 5 +++++
mm/iov-iter.c | 14 ++++++++++++++
2 files changed, 19 insertions(+)
diff --git a/include/linux/fs.h b/include/linux/fs.h
index afb1343..1a986ed 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -240,6 +240,7 @@ struct iov_iter_ops {
void (*ii_advance)(struct iov_iter *, size_t);
int (*ii_fault_in_readable)(struct iov_iter *, size_t);
size_t (*ii_single_seg_count)(struct iov_iter *);
+ int (*ii_shorten)(struct iov_iter *, size_t);
};
static inline size_t iov_iter_copy_to_user_atomic(struct page *page,
@@ -274,6 +275,10 @@ static inline size_t iov_iter_single_seg_count(struct iov_iter *i)
{
return i->ops->ii_single_seg_count(i);
}
+static inline int iov_iter_shorten(struct iov_iter *i, size_t count)
+{
+ return i->ops->ii_shorten(i, count);
+}
extern struct iov_iter_ops ii_bvec_ops;
diff --git a/mm/iov-iter.c b/mm/iov-iter.c
index c5d0a9e..fcced89 100644
--- a/mm/iov-iter.c
+++ b/mm/iov-iter.c
@@ -201,6 +201,11 @@ static size_t ii_bvec_single_seg_count(struct iov_iter *i)
return min(i->count, bvec->bv_len - i->iov_offset);
}
+static int ii_bvec_shorten(struct iov_iter *i, size_t count)
+{
+ return -EINVAL;
+}
+
struct iov_iter_ops ii_bvec_ops = {
.ii_copy_to_user_atomic = ii_bvec_copy_to_user_atomic,
.ii_copy_to_user = ii_bvec_copy_to_user,
@@ -209,6 +214,7 @@ struct iov_iter_ops ii_bvec_ops = {
.ii_advance = ii_bvec_advance,
.ii_fault_in_readable = ii_bvec_fault_in_readable,
.ii_single_seg_count = ii_bvec_single_seg_count,
+ .ii_shorten = ii_bvec_shorten,
};
EXPORT_SYMBOL(ii_bvec_ops);
@@ -358,6 +364,13 @@ static size_t ii_iovec_single_seg_count(struct iov_iter *i)
return min(i->count, iov->iov_len - i->iov_offset);
}
+static int ii_iovec_shorten(struct iov_iter *i, size_t count)
+{
+ struct iovec *iov = (struct iovec *)i->data;
+ i->nr_segs = iov_shorten(iov, i->nr_segs, count);
+ return 0;
+}
+
struct iov_iter_ops ii_iovec_ops = {
.ii_copy_to_user_atomic = ii_iovec_copy_to_user_atomic,
.ii_copy_to_user = ii_iovec_copy_to_user,
@@ -366,5 +379,6 @@ struct iov_iter_ops ii_iovec_ops = {
.ii_advance = ii_iovec_advance,
.ii_fault_in_readable = ii_iovec_fault_in_readable,
.ii_single_seg_count = ii_iovec_single_seg_count,
+ .ii_shorten = ii_iovec_shorten,
};
EXPORT_SYMBOL(ii_iovec_ops);
--
1.7.12.3
From: Zach Brown <[email protected]>
__blockdev_direct_IO() had two instances of the same code to determine
if a given offset wasn't aligned first to the inode's blkbits and then
to the underlying device's blkbits. This was confusing enough but
we're about to add code that performs the same check on offsets in bvec
arrays. Rather than add yet more copies of this code let's have
everyone call a helper.
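The helper's two-stage check can be modeled in user space. This sketch takes the device's block bits as a plain parameter rather than reading them from a `block_device`, which is a simplification of the patch's `dio_aligned()`:

```c
#include <assert.h>

/*
 * Return nonzero if 'offset' is aligned to the size given by *blkbits,
 * or, failing that, to the device's (usually smaller) logical block
 * size. On the fallback path *blkbits is updated to the device's value,
 * matching the side effect in the kernel helper.
 */
static int aligned_check(unsigned long offset, unsigned *blkbits,
			 unsigned dev_blkbits)
{
	unsigned mask = (1u << *blkbits) - 1;

	if (offset & mask) {
		*blkbits = dev_blkbits;	/* retry with the device block size */
		mask = (1u << *blkbits) - 1;
		return !(offset & mask);
	}
	return 1;
}
```

The caller-visible update of `*blkbits` is the "bizarre calling convention" the patch comment refers to: later offsets in the same request are checked against the smaller device block size once any earlier offset has fallen back to it.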
Signed-off-by: Dave Kleikamp <[email protected]>
Cc: Zach Brown <[email protected]>
---
fs/direct-io.c | 59 ++++++++++++++++++++++++++++++++++++----------------------
1 file changed, 37 insertions(+), 22 deletions(-)
diff --git a/fs/direct-io.c b/fs/direct-io.c
index f86c720..035c0a3 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -1020,6 +1020,39 @@ static inline int drop_refcount(struct dio *dio)
}
/*
+ * Returns true if the given offset is aligned to either the IO size
+ * specified by the given blkbits or by the logical block size of the
+ * given block device.
+ *
+ * If the given offset isn't aligned to the blkbits argument when this
+ * is called, then blkbits is set to the block size of the specified block
+ * device. The call can then return either true or false.
+ *
+ * This bizarre calling convention matches the code paths that
+ * duplicated the functionality that this helper was built from. We
+ * reproduce the behaviour to avoid introducing subtle bugs.
+ */
+static int dio_aligned(unsigned long offset, unsigned *blkbits,
+ struct block_device *bdev)
+{
+ unsigned mask = (1 << *blkbits) - 1;
+
+ /*
+ * Avoid references to bdev if not absolutely needed to give
+ * the early prefetch in the caller enough time.
+ */
+
+ if (offset & mask) {
+ if (bdev)
+ *blkbits = blksize_bits(bdev_logical_block_size(bdev));
+ mask = (1 << *blkbits) - 1;
+ return !(offset & mask);
+ }
+
+ return 1;
+}
+
+/*
* This is a library function for use by filesystem drivers.
*
* The locking rules are governed by the flags parameter:
@@ -1054,7 +1087,6 @@ do_blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
size_t size;
unsigned long addr;
unsigned blkbits = inode->i_blkbits;
- unsigned blocksize_mask = (1 << blkbits) - 1;
ssize_t retval = -EINVAL;
loff_t end = offset;
struct dio *dio;
@@ -1067,33 +1099,16 @@ do_blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
if (rw & WRITE)
rw = WRITE_ODIRECT;
- /*
- * Avoid references to bdev if not absolutely needed to give
- * the early prefetch in the caller enough time.
- */
-
- if (offset & blocksize_mask) {
- if (bdev)
- blkbits = blksize_bits(bdev_logical_block_size(bdev));
- blocksize_mask = (1 << blkbits) - 1;
- if (offset & blocksize_mask)
- goto out;
- }
+ if (!dio_aligned(offset, &blkbits, bdev))
+ goto out;
/* Check the memory alignment. Blocks cannot straddle pages */
for (seg = 0; seg < nr_segs; seg++) {
addr = (unsigned long)iov[seg].iov_base;
size = iov[seg].iov_len;
end += size;
- if (unlikely((addr & blocksize_mask) ||
- (size & blocksize_mask))) {
- if (bdev)
- blkbits = blksize_bits(
- bdev_logical_block_size(bdev));
- blocksize_mask = (1 << blkbits) - 1;
- if ((addr & blocksize_mask) || (size & blocksize_mask))
- goto out;
- }
+ if (!dio_aligned(addr|size, &blkbits, bdev))
+ goto out;
}
/* watch out for a 0 len io from a tricksy fs */
--
1.7.12.3
This patch implements the read_iter and write_iter file operations,
which allow kernel code to initiate direct I/O. This allows the loop
device to read and write directly to the server, bypassing the page
cache.
Signed-off-by: Dave Kleikamp <[email protected]>
Cc: Zach Brown <[email protected]>
Cc: Trond Myklebust <[email protected]>
Cc: [email protected]
---
fs/nfs/direct.c | 169 +++++++++++++++++++++++++++++++++----------------
fs/nfs/file.c | 48 ++++++++++----
fs/nfs/internal.h | 2 +
fs/nfs/nfs4file.c | 2 +
include/linux/nfs_fs.h | 6 +-
5 files changed, 155 insertions(+), 72 deletions(-)
diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
index 4532781..b1fda1c 100644
--- a/fs/nfs/direct.c
+++ b/fs/nfs/direct.c
@@ -90,6 +90,7 @@ struct nfs_direct_req {
int flags;
#define NFS_ODIRECT_DO_COMMIT (1) /* an unstable reply was received */
#define NFS_ODIRECT_RESCHED_WRITES (2) /* write verification failed */
+#define NFS_ODIRECT_MARK_DIRTY (4) /* mark read pages dirty */
struct nfs_writeverf verf; /* unstable write verifier */
};
@@ -131,15 +132,13 @@ ssize_t nfs_direct_IO(int rw, struct kiocb *iocb, struct iov_iter *iter,
return -EINVAL;
#else
- const struct iovec *iov = iov_iter_iovec(iter);
-
VM_BUG_ON(iocb->ki_left != PAGE_SIZE);
VM_BUG_ON(iocb->ki_nbytes != PAGE_SIZE);
if (rw == READ || rw == KERNEL_READ)
- return nfs_file_direct_read(iocb, iov, iter->nr_segs, pos,
+ return nfs_file_direct_read(iocb, iter, pos,
rw == READ ? true : false);
- return nfs_file_direct_write(iocb, iov, iter->nr_segs, pos,
+ return nfs_file_direct_write(iocb, iter, pos,
rw == WRITE ? true : false);
#endif /* CONFIG_NFS_SWAP */
}
@@ -277,7 +276,8 @@ static void nfs_direct_read_completion(struct nfs_pgio_header *hdr)
hdr->good_bytes & ~PAGE_MASK,
PAGE_SIZE);
}
- if (!PageCompound(page)) {
+ if ((dreq->flags & NFS_ODIRECT_MARK_DIRTY) &&
+ !PageCompound(page)) {
if (test_bit(NFS_IOHDR_ERROR, &hdr->flags)) {
if (bytes < hdr->good_bytes)
set_page_dirty(page);
@@ -414,10 +414,9 @@ static ssize_t nfs_direct_read_schedule_segment(struct nfs_pageio_descriptor *de
return result < 0 ? (ssize_t) result : -EFAULT;
}
-static ssize_t nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
- const struct iovec *iov,
- unsigned long nr_segs,
- loff_t pos, bool uio)
+static ssize_t nfs_direct_read_schedule(struct nfs_direct_req *dreq,
+ struct iov_iter *iter, loff_t pos,
+ bool uio)
{
struct nfs_pageio_descriptor desc;
ssize_t result = -EINVAL;
@@ -429,16 +428,47 @@ static ssize_t nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
get_dreq(dreq);
desc.pg_dreq = dreq;
- for (seg = 0; seg < nr_segs; seg++) {
- const struct iovec *vec = &iov[seg];
- result = nfs_direct_read_schedule_segment(&desc, vec, pos, uio);
- if (result < 0)
- break;
- requested_bytes += result;
- if ((size_t)result < vec->iov_len)
- break;
- pos += vec->iov_len;
- }
+ if (iov_iter_has_iovec(iter)) {
+ const struct iovec *iov = iov_iter_iovec(iter);
+ if (uio)
+ dreq->flags = NFS_ODIRECT_MARK_DIRTY;
+ for (seg = 0; seg < iter->nr_segs; seg++) {
+ const struct iovec *vec = &iov[seg];
+ result = nfs_direct_read_schedule_segment(&desc, vec,
+ pos, uio);
+ if (result < 0)
+ break;
+ requested_bytes += result;
+ if ((size_t)result < vec->iov_len)
+ break;
+ pos += vec->iov_len;
+ }
+ } else if (iov_iter_has_bvec(iter)) {
+ struct nfs_open_context *ctx = dreq->ctx;
+ struct inode *inode = ctx->dentry->d_inode;
+ struct bio_vec *bvec = iov_iter_bvec(iter);
+ for (seg = 0; seg < iter->nr_segs; seg++) {
+ struct nfs_page *req;
+ unsigned int req_len = bvec[seg].bv_len;
+ req = nfs_create_request(ctx, inode,
+ bvec[seg].bv_page,
+ bvec[seg].bv_offset, req_len);
+ if (IS_ERR(req)) {
+ result = PTR_ERR(req);
+ break;
+ }
+ req->wb_index = pos >> PAGE_SHIFT;
+ req->wb_offset = pos & ~PAGE_MASK;
+ if (!nfs_pageio_add_request(&desc, req)) {
+ result = desc.pg_error;
+ nfs_release_request(req);
+ break;
+ }
+ requested_bytes += req_len;
+ pos += req_len;
+ }
+ } else
+ BUG();
nfs_pageio_complete(&desc);
@@ -456,8 +486,8 @@ static ssize_t nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
return 0;
}
-static ssize_t nfs_direct_read(struct kiocb *iocb, const struct iovec *iov,
- unsigned long nr_segs, loff_t pos, bool uio)
+static ssize_t nfs_direct_read(struct kiocb *iocb, struct iov_iter *iter,
+ loff_t pos, bool uio)
{
ssize_t result = -ENOMEM;
struct inode *inode = iocb->ki_filp->f_mapping->host;
@@ -469,7 +499,7 @@ static ssize_t nfs_direct_read(struct kiocb *iocb, const struct iovec *iov,
goto out;
dreq->inode = inode;
- dreq->bytes_left = iov_length(iov, nr_segs);
+ dreq->bytes_left = iov_iter_count(iter);
dreq->ctx = get_nfs_open_context(nfs_file_open_context(iocb->ki_filp));
l_ctx = nfs_get_lock_context(dreq->ctx);
if (IS_ERR(l_ctx)) {
@@ -480,8 +510,8 @@ static ssize_t nfs_direct_read(struct kiocb *iocb, const struct iovec *iov,
if (!is_sync_kiocb(iocb))
dreq->iocb = iocb;
- NFS_I(inode)->read_io += iov_length(iov, nr_segs);
- result = nfs_direct_read_schedule_iovec(dreq, iov, nr_segs, pos, uio);
+ NFS_I(inode)->read_io += iov_iter_count(iter);
+ result = nfs_direct_read_schedule(dreq, iter, pos, uio);
if (!result)
result = nfs_direct_wait(dreq);
out_release:
@@ -815,10 +845,9 @@ static const struct nfs_pgio_completion_ops nfs_direct_write_completion_ops = {
.completion = nfs_direct_write_completion,
};
-static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
- const struct iovec *iov,
- unsigned long nr_segs,
- loff_t pos, bool uio)
+static ssize_t nfs_direct_write_schedule(struct nfs_direct_req *dreq,
+ struct iov_iter *iter, loff_t pos,
+ bool uio)
{
struct nfs_pageio_descriptor desc;
struct inode *inode = dreq->inode;
@@ -832,17 +861,48 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
get_dreq(dreq);
atomic_inc(&inode->i_dio_count);
- NFS_I(dreq->inode)->write_io += iov_length(iov, nr_segs);
- for (seg = 0; seg < nr_segs; seg++) {
- const struct iovec *vec = &iov[seg];
- result = nfs_direct_write_schedule_segment(&desc, vec, pos, uio);
- if (result < 0)
- break;
- requested_bytes += result;
- if ((size_t)result < vec->iov_len)
- break;
- pos += vec->iov_len;
- }
+ NFS_I(dreq->inode)->write_io += iov_iter_count(iter);
+
+ if (iov_iter_has_iovec(iter)) {
+ const struct iovec *iov = iov_iter_iovec(iter);
+ for (seg = 0; seg < iter->nr_segs; seg++) {
+ const struct iovec *vec = &iov[seg];
+ result = nfs_direct_write_schedule_segment(&desc, vec,
+ pos, uio);
+ if (result < 0)
+ break;
+ requested_bytes += result;
+ if ((size_t)result < vec->iov_len)
+ break;
+ pos += vec->iov_len;
+ }
+ } else if (iov_iter_has_bvec(iter)) {
+ struct nfs_open_context *ctx = dreq->ctx;
+ struct bio_vec *bvec = iov_iter_bvec(iter);
+ for (seg = 0; seg < iter->nr_segs; seg++) {
+ struct nfs_page *req;
+ unsigned int req_len = bvec[seg].bv_len;
+
+ req = nfs_create_request(ctx, inode, bvec[seg].bv_page,
+ bvec[seg].bv_offset, req_len);
+ if (IS_ERR(req)) {
+ result = PTR_ERR(req);
+ break;
+ }
+ nfs_lock_request(req);
+ req->wb_index = pos >> PAGE_SHIFT;
+ req->wb_offset = pos & ~PAGE_MASK;
+ if (!nfs_pageio_add_request(&desc, req)) {
+ result = desc.pg_error;
+ nfs_unlock_and_release_request(req);
+ break;
+ }
+ requested_bytes += req_len;
+ pos += req_len;
+ }
+ } else
+ BUG();
+
nfs_pageio_complete(&desc);
/*
@@ -860,9 +920,8 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
return 0;
}
-static ssize_t nfs_direct_write(struct kiocb *iocb, const struct iovec *iov,
- unsigned long nr_segs, loff_t pos,
- size_t count, bool uio)
+static ssize_t nfs_direct_write(struct kiocb *iocb, struct iov_iter *iter,
+ loff_t pos, bool uio)
{
ssize_t result = -ENOMEM;
struct inode *inode = iocb->ki_filp->f_mapping->host;
@@ -874,7 +933,7 @@ static ssize_t nfs_direct_write(struct kiocb *iocb, const struct iovec *iov,
goto out;
dreq->inode = inode;
- dreq->bytes_left = count;
+ dreq->bytes_left = iov_iter_count(iter);
dreq->ctx = get_nfs_open_context(nfs_file_open_context(iocb->ki_filp));
l_ctx = nfs_get_lock_context(dreq->ctx);
if (IS_ERR(l_ctx)) {
@@ -885,7 +944,7 @@ static ssize_t nfs_direct_write(struct kiocb *iocb, const struct iovec *iov,
if (!is_sync_kiocb(iocb))
dreq->iocb = iocb;
- result = nfs_direct_write_schedule_iovec(dreq, iov, nr_segs, pos, uio);
+ result = nfs_direct_write_schedule(dreq, iter, pos, uio);
if (!result)
result = nfs_direct_wait(dreq);
out_release:
@@ -897,8 +956,7 @@ out:
/**
* nfs_file_direct_read - file direct read operation for NFS files
* @iocb: target I/O control block
- * @iov: vector of user buffers into which to read data
- * @nr_segs: size of iov vector
+ * @iter: vector of buffers into which to read data
* @pos: byte offset in file where reading starts
*
* We use this function for direct reads instead of calling
@@ -915,15 +973,15 @@ out:
* client must read the updated atime from the server back into its
* cache.
*/
-ssize_t nfs_file_direct_read(struct kiocb *iocb, const struct iovec *iov,
- unsigned long nr_segs, loff_t pos, bool uio)
+ssize_t nfs_file_direct_read(struct kiocb *iocb, struct iov_iter *iter,
+ loff_t pos, bool uio)
{
ssize_t retval = -EINVAL;
struct file *file = iocb->ki_filp;
struct address_space *mapping = file->f_mapping;
size_t count;
- count = iov_length(iov, nr_segs);
+ count = iov_iter_count(iter);
nfs_add_stats(mapping->host, NFSIOS_DIRECTREADBYTES, count);
dfprintk(FILE, "NFS: direct read(%s/%s, %zd@%Ld)\n",
@@ -941,7 +999,7 @@ ssize_t nfs_file_direct_read(struct kiocb *iocb, const struct iovec *iov,
task_io_account_read(count);
- retval = nfs_direct_read(iocb, iov, nr_segs, pos, uio);
+ retval = nfs_direct_read(iocb, iter, pos, uio);
if (retval > 0)
iocb->ki_pos = pos + retval;
@@ -952,8 +1010,7 @@ out:
/**
* nfs_file_direct_write - file direct write operation for NFS files
* @iocb: target I/O control block
- * @iov: vector of user buffers from which to write data
- * @nr_segs: size of iov vector
+ * @iter: vector of buffers from which to write data
* @pos: byte offset in file where writing starts
*
* We use this function for direct writes instead of calling
@@ -971,15 +1028,15 @@ out:
* Note that O_APPEND is not supported for NFS direct writes, as there
* is no atomic O_APPEND write facility in the NFS protocol.
*/
-ssize_t nfs_file_direct_write(struct kiocb *iocb, const struct iovec *iov,
- unsigned long nr_segs, loff_t pos, bool uio)
+ssize_t nfs_file_direct_write(struct kiocb *iocb, struct iov_iter *iter,
+ loff_t pos, bool uio)
{
ssize_t retval = -EINVAL;
struct file *file = iocb->ki_filp;
struct address_space *mapping = file->f_mapping;
size_t count;
- count = iov_length(iov, nr_segs);
+ count = iov_iter_count(iter);
nfs_add_stats(mapping->host, NFSIOS_DIRECTWRITTENBYTES, count);
dfprintk(FILE, "NFS: direct write(%s/%s, %zd@%Ld)\n",
@@ -1004,7 +1061,7 @@ ssize_t nfs_file_direct_write(struct kiocb *iocb, const struct iovec *iov,
task_io_account_write(count);
- retval = nfs_direct_write(iocb, iov, nr_segs, pos, count, uio);
+ retval = nfs_direct_write(iocb, iter, pos, uio);
if (retval > 0) {
struct inode *inode = mapping->host;
diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 582bb88..b4bf6ef 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -172,28 +172,39 @@ nfs_file_flush(struct file *file, fl_owner_t id)
EXPORT_SYMBOL_GPL(nfs_file_flush);
ssize_t
-nfs_file_read(struct kiocb *iocb, const struct iovec *iov,
- unsigned long nr_segs, loff_t pos)
+nfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter, loff_t pos)
{
struct dentry * dentry = iocb->ki_filp->f_path.dentry;
struct inode * inode = dentry->d_inode;
ssize_t result;
if (iocb->ki_filp->f_flags & O_DIRECT)
- return nfs_file_direct_read(iocb, iov, nr_segs, pos, true);
+ return nfs_file_direct_read(iocb, iter, pos, true);
- dprintk("NFS: read(%s/%s, %lu@%lu)\n",
+ dprintk("NFS: read_iter(%s/%s, %lu@%lu)\n",
dentry->d_parent->d_name.name, dentry->d_name.name,
- (unsigned long) iov_length(iov, nr_segs), (unsigned long) pos);
+ (unsigned long) iov_iter_count(iter), (unsigned long) pos);
result = nfs_revalidate_mapping(inode, iocb->ki_filp->f_mapping);
if (!result) {
- result = generic_file_aio_read(iocb, iov, nr_segs, pos);
+ result = generic_file_read_iter(iocb, iter, pos);
if (result > 0)
nfs_add_stats(inode, NFSIOS_NORMALREADBYTES, result);
}
return result;
}
+EXPORT_SYMBOL_GPL(nfs_file_read_iter);
+
+ssize_t
+nfs_file_read(struct kiocb *iocb, const struct iovec *iov,
+ unsigned long nr_segs, loff_t pos)
+{
+ struct iov_iter iter;
+
+ iov_iter_init(&iter, iov, nr_segs, iov_length(iov, nr_segs), 0);
+
+ return nfs_file_read_iter(iocb, &iter, pos);
+}
EXPORT_SYMBOL_GPL(nfs_file_read);
ssize_t
@@ -610,19 +621,19 @@ static int nfs_need_sync_write(struct file *filp, struct inode *inode)
return 0;
}
-ssize_t nfs_file_write(struct kiocb *iocb, const struct iovec *iov,
- unsigned long nr_segs, loff_t pos)
+ssize_t nfs_file_write_iter(struct kiocb *iocb, struct iov_iter *iter,
+ loff_t pos)
{
struct dentry * dentry = iocb->ki_filp->f_path.dentry;
struct inode * inode = dentry->d_inode;
unsigned long written = 0;
ssize_t result;
- size_t count = iov_length(iov, nr_segs);
+ size_t count = iov_iter_count(iter);
if (iocb->ki_filp->f_flags & O_DIRECT)
- return nfs_file_direct_write(iocb, iov, nr_segs, pos, true);
+ return nfs_file_direct_write(iocb, iter, pos, true);
- dprintk("NFS: write(%s/%s, %lu@%Ld)\n",
+ dprintk("NFS: write_iter(%s/%s, %lu@%lld)\n",
dentry->d_parent->d_name.name, dentry->d_name.name,
(unsigned long) count, (long long) pos);
@@ -642,7 +653,7 @@ ssize_t nfs_file_write(struct kiocb *iocb, const struct iovec *iov,
if (!count)
goto out;
- result = generic_file_aio_write(iocb, iov, nr_segs, pos);
+ result = generic_file_write_iter(iocb, iter, pos);
if (result > 0)
written = result;
@@ -661,6 +672,17 @@ out_swapfile:
printk(KERN_INFO "NFS: attempt to write to active swap file!\n");
goto out;
}
+EXPORT_SYMBOL_GPL(nfs_file_write_iter);
+
+ssize_t nfs_file_write(struct kiocb *iocb, const struct iovec *iov,
+ unsigned long nr_segs, loff_t pos)
+{
+ struct iov_iter iter;
+
+ iov_iter_init(&iter, iov, nr_segs, iov_length(iov, nr_segs), 0);
+
+ return nfs_file_write_iter(iocb, &iter, pos);
+}
EXPORT_SYMBOL_GPL(nfs_file_write);
ssize_t nfs_file_splice_write(struct pipe_inode_info *pipe,
@@ -914,6 +936,8 @@ const struct file_operations nfs_file_operations = {
.write = do_sync_write,
.aio_read = nfs_file_read,
.aio_write = nfs_file_write,
+ .read_iter = nfs_file_read_iter,
+ .write_iter = nfs_file_write_iter,
.mmap = nfs_file_mmap,
.open = nfs_file_open,
.flush = nfs_file_flush,
diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
index 59b133c..8db3b11 100644
--- a/fs/nfs/internal.h
+++ b/fs/nfs/internal.h
@@ -302,10 +302,12 @@ int nfs_file_fsync_commit(struct file *, loff_t, loff_t, int);
loff_t nfs_file_llseek(struct file *, loff_t, int);
int nfs_file_flush(struct file *, fl_owner_t);
ssize_t nfs_file_read(struct kiocb *, const struct iovec *, unsigned long, loff_t);
+ssize_t nfs_file_read_iter(struct kiocb *, struct iov_iter *, loff_t);
ssize_t nfs_file_splice_read(struct file *, loff_t *, struct pipe_inode_info *,
size_t, unsigned int);
int nfs_file_mmap(struct file *, struct vm_area_struct *);
ssize_t nfs_file_write(struct kiocb *, const struct iovec *, unsigned long, loff_t);
+ssize_t nfs_file_write_iter(struct kiocb *, struct iov_iter *, loff_t);
int nfs_file_release(struct inode *, struct file *);
int nfs_lock(struct file *, int, struct file_lock *);
int nfs_flock(struct file *, int, struct file_lock *);
diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
index afddd66..195188e 100644
--- a/fs/nfs/nfs4file.c
+++ b/fs/nfs/nfs4file.c
@@ -123,6 +123,8 @@ const struct file_operations nfs4_file_operations = {
.write = do_sync_write,
.aio_read = nfs_file_read,
.aio_write = nfs_file_write,
+ .read_iter = nfs_file_read_iter,
+ .write_iter = nfs_file_write_iter,
.mmap = nfs_file_mmap,
.open = nfs4_file_open,
.flush = nfs_file_flush,
diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
index 4913e3c..9f8e8a9 100644
--- a/include/linux/nfs_fs.h
+++ b/include/linux/nfs_fs.h
@@ -445,11 +445,9 @@ extern int nfs3_removexattr (struct dentry *, const char *name);
* linux/fs/nfs/direct.c
*/
extern ssize_t nfs_direct_IO(int, struct kiocb *, struct iov_iter *, loff_t);
-extern ssize_t nfs_file_direct_read(struct kiocb *iocb,
- const struct iovec *iov, unsigned long nr_segs,
+extern ssize_t nfs_file_direct_read(struct kiocb *iocb, struct iov_iter *iter,
loff_t pos, bool uio);
-extern ssize_t nfs_file_direct_write(struct kiocb *iocb,
- const struct iovec *iov, unsigned long nr_segs,
+extern ssize_t nfs_file_direct_write(struct kiocb *iocb, struct iov_iter *iter,
loff_t pos, bool uio);
/*
--
1.7.12.3
From: Zach Brown <[email protected]>
This moves the current iov_iter functions behind an ops struct of
function pointers. The current iov_iter functions all work with memory
which is specified by iovec arrays of user space pointers.
This patch is part of a series that lets us specify memory with bio_vec
arrays of page pointers. By moving to an iov_iter operation struct we
can add that support in later patches in this series by adding another
set of function pointers.
I only came to this after initially trying to teach the current
iov_iter functions about bio_vecs by introducing conditional branches
that dealt with bio_vecs in all the functions. It wasn't pretty. This
approach seems to be the lesser evil.
Signed-off-by: Dave Kleikamp <[email protected]>
Cc: Zach Brown <[email protected]>
---
fs/cifs/file.c | 4 ++--
include/linux/fs.h | 70 ++++++++++++++++++++++++++++++++++++++++++++----------
mm/iov-iter.c | 66 ++++++++++++++++++++++++++++----------------------
3 files changed, 97 insertions(+), 43 deletions(-)
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index edb25b4..8b5e2aa 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -2654,8 +2654,8 @@ cifs_readdata_to_iov(struct cifs_readdata *rdata, const struct iovec *iov,
/* go while there's data to be copied and no errors */
if (copy && !rc) {
pdata = kmap(page);
- rc = memcpy_toiovecend(ii.iov, pdata, ii.iov_offset,
- (int)copy);
+ rc = memcpy_toiovecend(iov_iter_iovec(&ii), pdata,
+ ii.iov_offset, (int)copy);
kunmap(page);
if (!rc) {
*copied += copy;
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 36f9291..f7ee6d4 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -221,29 +221,68 @@ struct address_space;
struct writeback_control;
struct iov_iter {
- const struct iovec *iov;
+ struct iov_iter_ops *ops;
+ unsigned long data;
unsigned long nr_segs;
size_t iov_offset;
size_t count;
};
-size_t iov_iter_copy_to_user_atomic(struct page *page,
- struct iov_iter *i, unsigned long offset, size_t bytes);
-size_t iov_iter_copy_to_user(struct page *page,
- struct iov_iter *i, unsigned long offset, size_t bytes);
-size_t iov_iter_copy_from_user_atomic(struct page *page,
- struct iov_iter *i, unsigned long offset, size_t bytes);
-size_t iov_iter_copy_from_user(struct page *page,
- struct iov_iter *i, unsigned long offset, size_t bytes);
-void iov_iter_advance(struct iov_iter *i, size_t bytes);
-int iov_iter_fault_in_readable(struct iov_iter *i, size_t bytes);
-size_t iov_iter_single_seg_count(struct iov_iter *i);
+struct iov_iter_ops {
+ size_t (*ii_copy_to_user_atomic)(struct page *, struct iov_iter *,
+ unsigned long, size_t);
+ size_t (*ii_copy_to_user)(struct page *, struct iov_iter *,
+ unsigned long, size_t);
+ size_t (*ii_copy_from_user_atomic)(struct page *, struct iov_iter *,
+ unsigned long, size_t);
+ size_t (*ii_copy_from_user)(struct page *, struct iov_iter *,
+ unsigned long, size_t);
+ void (*ii_advance)(struct iov_iter *, size_t);
+ int (*ii_fault_in_readable)(struct iov_iter *, size_t);
+ size_t (*ii_single_seg_count)(struct iov_iter *);
+};
+
+static inline size_t iov_iter_copy_to_user_atomic(struct page *page,
+ struct iov_iter *i, unsigned long offset, size_t bytes)
+{
+ return i->ops->ii_copy_to_user_atomic(page, i, offset, bytes);
+}
+static inline size_t iov_iter_copy_to_user(struct page *page,
+ struct iov_iter *i, unsigned long offset, size_t bytes)
+{
+ return i->ops->ii_copy_to_user(page, i, offset, bytes);
+}
+static inline size_t iov_iter_copy_from_user_atomic(struct page *page,
+ struct iov_iter *i, unsigned long offset, size_t bytes)
+{
+ return i->ops->ii_copy_from_user_atomic(page, i, offset, bytes);
+}
+static inline size_t iov_iter_copy_from_user(struct page *page,
+ struct iov_iter *i, unsigned long offset, size_t bytes)
+{
+ return i->ops->ii_copy_from_user(page, i, offset, bytes);
+}
+static inline void iov_iter_advance(struct iov_iter *i, size_t bytes)
+{
+ return i->ops->ii_advance(i, bytes);
+}
+static inline int iov_iter_fault_in_readable(struct iov_iter *i, size_t bytes)
+{
+ return i->ops->ii_fault_in_readable(i, bytes);
+}
+static inline size_t iov_iter_single_seg_count(struct iov_iter *i)
+{
+ return i->ops->ii_single_seg_count(i);
+}
+
+extern struct iov_iter_ops ii_iovec_ops;
static inline void iov_iter_init(struct iov_iter *i,
const struct iovec *iov, unsigned long nr_segs,
size_t count, size_t written)
{
- i->iov = iov;
+ i->ops = &ii_iovec_ops;
+ i->data = (unsigned long)iov;
i->nr_segs = nr_segs;
i->iov_offset = 0;
i->count = count + written;
@@ -251,6 +290,11 @@ static inline void iov_iter_init(struct iov_iter *i,
iov_iter_advance(i, written);
}
+static inline struct iovec *iov_iter_iovec(struct iov_iter *i)
+{
+ return (struct iovec *)i->data;
+}
+
static inline size_t iov_iter_count(struct iov_iter *i)
{
return i->count;
diff --git a/mm/iov-iter.c b/mm/iov-iter.c
index d68b67f..bae1553 100644
--- a/mm/iov-iter.c
+++ b/mm/iov-iter.c
@@ -36,9 +36,10 @@ static size_t __iovec_copy_to_user(char *vaddr, const struct iovec *iov,
* were sucessfully copied. If a fault is encountered then return the number of
* bytes which were copied.
*/
-size_t iov_iter_copy_to_user_atomic(struct page *page,
+static size_t ii_iovec_copy_to_user_atomic(struct page *page,
struct iov_iter *i, unsigned long offset, size_t bytes)
{
+ struct iovec *iov = (struct iovec *)i->data;
char *kaddr;
size_t copied;
@@ -46,45 +47,44 @@ size_t iov_iter_copy_to_user_atomic(struct page *page,
kaddr = kmap_atomic(page);
if (likely(i->nr_segs == 1)) {
int left;
- char __user *buf = i->iov->iov_base + i->iov_offset;
+ char __user *buf = iov->iov_base + i->iov_offset;
left = __copy_to_user_inatomic(buf, kaddr + offset, bytes);
copied = bytes - left;
} else {
- copied = __iovec_copy_to_user(kaddr + offset, i->iov,
+ copied = __iovec_copy_to_user(kaddr + offset, iov,
i->iov_offset, bytes, 1);
}
kunmap_atomic(kaddr);
return copied;
}
-EXPORT_SYMBOL(iov_iter_copy_to_user_atomic);
/*
* This has the same sideeffects and return value as
- * iov_iter_copy_to_user_atomic().
+ * ii_iovec_copy_to_user_atomic().
* The difference is that it attempts to resolve faults.
* Page must not be locked.
*/
-size_t iov_iter_copy_to_user(struct page *page,
+static size_t ii_iovec_copy_to_user(struct page *page,
struct iov_iter *i, unsigned long offset, size_t bytes)
{
+ struct iovec *iov = (struct iovec *)i->data;
char *kaddr;
size_t copied;
kaddr = kmap(page);
if (likely(i->nr_segs == 1)) {
int left;
- char __user *buf = i->iov->iov_base + i->iov_offset;
+ char __user *buf = iov->iov_base + i->iov_offset;
left = copy_to_user(buf, kaddr + offset, bytes);
copied = bytes - left;
} else {
- copied = __iovec_copy_to_user(kaddr + offset, i->iov,
+ copied = __iovec_copy_to_user(kaddr + offset, iov,
i->iov_offset, bytes, 0);
}
kunmap(page);
return copied;
}
-EXPORT_SYMBOL(iov_iter_copy_to_user);
static size_t __iovec_copy_from_user(char *vaddr, const struct iovec *iov,
size_t base, size_t bytes, int atomic)
@@ -116,9 +116,10 @@ static size_t __iovec_copy_from_user(char *vaddr, const struct iovec *iov,
* were successfully copied. If a fault is encountered then return the number
* of bytes which were copied.
*/
-size_t iov_iter_copy_from_user_atomic(struct page *page,
+static size_t ii_iovec_copy_from_user_atomic(struct page *page,
struct iov_iter *i, unsigned long offset, size_t bytes)
{
+ struct iovec *iov = (struct iovec *)i->data;
char *kaddr;
size_t copied;
@@ -126,11 +127,11 @@ size_t iov_iter_copy_from_user_atomic(struct page *page,
kaddr = kmap_atomic(page);
if (likely(i->nr_segs == 1)) {
int left;
- char __user *buf = i->iov->iov_base + i->iov_offset;
+ char __user *buf = iov->iov_base + i->iov_offset;
left = __copy_from_user_inatomic(kaddr + offset, buf, bytes);
copied = bytes - left;
} else {
- copied = __iovec_copy_from_user(kaddr + offset, i->iov,
+ copied = __iovec_copy_from_user(kaddr + offset, iov,
i->iov_offset, bytes, 1);
}
kunmap_atomic(kaddr);
@@ -141,32 +142,32 @@ EXPORT_SYMBOL(iov_iter_copy_from_user_atomic);
/*
* This has the same sideeffects and return value as
- * iov_iter_copy_from_user_atomic().
+ * ii_iovec_copy_from_user_atomic().
* The difference is that it attempts to resolve faults.
* Page must not be locked.
*/
-size_t iov_iter_copy_from_user(struct page *page,
+static size_t ii_iovec_copy_from_user(struct page *page,
struct iov_iter *i, unsigned long offset, size_t bytes)
{
+ struct iovec *iov = (struct iovec *)i->data;
char *kaddr;
size_t copied;
kaddr = kmap(page);
if (likely(i->nr_segs == 1)) {
int left;
- char __user *buf = i->iov->iov_base + i->iov_offset;
+ char __user *buf = iov->iov_base + i->iov_offset;
left = __copy_from_user(kaddr + offset, buf, bytes);
copied = bytes - left;
} else {
- copied = __iovec_copy_from_user(kaddr + offset, i->iov,
+ copied = __iovec_copy_from_user(kaddr + offset, iov,
i->iov_offset, bytes, 0);
}
kunmap(page);
return copied;
}
-EXPORT_SYMBOL(iov_iter_copy_from_user);
-void iov_iter_advance(struct iov_iter *i, size_t bytes)
+static void ii_iovec_advance(struct iov_iter *i, size_t bytes)
{
BUG_ON(i->count < bytes);
@@ -174,7 +175,7 @@ void iov_iter_advance(struct iov_iter *i, size_t bytes)
i->iov_offset += bytes;
i->count -= bytes;
} else {
- const struct iovec *iov = i->iov;
+ struct iovec *iov = (struct iovec *)i->data;
size_t base = i->iov_offset;
unsigned long nr_segs = i->nr_segs;
@@ -196,12 +197,11 @@ void iov_iter_advance(struct iov_iter *i, size_t bytes)
base = 0;
}
}
- i->iov = iov;
+ i->data = (unsigned long)iov;
i->iov_offset = base;
i->nr_segs = nr_segs;
}
}
-EXPORT_SYMBOL(iov_iter_advance);
/*
* Fault in the first iovec of the given iov_iter, to a maximum length
@@ -212,23 +212,33 @@ EXPORT_SYMBOL(iov_iter_advance);
* would be possible (callers must not rely on the fact that _only_ the
* first iovec will be faulted with the current implementation).
*/
-int iov_iter_fault_in_readable(struct iov_iter *i, size_t bytes)
+static int ii_iovec_fault_in_readable(struct iov_iter *i, size_t bytes)
{
- char __user *buf = i->iov->iov_base + i->iov_offset;
- bytes = min(bytes, i->iov->iov_len - i->iov_offset);
+ struct iovec *iov = (struct iovec *)i->data;
+ char __user *buf = iov->iov_base + i->iov_offset;
+ bytes = min(bytes, iov->iov_len - i->iov_offset);
return fault_in_pages_readable(buf, bytes);
}
-EXPORT_SYMBOL(iov_iter_fault_in_readable);
/*
* Return the count of just the current iov_iter segment.
*/
-size_t iov_iter_single_seg_count(struct iov_iter *i)
+static size_t ii_iovec_single_seg_count(struct iov_iter *i)
{
- const struct iovec *iov = i->iov;
+ struct iovec *iov = (struct iovec *)i->data;
if (i->nr_segs == 1)
return i->count;
else
return min(i->count, iov->iov_len - i->iov_offset);
}
-EXPORT_SYMBOL(iov_iter_single_seg_count);
+
+struct iov_iter_ops ii_iovec_ops = {
+ .ii_copy_to_user_atomic = ii_iovec_copy_to_user_atomic,
+ .ii_copy_to_user = ii_iovec_copy_to_user,
+ .ii_copy_from_user_atomic = ii_iovec_copy_from_user_atomic,
+ .ii_copy_from_user = ii_iovec_copy_from_user,
+ .ii_advance = ii_iovec_advance,
+ .ii_fault_in_readable = ii_iovec_fault_in_readable,
+ .ii_single_seg_count = ii_iovec_single_seg_count,
+};
+EXPORT_SYMBOL(ii_iovec_ops);
--
1.7.12.3
The trick here is to initialize the dio state so that do_direct_IO()
consumes the pages we provide and never tries to map user pages. This
is done by making sure that final_block_in_request covers the page that
we set in the dio. do_direct_IO() will return before running out of
pages.
The caller is responsible for dirtying these pages, if needed. We add
a flag to the dio struct that makes sure we only dirty pages when
we're operating on iovecs of user addresses.
Signed-off-by: Dave Kleikamp <[email protected]>
Cc: Zach Brown <[email protected]>
---
fs/direct-io.c | 185 +++++++++++++++++++++++++++++++++++++++++----------------
1 file changed, 133 insertions(+), 52 deletions(-)
diff --git a/fs/direct-io.c b/fs/direct-io.c
index e2733e4..8417a3f 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -126,6 +126,7 @@ struct dio {
spinlock_t bio_lock; /* protects BIO fields below */
int page_errors; /* errno from get_user_pages() */
int is_async; /* is IO async ? */
+ int should_dirty; /* should we mark read pages dirty? */
int io_error; /* IO error in completion path */
unsigned long refcount; /* direct_io_worker() and bios */
struct bio *bio_list; /* singly linked via bi_private */
@@ -376,7 +377,7 @@ static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio)
dio->refcount++;
spin_unlock_irqrestore(&dio->bio_lock, flags);
- if (dio->is_async && dio->rw == READ)
+ if (dio->is_async && dio->rw == READ && dio->should_dirty)
bio_set_pages_dirty(bio);
if (sdio->submit_io)
@@ -447,13 +448,14 @@ static int dio_bio_complete(struct dio *dio, struct bio *bio)
if (!uptodate)
dio->io_error = -EIO;
- if (dio->is_async && dio->rw == READ) {
+ if (dio->is_async && dio->rw == READ && dio->should_dirty) {
bio_check_pages_dirty(bio); /* transfers ownership */
} else {
for (page_no = 0; page_no < bio->bi_vcnt; page_no++) {
struct page *page = bvec[page_no].bv_page;
- if (dio->rw == READ && !PageCompound(page))
+ if (dio->rw == READ && !PageCompound(page) &&
+ dio->should_dirty)
set_page_dirty_lock(page);
page_cache_release(page);
}
@@ -1052,6 +1054,101 @@ static int dio_aligned(unsigned long offset, unsigned *blkbits,
return 1;
}
+static ssize_t direct_IO_iovec(const struct iovec *iov, unsigned long nr_segs,
+ struct dio *dio, struct dio_submit *sdio,
+ unsigned blkbits, struct buffer_head *map_bh)
+{
+ size_t bytes;
+ ssize_t retval = 0;
+ int seg;
+ unsigned long user_addr;
+
+ for (seg = 0; seg < nr_segs; seg++) {
+ user_addr = (unsigned long)iov[seg].iov_base;
+ sdio->pages_in_io +=
+ ((user_addr + iov[seg].iov_len + PAGE_SIZE-1) /
+ PAGE_SIZE - user_addr / PAGE_SIZE);
+ }
+
+ dio->should_dirty = 1;
+
+ for (seg = 0; seg < nr_segs; seg++) {
+ user_addr = (unsigned long)iov[seg].iov_base;
+ sdio->size += bytes = iov[seg].iov_len;
+
+ /* Index into the first page of the first block */
+ sdio->first_block_in_page = (user_addr & ~PAGE_MASK) >> blkbits;
+ sdio->final_block_in_request = sdio->block_in_file +
+ (bytes >> blkbits);
+ /* Page fetching state */
+ sdio->head = 0;
+ sdio->tail = 0;
+ sdio->curr_page = 0;
+
+ sdio->total_pages = 0;
+ if (user_addr & (PAGE_SIZE-1)) {
+ sdio->total_pages++;
+ bytes -= PAGE_SIZE - (user_addr & (PAGE_SIZE - 1));
+ }
+ sdio->total_pages += (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
+ sdio->curr_user_address = user_addr;
+
+ retval = do_direct_IO(dio, sdio, map_bh);
+
+ dio->result += iov[seg].iov_len -
+ ((sdio->final_block_in_request - sdio->block_in_file) <<
+ blkbits);
+
+ if (retval) {
+ dio_cleanup(dio, sdio);
+ break;
+ }
+ } /* end iovec loop */
+
+ return retval;
+}
+
+static ssize_t direct_IO_bvec(struct bio_vec *bvec, unsigned long nr_segs,
+ struct dio *dio, struct dio_submit *sdio,
+ unsigned blkbits, struct buffer_head *map_bh)
+{
+ ssize_t retval = 0;
+ int seg;
+
+ sdio->pages_in_io += nr_segs;
+
+ for (seg = 0; seg < nr_segs; seg++) {
+ sdio->size += bvec[seg].bv_len;
+
+ /* Index into the first page of the first block */
+ sdio->first_block_in_page = bvec[seg].bv_offset >> blkbits;
+ sdio->final_block_in_request = sdio->block_in_file +
+ (bvec[seg].bv_len >> blkbits);
+ /* Page fetching state */
+ sdio->curr_page = 0;
+ page_cache_get(bvec[seg].bv_page);
+ dio->pages[0] = bvec[seg].bv_page;
+ sdio->head = 0;
+ sdio->tail = 1;
+
+ sdio->total_pages = 1;
+ sdio->curr_user_address = 0;
+
+ retval = do_direct_IO(dio, sdio, map_bh);
+
+ dio->result += bvec[seg].bv_len -
+ ((sdio->final_block_in_request - sdio->block_in_file) <<
+ blkbits);
+
+ if (retval) {
+ dio_cleanup(dio, sdio);
+ break;
+ }
+ }
+
+ return retval;
+}
+
/*
* This is a library function for use by filesystem drivers.
*
@@ -1091,11 +1188,8 @@ do_blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
loff_t end = offset;
struct dio *dio;
struct dio_submit sdio = { 0, };
- unsigned long user_addr;
- size_t bytes;
struct buffer_head map_bh = { 0, };
struct blk_plug plug;
- const struct iovec *iov = iov_iter_iovec(iter);
unsigned long nr_segs = iter->nr_segs;
if (rw & WRITE)
@@ -1105,13 +1199,33 @@ do_blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
goto out;
/* Check the memory alignment. Blocks cannot straddle pages */
- for (seg = 0; seg < nr_segs; seg++) {
- addr = (unsigned long)iov[seg].iov_base;
- size = iov[seg].iov_len;
- end += size;
- if (!dio_aligned(addr|size, &blkbits, bdev))
- goto out;
- }
+ if (iov_iter_has_iovec(iter)) {
+ const struct iovec *iov = iov_iter_iovec(iter);
+
+ for (seg = 0; seg < nr_segs; seg++) {
+ addr = (unsigned long)iov[seg].iov_base;
+ size = iov[seg].iov_len;
+ end += size;
+ if (!dio_aligned(addr|size, &blkbits, bdev))
+ goto out;
+ }
+ } else if (iov_iter_has_bvec(iter)) {
+ /*
+ * Is this necessary, or can we trust the in-kernel
+ * caller? Can we replace this with
+ * end += iov_iter_count(iter); ?
+ */
+ struct bio_vec *bvec = iov_iter_bvec(iter);
+
+ for (seg = 0; seg < nr_segs; seg++) {
+ addr = bvec[seg].bv_offset;
+ size = bvec[seg].bv_len;
+ end += size;
+ if (!dio_aligned(addr|size, &blkbits, bdev))
+ goto out;
+ }
+ } else
+ BUG();
/* watch out for a 0 len io from a tricksy fs */
if (rw == READ && end == offset)
@@ -1188,47 +1302,14 @@ do_blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
if (unlikely(sdio.blkfactor))
sdio.pages_in_io = 2;
- for (seg = 0; seg < nr_segs; seg++) {
- user_addr = (unsigned long)iov[seg].iov_base;
- sdio.pages_in_io +=
- ((user_addr + iov[seg].iov_len + PAGE_SIZE-1) /
- PAGE_SIZE - user_addr / PAGE_SIZE);
- }
-
blk_start_plug(&plug);
- for (seg = 0; seg < nr_segs; seg++) {
- user_addr = (unsigned long)iov[seg].iov_base;
- sdio.size += bytes = iov[seg].iov_len;
-
- /* Index into the first page of the first block */
- sdio.first_block_in_page = (user_addr & ~PAGE_MASK) >> blkbits;
- sdio.final_block_in_request = sdio.block_in_file +
- (bytes >> blkbits);
- /* Page fetching state */
- sdio.head = 0;
- sdio.tail = 0;
- sdio.curr_page = 0;
-
- sdio.total_pages = 0;
- if (user_addr & (PAGE_SIZE-1)) {
- sdio.total_pages++;
- bytes -= PAGE_SIZE - (user_addr & (PAGE_SIZE - 1));
- }
- sdio.total_pages += (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
- sdio.curr_user_address = user_addr;
-
- retval = do_direct_IO(dio, &sdio, &map_bh);
-
- dio->result += iov[seg].iov_len -
- ((sdio.final_block_in_request - sdio.block_in_file) <<
- blkbits);
-
- if (retval) {
- dio_cleanup(dio, &sdio);
- break;
- }
- } /* end iovec loop */
+ if (iov_iter_has_iovec(iter))
+ retval = direct_IO_iovec(iov_iter_iovec(iter), nr_segs, dio,
+ &sdio, blkbits, &map_bh);
+ else
+ retval = direct_IO_bvec(iov_iter_bvec(iter), nr_segs, dio,
+ &sdio, blkbits, &map_bh);
if (retval == -ENOTBLK) {
/*
--
1.7.12.3
From: Zach Brown <[email protected]>
This adds an interface that lets kernel callers submit aio iocbs without
going through the user space syscalls. This lets kernel callers avoid
the management limits and overhead of the context. It will also let us
integrate aio operations with other kernel APIs that the user space
interface doesn't have access to.
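A kernel caller would use the interface roughly as follows. This is a
non-compilable sketch against the functions this patch adds; the opcode
choice, callback body, and error handling are illustrative only:

```c
/* Sketch only: aio_kernel_alloc/init/submit come from this patch;
 * IOCB_CMD_PREAD is the existing aio opcode for a read. */
static void my_complete(u64 user_data, long res)
{
	/* res is the op result; user_data is whatever we stashed below */
}

static int submit_read(struct file *filp, void *buf, size_t len, loff_t off)
{
	struct kiocb *iocb = aio_kernel_alloc(GFP_NOIO);

	if (!iocb)
		return -ENOMEM;

	aio_kernel_init_rw(iocb, filp, IOCB_CMD_PREAD, buf, len, off);
	aio_kernel_init_callback(iocb, my_complete, (u64)(unsigned long)buf);

	/* On error the iocb has already been freed; on success,
	 * my_complete() may run before aio_kernel_submit() returns. */
	return aio_kernel_submit(iocb);
}
```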
Signed-off-by: Dave Kleikamp <[email protected]>
Cc: Zach Brown <[email protected]>
---
fs/aio.c | 92 +++++++++++++++++++++++++++++++++++++++++++++++++++++
include/linux/aio.h | 15 +++++++++
2 files changed, 107 insertions(+)
diff --git a/fs/aio.c b/fs/aio.c
index 71f613c..9a1a6fc 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -912,6 +912,10 @@ int aio_complete(struct kiocb *iocb, long res, long res2)
iocb->ki_users = 0;
wake_up_process(iocb->ki_obj.tsk);
return 1;
+ } else if (is_kernel_kiocb(iocb)) {
+ iocb->ki_obj.complete(iocb->ki_user_data, res);
+ aio_kernel_free(iocb);
+ return 0;
}
info = &ctx->ring_info;
@@ -1504,6 +1508,94 @@ static ssize_t aio_setup_iocb(struct kiocb *kiocb, bool compat)
return 0;
}
+ /*
+ * This allocates an iocb that will be used to submit and track completion of
+ * an IO that is issued from kernel space.
+ *
+ * The caller is expected to call the appropriate aio_kernel_init_() functions
+ * and then call aio_kernel_submit(). From that point forward progress is
+ * guaranteed by the file system aio method. Eventually the caller's
+ * completion callback will be called.
+ *
+ * These iocbs are special. They don't have a context, we don't limit the
+ * number pending, they can't be canceled, and can't be retried. In the short
+ * term callers need to be careful not to call operations which might retry by
+ * only calling new ops which never add retry support. In the long term
+ * retry-based AIO should be removed.
+ */
+struct kiocb *aio_kernel_alloc(gfp_t gfp)
+{
+ struct kiocb *iocb = kzalloc(sizeof(struct kiocb), gfp);
+ if (iocb)
+ iocb->ki_key = KIOCB_KERNEL_KEY;
+ return iocb;
+}
+EXPORT_SYMBOL_GPL(aio_kernel_alloc);
+
+void aio_kernel_free(struct kiocb *iocb)
+{
+ kfree(iocb);
+}
+EXPORT_SYMBOL_GPL(aio_kernel_free);
+
+/*
+ * ptr and count can be a buff and bytes or an iov and segs.
+ */
+void aio_kernel_init_rw(struct kiocb *iocb, struct file *filp,
+ unsigned short op, void *ptr, size_t nr, loff_t off)
+{
+ iocb->ki_filp = filp;
+ iocb->ki_opcode = op;
+ iocb->ki_buf = (char __user *)(unsigned long)ptr;
+ iocb->ki_left = nr;
+ iocb->ki_nbytes = nr;
+ iocb->ki_pos = off;
+}
+EXPORT_SYMBOL_GPL(aio_kernel_init_rw);
+
+void aio_kernel_init_callback(struct kiocb *iocb,
+ void (*complete)(u64 user_data, long res),
+ u64 user_data)
+{
+ iocb->ki_obj.complete = complete;
+ iocb->ki_user_data = user_data;
+}
+EXPORT_SYMBOL_GPL(aio_kernel_init_callback);
+
+/*
+ * The iocb is our responsibility once this is called. The caller must not
+ * reference it. This comes from aio_setup_iocb() modifying the iocb.
+ *
+ * Callers must be prepared for their iocb completion callback to be called the
+ * moment they enter this function. The completion callback may be called from
+ * any context.
+ *
+ * Returns: 0: the iocb completion callback will be called with the op result
+ * negative errno: the operation was not submitted and the iocb was freed
+ */
+int aio_kernel_submit(struct kiocb *iocb)
+{
+ int ret;
+
+ BUG_ON(!is_kernel_kiocb(iocb));
+ BUG_ON(!iocb->ki_obj.complete);
+ BUG_ON(!iocb->ki_filp);
+
+ ret = aio_setup_iocb(iocb, 0);
+ if (ret) {
+ aio_kernel_free(iocb);
+ return ret;
+ }
+
+ ret = iocb->ki_retry(iocb);
+ BUG_ON(ret == -EIOCBRETRY);
+ if (ret != -EIOCBQUEUED)
+ aio_complete(iocb, ret, 0);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(aio_kernel_submit);
+
static int io_submit_one(struct kioctx *ctx, struct iocb __user *user_iocb,
struct iocb *iocb, struct kiocb_batch *batch,
bool compat)
diff --git a/include/linux/aio.h b/include/linux/aio.h
index 31ff6db..f9e0292 100644
--- a/include/linux/aio.h
+++ b/include/linux/aio.h
@@ -24,6 +24,7 @@ struct kioctx;
#define KIOCB_C_COMPLETE 0x02
#define KIOCB_SYNC_KEY (~0U)
+#define KIOCB_KERNEL_KEY (~1U)
/* ki_flags bits */
/*
@@ -99,6 +100,7 @@ struct kiocb {
union {
void __user *user;
struct task_struct *tsk;
+ void (*complete)(u64 user_data, long res);
} ki_obj;
__u64 ki_user_data; /* user's data for completion */
@@ -131,6 +133,11 @@ static inline bool is_sync_kiocb(struct kiocb *kiocb)
return kiocb->ki_key == KIOCB_SYNC_KEY;
}
+static inline bool is_kernel_kiocb(struct kiocb *kiocb)
+{
+ return kiocb->ki_key == KIOCB_KERNEL_KEY;
+}
+
static inline void init_sync_kiocb(struct kiocb *kiocb, struct file *filp)
{
*kiocb = (struct kiocb) {
@@ -218,6 +225,14 @@ struct mm_struct;
extern void exit_aio(struct mm_struct *mm);
extern long do_io_submit(aio_context_t ctx_id, long nr,
struct iocb __user *__user *iocbpp, bool compat);
+struct kiocb *aio_kernel_alloc(gfp_t gfp);
+void aio_kernel_free(struct kiocb *iocb);
+void aio_kernel_init_rw(struct kiocb *iocb, struct file *filp,
+ unsigned short op, void *ptr, size_t nr, loff_t off);
+void aio_kernel_init_callback(struct kiocb *iocb,
+ void (*complete)(u64 user_data, long res),
+ u64 user_data);
+int aio_kernel_submit(struct kiocb *iocb);
#else
static inline ssize_t wait_on_sync_kiocb(struct kiocb *iocb) { return 0; }
static inline int aio_put_req(struct kiocb *iocb) { return 0; }
--
1.7.12.3
From: Asias He <[email protected]>
Use generic_file_read_iter for read_iter. Add blkdev_write_iter which is
based on blkdev_aio_write for write_iter.
Signed-off-by: Asias He <[email protected]>
Signed-off-by: Dave Kleikamp <[email protected]>
---
fs/block_dev.c | 36 ++++++++++++++++++++++++++++++++++++
1 file changed, 36 insertions(+)
diff --git a/fs/block_dev.c b/fs/block_dev.c
index ba3ed89..1a3aef9 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1593,6 +1593,21 @@ static long block_ioctl(struct file *file, unsigned cmd, unsigned long arg)
return blkdev_ioctl(bdev, mode, cmd, arg);
}
+static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *iter,
+ loff_t pos)
+{
+ ssize_t ret;
+ struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
+
+ percpu_down_read(&bdev->bd_block_size_semaphore);
+
+ ret = generic_file_read_iter(iocb, iter, pos);
+
+ percpu_up_read(&bdev->bd_block_size_semaphore);
+
+ return ret;
+}
+
ssize_t blkdev_aio_read(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs, loff_t pos)
{
@@ -1609,6 +1624,25 @@ ssize_t blkdev_aio_read(struct kiocb *iocb, const struct iovec *iov,
}
EXPORT_SYMBOL_GPL(blkdev_aio_read);
+ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *iter, loff_t pos)
+{
+ struct file *file = iocb->ki_filp;
+ ssize_t ret;
+
+ BUG_ON(iocb->ki_pos != pos);
+
+ ret = __generic_file_write_iter(iocb, iter, &pos);
+ if (ret > 0 || ret == -EIOCBQUEUED) {
+ ssize_t err;
+
+ err = generic_write_sync(file, pos, ret);
+ if (err < 0 && ret > 0)
+ ret = err;
+ }
+ return ret;
+}
+EXPORT_SYMBOL_GPL(blkdev_write_iter);
+
/*
* Write data to the block device. Only intended for the block device itself
* and the raw driver which basically is a fake block device.
@@ -1693,6 +1727,8 @@ const struct file_operations def_blk_fops = {
.write = do_sync_write,
.aio_read = blkdev_aio_read,
.aio_write = blkdev_aio_write,
+ .read_iter = blkdev_read_iter,
+ .write_iter = blkdev_write_iter,
.mmap = blkdev_mmap,
.fsync = blkdev_fsync,
.unlocked_ioctl = block_ioctl,
--
1.7.12.3
From: Zach Brown <[email protected]>
Signed-off-by: Dave Kleikamp <[email protected]>
Cc: Zach Brown <[email protected]>
---
include/linux/bio.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 820e7aa..f5f9829 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -290,6 +290,14 @@ extern struct bio_vec *bvec_alloc_bs(gfp_t, int, unsigned long *, struct bio_set
extern void bvec_free_bs(struct bio_set *, struct bio_vec *, unsigned int);
extern unsigned int bvec_nr_vecs(unsigned short idx);
+static inline ssize_t bvec_length(const struct bio_vec *bvec, unsigned long nr)
+{
+ ssize_t bytes = 0;
+ while (nr--)
+ bytes += (bvec++)->bv_len;
+ return bytes;
+}
+
#ifdef CONFIG_BLK_CGROUP
int bio_associate_current(struct bio *bio);
void bio_disassociate_task(struct bio *bio);
--
1.7.12.3
On 10/22/2012 10:21 AM, Myklebust, Trond wrote:
>> -----Original Message-----
>> From: Dave Kleikamp [mailto:[email protected]]
>> Sent: Monday, October 22, 2012 11:15 AM
>> To: [email protected]
>> Cc: [email protected]; Zach Brown; Maxim V. Patlasov; Dave
>> Kleikamp; Myklebust, Trond; [email protected]
>> Subject: [PATCH 20/22] nfs: add support for read_iter, write_iter
>>
>> This patch implements the read_iter and write_iter file operations which
>> allow kernel code to initiate directIO. This allows the loop device to read and
>> write directly to the server, bypassing the page cache.
>>
>> Signed-off-by: Dave Kleikamp <[email protected]>
>> Cc: Zach Brown <[email protected]>
>> Cc: Trond Myklebust <[email protected]>
>> Cc: [email protected]
>> ---
>> fs/nfs/direct.c | 169 +++++++++++++++++++++++++++++++++-----------
>> -----
>> fs/nfs/file.c | 48 ++++++++++----
>> fs/nfs/internal.h | 2 +
>> fs/nfs/nfs4file.c | 2 +
>> include/linux/nfs_fs.h | 6 +-
>> 5 files changed, 155 insertions(+), 72 deletions(-)
>>
>> diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c index 4532781..b1fda1c 100644
>> --- a/fs/nfs/direct.c
>> +++ b/fs/nfs/direct.c
>> @@ -429,16 +428,47 @@ static ssize_t
>> nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
>> get_dreq(dreq);
>> desc.pg_dreq = dreq;
>>
>> - for (seg = 0; seg < nr_segs; seg++) {
>> - const struct iovec *vec = &iov[seg];
>> - result = nfs_direct_read_schedule_segment(&desc, vec,
>> pos, uio);
>> - if (result < 0)
>> - break;
>> - requested_bytes += result;
>> - if ((size_t)result < vec->iov_len)
>> - break;
>> - pos += vec->iov_len;
>> - }
>> + if (iov_iter_has_iovec(iter)) {
>> + const struct iovec *iov = iov_iter_iovec(iter);
>> + if (uio)
>> + dreq->flags = NFS_ODIRECT_MARK_DIRTY;
>> + for (seg = 0; seg < iter->nr_segs; seg++) {
>> + const struct iovec *vec = &iov[seg];
>> + result = nfs_direct_read_schedule_segment(&desc,
>> vec,
>> + pos, uio);
>> + if (result < 0)
>> + break;
>> + requested_bytes += result;
>> + if ((size_t)result < vec->iov_len)
>> + break;
>> + pos += vec->iov_len;
>> + }
>> + } else if (iov_iter_has_bvec(iter)) {
>> + struct nfs_open_context *ctx = dreq->ctx;
>> + struct inode *inode = ctx->dentry->d_inode;
>> + struct bio_vec *bvec = iov_iter_bvec(iter);
>> + for (seg = 0; seg < iter->nr_segs; seg++) {
>> + struct nfs_page *req;
>> + unsigned int req_len = bvec[seg].bv_len;
>> + req = nfs_create_request(ctx, inode,
>> + bvec[seg].bv_page,
>> + bvec[seg].bv_offset,
>> req_len);
>> + if (IS_ERR(req)) {
>> + result = PTR_ERR(req);
>> + break;
>> + }
>> + req->wb_index = pos >> PAGE_SHIFT;
>> + req->wb_offset = pos & ~PAGE_MASK;
>> + if (!nfs_pageio_add_request(&desc, req)) {
>> + result = desc.pg_error;
>> + nfs_release_request(req);
>> + break;
>> + }
>> + requested_bytes += req_len;
>> + pos += req_len;
>> + }
>> + } else
>> + BUG();
>
> Can we please split the contents of these 2 if statements into 2 helper functions nfs_direct_do_schedule_read_iovec() and nfs_direct_do_schedule_read_bvec()?
>
Sure, no problem.
>> @@ -832,17 +861,48 @@ static ssize_t
>> nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
>> get_dreq(dreq);
>> atomic_inc(&inode->i_dio_count);
>>
>> - NFS_I(dreq->inode)->write_io += iov_length(iov, nr_segs);
>> - for (seg = 0; seg < nr_segs; seg++) {
>> - const struct iovec *vec = &iov[seg];
>> - result = nfs_direct_write_schedule_segment(&desc, vec,
>> pos, uio);
>> - if (result < 0)
>> - break;
>> - requested_bytes += result;
>> - if ((size_t)result < vec->iov_len)
>> - break;
>> - pos += vec->iov_len;
>> - }
>> + NFS_I(dreq->inode)->write_io += iov_iter_count(iter);
>> +
>> + if (iov_iter_has_iovec(iter)) {
>> + const struct iovec *iov = iov_iter_iovec(iter);
>> + for (seg = 0; seg < iter->nr_segs; seg++) {
>> + const struct iovec *vec = &iov[seg];
>> + result = nfs_direct_write_schedule_segment(&desc,
>> vec,
>> + pos, uio);
>> + if (result < 0)
>> + break;
>> + requested_bytes += result;
>> + if ((size_t)result < vec->iov_len)
>> + break;
>> + pos += vec->iov_len;
>> + }
>> + } else if (iov_iter_has_bvec(iter)) {
>> + struct nfs_open_context *ctx = dreq->ctx;
>> + struct bio_vec *bvec = iov_iter_bvec(iter);
>> + for (seg = 0; seg < iter->nr_segs; seg++) {
>> + struct nfs_page *req;
>> + unsigned int req_len = bvec[seg].bv_len;
>> +
>> + req = nfs_create_request(ctx, inode,
>> bvec[seg].bv_page,
>> + bvec[seg].bv_offset,
>> req_len);
>> + if (IS_ERR(req)) {
>> + result = PTR_ERR(req);
>> + break;
>> + }
>> + nfs_lock_request(req);
>> + req->wb_index = pos >> PAGE_SHIFT;
>> + req->wb_offset = pos & ~PAGE_MASK;
>> + if (!nfs_pageio_add_request(&desc, req)) {
>> + result = desc.pg_error;
>> + nfs_unlock_and_release_request(req);
>> + break;
>> + }
>> + requested_bytes += req_len;
>> + pos += req_len;
>> + }
>> + } else
>> + BUG();
>
> Ditto...
ok
>
> Otherwise, everything looks fine to me...
>
> Acked-by: Trond Myklebust <[email protected]>
>
> Cheers
> Trond
Thanks,
Shaggy
From: Zach Brown <[email protected]>
Right now only callers of generic_perform_write() pack their iovec
arguments into an iov_iter structure. All the callers higher up in the
stack work on raw iovec arguments.
This patch introduces the use of the iov_iter abstraction higher up the
stack. Private generic path functions are changed to operate on
iov_iter instead of on raw iovecs. Exported interfaces that take iovecs
immediately pack their arguments into an iov_iter and call into the
shared functions.
File operation struct functions are added with iov_iter as an argument
so that callers of the generic file system functions can specify
abstract memory rather than iovec arrays only.
Almost all of this patch only transforms arguments and shouldn't change
functionality. The buffered read path is the exception. We add a
read_actor function which uses the iov_iter helper functions instead of
operating on each individual iovec element. This may improve
performance as the iov_iter helper can copy multiple iovec elements from
one mapped page cache page.
As always, the direct IO path is special. Sadly, it may still be
cleanest to have it work on the underlying memory structures directly
instead of working through the iov_iter abstraction.
Signed-off-by: Dave Kleikamp <[email protected]>
Cc: Zach Brown <[email protected]>
---
include/linux/fs.h | 12 +++
mm/filemap.c | 258 ++++++++++++++++++++++++++++++++++-------------------
2 files changed, 180 insertions(+), 90 deletions(-)
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 82dd1e9..02e3aae 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1543,7 +1543,9 @@ struct file_operations {
ssize_t (*read) (struct file *, char __user *, size_t, loff_t *);
ssize_t (*write) (struct file *, const char __user *, size_t, loff_t *);
ssize_t (*aio_read) (struct kiocb *, const struct iovec *, unsigned long, loff_t);
+ ssize_t (*read_iter) (struct kiocb *, struct iov_iter *, loff_t);
ssize_t (*aio_write) (struct kiocb *, const struct iovec *, unsigned long, loff_t);
+ ssize_t (*write_iter) (struct kiocb *, struct iov_iter *, loff_t);
int (*readdir) (struct file *, void *, filldir_t);
unsigned int (*poll) (struct file *, struct poll_table_struct *);
long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
@@ -2327,13 +2329,23 @@ extern int generic_file_remap_pages(struct vm_area_struct *, unsigned long addr,
extern int file_read_actor(read_descriptor_t * desc, struct page *page, unsigned long offset, unsigned long size);
int generic_write_checks(struct file *file, loff_t *pos, size_t *count, int isblk);
extern ssize_t generic_file_aio_read(struct kiocb *, const struct iovec *, unsigned long, loff_t);
+extern ssize_t generic_file_read_iter(struct kiocb *, struct iov_iter *,
+ loff_t);
extern ssize_t __generic_file_aio_write(struct kiocb *, const struct iovec *, unsigned long,
loff_t *);
+extern ssize_t __generic_file_write_iter(struct kiocb *, struct iov_iter *,
+ loff_t *);
extern ssize_t generic_file_aio_write(struct kiocb *, const struct iovec *, unsigned long, loff_t);
+extern ssize_t generic_file_write_iter(struct kiocb *, struct iov_iter *,
+ loff_t);
extern ssize_t generic_file_direct_write(struct kiocb *, const struct iovec *,
unsigned long *, loff_t, loff_t *, size_t, size_t);
+extern ssize_t generic_file_direct_write_iter(struct kiocb *, struct iov_iter *,
+ loff_t, loff_t *, size_t);
extern ssize_t generic_file_buffered_write(struct kiocb *, const struct iovec *,
unsigned long, loff_t, loff_t *, size_t, ssize_t);
+extern ssize_t generic_file_buffered_write_iter(struct kiocb *,
+ struct iov_iter *, loff_t, loff_t *, size_t, ssize_t);
extern ssize_t do_sync_read(struct file *filp, char __user *buf, size_t len, loff_t *ppos);
extern ssize_t do_sync_write(struct file *filp, const char __user *buf, size_t len, loff_t *ppos);
extern int generic_segment_checks(const struct iovec *iov,
diff --git a/mm/filemap.c b/mm/filemap.c
index d428020..0d426d9 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1372,31 +1372,41 @@ int generic_segment_checks(const struct iovec *iov,
}
EXPORT_SYMBOL(generic_segment_checks);
+static int file_read_iter_actor(read_descriptor_t *desc, struct page *page,
+ unsigned long offset, unsigned long size)
+{
+ struct iov_iter *iter = desc->arg.data;
+ unsigned long copied = 0;
+
+ if (size > desc->count)
+ size = desc->count;
+
+ copied = iov_iter_copy_to_user(page, iter, offset, size);
+ if (copied < size)
+ desc->error = -EFAULT;
+
+ iov_iter_advance(iter, copied);
+ desc->count -= copied;
+ desc->written += copied;
+
+ return copied;
+}
+
/**
- * generic_file_aio_read - generic filesystem read routine
+ * generic_file_read_iter - generic filesystem read routine
* @iocb: kernel I/O control block
- * @iov: io vector request
- * @nr_segs: number of segments in the iovec
+ * @iov_iter: memory vector
* @pos: current file position
- *
- * This is the "read()" routine for all filesystems
- * that can use the page cache directly.
*/
ssize_t
-generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
- unsigned long nr_segs, loff_t pos)
+generic_file_read_iter(struct kiocb *iocb, struct iov_iter *iter, loff_t pos)
{
struct file *filp = iocb->ki_filp;
- ssize_t retval;
- unsigned long seg = 0;
- size_t count;
+ read_descriptor_t desc;
+ ssize_t retval = 0;
+ size_t count = iov_iter_count(iter);
loff_t *ppos = &iocb->ki_pos;
- count = 0;
- retval = generic_segment_checks(iov, &nr_segs, &count, VERIFY_WRITE);
- if (retval)
- return retval;
-
/* coalesce the iovecs and go direct-to-BIO for O_DIRECT */
if (filp->f_flags & O_DIRECT) {
loff_t size;
@@ -1409,16 +1419,11 @@ generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
goto out; /* skip atime */
size = i_size_read(inode);
if (pos < size) {
- size_t bytes = iov_length(iov, nr_segs);
retval = filemap_write_and_wait_range(mapping, pos,
- pos + bytes - 1);
- if (!retval) {
- struct iov_iter iter;
-
- iov_iter_init(&iter, iov, nr_segs, bytes, 0);
+ pos + count - 1);
+ if (!retval)
retval = mapping->a_ops->direct_IO(READ, iocb,
- &iter, pos);
- }
+ iter, pos);
if (retval > 0) {
*ppos = pos + retval;
count -= retval;
@@ -1439,42 +1444,47 @@ generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
}
}
- count = retval;
- for (seg = 0; seg < nr_segs; seg++) {
- read_descriptor_t desc;
- loff_t offset = 0;
-
- /*
- * If we did a short DIO read we need to skip the section of the
- * iov that we've already read data into.
- */
- if (count) {
- if (count > iov[seg].iov_len) {
- count -= iov[seg].iov_len;
- continue;
- }
- offset = count;
- count = 0;
- }
-
- desc.written = 0;
- desc.arg.buf = iov[seg].iov_base + offset;
- desc.count = iov[seg].iov_len - offset;
- if (desc.count == 0)
- continue;
- desc.error = 0;
- do_generic_file_read(filp, ppos, &desc, file_read_actor);
- retval += desc.written;
- if (desc.error) {
- retval = retval ?: desc.error;
- break;
- }
- if (desc.count > 0)
- break;
- }
+ desc.written = 0;
+ desc.arg.data = iter;
+ desc.count = count;
+ desc.error = 0;
+ do_generic_file_read(filp, ppos, &desc, file_read_iter_actor);
+ if (desc.written)
+ retval = desc.written;
+ else
+ retval = desc.error;
out:
return retval;
}
+EXPORT_SYMBOL(generic_file_read_iter);
+
+/**
+ * generic_file_aio_read - generic filesystem read routine
+ * @iocb: kernel I/O control block
+ * @iov: io vector request
+ * @nr_segs: number of segments in the iovec
+ * @pos: current file position
+ *
+ * This is the "read()" routine for all filesystems
+ * that can use the page cache directly.
+ */
+ssize_t
+generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
+ unsigned long nr_segs, loff_t pos)
+{
+ struct iov_iter iter;
+ int ret;
+ size_t count;
+
+ count = 0;
+ ret = generic_segment_checks(iov, &nr_segs, &count, VERIFY_WRITE);
+ if (ret)
+ return ret;
+
+ iov_iter_init(&iter, iov, nr_segs, count, 0);
+
+ return generic_file_read_iter(iocb, &iter, pos);
+}
EXPORT_SYMBOL(generic_file_aio_read);
#ifdef CONFIG_MMU
@@ -2031,9 +2041,8 @@ int pagecache_write_end(struct file *file, struct address_space *mapping,
EXPORT_SYMBOL(pagecache_write_end);
ssize_t
-generic_file_direct_write(struct kiocb *iocb, const struct iovec *iov,
- unsigned long *nr_segs, loff_t pos, loff_t *ppos,
- size_t count, size_t ocount)
+generic_file_direct_write_iter(struct kiocb *iocb, struct iov_iter *iter,
+ loff_t pos, loff_t *ppos, size_t count)
{
struct file *file = iocb->ki_filp;
struct address_space *mapping = file->f_mapping;
@@ -2041,12 +2050,14 @@ generic_file_direct_write(struct kiocb *iocb, const struct iovec *iov,
ssize_t written;
size_t write_len;
pgoff_t end;
- struct iov_iter iter;
- if (count != ocount)
- *nr_segs = iov_shorten((struct iovec *)iov, *nr_segs, count);
+ if (count != iov_iter_count(iter)) {
+ written = iov_iter_shorten(iter, count);
+ if (written)
+ goto out;
+ }
- write_len = iov_length(iov, *nr_segs);
+ write_len = count;
end = (pos + write_len - 1) >> PAGE_CACHE_SHIFT;
written = filemap_write_and_wait_range(mapping, pos, pos + write_len - 1);
@@ -2073,9 +2084,7 @@ generic_file_direct_write(struct kiocb *iocb, const struct iovec *iov,
}
}
- iov_iter_init(&iter, iov, *nr_segs, write_len, 0);
-
- written = mapping->a_ops->direct_IO(WRITE, iocb, &iter, pos);
+ written = mapping->a_ops->direct_IO(WRITE, iocb, iter, pos);
/*
* Finally, try again to invalidate clean pages which might have been
@@ -2101,6 +2110,23 @@ generic_file_direct_write(struct kiocb *iocb, const struct iovec *iov,
out:
return written;
}
+EXPORT_SYMBOL(generic_file_direct_write_iter);
+
+ssize_t
+generic_file_direct_write(struct kiocb *iocb, const struct iovec *iov,
+ unsigned long *nr_segs, loff_t pos, loff_t *ppos,
+ size_t count, size_t ocount)
+{
+ struct iov_iter iter;
+ ssize_t ret;
+
+ iov_iter_init(&iter, iov, *nr_segs, ocount, 0);
+ ret = generic_file_direct_write_iter(iocb, &iter, pos, ppos, count);
+ /* generic_file_direct_write_iter() might have shortened the vec */
+ if (*nr_segs != iter.nr_segs)
+ *nr_segs = iter.nr_segs;
+ return ret;
+}
EXPORT_SYMBOL(generic_file_direct_write);
/*
@@ -2234,16 +2260,19 @@ again:
}
ssize_t
-generic_file_buffered_write(struct kiocb *iocb, const struct iovec *iov,
- unsigned long nr_segs, loff_t pos, loff_t *ppos,
- size_t count, ssize_t written)
+generic_file_buffered_write_iter(struct kiocb *iocb, struct iov_iter *iter,
+ loff_t pos, loff_t *ppos, size_t count, ssize_t written)
{
struct file *file = iocb->ki_filp;
ssize_t status;
- struct iov_iter i;
- iov_iter_init(&i, iov, nr_segs, count, written);
- status = generic_perform_write(file, &i, pos);
+ if ((count + written) != iov_iter_count(iter)) {
+ int rc = iov_iter_shorten(iter, count + written);
+ if (rc)
+ return rc;
+ }
+
+ status = generic_perform_write(file, iter, pos);
if (likely(status >= 0)) {
written += status;
@@ -2252,13 +2281,24 @@ generic_file_buffered_write(struct kiocb *iocb, const struct iovec *iov,
return written ? written : status;
}
+EXPORT_SYMBOL(generic_file_buffered_write_iter);
+
+ssize_t
+generic_file_buffered_write(struct kiocb *iocb, const struct iovec *iov,
+ unsigned long nr_segs, loff_t pos, loff_t *ppos,
+ size_t count, ssize_t written)
+{
+ struct iov_iter iter;
+ iov_iter_init(&iter, iov, nr_segs, count, written);
+ return generic_file_buffered_write_iter(iocb, &iter, pos, ppos,
+ count, written);
+}
EXPORT_SYMBOL(generic_file_buffered_write);
/**
* __generic_file_aio_write - write data to a file
* @iocb: IO state structure (file, offset, etc.)
- * @iov: vector with data to write
- * @nr_segs: number of segments in the vector
+ * @iter: iov_iter specifying memory to write
* @ppos: position where to write
*
* This function does all the work needed for actually writing data to a
@@ -2273,24 +2313,18 @@ EXPORT_SYMBOL(generic_file_buffered_write);
* A caller has to handle it. This is mainly due to the fact that we want to
* avoid syncing under i_mutex.
*/
-ssize_t __generic_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
- unsigned long nr_segs, loff_t *ppos)
+ssize_t __generic_file_write_iter(struct kiocb *iocb, struct iov_iter *iter,
+ loff_t *ppos)
{
struct file *file = iocb->ki_filp;
struct address_space * mapping = file->f_mapping;
- size_t ocount; /* original count */
size_t count; /* after file limit checks */
struct inode *inode = mapping->host;
loff_t pos;
ssize_t written;
ssize_t err;
- ocount = 0;
- err = generic_segment_checks(iov, &nr_segs, &ocount, VERIFY_READ);
- if (err)
- return err;
-
- count = ocount;
+ count = iov_iter_count(iter);
pos = *ppos;
/* We can write back this queue in page reclaim */
@@ -2317,8 +2351,8 @@ ssize_t __generic_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
loff_t endbyte;
ssize_t written_buffered;
- written = generic_file_direct_write(iocb, iov, &nr_segs, pos,
- ppos, count, ocount);
+ written = generic_file_direct_write_iter(iocb, iter, pos,
+ ppos, count);
if (written < 0 || written == count)
goto out;
/*
@@ -2327,9 +2361,9 @@ ssize_t __generic_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
*/
pos += written;
count -= written;
- written_buffered = generic_file_buffered_write(iocb, iov,
- nr_segs, pos, ppos, count,
- written);
+ iov_iter_advance(iter, written);
+ written_buffered = generic_file_buffered_write_iter(iocb, iter,
+ pos, ppos, count, written);
/*
* If generic_file_buffered_write() retuned a synchronous error
* then we want to return the number of bytes which were
@@ -2361,13 +2395,57 @@ ssize_t __generic_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
*/
}
} else {
- written = generic_file_buffered_write(iocb, iov, nr_segs,
+ iter->count = count;
+ written = generic_file_buffered_write_iter(iocb, iter,
pos, ppos, count, written);
}
out:
current->backing_dev_info = NULL;
return written ? written : err;
}
+EXPORT_SYMBOL(__generic_file_write_iter);
+
+ssize_t generic_file_write_iter(struct kiocb *iocb, struct iov_iter *iter,
+ loff_t pos)
+{
+ struct file *file = iocb->ki_filp;
+ struct inode *inode = file->f_mapping->host;
+ ssize_t ret;
+
+ mutex_lock(&inode->i_mutex);
+ ret = __generic_file_write_iter(iocb, iter, &iocb->ki_pos);
+ mutex_unlock(&inode->i_mutex);
+
+ if (ret > 0 || ret == -EIOCBQUEUED) {
+ ssize_t err;
+
+ err = generic_write_sync(file, pos, ret);
+ if (err < 0 && ret > 0)
+ ret = err;
+ }
+ return ret;
+}
+EXPORT_SYMBOL(generic_file_write_iter);
+
+ssize_t
+__generic_file_aio_write(struct kiocb *iocb, const struct iovec *iov,
+ unsigned long nr_segs, loff_t *ppos)
+{
+ struct iov_iter iter;
+ size_t count;
+ int ret;
+
+ count = 0;
+ ret = generic_segment_checks(iov, &nr_segs, &count, VERIFY_READ);
+ if (ret)
+ goto out;
+
+ iov_iter_init(&iter, iov, nr_segs, count, 0);
+
+ ret = __generic_file_write_iter(iocb, &iter, ppos);
+out:
+ return ret;
+}
EXPORT_SYMBOL(__generic_file_aio_write);
/**
--
1.7.12.3
Use generic_file_read_iter(), create ext4_file_write_iter() based on
ext4_file_write(), and make ext4_file_write() a wrapper around
ext4_file_write_iter().
Signed-off-by: Dave Kleikamp <[email protected]>
Cc: Zach Brown <[email protected]>
Cc: "Theodore Ts'o" <[email protected]>
Cc: Andreas Dilger <[email protected]>
Cc: [email protected]
---
fs/ext4/file.c | 49 ++++++++++++++++++++++++++++++++++---------------
1 file changed, 34 insertions(+), 15 deletions(-)
diff --git a/fs/ext4/file.c b/fs/ext4/file.c
index bf3966b..4a49e91 100644
--- a/fs/ext4/file.c
+++ b/fs/ext4/file.c
@@ -72,12 +72,11 @@ void ext4_unwritten_wait(struct inode *inode)
* or one thread will zero the other's data, causing corruption.
*/
static int
-ext4_unaligned_aio(struct inode *inode, const struct iovec *iov,
- unsigned long nr_segs, loff_t pos)
+ext4_unaligned_aio(struct inode *inode, struct iov_iter *iter, loff_t pos)
{
struct super_block *sb = inode->i_sb;
int blockmask = sb->s_blocksize - 1;
- size_t count = iov_length(iov, nr_segs);
+ size_t count = iov_iter_count(iter);
loff_t final_size = pos + count;
if (pos >= inode->i_size)
@@ -90,8 +89,8 @@ ext4_unaligned_aio(struct inode *inode, const struct iovec *iov,
}
static ssize_t
-ext4_file_dio_write(struct kiocb *iocb, const struct iovec *iov,
- unsigned long nr_segs, loff_t pos)
+ext4_file_dio_write(struct kiocb *iocb, struct iov_iter *iter,
+ loff_t pos)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file->f_mapping->host;
@@ -99,11 +98,11 @@ ext4_file_dio_write(struct kiocb *iocb, const struct iovec *iov,
int unaligned_aio = 0;
ssize_t ret;
int overwrite = 0;
- size_t length = iov_length(iov, nr_segs);
+ size_t length = iov_iter_count(iter);
if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS) &&
!is_sync_kiocb(iocb))
- unaligned_aio = ext4_unaligned_aio(inode, iov, nr_segs, pos);
+ unaligned_aio = ext4_unaligned_aio(inode, iter, pos);
/* Unaligned direct AIO must be serialized; see comment above */
if (unaligned_aio) {
@@ -152,7 +151,7 @@ ext4_file_dio_write(struct kiocb *iocb, const struct iovec *iov,
overwrite = 1;
}
- ret = __generic_file_aio_write(iocb, iov, nr_segs, &iocb->ki_pos);
+ ret = __generic_file_write_iter(iocb, iter, &iocb->ki_pos);
mutex_unlock(&inode->i_mutex);
if (ret > 0 || ret == -EIOCBQUEUED) {
@@ -171,8 +170,7 @@ ext4_file_dio_write(struct kiocb *iocb, const struct iovec *iov,
}
static ssize_t
-ext4_file_write(struct kiocb *iocb, const struct iovec *iov,
- unsigned long nr_segs, loff_t pos)
+ext4_file_write_iter(struct kiocb *iocb, struct iov_iter *iter, loff_t pos)
{
struct inode *inode = iocb->ki_filp->f_path.dentry->d_inode;
ssize_t ret;
@@ -184,26 +182,45 @@ ext4_file_write(struct kiocb *iocb, const struct iovec *iov,
if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) {
struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
- size_t length = iov_length(iov, nr_segs);
+ size_t length = iov_iter_count(iter);
if ((pos > sbi->s_bitmap_maxbytes ||
(pos == sbi->s_bitmap_maxbytes && length > 0)))
return -EFBIG;
if (pos + length > sbi->s_bitmap_maxbytes) {
- nr_segs = iov_shorten((struct iovec *)iov, nr_segs,
- sbi->s_bitmap_maxbytes - pos);
+ ret = iov_iter_shorten(iter,
+ sbi->s_bitmap_maxbytes - pos);
+ if (ret)
+ return ret;
}
}
if (unlikely(iocb->ki_filp->f_flags & O_DIRECT))
- ret = ext4_file_dio_write(iocb, iov, nr_segs, pos);
+ ret = ext4_file_dio_write(iocb, iter, pos);
else
- ret = generic_file_aio_write(iocb, iov, nr_segs, pos);
+ ret = generic_file_write_iter(iocb, iter, pos);
return ret;
}
+static ssize_t
+ext4_file_write(struct kiocb *iocb, const struct iovec *iov,
+ unsigned long nr_segs, loff_t pos)
+{
+ struct iov_iter i;
+ int ret;
+ size_t count;
+
+ ret = generic_segment_checks(iov, &nr_segs, &count, VERIFY_WRITE);
+ if (ret)
+ return ret;
+
+ iov_iter_init(&i, iov, nr_segs, count, 0);
+
+ return ext4_file_write_iter(iocb, &i, pos);
+}
+
static const struct vm_operations_struct ext4_file_vm_ops = {
.fault = filemap_fault,
.page_mkwrite = ext4_page_mkwrite,
@@ -310,6 +327,8 @@ const struct file_operations ext4_file_operations = {
.write = do_sync_write,
.aio_read = generic_file_aio_read,
.aio_write = ext4_file_write,
+ .read_iter = generic_file_read_iter,
+ .write_iter = ext4_file_write_iter,
.unlocked_ioctl = ext4_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = ext4_compat_ioctl,
--
1.7.12.3
Change the direct_IO aop to take an iov_iter argument rather than an iovec.
This will get passed down through most filesystems so that only the
__blockdev_direct_IO helper need be aware of whether user or kernel memory
is being passed to the function.
Signed-off-by: Dave Kleikamp <[email protected]>
---
Documentation/filesystems/Locking | 4 +--
Documentation/filesystems/vfs.txt | 4 +--
fs/9p/vfs_addr.c | 8 ++---
fs/block_dev.c | 8 ++---
fs/btrfs/inode.c | 61 ++++++++++++++++++++++++---------------
fs/ceph/addr.c | 3 +-
fs/direct-io.c | 19 ++++++------
fs/ext2/inode.c | 8 ++---
fs/ext3/inode.c | 15 ++++------
fs/ext4/ext4.h | 3 +-
fs/ext4/indirect.c | 16 +++++-----
fs/ext4/inode.c | 27 ++++++++---------
fs/fat/inode.c | 10 +++----
fs/fuse/file.c | 11 +++++--
fs/gfs2/aops.c | 7 ++---
fs/hfs/inode.c | 7 ++---
fs/hfsplus/inode.c | 6 ++--
fs/jfs/inode.c | 7 ++---
fs/nfs/direct.c | 13 +++++----
fs/nilfs2/inode.c | 8 ++---
fs/ocfs2/aops.c | 8 ++---
fs/reiserfs/inode.c | 7 ++---
fs/udf/file.c | 3 +-
fs/udf/inode.c | 10 +++----
fs/xfs/xfs_aops.c | 13 ++++-----
include/linux/fs.h | 18 ++++++------
include/linux/nfs_fs.h | 3 +-
mm/filemap.c | 13 +++++++--
mm/page_io.c | 8 +++--
29 files changed, 165 insertions(+), 163 deletions(-)
diff --git a/Documentation/filesystems/Locking b/Documentation/filesystems/Locking
index e540a24..573673e 100644
--- a/Documentation/filesystems/Locking
+++ b/Documentation/filesystems/Locking
@@ -196,8 +196,8 @@ prototypes:
int (*invalidatepage) (struct page *, unsigned long);
int (*releasepage) (struct page *, int);
void (*freepage)(struct page *);
- int (*direct_IO)(int, struct kiocb *, const struct iovec *iov,
- loff_t offset, unsigned long nr_segs);
+ int (*direct_IO)(int, struct kiocb *, struct iov_iter *iter,
+ loff_t offset);
int (*get_xip_mem)(struct address_space *, pgoff_t, int, void **,
unsigned long *);
int (*migratepage)(struct address_space *, struct page *, struct page *);
diff --git a/Documentation/filesystems/vfs.txt b/Documentation/filesystems/vfs.txt
index 2ee133e..0a6a8d4 100644
--- a/Documentation/filesystems/vfs.txt
+++ b/Documentation/filesystems/vfs.txt
@@ -580,8 +580,8 @@ struct address_space_operations {
int (*invalidatepage) (struct page *, unsigned long);
int (*releasepage) (struct page *, int);
void (*freepage)(struct page *);
- ssize_t (*direct_IO)(int, struct kiocb *, const struct iovec *iov,
- loff_t offset, unsigned long nr_segs);
+ ssize_t (*direct_IO)(int, struct kiocb *, struct iov_iter *iter,
+ loff_t offset);
struct page* (*get_xip_page)(struct address_space *, sector_t,
int);
/* migrate the contents of a page to the specified target */
diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
index 0ad61c6..e70f239 100644
--- a/fs/9p/vfs_addr.c
+++ b/fs/9p/vfs_addr.c
@@ -239,9 +239,8 @@ static int v9fs_launder_page(struct page *page)
* v9fs_direct_IO - 9P address space operation for direct I/O
* @rw: direction (read or write)
* @iocb: target I/O control block
- * @iov: array of vectors that define I/O buffer
+ * @iter: array of vectors that define I/O buffer
* @pos: offset in file to begin the operation
- * @nr_segs: size of iovec array
*
* The presence of v9fs_direct_IO() in the address space ops vector
* allowes open() O_DIRECT flags which would have failed otherwise.
@@ -255,8 +254,7 @@ static int v9fs_launder_page(struct page *page)
*
*/
static ssize_t
-v9fs_direct_IO(int rw, struct kiocb *iocb, const struct iovec *iov,
- loff_t pos, unsigned long nr_segs)
+v9fs_direct_IO(int rw, struct kiocb *iocb, struct iov_iter *iter, loff_t pos)
{
/*
* FIXME
@@ -265,7 +263,7 @@ v9fs_direct_IO(int rw, struct kiocb *iocb, const struct iovec *iov,
*/
p9_debug(P9_DEBUG_VFS, "v9fs_direct_IO: v9fs_direct_IO (%s) off/no(%lld/%lu) EINVAL\n",
iocb->ki_filp->f_path.dentry->d_name.name,
- (long long)pos, nr_segs);
+ (long long)pos, iter->nr_segs);
return -EINVAL;
}
diff --git a/fs/block_dev.c b/fs/block_dev.c
index b3c1d3d..ba3ed89 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -228,14 +228,14 @@ blkdev_get_blocks(struct inode *inode, sector_t iblock,
}
static ssize_t
-blkdev_direct_IO(int rw, struct kiocb *iocb, const struct iovec *iov,
- loff_t offset, unsigned long nr_segs)
+blkdev_direct_IO(int rw, struct kiocb *iocb, struct iov_iter *iter,
+ loff_t offset)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file->f_mapping->host;
- return __blockdev_direct_IO(rw, iocb, inode, I_BDEV(inode), iov, offset,
- nr_segs, blkdev_get_blocks, NULL, NULL, 0);
+ return __blockdev_direct_IO(rw, iocb, inode, I_BDEV(inode), iter,
+ offset, blkdev_get_blocks, NULL, NULL, 0);
}
int __sync_blockdev(struct block_device *bdev, int wait)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 85a1e50..50d8329 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -6525,8 +6525,7 @@ free_ordered:
}
static ssize_t check_direct_IO(struct btrfs_root *root, int rw, struct kiocb *iocb,
- const struct iovec *iov, loff_t offset,
- unsigned long nr_segs)
+ struct iov_iter *iter, loff_t offset)
{
int seg;
int i;
@@ -6540,46 +6539,60 @@ static ssize_t check_direct_IO(struct btrfs_root *root, int rw, struct kiocb *io
goto out;
/* Check the memory alignment. Blocks cannot straddle pages */
- for (seg = 0; seg < nr_segs; seg++) {
- addr = (unsigned long)iov[seg].iov_base;
- size = iov[seg].iov_len;
- end += size;
- if ((addr & blocksize_mask) || (size & blocksize_mask))
- goto out;
+ if (iov_iter_has_iovec(iter)) {
+ const struct iovec *iov = iov_iter_iovec(iter);
+
+ for (seg = 0; seg < iter->nr_segs; seg++) {
+ addr = (unsigned long)iov[seg].iov_base;
+ size = iov[seg].iov_len;
+ end += size;
+ if ((addr & blocksize_mask) || (size & blocksize_mask))
+ goto out;
- /* If this is a write we don't need to check anymore */
- if (rw & WRITE)
- continue;
+ /* If this is a write we don't need to check anymore */
+ if (rw & WRITE)
+ continue;
- /*
- * Check to make sure we don't have duplicate iov_base's in this
- * iovec, if so return EINVAL, otherwise we'll get csum errors
- * when reading back.
- */
- for (i = seg + 1; i < nr_segs; i++) {
- if (iov[seg].iov_base == iov[i].iov_base)
+ /*
+ * Check to make sure we don't have duplicate iov_base's
+ * in this iovec, if so return EINVAL, otherwise we'll
+ * get csum errors when reading back.
+ */
+ for (i = seg + 1; i < iter->nr_segs; i++) {
+ if (iov[seg].iov_base == iov[i].iov_base)
+ goto out;
+ }
+ }
+ } else if (iov_iter_has_bvec(iter)) {
+ struct bio_vec *bvec = iov_iter_bvec(iter);
+
+ for (seg = 0; seg < iter->nr_segs; seg++) {
+ addr = (unsigned long)bvec[seg].bv_offset;
+ size = bvec[seg].bv_len;
+ end += size;
+ if ((addr & blocksize_mask) || (size & blocksize_mask))
goto out;
}
- }
+ } else
+ BUG();
+
retval = 0;
out:
return retval;
}
static ssize_t btrfs_direct_IO(int rw, struct kiocb *iocb,
- const struct iovec *iov, loff_t offset,
- unsigned long nr_segs)
+ struct iov_iter *iter, loff_t offset)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file->f_mapping->host;
- if (check_direct_IO(BTRFS_I(inode)->root, rw, iocb, iov,
- offset, nr_segs))
+ if (check_direct_IO(BTRFS_I(inode)->root, rw, iocb, iter, offset))
return 0;
return __blockdev_direct_IO(rw, iocb, inode,
BTRFS_I(inode)->root->fs_info->fs_devices->latest_bdev,
- iov, offset, nr_segs, btrfs_get_blocks_direct, NULL,
+ iter, offset, btrfs_get_blocks_direct, NULL,
btrfs_submit_direct, 0);
}
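The duplicate-iov_base check retained above for the iovec case can be sketched in user space (hypothetical helper name; the kernel version also folds in the alignment checks):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <sys/uio.h>

/* User-space sketch of the duplicate-buffer test btrfs keeps for the
 * iovec case: two read segments aliasing the same iov_base would
 * overwrite each other's data and then fail checksum verification on
 * read-back, so such an iovec is rejected up front. O(n^2) in
 * nr_segs, which stays small in practice. */
static bool iovec_has_duplicate_base(const struct iovec *iov,
				     unsigned long nr_segs)
{
	for (unsigned long seg = 0; seg < nr_segs; seg++)
		for (unsigned long i = seg + 1; i < nr_segs; i++)
			if (iov[seg].iov_base == iov[i].iov_base)
				return true;
	return false;
}
```

Note the check is skipped for writes and, after this patch, for bvec-based iterators, where the pages come from the kernel and cannot alias in this way.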
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 6690269..a47c4ef 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1146,8 +1146,7 @@ static int ceph_write_end(struct file *file, struct address_space *mapping,
* never get called.
*/
static ssize_t ceph_direct_io(int rw, struct kiocb *iocb,
- const struct iovec *iov,
- loff_t pos, unsigned long nr_segs)
+ struct iov_iter *iter, loff_t pos)
{
WARN_ON(1);
return -EINVAL;
diff --git a/fs/direct-io.c b/fs/direct-io.c
index 035c0a3..e2733e4 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -1079,9 +1079,9 @@ static int dio_aligned(unsigned long offset, unsigned *blkbits,
*/
static inline ssize_t
do_blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
- struct block_device *bdev, const struct iovec *iov, loff_t offset,
- unsigned long nr_segs, get_block_t get_block, dio_iodone_t end_io,
- dio_submit_t submit_io, int flags)
+ struct block_device *bdev, struct iov_iter *iter, loff_t offset,
+ get_block_t get_block, dio_iodone_t end_io, dio_submit_t submit_io,
+ int flags)
{
int seg;
size_t size;
@@ -1095,6 +1095,8 @@ do_blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
size_t bytes;
struct buffer_head map_bh = { 0, };
struct blk_plug plug;
+ const struct iovec *iov = iov_iter_iovec(iter);
+ unsigned long nr_segs = iter->nr_segs;
if (rw & WRITE)
rw = WRITE_ODIRECT;
@@ -1296,9 +1298,9 @@ out:
ssize_t
__blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
- struct block_device *bdev, const struct iovec *iov, loff_t offset,
- unsigned long nr_segs, get_block_t get_block, dio_iodone_t end_io,
- dio_submit_t submit_io, int flags)
+ struct block_device *bdev, struct iov_iter *iter, loff_t offset,
+ get_block_t get_block, dio_iodone_t end_io, dio_submit_t submit_io,
+ int flags)
{
/*
* The block device state is needed in the end to finally
@@ -1312,9 +1314,8 @@ __blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
prefetch(bdev->bd_queue);
prefetch((char *)bdev->bd_queue + SMP_CACHE_BYTES);
- return do_blockdev_direct_IO(rw, iocb, inode, bdev, iov, offset,
- nr_segs, get_block, end_io,
- submit_io, flags);
+ return do_blockdev_direct_IO(rw, iocb, inode, bdev, iter, offset,
+ get_block, end_io, submit_io, flags);
}
EXPORT_SYMBOL(__blockdev_direct_IO);
diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c
index 6363ac6..f1d65f5 100644
--- a/fs/ext2/inode.c
+++ b/fs/ext2/inode.c
@@ -833,18 +833,16 @@ static sector_t ext2_bmap(struct address_space *mapping, sector_t block)
}
static ssize_t
-ext2_direct_IO(int rw, struct kiocb *iocb, const struct iovec *iov,
- loff_t offset, unsigned long nr_segs)
+ext2_direct_IO(int rw, struct kiocb *iocb, struct iov_iter *iter, loff_t offset)
{
struct file *file = iocb->ki_filp;
struct address_space *mapping = file->f_mapping;
struct inode *inode = mapping->host;
ssize_t ret;
- ret = blockdev_direct_IO(rw, iocb, inode, iov, offset, nr_segs,
- ext2_get_block);
+ ret = blockdev_direct_IO(rw, iocb, inode, iter, offset, ext2_get_block);
if (ret < 0 && (rw & WRITE))
- ext2_write_failed(mapping, offset + iov_length(iov, nr_segs));
+ ext2_write_failed(mapping, offset + iov_iter_count(iter));
return ret;
}
diff --git a/fs/ext3/inode.c b/fs/ext3/inode.c
index 7e87e37..a31d871 100644
--- a/fs/ext3/inode.c
+++ b/fs/ext3/inode.c
@@ -1856,8 +1856,7 @@ static int ext3_releasepage(struct page *page, gfp_t wait)
* VFS code falls back into buffered path in that case so we are safe.
*/
static ssize_t ext3_direct_IO(int rw, struct kiocb *iocb,
- const struct iovec *iov, loff_t offset,
- unsigned long nr_segs)
+ struct iov_iter *iter, loff_t offset)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file->f_mapping->host;
@@ -1865,10 +1864,10 @@ static ssize_t ext3_direct_IO(int rw, struct kiocb *iocb,
handle_t *handle;
ssize_t ret;
int orphan = 0;
- size_t count = iov_length(iov, nr_segs);
+ size_t count = iov_iter_count(iter);
int retries = 0;
- trace_ext3_direct_IO_enter(inode, offset, iov_length(iov, nr_segs), rw);
+ trace_ext3_direct_IO_enter(inode, offset, count, rw);
if (rw == WRITE) {
loff_t final_size = offset + count;
@@ -1892,15 +1891,14 @@ static ssize_t ext3_direct_IO(int rw, struct kiocb *iocb,
}
retry:
- ret = blockdev_direct_IO(rw, iocb, inode, iov, offset, nr_segs,
- ext3_get_block);
+ ret = blockdev_direct_IO(rw, iocb, inode, iter, offset, ext3_get_block);
/*
* In case of error extending write may have instantiated a few
* blocks outside i_size. Trim these off again.
*/
if (unlikely((rw & WRITE) && ret < 0)) {
loff_t isize = i_size_read(inode);
- loff_t end = offset + iov_length(iov, nr_segs);
+ loff_t end = offset + count;
if (end > isize)
ext3_truncate_failed_direct_write(inode);
@@ -1943,8 +1941,7 @@ retry:
ret = err;
}
out:
- trace_ext3_direct_IO_exit(inode, offset,
- iov_length(iov, nr_segs), rw, ret);
+ trace_ext3_direct_IO_exit(inode, offset, count, rw, ret);
return ret;
}
diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 3ab2539..f3eba73 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -2030,8 +2030,7 @@ extern void ext4_da_update_reserve_space(struct inode *inode,
extern int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
struct ext4_map_blocks *map, int flags);
extern ssize_t ext4_ind_direct_IO(int rw, struct kiocb *iocb,
- const struct iovec *iov, loff_t offset,
- unsigned long nr_segs);
+ struct iov_iter *iter, loff_t offset);
extern int ext4_ind_calc_metadata_amount(struct inode *inode, sector_t lblock);
extern int ext4_ind_trans_blocks(struct inode *inode, int nrblocks, int chunk);
extern void ext4_ind_truncate(struct inode *inode);
diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
index 792e388..40dcec2 100644
--- a/fs/ext4/indirect.c
+++ b/fs/ext4/indirect.c
@@ -772,8 +772,7 @@ out:
* VFS code falls back into buffered path in that case so we are safe.
*/
ssize_t ext4_ind_direct_IO(int rw, struct kiocb *iocb,
- const struct iovec *iov, loff_t offset,
- unsigned long nr_segs)
+ struct iov_iter *iter, loff_t offset)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file->f_mapping->host;
@@ -781,7 +780,7 @@ ssize_t ext4_ind_direct_IO(int rw, struct kiocb *iocb,
handle_t *handle;
ssize_t ret;
int orphan = 0;
- size_t count = iov_length(iov, nr_segs);
+ size_t count = iov_iter_count(iter);
int retries = 0;
if (rw == WRITE) {
@@ -825,18 +824,17 @@ retry:
goto locked;
}
ret = __blockdev_direct_IO(rw, iocb, inode,
- inode->i_sb->s_bdev, iov,
- offset, nr_segs,
- ext4_get_block, NULL, NULL, 0);
+ inode->i_sb->s_bdev, iter,
+ offset, ext4_get_block, NULL, NULL, 0);
inode_dio_done(inode);
} else {
locked:
- ret = blockdev_direct_IO(rw, iocb, inode, iov,
- offset, nr_segs, ext4_get_block);
+ ret = blockdev_direct_IO(rw, iocb, inode, iter,
+ offset, ext4_get_block);
if (unlikely((rw & WRITE) && ret < 0)) {
loff_t isize = i_size_read(inode);
- loff_t end = offset + iov_length(iov, nr_segs);
+ loff_t end = offset + iov_iter_count(iter);
if (end > isize)
ext4_truncate_failed_write(inode);
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index b3c243b..8a85501 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2992,13 +2992,12 @@ retry:
*
*/
static ssize_t ext4_ext_direct_IO(int rw, struct kiocb *iocb,
- const struct iovec *iov, loff_t offset,
- unsigned long nr_segs)
+ struct iov_iter *iter, loff_t offset)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file->f_mapping->host;
ssize_t ret;
- size_t count = iov_length(iov, nr_segs);
+ size_t count = iov_iter_count(iter);
loff_t final_size = offset + count;
if (rw == WRITE && final_size <= inode->i_size) {
@@ -3058,16 +3057,16 @@ static ssize_t ext4_ext_direct_IO(int rw, struct kiocb *iocb,
if (overwrite)
ret = __blockdev_direct_IO(rw, iocb, inode,
- inode->i_sb->s_bdev, iov,
- offset, nr_segs,
+ inode->i_sb->s_bdev, iter,
+ offset,
ext4_get_block_write_nolock,
ext4_end_io_dio,
NULL,
0);
else
ret = __blockdev_direct_IO(rw, iocb, inode,
- inode->i_sb->s_bdev, iov,
- offset, nr_segs,
+ inode->i_sb->s_bdev, iter,
+ offset,
ext4_get_block_write,
ext4_end_io_dio,
NULL,
@@ -3117,12 +3116,11 @@ static ssize_t ext4_ext_direct_IO(int rw, struct kiocb *iocb,
}
/* for write the the end of file case, we fall back to old way */
- return ext4_ind_direct_IO(rw, iocb, iov, offset, nr_segs);
+ return ext4_ind_direct_IO(rw, iocb, iter, offset);
}
static ssize_t ext4_direct_IO(int rw, struct kiocb *iocb,
- const struct iovec *iov, loff_t offset,
- unsigned long nr_segs)
+ struct iov_iter *iter, loff_t offset)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file->f_mapping->host;
@@ -3134,13 +3132,12 @@ static ssize_t ext4_direct_IO(int rw, struct kiocb *iocb,
if (ext4_should_journal_data(inode))
return 0;
- trace_ext4_direct_IO_enter(inode, offset, iov_length(iov, nr_segs), rw);
+ trace_ext4_direct_IO_enter(inode, offset, iov_iter_count(iter), rw);
if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
- ret = ext4_ext_direct_IO(rw, iocb, iov, offset, nr_segs);
+ ret = ext4_ext_direct_IO(rw, iocb, iter, offset);
else
- ret = ext4_ind_direct_IO(rw, iocb, iov, offset, nr_segs);
- trace_ext4_direct_IO_exit(inode, offset,
- iov_length(iov, nr_segs), rw, ret);
+ ret = ext4_ind_direct_IO(rw, iocb, iter, offset);
+ trace_ext4_direct_IO_exit(inode, offset, iov_iter_count(iter), rw, ret);
return ret;
}
diff --git a/fs/fat/inode.c b/fs/fat/inode.c
index 5bafaad..4fd356a 100644
--- a/fs/fat/inode.c
+++ b/fs/fat/inode.c
@@ -184,8 +184,7 @@ static int fat_write_end(struct file *file, struct address_space *mapping,
}
static ssize_t fat_direct_IO(int rw, struct kiocb *iocb,
- const struct iovec *iov,
- loff_t offset, unsigned long nr_segs)
+ struct iov_iter *iter, loff_t offset)
{
struct file *file = iocb->ki_filp;
struct address_space *mapping = file->f_mapping;
@@ -202,7 +201,7 @@ static ssize_t fat_direct_IO(int rw, struct kiocb *iocb,
*
* Return 0, and fallback to normal buffered write.
*/
- loff_t size = offset + iov_length(iov, nr_segs);
+ loff_t size = offset + iov_iter_count(iter);
if (MSDOS_I(inode)->mmu_private < size)
return 0;
}
@@ -211,10 +210,9 @@ static ssize_t fat_direct_IO(int rw, struct kiocb *iocb,
* FAT need to use the DIO_LOCKING for avoiding the race
* condition of fat_get_block() and ->truncate().
*/
- ret = blockdev_direct_IO(rw, iocb, inode, iov, offset, nr_segs,
- fat_get_block);
+ ret = blockdev_direct_IO(rw, iocb, inode, iter, offset, fat_get_block);
if (ret < 0 && (rw & WRITE))
- fat_write_failed(mapping, offset + iov_length(iov, nr_segs));
+ fat_write_failed(mapping, offset + iov_iter_count(iter));
return ret;
}
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 4e42e95..2dd941c 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -2149,17 +2149,22 @@ static ssize_t fuse_loop_dio(struct file *filp, const struct iovec *iov,
static ssize_t
-fuse_direct_IO(int rw, struct kiocb *iocb, const struct iovec *iov,
- loff_t offset, unsigned long nr_segs)
+fuse_direct_IO(int rw, struct kiocb *iocb, struct iov_iter *iter, loff_t offset)
{
ssize_t ret = 0;
struct file *file = NULL;
loff_t pos = 0;
+ /*
+ * We'll eventually want to work with both iovec and bvec
+ */
+ BUG_ON(!iov_iter_has_iovec(iter));
+
file = iocb->ki_filp;
pos = offset;
- ret = fuse_loop_dio(file, iov, nr_segs, &pos, rw);
+ ret = fuse_loop_dio(file, iov_iter_iovec(iter), iter->nr_segs, &pos,
+ rw);
return ret;
}
diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index 01c4975..872d05c 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -995,8 +995,7 @@ static int gfs2_ok_for_dio(struct gfs2_inode *ip, int rw, loff_t offset)
static ssize_t gfs2_direct_IO(int rw, struct kiocb *iocb,
- const struct iovec *iov, loff_t offset,
- unsigned long nr_segs)
+ struct iov_iter *iter, loff_t offset)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file->f_mapping->host;
@@ -1020,8 +1019,8 @@ static ssize_t gfs2_direct_IO(int rw, struct kiocb *iocb,
if (rv != 1)
goto out; /* dio not valid, fall back to buffered i/o */
- rv = __blockdev_direct_IO(rw, iocb, inode, inode->i_sb->s_bdev, iov,
- offset, nr_segs, gfs2_get_block_direct,
+ rv = __blockdev_direct_IO(rw, iocb, inode, inode->i_sb->s_bdev, iter,
+ offset, gfs2_get_block_direct,
NULL, NULL, 0);
out:
gfs2_glock_dq(&gh);
diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
index 0b35903..986cb91 100644
--- a/fs/hfs/inode.c
+++ b/fs/hfs/inode.c
@@ -117,14 +117,13 @@ static int hfs_releasepage(struct page *page, gfp_t mask)
}
static ssize_t hfs_direct_IO(int rw, struct kiocb *iocb,
- const struct iovec *iov, loff_t offset, unsigned long nr_segs)
+ struct iov_iter *iter, loff_t offset)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file->f_path.dentry->d_inode->i_mapping->host;
ssize_t ret;
- ret = blockdev_direct_IO(rw, iocb, inode, iov, offset, nr_segs,
- hfs_get_block);
+ ret = blockdev_direct_IO(rw, iocb, inode, iter, offset, hfs_get_block);
/*
* In case of error extending write may have instantiated a few
@@ -132,7 +131,7 @@ static ssize_t hfs_direct_IO(int rw, struct kiocb *iocb,
*/
if (unlikely((rw & WRITE) && ret < 0)) {
loff_t isize = i_size_read(inode);
- loff_t end = offset + iov_length(iov, nr_segs);
+ loff_t end = offset + iov_iter_count(iter);
if (end > isize)
vmtruncate(inode, isize);
diff --git a/fs/hfsplus/inode.c b/fs/hfsplus/inode.c
index 2172aa5..94fc092 100644
--- a/fs/hfsplus/inode.c
+++ b/fs/hfsplus/inode.c
@@ -113,13 +113,13 @@ static int hfsplus_releasepage(struct page *page, gfp_t mask)
}
static ssize_t hfsplus_direct_IO(int rw, struct kiocb *iocb,
- const struct iovec *iov, loff_t offset, unsigned long nr_segs)
+ struct iov_iter *iter, loff_t offset)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file->f_path.dentry->d_inode->i_mapping->host;
ssize_t ret;
- ret = blockdev_direct_IO(rw, iocb, inode, iov, offset, nr_segs,
+ ret = blockdev_direct_IO(rw, iocb, inode, iter, offset,
hfsplus_get_block);
/*
@@ -128,7 +128,7 @@ static ssize_t hfsplus_direct_IO(int rw, struct kiocb *iocb,
*/
if (unlikely((rw & WRITE) && ret < 0)) {
loff_t isize = i_size_read(inode);
- loff_t end = offset + iov_length(iov, nr_segs);
+ loff_t end = offset + iov_iter_count(iter);
if (end > isize)
vmtruncate(inode, isize);
diff --git a/fs/jfs/inode.c b/fs/jfs/inode.c
index 4692bf3..7c192e1 100644
--- a/fs/jfs/inode.c
+++ b/fs/jfs/inode.c
@@ -323,14 +323,13 @@ static sector_t jfs_bmap(struct address_space *mapping, sector_t block)
}
static ssize_t jfs_direct_IO(int rw, struct kiocb *iocb,
- const struct iovec *iov, loff_t offset, unsigned long nr_segs)
+ struct iov_iter *iter, loff_t offset)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file->f_mapping->host;
ssize_t ret;
- ret = blockdev_direct_IO(rw, iocb, inode, iov, offset, nr_segs,
- jfs_get_block);
+ ret = blockdev_direct_IO(rw, iocb, inode, iter, offset, jfs_get_block);
/*
* In case of error extending write may have instantiated a few
@@ -338,7 +337,7 @@ static ssize_t jfs_direct_IO(int rw, struct kiocb *iocb,
*/
if (unlikely((rw & WRITE) && ret < 0)) {
loff_t isize = i_size_read(inode);
- loff_t end = offset + iov_length(iov, nr_segs);
+ loff_t end = offset + iov_iter_count(iter);
if (end > isize)
vmtruncate(inode, isize);
diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
index cae26cb..4532781 100644
--- a/fs/nfs/direct.c
+++ b/fs/nfs/direct.c
@@ -112,7 +112,7 @@ static inline int put_dreq(struct nfs_direct_req *dreq)
* nfs_direct_IO - NFS address space operation for direct I/O
* @rw: direction (read or write)
* @iocb: target I/O control block
- * @iov: array of vectors that define I/O buffer
+ * @iter: array of vectors that define I/O buffer
* @pos: offset in file to begin the operation
* @nr_segs: size of iovec array
*
@@ -121,22 +121,25 @@ static inline int put_dreq(struct nfs_direct_req *dreq)
* shunt off direct read and write requests before the VFS gets them,
* so this method is only ever called for swap.
*/
-ssize_t nfs_direct_IO(int rw, struct kiocb *iocb, const struct iovec *iov, loff_t pos, unsigned long nr_segs)
+ssize_t nfs_direct_IO(int rw, struct kiocb *iocb, struct iov_iter *iter,
+ loff_t pos)
{
#ifndef CONFIG_NFS_SWAP
dprintk("NFS: nfs_direct_IO (%s) off/no(%Ld/%lu) EINVAL\n",
iocb->ki_filp->f_path.dentry->d_name.name,
- (long long) pos, nr_segs);
+ (long long) pos, iter->nr_segs);
return -EINVAL;
#else
+ const struct iovec *iov = iov_iter_iovec(iter);
+
VM_BUG_ON(iocb->ki_left != PAGE_SIZE);
VM_BUG_ON(iocb->ki_nbytes != PAGE_SIZE);
if (rw == READ || rw == KERNEL_READ)
- return nfs_file_direct_read(iocb, iov, nr_segs, pos,
+ return nfs_file_direct_read(iocb, iov, iter->nr_segs, pos,
rw == READ ? true : false);
- return nfs_file_direct_write(iocb, iov, nr_segs, pos,
+ return nfs_file_direct_write(iocb, iov, iter->nr_segs, pos,
rw == WRITE ? true : false);
#endif /* CONFIG_NFS_SWAP */
}
diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
index 4d31d2c..0577750 100644
--- a/fs/nilfs2/inode.c
+++ b/fs/nilfs2/inode.c
@@ -255,8 +255,8 @@ static int nilfs_write_end(struct file *file, struct address_space *mapping,
}
static ssize_t
-nilfs_direct_IO(int rw, struct kiocb *iocb, const struct iovec *iov,
- loff_t offset, unsigned long nr_segs)
+nilfs_direct_IO(int rw, struct kiocb *iocb, struct iov_iter *iter,
+ loff_t offset)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file->f_mapping->host;
@@ -266,7 +266,7 @@ nilfs_direct_IO(int rw, struct kiocb *iocb, const struct iovec *iov,
return 0;
/* Needs synchronization with the cleaner */
- size = blockdev_direct_IO(rw, iocb, inode, iov, offset, nr_segs,
+ size = blockdev_direct_IO(rw, iocb, inode, iter, offset,
nilfs_get_block);
/*
@@ -275,7 +275,7 @@ nilfs_direct_IO(int rw, struct kiocb *iocb, const struct iovec *iov,
*/
if (unlikely((rw & WRITE) && size < 0)) {
loff_t isize = i_size_read(inode);
- loff_t end = offset + iov_length(iov, nr_segs);
+ loff_t end = offset + iov_iter_count(iter);
if (end > isize)
vmtruncate(inode, isize);
diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
index 6577432..ed100d5 100644
--- a/fs/ocfs2/aops.c
+++ b/fs/ocfs2/aops.c
@@ -621,9 +621,8 @@ static int ocfs2_releasepage(struct page *page, gfp_t wait)
static ssize_t ocfs2_direct_IO(int rw,
struct kiocb *iocb,
- const struct iovec *iov,
- loff_t offset,
- unsigned long nr_segs)
+ struct iov_iter *iter,
+ loff_t offset)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file->f_path.dentry->d_inode->i_mapping->host;
@@ -640,8 +639,7 @@ static ssize_t ocfs2_direct_IO(int rw,
return 0;
return __blockdev_direct_IO(rw, iocb, inode, inode->i_sb->s_bdev,
- iov, offset, nr_segs,
- ocfs2_direct_IO_get_blocks,
+ iter, offset, ocfs2_direct_IO_get_blocks,
ocfs2_dio_end_io, NULL, 0);
}
diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c
index f27f01a..2c7bb7f 100644
--- a/fs/reiserfs/inode.c
+++ b/fs/reiserfs/inode.c
@@ -3064,14 +3064,13 @@ static int reiserfs_releasepage(struct page *page, gfp_t unused_gfp_flags)
/* We thank Mingming Cao for helping us understand in great detail what
to do in this section of the code. */
static ssize_t reiserfs_direct_IO(int rw, struct kiocb *iocb,
- const struct iovec *iov, loff_t offset,
- unsigned long nr_segs)
+ struct iov_iter *iter, loff_t offset)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file->f_mapping->host;
ssize_t ret;
- ret = blockdev_direct_IO(rw, iocb, inode, iov, offset, nr_segs,
+ ret = blockdev_direct_IO(rw, iocb, inode, iter, offset,
reiserfs_get_blocks_direct_io);
/*
@@ -3080,7 +3079,7 @@ static ssize_t reiserfs_direct_IO(int rw, struct kiocb *iocb,
*/
if (unlikely((rw & WRITE) && ret < 0)) {
loff_t isize = i_size_read(inode);
- loff_t end = offset + iov_length(iov, nr_segs);
+ loff_t end = offset + iov_iter_count(iter);
if (end > isize)
vmtruncate(inode, isize);
diff --git a/fs/udf/file.c b/fs/udf/file.c
index 77b5953..c4164dc 100644
--- a/fs/udf/file.c
+++ b/fs/udf/file.c
@@ -119,8 +119,7 @@ static int udf_adinicb_write_end(struct file *file,
}
static ssize_t udf_adinicb_direct_IO(int rw, struct kiocb *iocb,
- const struct iovec *iov,
- loff_t offset, unsigned long nr_segs)
+ struct iov_iter *iter, loff_t offset)
{
/* Fallback to buffered I/O. */
return 0;
diff --git a/fs/udf/inode.c b/fs/udf/inode.c
index df88b95..dd1c3da 100644
--- a/fs/udf/inode.c
+++ b/fs/udf/inode.c
@@ -145,19 +145,17 @@ static int udf_write_begin(struct file *file, struct address_space *mapping,
return ret;
}
-static ssize_t udf_direct_IO(int rw, struct kiocb *iocb,
- const struct iovec *iov,
- loff_t offset, unsigned long nr_segs)
+static ssize_t udf_direct_IO(int rw, struct kiocb *iocb, struct iov_iter *iter,
+ loff_t offset)
{
struct file *file = iocb->ki_filp;
struct address_space *mapping = file->f_mapping;
struct inode *inode = mapping->host;
ssize_t ret;
- ret = blockdev_direct_IO(rw, iocb, inode, iov, offset, nr_segs,
- udf_get_block);
+ ret = blockdev_direct_IO(rw, iocb, inode, iter, offset, udf_get_block);
if (unlikely(ret < 0 && (rw & WRITE)))
- udf_write_failed(mapping, offset + iov_length(iov, nr_segs));
+ udf_write_failed(mapping, offset + iov_iter_count(iter));
return ret;
}
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index e562dd4..718921b 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -1395,9 +1395,8 @@ STATIC ssize_t
xfs_vm_direct_IO(
int rw,
struct kiocb *iocb,
- const struct iovec *iov,
- loff_t offset,
- unsigned long nr_segs)
+ struct iov_iter *iter,
+ loff_t offset)
{
struct inode *inode = iocb->ki_filp->f_mapping->host;
struct block_device *bdev = xfs_find_bdev_for_inode(inode);
@@ -1405,7 +1404,7 @@ xfs_vm_direct_IO(
ssize_t ret;
if (rw & WRITE) {
- size_t size = iov_length(iov, nr_segs);
+ size_t size = iov_iter_count(iter);
/*
* We need to preallocate a transaction for a size update
@@ -1421,15 +1420,13 @@ xfs_vm_direct_IO(
ioend->io_isdirect = 1;
}
- ret = __blockdev_direct_IO(rw, iocb, inode, bdev, iov,
- offset, nr_segs,
+ ret = __blockdev_direct_IO(rw, iocb, inode, bdev, iter, offset,
xfs_get_blocks_direct,
xfs_end_io_direct_write, NULL, 0);
if (ret != -EIOCBQUEUED && iocb->private)
goto out_trans_cancel;
} else {
- ret = __blockdev_direct_IO(rw, iocb, inode, bdev, iov,
- offset, nr_segs,
+ ret = __blockdev_direct_IO(rw, iocb, inode, bdev, iter, offset,
xfs_get_blocks_direct,
NULL, NULL, 0);
}
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 6ece092..82dd1e9 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -386,8 +386,8 @@ struct address_space_operations {
void (*invalidatepage) (struct page *, unsigned long);
int (*releasepage) (struct page *, gfp_t);
void (*freepage)(struct page *);
- ssize_t (*direct_IO)(int, struct kiocb *, const struct iovec *iov,
- loff_t offset, unsigned long nr_segs);
+ ssize_t (*direct_IO)(int, struct kiocb *, struct iov_iter *iter,
+ loff_t offset);
int (*get_xip_mem)(struct address_space *, pgoff_t, int,
void **, unsigned long *);
/*
@@ -2399,16 +2399,16 @@ enum {
void dio_end_io(struct bio *bio, int error);
ssize_t __blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
- struct block_device *bdev, const struct iovec *iov, loff_t offset,
- unsigned long nr_segs, get_block_t get_block, dio_iodone_t end_io,
- dio_submit_t submit_io, int flags);
+ struct block_device *bdev, struct iov_iter *iter, loff_t offset,
+ get_block_t get_block, dio_iodone_t end_io, dio_submit_t submit_io,
+ int flags);
static inline ssize_t blockdev_direct_IO(int rw, struct kiocb *iocb,
- struct inode *inode, const struct iovec *iov, loff_t offset,
- unsigned long nr_segs, get_block_t get_block)
+ struct inode *inode, struct iov_iter *iter, loff_t offset,
+ get_block_t get_block)
{
- return __blockdev_direct_IO(rw, iocb, inode, inode->i_sb->s_bdev, iov,
- offset, nr_segs, get_block, NULL, NULL,
+ return __blockdev_direct_IO(rw, iocb, inode, inode->i_sb->s_bdev, iter,
+ offset, get_block, NULL, NULL,
DIO_LOCKING | DIO_SKIP_HOLES);
}
#endif
diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
index 1cc2568..4913e3c 100644
--- a/include/linux/nfs_fs.h
+++ b/include/linux/nfs_fs.h
@@ -444,8 +444,7 @@ extern int nfs3_removexattr (struct dentry *, const char *name);
/*
* linux/fs/nfs/direct.c
*/
-extern ssize_t nfs_direct_IO(int, struct kiocb *, const struct iovec *, loff_t,
- unsigned long);
+extern ssize_t nfs_direct_IO(int, struct kiocb *, struct iov_iter *, loff_t);
extern ssize_t nfs_file_direct_read(struct kiocb *iocb,
const struct iovec *iov, unsigned long nr_segs,
loff_t pos, bool uio);
diff --git a/mm/filemap.c b/mm/filemap.c
index 753ec48..d428020 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1409,11 +1409,15 @@ generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
goto out; /* skip atime */
size = i_size_read(inode);
if (pos < size) {
+ size_t bytes = iov_length(iov, nr_segs);
retval = filemap_write_and_wait_range(mapping, pos,
- pos + iov_length(iov, nr_segs) - 1);
+ pos + bytes - 1);
if (!retval) {
+ struct iov_iter iter;
+
+ iov_iter_init(&iter, iov, nr_segs, bytes, 0);
retval = mapping->a_ops->direct_IO(READ, iocb,
- iov, pos, nr_segs);
+ &iter, pos);
}
if (retval > 0) {
*ppos = pos + retval;
@@ -2037,6 +2041,7 @@ generic_file_direct_write(struct kiocb *iocb, const struct iovec *iov,
ssize_t written;
size_t write_len;
pgoff_t end;
+ struct iov_iter iter;
if (count != ocount)
*nr_segs = iov_shorten((struct iovec *)iov, *nr_segs, count);
@@ -2068,7 +2073,9 @@ generic_file_direct_write(struct kiocb *iocb, const struct iovec *iov,
}
}
- written = mapping->a_ops->direct_IO(WRITE, iocb, iov, pos, *nr_segs);
+ iov_iter_init(&iter, iov, *nr_segs, write_len, 0);
+
+ written = mapping->a_ops->direct_IO(WRITE, iocb, &iter, pos);
/*
* Finally, try again to invalidate clean pages which might have been
diff --git a/mm/page_io.c b/mm/page_io.c
index 78eee32..33da274 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -208,6 +208,9 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
.iov_base = kmap(page),
.iov_len = PAGE_SIZE,
};
+ struct iov_iter iter;
+
+ iov_iter_init(&iter, &iov, 1, PAGE_SIZE, 0);
init_sync_kiocb(&kiocb, swap_file);
kiocb.ki_pos = page_file_offset(page);
@@ -215,9 +218,8 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
kiocb.ki_nbytes = PAGE_SIZE;
unlock_page(page);
- ret = mapping->a_ops->direct_IO(KERNEL_WRITE,
- &kiocb, &iov,
- kiocb.ki_pos, 1);
+ ret = mapping->a_ops->direct_IO(KERNEL_WRITE, &kiocb, &iter,
+ kiocb.ki_pos);
kunmap(page);
if (ret == PAGE_SIZE) {
count_vm_event(PSWPOUT);
--
1.7.12.3
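The mechanical change repeated across every filesystem above is replacing iov_length(iov, nr_segs) with iov_iter_count(iter): callers compute the total once at iov_iter_init() time and the iterator hands it back in O(1). A rough user-space sketch of that idea (a simplified struct iov_iter, not the kernel's full definition, and ignoring the `written` argument):

```c
#include <assert.h>
#include <stddef.h>
#include <sys/uio.h>

/* Simplified stand-in for the kernel's iov_iter (illustration only;
 * the real structure also tracks segment consumption and, in this
 * series, grows bio_vec support behind ops function pointers). */
struct iov_iter {
	const struct iovec *iov;	/* segment array */
	unsigned long nr_segs;		/* number of segments */
	size_t iov_offset;		/* offset into the first segment */
	size_t count;			/* total bytes remaining */
};

/* Matches the call sites above, e.g.
 * iov_iter_init(&iter, iov, nr_segs, bytes, 0);
 * handling of a nonzero 'written' is omitted in this sketch. */
static void iov_iter_init(struct iov_iter *i, const struct iovec *iov,
			  unsigned long nr_segs, size_t count, size_t written)
{
	(void)written;			/* assumed 0 here */
	i->iov = iov;
	i->nr_segs = nr_segs;
	i->iov_offset = 0;
	i->count = count;
}

/* The O(1) replacement for iov_length(iov, nr_segs). */
static size_t iov_iter_count(const struct iov_iter *i)
{
	return i->count;
}
```

This is why hunks like the ext3 one can also drop the repeated iov_length() calls in the tracepoints: the byte count is read once from the iterator and reused.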
> -----Original Message-----
> From: Dave Kleikamp [mailto:[email protected]]
> Sent: Monday, October 22, 2012 11:15 AM
> To: [email protected]
> Cc: [email protected]; Zach Brown; Maxim V. Patlasov; Dave
> Kleikamp; Myklebust, Trond; [email protected]
> Subject: [PATCH 20/22] nfs: add support for read_iter, write_iter
>
> This patch implements the read_iter and write_iter file operations which
> allow kernel code to initiate directIO. This allows the loop device to read and
> write directly to the server, bypassing the page cache.
>
> Signed-off-by: Dave Kleikamp <[email protected]>
> Cc: Zach Brown <[email protected]>
> Cc: Trond Myklebust <[email protected]>
> Cc: [email protected]
> ---
> fs/nfs/direct.c | 169 +++++++++++++++++++++++++++++++++----------------
> fs/nfs/file.c | 48 ++++++++++----
> fs/nfs/internal.h | 2 +
> fs/nfs/nfs4file.c | 2 +
> include/linux/nfs_fs.h | 6 +-
> 5 files changed, 155 insertions(+), 72 deletions(-)
>
> diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
> index 4532781..b1fda1c 100644
> --- a/fs/nfs/direct.c
> +++ b/fs/nfs/direct.c
> @@ -90,6 +90,7 @@ struct nfs_direct_req {
> int flags;
> #define NFS_ODIRECT_DO_COMMIT		(1)	/* an unstable reply was received */
> #define NFS_ODIRECT_RESCHED_WRITES	(2)	/* write verification failed */
> +#define NFS_ODIRECT_MARK_DIRTY		(4)	/* mark read pages dirty */
> struct nfs_writeverf verf; /* unstable write verifier */
> };
>
> @@ -131,15 +132,13 @@ ssize_t nfs_direct_IO(int rw, struct kiocb *iocb, struct iov_iter *iter,
>
> return -EINVAL;
> #else
> - const struct iovec *iov = iov_iter_iovec(iter);
> -
> VM_BUG_ON(iocb->ki_left != PAGE_SIZE);
> VM_BUG_ON(iocb->ki_nbytes != PAGE_SIZE);
>
> if (rw == READ || rw == KERNEL_READ)
> - return nfs_file_direct_read(iocb, iov, iter->nr_segs, pos,
> + return nfs_file_direct_read(iocb, iter, pos,
> rw == READ ? true : false);
> - return nfs_file_direct_write(iocb, iov, iter->nr_segs, pos,
> + return nfs_file_direct_write(iocb, iter, pos,
> rw == WRITE ? true : false);
> #endif /* CONFIG_NFS_SWAP */
> }
> @@ -277,7 +276,8 @@ static void nfs_direct_read_completion(struct nfs_pgio_header *hdr)
> hdr->good_bytes & ~PAGE_MASK,
> PAGE_SIZE);
> }
> - if (!PageCompound(page)) {
> + if ((dreq->flags & NFS_ODIRECT_MARK_DIRTY) &&
> + !PageCompound(page)) {
> if (test_bit(NFS_IOHDR_ERROR, &hdr->flags)) {
> if (bytes < hdr->good_bytes)
> set_page_dirty(page);
> @@ -414,10 +414,9 @@ static ssize_t nfs_direct_read_schedule_segment(struct nfs_pageio_descriptor *de
> 	return result < 0 ? (ssize_t) result : -EFAULT;
> }
>
> -static ssize_t nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
> - const struct iovec *iov,
> - unsigned long nr_segs,
> - loff_t pos, bool uio)
> +static ssize_t nfs_direct_read_schedule(struct nfs_direct_req *dreq,
> + struct iov_iter *iter, loff_t pos,
> + bool uio)
> {
> struct nfs_pageio_descriptor desc;
> ssize_t result = -EINVAL;
> @@ -429,16 +428,47 @@ static ssize_t nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
> get_dreq(dreq);
> desc.pg_dreq = dreq;
>
> - for (seg = 0; seg < nr_segs; seg++) {
> - const struct iovec *vec = &iov[seg];
> - result = nfs_direct_read_schedule_segment(&desc, vec, pos, uio);
> - if (result < 0)
> - break;
> - requested_bytes += result;
> - if ((size_t)result < vec->iov_len)
> - break;
> - pos += vec->iov_len;
> - }
> + if (iov_iter_has_iovec(iter)) {
> + const struct iovec *iov = iov_iter_iovec(iter);
> + if (uio)
> + dreq->flags = NFS_ODIRECT_MARK_DIRTY;
> + for (seg = 0; seg < iter->nr_segs; seg++) {
> + const struct iovec *vec = &iov[seg];
> + result = nfs_direct_read_schedule_segment(&desc, vec,
> + pos, uio);
> + if (result < 0)
> + break;
> + requested_bytes += result;
> + if ((size_t)result < vec->iov_len)
> + break;
> + pos += vec->iov_len;
> + }
> + } else if (iov_iter_has_bvec(iter)) {
> + struct nfs_open_context *ctx = dreq->ctx;
> + struct inode *inode = ctx->dentry->d_inode;
> + struct bio_vec *bvec = iov_iter_bvec(iter);
> + for (seg = 0; seg < iter->nr_segs; seg++) {
> + struct nfs_page *req;
> + unsigned int req_len = bvec[seg].bv_len;
> +			req = nfs_create_request(ctx, inode,
> +						 bvec[seg].bv_page,
> +						 bvec[seg].bv_offset,
> +						 req_len);
> + if (IS_ERR(req)) {
> + result = PTR_ERR(req);
> + break;
> + }
> + req->wb_index = pos >> PAGE_SHIFT;
> + req->wb_offset = pos & ~PAGE_MASK;
> + if (!nfs_pageio_add_request(&desc, req)) {
> + result = desc.pg_error;
> + nfs_release_request(req);
> + break;
> + }
> + requested_bytes += req_len;
> + pos += req_len;
> + }
> + } else
> + BUG();
Can we please split the contents of these 2 if statements into 2 helper functions nfs_direct_do_schedule_read_iovec() and nfs_direct_do_schedule_read_bvec()?
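For illustration, the iovec branch might be pulled out roughly like this (a sketch only: the helper name follows the suggestion above, the body is the existing branch unchanged, and nfs_direct_do_schedule_read_bvec() would be the analogous extraction of the bio_vec branch; the completion accounting stays in the caller):

```c
static ssize_t
nfs_direct_do_schedule_read_iovec(struct nfs_pageio_descriptor *desc,
				  const struct iovec *iov,
				  unsigned long nr_segs,
				  loff_t pos, bool uio)
{
	ssize_t requested_bytes = 0;
	ssize_t result = -EINVAL;
	unsigned long seg;

	for (seg = 0; seg < nr_segs; seg++) {
		const struct iovec *vec = &iov[seg];

		result = nfs_direct_read_schedule_segment(desc, vec,
							  pos, uio);
		if (result < 0)
			break;
		requested_bytes += result;
		if ((size_t)result < vec->iov_len)
			break;
		pos += vec->iov_len;
	}
	/* Report progress if any IO was scheduled, else the error. */
	return requested_bytes ? requested_bytes : result;
}
```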
>
> nfs_pageio_complete(&desc);
>
> @@ -456,8 +486,8 @@ static ssize_t nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
> return 0;
> }
>
> -static ssize_t nfs_direct_read(struct kiocb *iocb, const struct iovec *iov,
> - unsigned long nr_segs, loff_t pos, bool uio)
> +static ssize_t nfs_direct_read(struct kiocb *iocb, struct iov_iter *iter,
> + loff_t pos, bool uio)
> {
> ssize_t result = -ENOMEM;
>  	struct inode *inode = iocb->ki_filp->f_mapping->host;
> @@ -469,7 +499,7 @@ static ssize_t nfs_direct_read(struct kiocb *iocb, const struct iovec *iov,
> goto out;
>
> dreq->inode = inode;
> - dreq->bytes_left = iov_length(iov, nr_segs);
> + dreq->bytes_left = iov_iter_count(iter);
>  	dreq->ctx = get_nfs_open_context(nfs_file_open_context(iocb->ki_filp));
> l_ctx = nfs_get_lock_context(dreq->ctx);
> if (IS_ERR(l_ctx)) {
> @@ -480,8 +510,8 @@ static ssize_t nfs_direct_read(struct kiocb *iocb, const struct iovec *iov,
> if (!is_sync_kiocb(iocb))
> dreq->iocb = iocb;
>
> - NFS_I(inode)->read_io += iov_length(iov, nr_segs);
> -	result = nfs_direct_read_schedule_iovec(dreq, iov, nr_segs, pos,
> -						uio);
> + NFS_I(inode)->read_io += iov_iter_count(iter);
> + result = nfs_direct_read_schedule(dreq, iter, pos, uio);
> if (!result)
> result = nfs_direct_wait(dreq);
> out_release:
> @@ -815,10 +845,9 @@ static const struct nfs_pgio_completion_ops nfs_direct_write_completion_ops = {
>  	.completion = nfs_direct_write_completion,
>  };
>
> -static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
> - const struct iovec *iov,
> - unsigned long nr_segs,
> - loff_t pos, bool uio)
> +static ssize_t nfs_direct_write_schedule(struct nfs_direct_req *dreq,
> + struct iov_iter *iter, loff_t pos,
> + bool uio)
> {
> struct nfs_pageio_descriptor desc;
> struct inode *inode = dreq->inode;
> @@ -832,17 +861,48 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
> get_dreq(dreq);
> atomic_inc(&inode->i_dio_count);
>
> - NFS_I(dreq->inode)->write_io += iov_length(iov, nr_segs);
> - for (seg = 0; seg < nr_segs; seg++) {
> - const struct iovec *vec = &iov[seg];
> -		result = nfs_direct_write_schedule_segment(&desc, vec,
> -							   pos, uio);
> - if (result < 0)
> - break;
> - requested_bytes += result;
> - if ((size_t)result < vec->iov_len)
> - break;
> - pos += vec->iov_len;
> - }
> + NFS_I(dreq->inode)->write_io += iov_iter_count(iter);
> +
> + if (iov_iter_has_iovec(iter)) {
> + const struct iovec *iov = iov_iter_iovec(iter);
> + for (seg = 0; seg < iter->nr_segs; seg++) {
> + const struct iovec *vec = &iov[seg];
> +			result = nfs_direct_write_schedule_segment(&desc, vec,
> +								   pos, uio);
> + if (result < 0)
> + break;
> + requested_bytes += result;
> + if ((size_t)result < vec->iov_len)
> + break;
> + pos += vec->iov_len;
> + }
> + } else if (iov_iter_has_bvec(iter)) {
> + struct nfs_open_context *ctx = dreq->ctx;
> + struct bio_vec *bvec = iov_iter_bvec(iter);
> + for (seg = 0; seg < iter->nr_segs; seg++) {
> + struct nfs_page *req;
> + unsigned int req_len = bvec[seg].bv_len;
> +
> +			req = nfs_create_request(ctx, inode, bvec[seg].bv_page,
> +						 bvec[seg].bv_offset,
> +						 req_len);
> + if (IS_ERR(req)) {
> + result = PTR_ERR(req);
> + break;
> + }
> + nfs_lock_request(req);
> + req->wb_index = pos >> PAGE_SHIFT;
> + req->wb_offset = pos & ~PAGE_MASK;
> + if (!nfs_pageio_add_request(&desc, req)) {
> + result = desc.pg_error;
> + nfs_unlock_and_release_request(req);
> + break;
> + }
> + requested_bytes += req_len;
> + pos += req_len;
> + }
> + } else
> + BUG();
Ditto...
> +
> nfs_pageio_complete(&desc);
>
> /*
> @@ -860,9 +920,8 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
> return 0;
> }
>
> -static ssize_t nfs_direct_write(struct kiocb *iocb, const struct iovec *iov,
> - unsigned long nr_segs, loff_t pos,
> - size_t count, bool uio)
> +static ssize_t nfs_direct_write(struct kiocb *iocb, struct iov_iter *iter,
> + loff_t pos, bool uio)
> {
> ssize_t result = -ENOMEM;
>  	struct inode *inode = iocb->ki_filp->f_mapping->host;
> @@ -874,7 +933,7 @@ static ssize_t nfs_direct_write(struct kiocb *iocb, const struct iovec *iov,
> goto out;
>
> dreq->inode = inode;
> - dreq->bytes_left = count;
> + dreq->bytes_left = iov_iter_count(iter);
>  	dreq->ctx = get_nfs_open_context(nfs_file_open_context(iocb->ki_filp));
> l_ctx = nfs_get_lock_context(dreq->ctx);
> if (IS_ERR(l_ctx)) {
> @@ -885,7 +944,7 @@ static ssize_t nfs_direct_write(struct kiocb *iocb,
> const struct iovec *iov,
> if (!is_sync_kiocb(iocb))
> dreq->iocb = iocb;
>
> -	result = nfs_direct_write_schedule_iovec(dreq, iov, nr_segs, pos,
> -						 uio);
> + result = nfs_direct_write_schedule(dreq, iter, pos, uio);
> if (!result)
> result = nfs_direct_wait(dreq);
> out_release:
> @@ -897,8 +956,7 @@ out:
> /**
> * nfs_file_direct_read - file direct read operation for NFS files
> * @iocb: target I/O control block
> - * @iov: vector of user buffers into which to read data
> - * @nr_segs: size of iov vector
> + * @iter: vector of buffers into which to read data
> * @pos: byte offset in file where reading starts
> *
>  * We use this function for direct reads instead of calling
> @@ -915,15 +973,15 @@ out:
> * client must read the updated atime from the server back into its
> * cache.
> */
> -ssize_t nfs_file_direct_read(struct kiocb *iocb, const struct iovec *iov,
> - unsigned long nr_segs, loff_t pos, bool uio)
> +ssize_t nfs_file_direct_read(struct kiocb *iocb, struct iov_iter *iter,
> + loff_t pos, bool uio)
> {
> ssize_t retval = -EINVAL;
> struct file *file = iocb->ki_filp;
> struct address_space *mapping = file->f_mapping;
> size_t count;
>
> - count = iov_length(iov, nr_segs);
> + count = iov_iter_count(iter);
> nfs_add_stats(mapping->host, NFSIOS_DIRECTREADBYTES, count);
>
>  	dfprintk(FILE, "NFS: direct read(%s/%s, %zd@%Ld)\n",
> @@ -941,7 +999,7 @@ ssize_t nfs_file_direct_read(struct kiocb *iocb, const struct iovec *iov,
>
> task_io_account_read(count);
>
> - retval = nfs_direct_read(iocb, iov, nr_segs, pos, uio);
> + retval = nfs_direct_read(iocb, iter, pos, uio);
> if (retval > 0)
> iocb->ki_pos = pos + retval;
>
> @@ -952,8 +1010,7 @@ out:
> /**
> * nfs_file_direct_write - file direct write operation for NFS files
> * @iocb: target I/O control block
> - * @iov: vector of user buffers from which to write data
> - * @nr_segs: size of iov vector
> + * @iter: vector of buffers from which to write data
> * @pos: byte offset in file where writing starts
> *
>  * We use this function for direct writes instead of calling
> @@ -971,15 +1028,15 @@ out:
> * Note that O_APPEND is not supported for NFS direct writes, as there
> * is no atomic O_APPEND write facility in the NFS protocol.
> */
> -ssize_t nfs_file_direct_write(struct kiocb *iocb, const struct iovec *iov,
> - unsigned long nr_segs, loff_t pos, bool uio)
> +ssize_t nfs_file_direct_write(struct kiocb *iocb, struct iov_iter *iter,
> + loff_t pos, bool uio)
> {
> ssize_t retval = -EINVAL;
> struct file *file = iocb->ki_filp;
> struct address_space *mapping = file->f_mapping;
> size_t count;
>
> - count = iov_length(iov, nr_segs);
> + count = iov_iter_count(iter);
>  	nfs_add_stats(mapping->host, NFSIOS_DIRECTWRITTENBYTES, count);
>
>  	dfprintk(FILE, "NFS: direct write(%s/%s, %zd@%Ld)\n",
> @@ -1004,7 +1061,7 @@ ssize_t nfs_file_direct_write(struct kiocb *iocb, const struct iovec *iov,
>
> task_io_account_write(count);
>
> - retval = nfs_direct_write(iocb, iov, nr_segs, pos, count, uio);
> + retval = nfs_direct_write(iocb, iter, pos, uio);
> if (retval > 0) {
> struct inode *inode = mapping->host;
>
> diff --git a/fs/nfs/file.c b/fs/nfs/file.c
> index 582bb88..b4bf6ef 100644
> --- a/fs/nfs/file.c
> +++ b/fs/nfs/file.c
> @@ -172,28 +172,39 @@ nfs_file_flush(struct file *file, fl_owner_t id)
> EXPORT_SYMBOL_GPL(nfs_file_flush);
>
> ssize_t
> -nfs_file_read(struct kiocb *iocb, const struct iovec *iov,
> - unsigned long nr_segs, loff_t pos)
> +nfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter, loff_t pos)
> {
> struct dentry * dentry = iocb->ki_filp->f_path.dentry;
> struct inode * inode = dentry->d_inode;
> ssize_t result;
>
> if (iocb->ki_filp->f_flags & O_DIRECT)
> - return nfs_file_direct_read(iocb, iov, nr_segs, pos, true);
> + return nfs_file_direct_read(iocb, iter, pos, true);
>
> - dprintk("NFS: read(%s/%s, %lu@%lu)\n",
> + dprintk("NFS: read_iter(%s/%s, %lu@%lu)\n",
> dentry->d_parent->d_name.name, dentry->d_name.name,
> -		(unsigned long) iov_length(iov, nr_segs), (unsigned long) pos);
> + (unsigned long) iov_iter_count(iter), (unsigned long) pos);
>
> result = nfs_revalidate_mapping(inode, iocb->ki_filp->f_mapping);
> if (!result) {
> - result = generic_file_aio_read(iocb, iov, nr_segs, pos);
> + result = generic_file_read_iter(iocb, iter, pos);
> if (result > 0)
>  			nfs_add_stats(inode, NFSIOS_NORMALREADBYTES, result);
> }
> return result;
> }
> +EXPORT_SYMBOL_GPL(nfs_file_read_iter);
> +
> +ssize_t
> +nfs_file_read(struct kiocb *iocb, const struct iovec *iov,
> + unsigned long nr_segs, loff_t pos)
> +{
> + struct iov_iter iter;
> +
> + iov_iter_init(&iter, iov, nr_segs, iov_length(iov, nr_segs), 0);
> +
> +	return nfs_file_read_iter(iocb, &iter, pos);
> +}
> EXPORT_SYMBOL_GPL(nfs_file_read);
>
> ssize_t
> @@ -610,19 +621,19 @@ static int nfs_need_sync_write(struct file *filp, struct inode *inode)
> return 0;
> }
>
> -ssize_t nfs_file_write(struct kiocb *iocb, const struct iovec *iov,
> - unsigned long nr_segs, loff_t pos)
> +ssize_t nfs_file_write_iter(struct kiocb *iocb, struct iov_iter *iter,
> + loff_t pos)
> {
> struct dentry * dentry = iocb->ki_filp->f_path.dentry;
> struct inode * inode = dentry->d_inode;
> unsigned long written = 0;
> ssize_t result;
> - size_t count = iov_length(iov, nr_segs);
> + size_t count = iov_iter_count(iter);
>
> if (iocb->ki_filp->f_flags & O_DIRECT)
> - return nfs_file_direct_write(iocb, iov, nr_segs, pos, true);
> + return nfs_file_direct_write(iocb, iter, pos, true);
>
> - dprintk("NFS: write(%s/%s, %lu@%Ld)\n",
> + dprintk("NFS: write_iter(%s/%s, %lu@%lld)\n",
> dentry->d_parent->d_name.name, dentry->d_name.name,
> (unsigned long) count, (long long) pos);
>
> @@ -642,7 +653,7 @@ ssize_t nfs_file_write(struct kiocb *iocb, const struct iovec *iov,
> if (!count)
> goto out;
>
> - result = generic_file_aio_write(iocb, iov, nr_segs, pos);
> + result = generic_file_write_iter(iocb, iter, pos);
> if (result > 0)
> written = result;
>
> @@ -661,6 +672,17 @@ out_swapfile:
> printk(KERN_INFO "NFS: attempt to write to active swap file!\n");
> goto out;
> }
> +EXPORT_SYMBOL_GPL(nfs_file_write_iter);
> +
> +ssize_t nfs_file_write(struct kiocb *iocb, const struct iovec *iov,
> +		       unsigned long nr_segs, loff_t pos)
> +{
> + struct iov_iter iter;
> +
> + iov_iter_init(&iter, iov, nr_segs, iov_length(iov, nr_segs), 0);
> +
> +	return nfs_file_write_iter(iocb, &iter, pos);
> +}
> EXPORT_SYMBOL_GPL(nfs_file_write);
>
>  ssize_t nfs_file_splice_write(struct pipe_inode_info *pipe,
> @@ -914,6 +936,8 @@ const struct file_operations nfs_file_operations = {
> .write = do_sync_write,
> .aio_read = nfs_file_read,
> .aio_write = nfs_file_write,
> + .read_iter = nfs_file_read_iter,
> + .write_iter = nfs_file_write_iter,
> .mmap = nfs_file_mmap,
> .open = nfs_file_open,
> .flush = nfs_file_flush,
> diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
> index 59b133c..8db3b11 100644
> --- a/fs/nfs/internal.h
> +++ b/fs/nfs/internal.h
> @@ -302,10 +302,12 @@ int nfs_file_fsync_commit(struct file *, loff_t, loff_t, int);
>  loff_t nfs_file_llseek(struct file *, loff_t, int);
>  int nfs_file_flush(struct file *, fl_owner_t);
>  ssize_t nfs_file_read(struct kiocb *, const struct iovec *, unsigned long, loff_t);
> +ssize_t nfs_file_read_iter(struct kiocb *, struct iov_iter *, loff_t);
>  ssize_t nfs_file_splice_read(struct file *, loff_t *, struct pipe_inode_info *,
>  			     size_t, unsigned int);
>  int nfs_file_mmap(struct file *, struct vm_area_struct *);
>  ssize_t nfs_file_write(struct kiocb *, const struct iovec *, unsigned long, loff_t);
> +ssize_t nfs_file_write_iter(struct kiocb *, struct iov_iter *, loff_t);
>  int nfs_file_release(struct inode *, struct file *);
>  int nfs_lock(struct file *, int, struct file_lock *);
>  int nfs_flock(struct file *, int, struct file_lock *);
> diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
> index afddd66..195188e 100644
> --- a/fs/nfs/nfs4file.c
> +++ b/fs/nfs/nfs4file.c
> @@ -123,6 +123,8 @@ const struct file_operations nfs4_file_operations = {
> .write = do_sync_write,
> .aio_read = nfs_file_read,
> .aio_write = nfs_file_write,
> + .read_iter = nfs_file_read_iter,
> + .write_iter = nfs_file_write_iter,
> .mmap = nfs_file_mmap,
> .open = nfs4_file_open,
> .flush = nfs_file_flush,
> diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
> index 4913e3c..9f8e8a9 100644
> --- a/include/linux/nfs_fs.h
> +++ b/include/linux/nfs_fs.h
> @@ -445,11 +445,9 @@ extern int nfs3_removexattr (struct dentry *, const char *name);
> * linux/fs/nfs/direct.c
> */
>  extern ssize_t nfs_direct_IO(int, struct kiocb *, struct iov_iter *, loff_t);
> -extern ssize_t nfs_file_direct_read(struct kiocb *iocb,
> -			const struct iovec *iov, unsigned long nr_segs,
> +extern ssize_t nfs_file_direct_read(struct kiocb *iocb, struct iov_iter *iter,
>  			loff_t pos, bool uio);
> -extern ssize_t nfs_file_direct_write(struct kiocb *iocb,
> -			const struct iovec *iov, unsigned long nr_segs,
> +extern ssize_t nfs_file_direct_write(struct kiocb *iocb, struct iov_iter *iter,
>  			loff_t pos, bool uio);
>
> /*
Otherwise, everything looks fine to me...
Acked-by: Trond Myklebust <[email protected]>
Cheers
Trond
From: Zach Brown <[email protected]>
This adds iocb cmds which specify that memory is held in iov_iter
structures. This lets kernel callers specify memory that can be
expressed in an iov_iter, which includes pages in bio_vec arrays.
Only kernel callers can provide an iov_iter so it doesn't make a lot of
sense to expose the IOCB_CMD values for this as part of the user space
ABI.
But kernel callers should also be able to perform the usual aio
operations, which suggests using the existing operation namespace and
support code.
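To make the intended calling convention concrete, an in-kernel user (loop, in a later patch in this series) would set up and submit one of these iocbs roughly as follows. This is a sketch only: aio_kernel_init_iter() and aio_kernel_init_callback() are as added by this patch, while aio_kernel_alloc() and aio_kernel_submit() come from earlier in the series and their exact signatures are assumed here.

```c
/* Sketch: submit an async in-kernel read described by an iov_iter.
 * aio_kernel_submit()'s signature is assumed from elsewhere in the series. */
static void my_io_done(u64 user_data, long res)
{
	/* res is bytes transferred, or -errno on failure */
}

static int submit_kernel_read(struct file *filp, struct iov_iter *iter,
			      loff_t pos)
{
	struct kiocb *iocb = aio_kernel_alloc(GFP_NOIO);

	if (!iocb)
		return -ENOMEM;

	/* The iter's count must already be set; it becomes ki_nbytes/ki_left. */
	aio_kernel_init_iter(iocb, filp, IOCB_CMD_READ_ITER, iter, pos);
	aio_kernel_init_callback(iocb, my_io_done, (u64)(unsigned long)iocb);

	return aio_kernel_submit(iocb);
}
```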
Signed-off-by: Dave Kleikamp <[email protected]>
Cc: Zach Brown <[email protected]>
---
fs/aio.c | 64 ++++++++++++++++++++++++++++++++++++++++++++
include/linux/aio.h | 3 +++
include/uapi/linux/aio_abi.h | 2 ++
3 files changed, 69 insertions(+)
diff --git a/fs/aio.c b/fs/aio.c
index 9a1a6fc..2c03681 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -1424,6 +1424,26 @@ static ssize_t aio_setup_single_vector(int type, struct file * file, struct kioc
return 0;
}
+static ssize_t aio_read_iter(struct kiocb *iocb)
+{
+ struct file *file = iocb->ki_filp;
+ ssize_t ret = -EINVAL;
+
+ if (file->f_op->read_iter)
+ ret = file->f_op->read_iter(iocb, iocb->ki_iter, iocb->ki_pos);
+ return ret;
+}
+
+static ssize_t aio_write_iter(struct kiocb *iocb)
+{
+ struct file *file = iocb->ki_filp;
+ ssize_t ret = -EINVAL;
+
+ if (file->f_op->write_iter)
+ ret = file->f_op->write_iter(iocb, iocb->ki_iter, iocb->ki_pos);
+ return ret;
+}
+
/*
* aio_setup_iocb:
* Performs the initial checks and aio retry method
@@ -1487,6 +1507,34 @@ static ssize_t aio_setup_iocb(struct kiocb *kiocb, bool compat)
if (file->f_op->aio_write)
kiocb->ki_retry = aio_rw_vect_retry;
break;
+ case IOCB_CMD_READ_ITER:
+ ret = -EINVAL;
+ if (unlikely(!is_kernel_kiocb(kiocb)))
+ break;
+ ret = -EBADF;
+ if (unlikely(!(file->f_mode & FMODE_READ)))
+ break;
+ ret = security_file_permission(file, MAY_READ);
+ if (unlikely(ret))
+ break;
+ ret = -EINVAL;
+ if (file->f_op->read_iter)
+ kiocb->ki_retry = aio_read_iter;
+ break;
+ case IOCB_CMD_WRITE_ITER:
+ ret = -EINVAL;
+ if (unlikely(!is_kernel_kiocb(kiocb)))
+ break;
+ ret = -EBADF;
+ if (unlikely(!(file->f_mode & FMODE_WRITE)))
+ break;
+ ret = security_file_permission(file, MAY_WRITE);
+ if (unlikely(ret))
+ break;
+ ret = -EINVAL;
+ if (file->f_op->write_iter)
+ kiocb->ki_retry = aio_write_iter;
+ break;
case IOCB_CMD_FDSYNC:
ret = -EINVAL;
if (file->f_op->aio_fsync)
@@ -1553,6 +1601,22 @@ void aio_kernel_init_rw(struct kiocb *iocb, struct file *filp,
}
EXPORT_SYMBOL_GPL(aio_kernel_init_rw);
+/*
+ * The iter count must be set before calling here. Some filesystems use
+ * iocb->ki_left as an indicator of the size of an IO.
+ */
+void aio_kernel_init_iter(struct kiocb *iocb, struct file *filp,
+ unsigned short op, struct iov_iter *iter, loff_t off)
+{
+ iocb->ki_filp = filp;
+ iocb->ki_iter = iter;
+ iocb->ki_opcode = op;
+ iocb->ki_pos = off;
+ iocb->ki_nbytes = iov_iter_count(iter);
+ iocb->ki_left = iocb->ki_nbytes;
+}
+EXPORT_SYMBOL_GPL(aio_kernel_init_iter);
+
void aio_kernel_init_callback(struct kiocb *iocb,
void (*complete)(u64 user_data, long res),
u64 user_data)
diff --git a/include/linux/aio.h b/include/linux/aio.h
index f9e0292..2598e6c 100644
--- a/include/linux/aio.h
+++ b/include/linux/aio.h
@@ -126,6 +126,7 @@ struct kiocb {
* this is the underlying eventfd context to deliver events to.
*/
struct eventfd_ctx *ki_eventfd;
+ struct iov_iter *ki_iter;
};
static inline bool is_sync_kiocb(struct kiocb *kiocb)
@@ -229,6 +230,8 @@ struct kiocb *aio_kernel_alloc(gfp_t gfp);
void aio_kernel_free(struct kiocb *iocb);
void aio_kernel_init_rw(struct kiocb *iocb, struct file *filp,
unsigned short op, void *ptr, size_t nr, loff_t off);
+void aio_kernel_init_iter(struct kiocb *iocb, struct file *filp,
+ unsigned short op, struct iov_iter *iter, loff_t off);
void aio_kernel_init_callback(struct kiocb *iocb,
void (*complete)(u64 user_data, long res),
u64 user_data);
diff --git a/include/uapi/linux/aio_abi.h b/include/uapi/linux/aio_abi.h
index 86fa7a7..bd39bb2 100644
--- a/include/uapi/linux/aio_abi.h
+++ b/include/uapi/linux/aio_abi.h
@@ -44,6 +44,8 @@ enum {
IOCB_CMD_NOOP = 6,
IOCB_CMD_PREADV = 7,
IOCB_CMD_PWRITEV = 8,
+ IOCB_CMD_READ_ITER = 9,
+ IOCB_CMD_WRITE_ITER = 10,
};
/*
--
1.7.12.3
On Mon, Oct 22, 2012 at 10:15:00AM -0500, Dave Kleikamp wrote:
> This is the current version of the patchset I presented at the LSF-MM
> Summit in San Francisco in April. I apologize for letting it go so
> long before re-submitting.
>
> This patchset was begun by Zach Brown and was originally submitted for
> review in October, 2009. Feedback was positive, and I have picked up
> where he left off, porting his patches to the latest mainline kernel
> and adding support more file systems.
>
> This patch series adds a kernel interface to fs/aio.c so that kernel code can
> issue concurrent asynchronous IO to file systems. It adds an aio command and
> file system methods which specify io memory with pages instead of userspace
> addresses.
>
> This series was written to reduce the current overhead loop imposes by
> performing synchronus buffered file system IO from a kernel thread. These
> patches turn loop into a light weight layer that translates bios into iocbs.
I note that there is no support for XFS in this patch set. Is there
a particular problem that prevents XFS from being converted, or it
just hasn't been done?
Cheers,
Dave.
--
Dave Chinner
[email protected]
On 10/22/2012 07:07 PM, Dave Chinner wrote:
> On Mon, Oct 22, 2012 at 10:15:00AM -0500, Dave Kleikamp wrote:
>> This is the current version of the patchset I presented at the LSF-MM
>> Summit in San Francisco in April. I apologize for letting it go so
>> long before re-submitting.
>>
>> This patchset was begun by Zach Brown and was originally submitted for
>> review in October, 2009. Feedback was positive, and I have picked up
>> where he left off, porting his patches to the latest mainline kernel
>> and adding support more file systems.
>>
>> This patch series adds a kernel interface to fs/aio.c so that kernel code can
>> issue concurrent asynchronous IO to file systems. It adds an aio command and
>> file system methods which specify io memory with pages instead of userspace
>> addresses.
>>
>> This series was written to reduce the current overhead loop imposes by
>> performing synchronus buffered file system IO from a kernel thread. These
>> patches turn loop into a light weight layer that translates bios into iocbs.
>
> I note that there is no support for XFS in this patch set. Is there
> a particular problem that prevents XFS from being converted, or it
> just hasn't been done?
It just hasn't been done. It wasn't one of the trivial ones, so I put it
off at first, and after that it was simply an oversight. I'll follow up
with an xfs patch for your review.
Thanks,
Shaggy
>
> Cheers,
>
> Dave.
>
On Mon, Oct 22, 2012 at 07:53:40PM -0500, Dave Kleikamp wrote:
> On 10/22/2012 07:07 PM, Dave Chinner wrote:
> > On Mon, Oct 22, 2012 at 10:15:00AM -0500, Dave Kleikamp wrote:
> >> This is the current version of the patchset I presented at the LSF-MM
> >> Summit in San Francisco in April. I apologize for letting it go so
> >> long before re-submitting.
> >>
> >> This patchset was begun by Zach Brown and was originally submitted for
> >> review in October, 2009. Feedback was positive, and I have picked up
> >> where he left off, porting his patches to the latest mainline kernel
> >> and adding support more file systems.
> >>
> >> This patch series adds a kernel interface to fs/aio.c so that kernel code can
> >> issue concurrent asynchronous IO to file systems. It adds an aio command and
> >> file system methods which specify io memory with pages instead of userspace
> >> addresses.
> >>
> >> This series was written to reduce the current overhead loop imposes by
> >> performing synchronus buffered file system IO from a kernel thread. These
> >> patches turn loop into a light weight layer that translates bios into iocbs.
> >
> > I note that there is no support for XFS in this patch set. Is there
> > a particular problem that prevents XFS from being converted, or it
> > just hasn't been done?
>
> It just hasn't been done. It wasn't one of the trivial ones so I put it
> off at first, and after that, it's an oversight. I'll follow up with an
> xfs patch for your review.
Thanks Shaggy, that's all I wanted to know. No extra work for me
(apart from review and testing) is fine by me. ;)
Cheers,
Dave.
--
Dave Chinner
[email protected]
On Mon, Oct 22, 2012 at 10:15:00AM -0500, Dave Kleikamp wrote:
> This is the current version of the patchset I presented at the LSF-MM
> Summit in San Francisco in April. I apologize for letting it go so
> long before re-submitting.
>
> This patchset was begun by Zach Brown and was originally submitted for
> review in October, 2009. Feedback was positive, and I have picked up
> where he left off, porting his patches to the latest mainline kernel
> and adding support more file systems.
>
> This patch series adds a kernel interface to fs/aio.c so that kernel code can
> issue concurrent asynchronous IO to file systems. It adds an aio command and
> file system methods which specify io memory with pages instead of userspace
> addresses.
>
> This series was written to reduce the current overhead loop imposes by
> performing synchronus buffered file system IO from a kernel thread. These
> patches turn loop into a light weight layer that translates bios into iocbs.
>
> The downside of this is that in its current implementation, performance takes
> a big hit for non-synchonous I/O, since the underlying page cache is bypassed.
> The tradeoff is that all writes to the loop device make it to the underlying
> media, making loop-mounted file systems recoverable.
It also seems to still not fully kill the old aio_read/write codepath.
At least XFS isn't touched yet. It also doesn't seem to kill the nasty
hack for in-kernel direct I/O introduced with the swap-over-NFS code
(grep for REQ_KERNEL / KERNEL_READ / KERNEL_WRITE).