Hi Christian,
Can you take this through the filesystem tree?
These patches make some changes to the kunit tests previously added for
iov_iter testing, in particular adding testing of UBUF/IOVEC iterators and
some benchmarking:
(1) Clean up a couple of checkpatch style complaints.
(2) Consolidate some repeated bits of code into helper functions and use
the same struct to represent straight offset/address ranges and
partial page lists.
(3) Add a function to set up a userspace VM, attach the VM to the kunit
testing thread, create an anonymous file, stuff some pages into the
file and map the file into the VM to act as a buffer that can be used
with UBUF/IOVEC iterators.
I map an anonymous file with pages attached rather than using MAP_ANON
so that I can check the pages obtained from iov_iter_extract_pages()
without worrying about them changing due to swap, migrate, etc.
[?] Is this the best way to do things? Mirroring execve, it requires
a number of extra core symbols to be exported. Should this be done in
the core code?
(4) Add tests for copying into and out of UBUF and IOVEC iterators.
(5) Add tests for extracting pages from UBUF and IOVEC iterators.
(6) Add tests to benchmark copying 256MiB to UBUF, IOVEC, KVEC, BVEC and
XARRAY iterators.
(7) Add a test to benchmark copying 256MiB from an xarray that gets decanted
into 256-page BVEC iterators to model batching from the pagecache.
(8) Add a test to benchmark copying 256MiB through dynamically allocated
256-page bvecs to simulate bio construction.
Example benchmarks output:
iov_kunit_benchmark_ubuf: avg 4474 uS, stddev 1340 uS
iov_kunit_benchmark_iovec: avg 6619 uS, stddev 23 uS
iov_kunit_benchmark_kvec: avg 2672 uS, stddev 14 uS
iov_kunit_benchmark_bvec: avg 3189 uS, stddev 19 uS
iov_kunit_benchmark_bvec_split: avg 3403 uS, stddev 8 uS
iov_kunit_benchmark_xarray: avg 3709 uS, stddev 7 uS
I've pushed the patches here also:
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=iov-kunit
David
Changes
=======
ver #3)
- #include <linux/personality.h> to get READ_IMPLIES_EXEC.
- Add a test to benchmark decanting an xarray into bio_vecs.
ver #2)
- Use MAP_ANON to make the user buffer if we don't want a list of pages.
- KUNIT_ASSERT_NOT_ERR_OR_NULL() doesn't like __user pointers as the
condition, so cast.
- Make the UBUF benchmark loop, doing an iterator per page so that the
overhead from the iterator code is not negligible.
- Make the KVEC benchmark use an iovec per page so that the iteration
overhead is not negligible.
- Switch the benchmarking to use copy_from_iter() so that only a single
page is needed in the userspace buffer (as it can be shared R/O), not
256MiB's worth.
Link: https://lore.kernel.org/r/[email protected]/ # v1
Link: https://lore.kernel.org/r/[email protected]/ # v2
Link: https://lore.kernel.org/r/[email protected]/ # v3
David Howells (10):
iov_iter: Fix some checkpatch complaints in kunit tests
iov_iter: Consolidate some of the repeated code into helpers
iov_iter: Consolidate the test vector struct in the kunit tests
iov_iter: Consolidate bvec pattern checking
iov_iter: Create a function to prepare userspace VM for UBUF/IOVEC
tests
iov_iter: Add copy kunit tests for ITER_UBUF and ITER_IOVEC
iov_iter: Add extract kunit tests for ITER_UBUF and ITER_IOVEC
iov_iter: Add benchmarking kunit tests
iov_iter: Add kunit to benchmark decanting of xarray to bvec
iov_iter: Add benchmarking kunit tests for UBUF/IOVEC
arch/s390/kernel/vdso.c | 1 +
fs/anon_inodes.c | 1 +
kernel/fork.c | 2 +
lib/kunit_iov_iter.c | 1317 +++++++++++++++++++++++++++++++++------
mm/mmap.c | 1 +
mm/util.c | 3 +
6 files changed, 1139 insertions(+), 186 deletions(-)
Consolidate some of the repeated code snippets into helper functions to
reduce the line count.
Signed-off-by: David Howells <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Christian Brauner <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: John Hubbard <[email protected]>
cc: Brendan Higgins <[email protected]>
cc: David Gow <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
lib/kunit_iov_iter.c | 189 +++++++++++++++++++------------------------
1 file changed, 84 insertions(+), 105 deletions(-)
diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c
index 4a6c0efd33f5..ee586eb652b4 100644
--- a/lib/kunit_iov_iter.c
+++ b/lib/kunit_iov_iter.c
@@ -19,18 +19,18 @@ MODULE_AUTHOR("David Howells <[email protected]>");
MODULE_LICENSE("GPL");
struct kvec_test_range {
- int from, to;
+ int page, from, to;
};
static const struct kvec_test_range kvec_test_ranges[] = {
- { 0x00002, 0x00002 },
- { 0x00027, 0x03000 },
- { 0x05193, 0x18794 },
- { 0x20000, 0x20000 },
- { 0x20000, 0x24000 },
- { 0x24000, 0x27001 },
- { 0x29000, 0xffffb },
- { 0xffffd, 0xffffe },
+ { 0, 0x00002, 0x00002 },
+ { 0, 0x00027, 0x03000 },
+ { 0, 0x05193, 0x18794 },
+ { 0, 0x20000, 0x20000 },
+ { 0, 0x20000, 0x24000 },
+ { 0, 0x24000, 0x27001 },
+ { 0, 0x29000, 0xffffb },
+ { 0, 0xffffd, 0xffffe },
{ -1 }
};
@@ -69,6 +69,57 @@ static void *__init iov_kunit_create_buffer(struct kunit *test,
return buffer;
}
+/*
+ * Build the reference pattern in the scratch buffer that we expect to see in
+ * the iterator buffer (ie. the result of copy *to*).
+ */
+static void iov_kunit_build_to_reference_pattern(struct kunit *test, u8 *scratch,
+ size_t bufsize,
+ const struct kvec_test_range *pr)
+{
+ int i, patt = 0;
+
+ memset(scratch, 0, bufsize);
+ for (; pr->page >= 0; pr++)
+ for (i = pr->from; i < pr->to; i++)
+ scratch[i] = pattern(patt++);
+}
+
+/*
+ * Build the reference pattern in the iterator buffer that we expect to see in
+ * the scratch buffer (ie. the result of copy *from*).
+ */
+static void iov_kunit_build_from_reference_pattern(struct kunit *test, u8 *buffer,
+ size_t bufsize,
+ const struct kvec_test_range *pr)
+{
+ size_t i = 0, j;
+
+ memset(buffer, 0, bufsize);
+ for (; pr->page >= 0; pr++) {
+ for (j = pr->from; j < pr->to; j++) {
+ buffer[i++] = pattern(j);
+ if (i >= bufsize)
+ return;
+ }
+ }
+}
+
+/*
+ * Compare two kernel buffers to see that they're the same.
+ */
+static void iov_kunit_check_pattern(struct kunit *test, const u8 *buffer,
+ const u8 *scratch, size_t bufsize)
+{
+ size_t i;
+
+ for (i = 0; i < bufsize; i++) {
+ KUNIT_EXPECT_EQ_MSG(test, buffer[i], scratch[i], "at i=%x", i);
+ if (buffer[i] != scratch[i])
+ return;
+ }
+}
+
static void __init iov_kunit_load_kvec(struct kunit *test,
struct iov_iter *iter, int dir,
struct kvec *kvec, unsigned int kvmax,
@@ -79,7 +130,7 @@ static void __init iov_kunit_load_kvec(struct kunit *test,
int i;
for (i = 0; i < kvmax; i++, pr++) {
- if (pr->from < 0)
+ if (pr->page < 0)
break;
KUNIT_ASSERT_GE(test, pr->to, pr->from);
KUNIT_ASSERT_LE(test, pr->to, bufsize);
@@ -97,13 +148,12 @@ static void __init iov_kunit_load_kvec(struct kunit *test,
*/
static void __init iov_kunit_copy_to_kvec(struct kunit *test)
{
- const struct kvec_test_range *pr;
struct iov_iter iter;
struct page **spages, **bpages;
struct kvec kvec[8];
u8 *scratch, *buffer;
size_t bufsize, npages, size, copied;
- int i, patt;
+ int i;
bufsize = 0x100000;
npages = bufsize / PAGE_SIZE;
@@ -125,20 +175,8 @@ static void __init iov_kunit_copy_to_kvec(struct kunit *test)
KUNIT_EXPECT_EQ(test, iter.count, 0);
KUNIT_EXPECT_EQ(test, iter.nr_segs, 0);
- /* Build the expected image in the scratch buffer. */
- patt = 0;
- memset(scratch, 0, bufsize);
- for (pr = kvec_test_ranges; pr->from >= 0; pr++)
- for (i = pr->from; i < pr->to; i++)
- scratch[i] = pattern(patt++);
-
- /* Compare the images */
- for (i = 0; i < bufsize; i++) {
- KUNIT_EXPECT_EQ_MSG(test, buffer[i], scratch[i], "at i=%x", i);
- if (buffer[i] != scratch[i])
- return;
- }
-
+ iov_kunit_build_to_reference_pattern(test, scratch, bufsize, kvec_test_ranges);
+ iov_kunit_check_pattern(test, buffer, scratch, bufsize);
KUNIT_SUCCEED();
}
@@ -147,13 +185,12 @@ static void __init iov_kunit_copy_to_kvec(struct kunit *test)
*/
static void __init iov_kunit_copy_from_kvec(struct kunit *test)
{
- const struct kvec_test_range *pr;
struct iov_iter iter;
struct page **spages, **bpages;
struct kvec kvec[8];
u8 *scratch, *buffer;
size_t bufsize, npages, size, copied;
- int i, j;
+ int i;
bufsize = 0x100000;
npages = bufsize / PAGE_SIZE;
@@ -175,25 +212,8 @@ static void __init iov_kunit_copy_from_kvec(struct kunit *test)
KUNIT_EXPECT_EQ(test, iter.count, 0);
KUNIT_EXPECT_EQ(test, iter.nr_segs, 0);
- /* Build the expected image in the main buffer. */
- i = 0;
- memset(buffer, 0, bufsize);
- for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
- for (j = pr->from; j < pr->to; j++) {
- buffer[i++] = pattern(j);
- if (i >= bufsize)
- goto stop;
- }
- }
-stop:
-
- /* Compare the images */
- for (i = 0; i < bufsize; i++) {
- KUNIT_EXPECT_EQ_MSG(test, scratch[i], buffer[i], "at i=%x", i);
- if (scratch[i] != buffer[i])
- return;
- }
-
+ iov_kunit_build_from_reference_pattern(test, buffer, bufsize, kvec_test_ranges);
+ iov_kunit_check_pattern(test, buffer, scratch, bufsize);
KUNIT_SUCCEED();
}
@@ -210,7 +230,7 @@ static const struct bvec_test_range bvec_test_ranges[] = {
{ 5, 0x0000, 0x1000 },
{ 6, 0x0000, 0x0ffb },
{ 6, 0x0ffd, 0x0ffe },
- { -1, -1, -1 }
+ { -1 }
};
static void __init iov_kunit_load_bvec(struct kunit *test,
@@ -225,7 +245,7 @@ static void __init iov_kunit_load_bvec(struct kunit *test,
int i;
for (i = 0; i < bvmax; i++, pr++) {
- if (pr->from < 0)
+ if (pr->page < 0)
break;
KUNIT_ASSERT_LT(test, pr->page, npages);
KUNIT_ASSERT_LT(test, pr->page * PAGE_SIZE, bufsize);
@@ -288,20 +308,14 @@ static void __init iov_kunit_copy_to_bvec(struct kunit *test)
b = 0;
patt = 0;
memset(scratch, 0, bufsize);
- for (pr = bvec_test_ranges; pr->from >= 0; pr++, b++) {
+ for (pr = bvec_test_ranges; pr->page >= 0; pr++, b++) {
u8 *p = scratch + pr->page * PAGE_SIZE;
for (i = pr->from; i < pr->to; i++)
p[i] = pattern(patt++);
}
- /* Compare the images */
- for (i = 0; i < bufsize; i++) {
- KUNIT_EXPECT_EQ_MSG(test, buffer[i], scratch[i], "at i=%x", i);
- if (buffer[i] != scratch[i])
- return;
- }
-
+ iov_kunit_check_pattern(test, buffer, scratch, bufsize);
KUNIT_SUCCEED();
}
@@ -341,7 +355,7 @@ static void __init iov_kunit_copy_from_bvec(struct kunit *test)
/* Build the expected image in the main buffer. */
i = 0;
memset(buffer, 0, bufsize);
- for (pr = bvec_test_ranges; pr->from >= 0; pr++) {
+ for (pr = bvec_test_ranges; pr->page >= 0; pr++) {
size_t patt = pr->page * PAGE_SIZE;
for (j = pr->from; j < pr->to; j++) {
@@ -352,13 +366,7 @@ static void __init iov_kunit_copy_from_bvec(struct kunit *test)
}
stop:
- /* Compare the images */
- for (i = 0; i < bufsize; i++) {
- KUNIT_EXPECT_EQ_MSG(test, scratch[i], buffer[i], "at i=%x", i);
- if (scratch[i] != buffer[i])
- return;
- }
-
+ iov_kunit_check_pattern(test, buffer, scratch, bufsize);
KUNIT_SUCCEED();
}
@@ -409,7 +417,7 @@ static void __init iov_kunit_copy_to_xarray(struct kunit *test)
struct page **spages, **bpages;
u8 *scratch, *buffer;
size_t bufsize, npages, size, copied;
- int i, patt;
+ int i;
bufsize = 0x100000;
npages = bufsize / PAGE_SIZE;
@@ -426,7 +434,7 @@ static void __init iov_kunit_copy_to_xarray(struct kunit *test)
iov_kunit_load_xarray(test, &iter, READ, xarray, bpages, npages);
i = 0;
- for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
+ for (pr = kvec_test_ranges; pr->page >= 0; pr++) {
size = pr->to - pr->from;
KUNIT_ASSERT_LE(test, pr->to, bufsize);
@@ -439,20 +447,8 @@ static void __init iov_kunit_copy_to_xarray(struct kunit *test)
i += size;
}
- /* Build the expected image in the scratch buffer. */
- patt = 0;
- memset(scratch, 0, bufsize);
- for (pr = kvec_test_ranges; pr->from >= 0; pr++)
- for (i = pr->from; i < pr->to; i++)
- scratch[i] = pattern(patt++);
-
- /* Compare the images */
- for (i = 0; i < bufsize; i++) {
- KUNIT_EXPECT_EQ_MSG(test, buffer[i], scratch[i], "at i=%x", i);
- if (buffer[i] != scratch[i])
- return;
- }
-
+ iov_kunit_build_to_reference_pattern(test, scratch, bufsize, kvec_test_ranges);
+ iov_kunit_check_pattern(test, buffer, scratch, bufsize);
KUNIT_SUCCEED();
}
@@ -467,7 +463,7 @@ static void __init iov_kunit_copy_from_xarray(struct kunit *test)
struct page **spages, **bpages;
u8 *scratch, *buffer;
size_t bufsize, npages, size, copied;
- int i, j;
+ int i;
bufsize = 0x100000;
npages = bufsize / PAGE_SIZE;
@@ -484,7 +480,7 @@ static void __init iov_kunit_copy_from_xarray(struct kunit *test)
iov_kunit_load_xarray(test, &iter, READ, xarray, bpages, npages);
i = 0;
- for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
+ for (pr = kvec_test_ranges; pr->page >= 0; pr++) {
size = pr->to - pr->from;
KUNIT_ASSERT_LE(test, pr->to, bufsize);
@@ -497,25 +493,8 @@ static void __init iov_kunit_copy_from_xarray(struct kunit *test)
i += size;
}
- /* Build the expected image in the main buffer. */
- i = 0;
- memset(buffer, 0, bufsize);
- for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
- for (j = pr->from; j < pr->to; j++) {
- buffer[i++] = pattern(j);
- if (i >= bufsize)
- goto stop;
- }
- }
-stop:
-
- /* Compare the images */
- for (i = 0; i < bufsize; i++) {
- KUNIT_EXPECT_EQ_MSG(test, scratch[i], buffer[i], "at i=%x", i);
- if (scratch[i] != buffer[i])
- return;
- }
-
+ iov_kunit_build_from_reference_pattern(test, buffer, bufsize, kvec_test_ranges);
+ iov_kunit_check_pattern(test, buffer, scratch, bufsize);
KUNIT_SUCCEED();
}
@@ -573,7 +552,7 @@ static void __init iov_kunit_extract_pages_kvec(struct kunit *test)
while (from == pr->to) {
pr++;
from = pr->from;
- if (from < 0)
+ if (pr->page < 0)
goto stop;
}
ix = from / PAGE_SIZE;
@@ -651,7 +630,7 @@ static void __init iov_kunit_extract_pages_bvec(struct kunit *test)
while (from == pr->to) {
pr++;
from = pr->from;
- if (from < 0)
+ if (pr->page < 0)
goto stop;
}
ix = pr->page + from / PAGE_SIZE;
@@ -698,7 +677,7 @@ static void __init iov_kunit_extract_pages_xarray(struct kunit *test)
iov_kunit_create_buffer(test, &bpages, npages);
iov_kunit_load_xarray(test, &iter, READ, xarray, bpages, npages);
- for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
+ for (pr = kvec_test_ranges; pr->page >= 0; pr++) {
from = pr->from;
size = pr->to - from;
KUNIT_ASSERT_LE(test, pr->to, bufsize);
Consolidate the test vector struct in the kunit tests so that the bvec
pattern check helpers can share with the kvec check helpers.
Signed-off-by: David Howells <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Christian Brauner <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: John Hubbard <[email protected]>
cc: Brendan Higgins <[email protected]>
cc: David Gow <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
lib/kunit_iov_iter.c | 90 ++++++++++++++++++++++++--------------------
1 file changed, 50 insertions(+), 40 deletions(-)
diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c
index ee586eb652b4..4925ca37cde6 100644
--- a/lib/kunit_iov_iter.c
+++ b/lib/kunit_iov_iter.c
@@ -18,22 +18,46 @@ MODULE_DESCRIPTION("iov_iter testing");
MODULE_AUTHOR("David Howells <[email protected]>");
MODULE_LICENSE("GPL");
-struct kvec_test_range {
+struct iov_kunit_range {
int page, from, to;
};
-static const struct kvec_test_range kvec_test_ranges[] = {
- { 0, 0x00002, 0x00002 },
- { 0, 0x00027, 0x03000 },
- { 0, 0x05193, 0x18794 },
- { 0, 0x20000, 0x20000 },
- { 0, 0x20000, 0x24000 },
- { 0, 0x24000, 0x27001 },
- { 0, 0x29000, 0xffffb },
- { 0, 0xffffd, 0xffffe },
+/*
+ * Ranges to use in tests where we have address/offset ranges to play
+ * with (ie. KVEC) or where we have a single blob that we can copy
+ * arbitrary chunks of (ie. XARRAY).
+ */
+static const struct iov_kunit_range kvec_test_ranges[] = {
+ { 0, 0x00002, 0x00002 }, /* Start with an empty range */
+ { 0, 0x00027, 0x03000 }, /* Midpage to page end */
+ { 0, 0x05193, 0x18794 }, /* Midpage to midpage */
+ { 0, 0x20000, 0x20000 }, /* Empty range in the middle */
+ { 0, 0x20000, 0x24000 }, /* Page start to page end */
+ { 0, 0x24000, 0x27001 }, /* Page end to midpage */
+ { 0, 0x29000, 0xffffb }, /* Page start to midpage */
+ { 0, 0xffffd, 0xffffe }, /* Almost contig to last, ending in same page */
{ -1 }
};
+/*
+ * Ranges that to use in tests where we have a list of partial pages to
+ * play with (ie. BVEC).
+ */
+static const struct iov_kunit_range bvec_test_ranges[] = {
+ { 0, 0x0002, 0x0002 }, /* Start with an empty range */
+ { 1, 0x0027, 0x0893 }, /* Random part of page */
+ { 2, 0x0193, 0x0794 }, /* Random part of page */
+ { 3, 0x0000, 0x1000 }, /* Full page */
+ { 4, 0x0000, 0x1000 }, /* Full page logically contig to last */
+ { 5, 0x0000, 0x1000 }, /* Full page logically contig to last */
+ { 6, 0x0000, 0x0ffb }, /* Part page logically contig to last */
+ { 6, 0x0ffd, 0x0ffe }, /* Part of prev page, but not quite contig */
+ { -1 }
+};
+
+/*
+ * The pattern to fill with.
+ */
static inline u8 pattern(unsigned long x)
{
return x & 0xff;
@@ -44,6 +68,9 @@ static void iov_kunit_unmap(void *data)
vunmap(data);
}
+/*
+ * Create a buffer out of some pages and return a vmap'd pointer to it.
+ */
static void *__init iov_kunit_create_buffer(struct kunit *test,
struct page ***ppages,
size_t npages)
@@ -75,7 +102,7 @@ static void *__init iov_kunit_create_buffer(struct kunit *test,
*/
static void iov_kunit_build_to_reference_pattern(struct kunit *test, u8 *scratch,
size_t bufsize,
- const struct kvec_test_range *pr)
+ const struct iov_kunit_range *pr)
{
int i, patt = 0;
@@ -91,7 +118,7 @@ static void iov_kunit_build_to_reference_pattern(struct kunit *test, u8 *scratch
*/
static void iov_kunit_build_from_reference_pattern(struct kunit *test, u8 *buffer,
size_t bufsize,
- const struct kvec_test_range *pr)
+ const struct iov_kunit_range *pr)
{
size_t i = 0, j;
@@ -124,7 +151,7 @@ static void __init iov_kunit_load_kvec(struct kunit *test,
struct iov_iter *iter, int dir,
struct kvec *kvec, unsigned int kvmax,
void *buffer, size_t bufsize,
- const struct kvec_test_range *pr)
+ const struct iov_kunit_range *pr)
{
size_t size = 0;
int i;
@@ -217,28 +244,12 @@ static void __init iov_kunit_copy_from_kvec(struct kunit *test)
KUNIT_SUCCEED();
}
-struct bvec_test_range {
- int page, from, to;
-};
-
-static const struct bvec_test_range bvec_test_ranges[] = {
- { 0, 0x0002, 0x0002 },
- { 1, 0x0027, 0x0893 },
- { 2, 0x0193, 0x0794 },
- { 3, 0x0000, 0x1000 },
- { 4, 0x0000, 0x1000 },
- { 5, 0x0000, 0x1000 },
- { 6, 0x0000, 0x0ffb },
- { 6, 0x0ffd, 0x0ffe },
- { -1 }
-};
-
static void __init iov_kunit_load_bvec(struct kunit *test,
struct iov_iter *iter, int dir,
struct bio_vec *bvec, unsigned int bvmax,
struct page **pages, size_t npages,
size_t bufsize,
- const struct bvec_test_range *pr)
+ const struct iov_kunit_range *pr)
{
struct page *can_merge = NULL, *page;
size_t size = 0;
@@ -276,13 +287,13 @@ static void __init iov_kunit_load_bvec(struct kunit *test,
*/
static void __init iov_kunit_copy_to_bvec(struct kunit *test)
{
- const struct bvec_test_range *pr;
+ const struct iov_kunit_range *pr;
struct iov_iter iter;
struct bio_vec bvec[8];
struct page **spages, **bpages;
u8 *scratch, *buffer;
size_t bufsize, npages, size, copied;
- int i, b, patt;
+ int i, patt;
bufsize = 0x100000;
npages = bufsize / PAGE_SIZE;
@@ -305,10 +316,9 @@ static void __init iov_kunit_copy_to_bvec(struct kunit *test)
KUNIT_EXPECT_EQ(test, iter.nr_segs, 0);
/* Build the expected image in the scratch buffer. */
- b = 0;
patt = 0;
memset(scratch, 0, bufsize);
- for (pr = bvec_test_ranges; pr->page >= 0; pr++, b++) {
+ for (pr = bvec_test_ranges; pr->page >= 0; pr++) {
u8 *p = scratch + pr->page * PAGE_SIZE;
for (i = pr->from; i < pr->to; i++)
@@ -324,7 +334,7 @@ static void __init iov_kunit_copy_to_bvec(struct kunit *test)
*/
static void __init iov_kunit_copy_from_bvec(struct kunit *test)
{
- const struct bvec_test_range *pr;
+ const struct iov_kunit_range *pr;
struct iov_iter iter;
struct bio_vec bvec[8];
struct page **spages, **bpages;
@@ -411,7 +421,7 @@ static struct xarray *iov_kunit_create_xarray(struct kunit *test)
*/
static void __init iov_kunit_copy_to_xarray(struct kunit *test)
{
- const struct kvec_test_range *pr;
+ const struct iov_kunit_range *pr;
struct iov_iter iter;
struct xarray *xarray;
struct page **spages, **bpages;
@@ -457,7 +467,7 @@ static void __init iov_kunit_copy_to_xarray(struct kunit *test)
*/
static void __init iov_kunit_copy_from_xarray(struct kunit *test)
{
- const struct kvec_test_range *pr;
+ const struct iov_kunit_range *pr;
struct iov_iter iter;
struct xarray *xarray;
struct page **spages, **bpages;
@@ -503,7 +513,7 @@ static void __init iov_kunit_copy_from_xarray(struct kunit *test)
*/
static void __init iov_kunit_extract_pages_kvec(struct kunit *test)
{
- const struct kvec_test_range *pr;
+ const struct iov_kunit_range *pr;
struct iov_iter iter;
struct page **bpages, *pagelist[8], **pages = pagelist;
struct kvec kvec[8];
@@ -583,7 +593,7 @@ static void __init iov_kunit_extract_pages_kvec(struct kunit *test)
*/
static void __init iov_kunit_extract_pages_bvec(struct kunit *test)
{
- const struct bvec_test_range *pr;
+ const struct iov_kunit_range *pr;
struct iov_iter iter;
struct page **bpages, *pagelist[8], **pages = pagelist;
struct bio_vec bvec[8];
@@ -661,7 +671,7 @@ static void __init iov_kunit_extract_pages_bvec(struct kunit *test)
*/
static void __init iov_kunit_extract_pages_xarray(struct kunit *test)
{
- const struct kvec_test_range *pr;
+ const struct iov_kunit_range *pr;
struct iov_iter iter;
struct xarray *xarray;
struct page **bpages, *pagelist[8], **pages = pagelist;
Create a function to set up a userspace VM for the kunit testing thread and
set up a buffer within it such that ITER_UBUF and ITER_IOVEC tests can be
performed.
Note that this requires current->mm to point to a sufficiently set up
mm_struct. This is done by partially mirroring what execve does.
The following steps are performed:
(1) Allocate an mm_struct and pick an arch layout (required to set
mm->get_unmapped_area).
(2) Create an empty "stack" VMA so that the VMA maple tree is set up and
won't cause a crash in the maple tree code later. We don't actually
care about the stack as we're not going to actually execute userspace.
(3) Create an anon file and attach a bunch of folios to it so that the
requested number of pages are accessible.
(4) Make the kthread use the mm. This must be done before mmap is called.
(5) Shared-mmap the anon file into the allocated mm_struct.
This requires access to otherwise unexported core symbols: mm_alloc(),
vm_area_alloc(), insert_vm_struct(), arch_pick_mmap_layout() and
anon_inode_getfile_secure(), which I've exported _GPL.
[?] Would it be better if this were done in core and not in a module?
Signed-off-by: David Howells <[email protected]>
cc: Andrew Morton <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Christian Brauner <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: Matthew Wilcox <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: John Hubbard <[email protected]>
cc: Brendan Higgins <[email protected]>
cc: David Gow <[email protected]>
cc: Huacai Chen <[email protected]>
cc: WANG Xuerui <[email protected]>
cc: Heiko Carstens <[email protected]>
cc: Vasily Gorbik <[email protected]>
cc: Alexander Gordeev <[email protected]>
cc: Christian Borntraeger <[email protected]>
cc: Sven Schnelle <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
arch/s390/kernel/vdso.c | 1 +
fs/anon_inodes.c | 1 +
kernel/fork.c | 2 +
lib/kunit_iov_iter.c | 143 ++++++++++++++++++++++++++++++++++++++++
mm/mmap.c | 1 +
mm/util.c | 3 +
6 files changed, 151 insertions(+)
diff --git a/arch/s390/kernel/vdso.c b/arch/s390/kernel/vdso.c
index bbaefd84f15e..6849eac59129 100644
--- a/arch/s390/kernel/vdso.c
+++ b/arch/s390/kernel/vdso.c
@@ -223,6 +223,7 @@ unsigned long vdso_size(void)
size += vdso64_end - vdso64_start;
return PAGE_ALIGN(size);
}
+EXPORT_SYMBOL_GPL(vdso_size);
int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
{
diff --git a/fs/anon_inodes.c b/fs/anon_inodes.c
index d26222b7eefe..e4862dff235b 100644
--- a/fs/anon_inodes.c
+++ b/fs/anon_inodes.c
@@ -176,6 +176,7 @@ struct file *anon_inode_getfile_secure(const char *name,
return __anon_inode_getfile(name, fops, priv, flags,
context_inode, true);
}
+EXPORT_SYMBOL_GPL(anon_inode_getfile_secure);
static int __anon_inode_getfd(const char *name,
const struct file_operations *fops,
diff --git a/kernel/fork.c b/kernel/fork.c
index 10917c3e1f03..f6d9e0d0685a 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -494,6 +494,7 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
return vma;
}
+EXPORT_SYMBOL_GPL(vm_area_alloc);
struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
{
@@ -1337,6 +1338,7 @@ struct mm_struct *mm_alloc(void)
memset(mm, 0, sizeof(*mm));
return mm_init(mm, current, current_user_ns());
}
+EXPORT_SYMBOL_GPL(mm_alloc);
static inline void __mmput(struct mm_struct *mm)
{
diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c
index eb86371b67d0..63e4dd1e7c1b 100644
--- a/lib/kunit_iov_iter.c
+++ b/lib/kunit_iov_iter.c
@@ -10,6 +10,13 @@
#include <linux/module.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>
+#include <linux/pagemap.h>
+#include <linux/mman.h>
+#include <linux/file.h>
+#include <linux/kthread.h>
+#include <linux/anon_inodes.h>
+#include <linux/writeback.h>
+#include <linux/personality.h>
#include <linux/uio.h>
#include <linux/bvec.h>
#include <kunit/test.h>
@@ -68,6 +75,20 @@ static void iov_kunit_unmap(void *data)
vunmap(data);
}
+static void iov_kunit_mmdrop(void *data)
+{
+ struct mm_struct *mm = data;
+
+ if (current->mm == mm)
+ kthread_unuse_mm(mm);
+ mmdrop(mm);
+}
+
+static void iov_kunit_fput(void *data)
+{
+ fput(data);
+}
+
/*
* Create a buffer out of some pages and return a vmap'd pointer to it.
*/
@@ -151,6 +172,128 @@ static void iov_kunit_check_pattern(struct kunit *test, const u8 *buffer,
}
}
+static const struct file_operations iov_kunit_user_file_fops = {
+ .mmap = generic_file_mmap,
+};
+
+static int iov_kunit_user_file_read_folio(struct file *file, struct folio *folio)
+{
+ folio_mark_uptodate(folio);
+ folio_unlock(folio);
+ return 0;
+}
+
+static const struct address_space_operations iov_kunit_user_file_aops = {
+ .read_folio = iov_kunit_user_file_read_folio,
+ .dirty_folio = filemap_dirty_folio,
+};
+
+/*
+ * Create an anonymous file and attach a bunch of pages to it. We can then use
+ * this in mmap() and check the pages against it when doing extraction tests.
+ */
+static struct file *iov_kunit_create_file(struct kunit *test, size_t npages,
+ struct page ***ppages)
+{
+ struct folio *folio;
+ struct file *file;
+ struct page **pages = NULL;
+ size_t i;
+
+ if (ppages) {
+ pages = kunit_kcalloc(test, npages, sizeof(struct page *), GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pages);
+ *ppages = pages;
+ }
+
+ file = anon_inode_getfile_secure("kunit-iov-test",
+ &iov_kunit_user_file_fops,
+ NULL, O_RDWR, NULL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, file);
+ kunit_add_action_or_reset(test, iov_kunit_fput, file);
+ file->f_mapping->a_ops = &iov_kunit_user_file_aops;
+
+ i_size_write(file_inode(file), npages * PAGE_SIZE);
+ for (i = 0; i < npages; i++) {
+ folio = filemap_grab_folio(file->f_mapping, i);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, folio);
+ if (pages)
+ *pages++ = folio_page(folio, 0);
+ folio_unlock(folio);
+ folio_put(folio);
+ }
+
+ return file;
+}
+
+/*
+ * Attach a userspace buffer to a kernel thread by adding an mm_struct to it
+ * and mmapping the buffer. If the caller requires a list of pages for
+ * checking, then an anon_inode file is created, populated with pages and
+ * mmapped; otherwise an anonymous mapping is used.
+ */
+static u8 __user *__init iov_kunit_create_user_buf(struct kunit *test,
+ size_t npages,
+ struct page ***ppages)
+{
+ struct rlimit rlim_stack = {
+ .rlim_cur = LONG_MAX,
+ .rlim_max = LONG_MAX,
+ };
+ struct vm_area_struct *vma;
+ struct mm_struct *mm;
+ struct file *file;
+ u8 __user *buffer;
+ int ret;
+
+ KUNIT_ASSERT_NULL(test, current->mm);
+
+ mm = mm_alloc();
+ KUNIT_ASSERT_NOT_NULL(test, mm);
+ kunit_add_action_or_reset(test, iov_kunit_mmdrop, mm);
+ arch_pick_mmap_layout(mm, &rlim_stack);
+
+ vma = vm_area_alloc(mm);
+ KUNIT_ASSERT_NOT_NULL(test, vma);
+ vma_set_anonymous(vma);
+
+ /*
+ * Place the stack at the largest stack address the architecture
+ * supports. Later, we'll move this to an appropriate place. We don't
+ * use STACK_TOP because that can depend on attributes which aren't
+ * configured yet.
+ */
+ vma->vm_end = STACK_TOP_MAX;
+ vma->vm_start = vma->vm_end - PAGE_SIZE;
+ vm_flags_init(vma, VM_SOFTDIRTY | VM_STACK_FLAGS | VM_STACK_INCOMPLETE_SETUP);
+ vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
+
+ ret = insert_vm_struct(mm, vma);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ mm->stack_vm = mm->total_vm = 1;
+
+ /*
+ * If we want the pages, attach the pages to a file to prevent swap
+ * interfering, otherwise use an anonymous mapping.
+ */
+ if (ppages) {
+ file = iov_kunit_create_file(test, npages, ppages);
+
+ kthread_use_mm(mm);
+ buffer = (u8 __user *)vm_mmap(file, 0, PAGE_SIZE * npages,
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED, 0);
+ } else {
+ kthread_use_mm(mm);
+ buffer = (u8 __user *)vm_mmap(NULL, 0, PAGE_SIZE * npages,
+ PROT_READ | PROT_WRITE,
+ MAP_PRIVATE | MAP_ANONYMOUS, 0);
+ }
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, (void __force *)buffer);
+ return buffer;
+}
+
static void __init iov_kunit_load_kvec(struct kunit *test,
struct iov_iter *iter, int dir,
struct kvec *kvec, unsigned int kvmax,
diff --git a/mm/mmap.c b/mm/mmap.c
index 1971bfffcc03..8a2595b8ec59 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3383,6 +3383,7 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_area_struct *vma)
return 0;
}
+EXPORT_SYMBOL_GPL(insert_vm_struct);
/*
* Copy the vma structure to a new location in the same mm,
diff --git a/mm/util.c b/mm/util.c
index aa01f6ea5a75..518f7c085923 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -455,6 +455,9 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
mm->get_unmapped_area = arch_get_unmapped_area;
}
#endif
+#ifdef CONFIG_MMU
+EXPORT_SYMBOL_GPL(arch_pick_mmap_layout);
+#endif
/**
* __account_locked_vm - account locked pages to an mm's locked_vm
Add a kunit test to benchmark an xarray containing 256MiB of data getting
decanted into 256-page BVEC iterators that get copied from, modelling
buffered data being drawn from the pagecache and batched up for I/O.
Signed-off-by: David Howells <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Christian Brauner <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: John Hubbard <[email protected]>
cc: Brendan Higgins <[email protected]>
cc: David Gow <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
lib/kunit_iov_iter.c | 87 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 87 insertions(+)
diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c
index 1a43e9518a63..2fbe6f2afb26 100644
--- a/lib/kunit_iov_iter.c
+++ b/lib/kunit_iov_iter.c
@@ -1509,6 +1509,92 @@ static void __init iov_kunit_benchmark_xarray(struct kunit *test)
KUNIT_SUCCEED();
}
+/*
+ * Time copying 256MiB through an ITER_XARRAY, decanting it to ITER_BVECs.
+ */
+static void __init iov_kunit_benchmark_xarray_to_bvec(struct kunit *test)
+{
+ struct iov_iter xiter;
+ struct xarray *xarray;
+ struct page *page;
+ unsigned int samples[IOV_KUNIT_NR_SAMPLES];
+ ktime_t a, b;
+ ssize_t copied;
+ size_t size = 256 * 1024 * 1024, npages = size / PAGE_SIZE;
+ void *scratch;
+ int i;
+
+ /* Allocate a page and tile it repeatedly in the buffer. */
+ page = alloc_page(GFP_KERNEL);
+ KUNIT_ASSERT_NOT_NULL(test, page);
+ kunit_add_action_or_reset(test, iov_kunit_free_page, page);
+
+ xarray = iov_kunit_create_xarray(test);
+
+ for (i = 0; i < npages; i++) {
+ void *x = xa_store(xarray, i, page, GFP_KERNEL);
+
+ KUNIT_ASSERT_FALSE(test, xa_is_err(x));
+ }
+
+ /* Create a single large buffer to copy to/from. */
+ scratch = iov_kunit_create_source(test, npages);
+
+ /* Perform and time a bunch of copies. */
+ kunit_info(test, "Benchmarking copy_from_iter() over BVECs decanted from an XARRAY:\n");
+ for (i = 0; i < IOV_KUNIT_NR_SAMPLES; i++) {
+ size = 256 * 1024 * 1024;
+ iov_iter_xarray(&xiter, ITER_SOURCE, xarray, 0, size);
+ a = ktime_get_real();
+
+ do {
+ struct iov_iter biter;
+ struct bio_vec *bvec;
+ struct page **pages;
+ size_t req, part, offset0, got;
+ int j;
+
+ npages = 256;
+ req = min_t(size_t, size, npages * PAGE_SIZE);
+ bvec = kunit_kmalloc_array(test, npages, sizeof(bvec[0]), GFP_KERNEL);
+ KUNIT_ASSERT_NOT_NULL(test, bvec);
+
+ pages = (void *)bvec + array_size(npages, sizeof(bvec[0])) -
+ array_size(npages, sizeof(*pages));
+
+ part = iov_iter_extract_pages(&xiter, &pages, req,
+ npages, 0, &offset0);
+ KUNIT_EXPECT_GT(test, part, 0);
+
+ j = 0;
+ got = part;
+ do {
+ size_t chunk = min_t(size_t, got, PAGE_SIZE - offset0);
+
+ bvec_set_page(&bvec[j], pages[j], chunk, offset0);
+ j++;
+ offset0 = 0;
+ got -= chunk;
+ } while (got > 0);
+
+ iov_iter_bvec(&biter, ITER_SOURCE, bvec, j, part);
+ copied = copy_from_iter(scratch, part, &biter);
+ KUNIT_EXPECT_EQ(test, copied, part);
+ size -= copied;
+ if (test->status == KUNIT_FAILURE)
+ break;
+ } while (size > 0);
+
+ b = ktime_get_real();
+ samples[i] = ktime_to_us(ktime_sub(b, a));
+ if (test->status == KUNIT_FAILURE)
+ break;
+ }
+
+ iov_kunit_benchmark_print_stats(test, samples);
+ KUNIT_SUCCEED();
+}
+
static struct kunit_case __refdata iov_kunit_cases[] = {
KUNIT_CASE(iov_kunit_copy_to_ubuf),
KUNIT_CASE(iov_kunit_copy_from_ubuf),
@@ -1529,6 +1615,7 @@ static struct kunit_case __refdata iov_kunit_cases[] = {
KUNIT_CASE(iov_kunit_benchmark_bvec),
KUNIT_CASE(iov_kunit_benchmark_bvec_split),
KUNIT_CASE(iov_kunit_benchmark_xarray),
+ KUNIT_CASE(iov_kunit_benchmark_xarray_to_bvec),
{}
};
Make the BVEC-testing functions use the consolidated pattern checking
functions to reduce the amount of duplicated code.
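The consolidated helpers walk a table of {page, from, to} ranges and lay a recognisable byte pattern into the expected image. A userspace analogue of the copy-*to* builder (names, the 4096-byte page stand-in and the pattern macro are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PG 4096			/* stand-in for PAGE_SIZE */
#define pattern(x) ((x) & 0xff)	/* recognisable byte pattern, as in the tests */

struct range { int page, from, to; };	/* table terminated by page < 0 */

/*
 * Build the image expected after copying *to* an iterator described by the
 * range table: bytes land at page*PG + offset and the pattern counter
 * advances monotonically across all ranges.
 */
static void build_to_reference(unsigned char *scratch, size_t bufsize,
			       const struct range *pr)
{
	int patt = 0;

	memset(scratch, 0, bufsize);
	for (; pr->page >= 0; pr++) {
		unsigned char *p = scratch + pr->page * PG;

		for (int i = pr->from; i < pr->to; i++)
			p[i] = pattern(patt++);
	}
}
```

Centralising this loop is what lets the BVEC tests below drop their open-coded copies of the same pattern-building code.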
Signed-off-by: David Howells <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Christian Brauner <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: John Hubbard <[email protected]>
cc: Brendan Higgins <[email protected]>
cc: David Gow <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
lib/kunit_iov_iter.c | 42 +++++++++++-------------------------------
1 file changed, 11 insertions(+), 31 deletions(-)
diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c
index 4925ca37cde6..eb86371b67d0 100644
--- a/lib/kunit_iov_iter.c
+++ b/lib/kunit_iov_iter.c
@@ -107,9 +107,11 @@ static void iov_kunit_build_to_reference_pattern(struct kunit *test, u8 *scratch
int i, patt = 0;
memset(scratch, 0, bufsize);
- for (; pr->page >= 0; pr++)
+ for (; pr->page >= 0; pr++) {
+ u8 *p = scratch + pr->page * PAGE_SIZE;
for (i = pr->from; i < pr->to; i++)
- scratch[i] = pattern(patt++);
+ p[i] = pattern(patt++);
+ }
}
/*
@@ -124,8 +126,10 @@ static void iov_kunit_build_from_reference_pattern(struct kunit *test, u8 *buffe
memset(buffer, 0, bufsize);
for (; pr->page >= 0; pr++) {
+ size_t patt = pr->page * PAGE_SIZE;
+
for (j = pr->from; j < pr->to; j++) {
- buffer[i++] = pattern(j);
+ buffer[i++] = pattern(patt + j);
if (i >= bufsize)
return;
}
@@ -287,13 +291,12 @@ static void __init iov_kunit_load_bvec(struct kunit *test,
*/
static void __init iov_kunit_copy_to_bvec(struct kunit *test)
{
- const struct iov_kunit_range *pr;
struct iov_iter iter;
struct bio_vec bvec[8];
struct page **spages, **bpages;
u8 *scratch, *buffer;
size_t bufsize, npages, size, copied;
- int i, patt;
+ int i;
bufsize = 0x100000;
npages = bufsize / PAGE_SIZE;
@@ -315,16 +318,7 @@ static void __init iov_kunit_copy_to_bvec(struct kunit *test)
KUNIT_EXPECT_EQ(test, iter.count, 0);
KUNIT_EXPECT_EQ(test, iter.nr_segs, 0);
- /* Build the expected image in the scratch buffer. */
- patt = 0;
- memset(scratch, 0, bufsize);
- for (pr = bvec_test_ranges; pr->page >= 0; pr++) {
- u8 *p = scratch + pr->page * PAGE_SIZE;
-
- for (i = pr->from; i < pr->to; i++)
- p[i] = pattern(patt++);
- }
-
+ iov_kunit_build_to_reference_pattern(test, scratch, bufsize, bvec_test_ranges);
iov_kunit_check_pattern(test, buffer, scratch, bufsize);
KUNIT_SUCCEED();
}
@@ -334,13 +328,12 @@ static void __init iov_kunit_copy_to_bvec(struct kunit *test)
*/
static void __init iov_kunit_copy_from_bvec(struct kunit *test)
{
- const struct iov_kunit_range *pr;
struct iov_iter iter;
struct bio_vec bvec[8];
struct page **spages, **bpages;
u8 *scratch, *buffer;
size_t bufsize, npages, size, copied;
- int i, j;
+ int i;
bufsize = 0x100000;
npages = bufsize / PAGE_SIZE;
@@ -362,20 +355,7 @@ static void __init iov_kunit_copy_from_bvec(struct kunit *test)
KUNIT_EXPECT_EQ(test, iter.count, 0);
KUNIT_EXPECT_EQ(test, iter.nr_segs, 0);
- /* Build the expected image in the main buffer. */
- i = 0;
- memset(buffer, 0, bufsize);
- for (pr = bvec_test_ranges; pr->page >= 0; pr++) {
- size_t patt = pr->page * PAGE_SIZE;
-
- for (j = pr->from; j < pr->to; j++) {
- buffer[i++] = pattern(patt + j);
- if (i >= bufsize)
- goto stop;
- }
- }
-stop:
-
+ iov_kunit_build_from_reference_pattern(test, buffer, bufsize, bvec_test_ranges);
iov_kunit_check_pattern(test, buffer, scratch, bufsize);
KUNIT_SUCCEED();
}
Add extraction kunit tests for ITER_UBUF- and ITER_IOVEC-type iterators.
This attaches a userspace VM with a mapped file in it temporarily to the
test thread.
[!] Note that this requires the kernel thread running the test to obtain
and deploy an mm_struct so that a user-side buffer can be created with mmap
- basically it has to emulate part of execve(). Doing so requires access
to additional core symbols: mm_alloc(), vm_area_alloc(), insert_vm_struct()
and arch_pick_mmap_layout(). See the iov_kunit_create_user_buf() function
added in the patch.
Signed-off-by: David Howells <[email protected]>
cc: Andrew Morton <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Christian Brauner <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: Matthew Wilcox <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: John Hubbard <[email protected]>
cc: Brendan Higgins <[email protected]>
cc: David Gow <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
lib/kunit_iov_iter.c | 164 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 164 insertions(+)
diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c
index 34f0d82674ee..fdf598e49c0b 100644
--- a/lib/kunit_iov_iter.c
+++ b/lib/kunit_iov_iter.c
@@ -863,6 +863,168 @@ static void __init iov_kunit_copy_from_xarray(struct kunit *test)
KUNIT_SUCCEED();
}
+/*
+ * Test the extraction of ITER_UBUF-type iterators.
+ */
+static void __init iov_kunit_extract_pages_ubuf(struct kunit *test)
+{
+ const struct iov_kunit_range *pr;
+ struct iov_iter iter;
+ struct page **bpages, *pagelist[8], **pages = pagelist;
+ ssize_t len;
+ size_t bufsize, size = 0, npages;
+ int i, from;
+ u8 __user *buffer;
+
+ bufsize = 0x100000;
+ npages = bufsize / PAGE_SIZE;
+
+ buffer = iov_kunit_create_user_buf(test, npages, &bpages);
+
+ for (pr = kvec_test_ranges; pr->page >= 0; pr++) {
+ from = pr->from;
+ size = pr->to - from;
+ KUNIT_ASSERT_LE(test, pr->to, bufsize);
+
+ iov_iter_ubuf(&iter, ITER_SOURCE, buffer + pr->from, size);
+
+ do {
+ size_t offset0 = LONG_MAX;
+
+ for (i = 0; i < ARRAY_SIZE(pagelist); i++)
+ pagelist[i] = (void *)POISON_POINTER_DELTA + 0x5a;
+
+ len = iov_iter_extract_pages(&iter, &pages, 100 * 1024,
+ ARRAY_SIZE(pagelist), 0, &offset0);
+ KUNIT_EXPECT_GE(test, len, 0);
+ if (len < 0)
+ break;
+ KUNIT_EXPECT_LE(test, len, size);
+ KUNIT_EXPECT_EQ(test, iter.count, size - len);
+ if (len == 0)
+ break;
+ size -= len;
+ KUNIT_EXPECT_GE(test, (ssize_t)offset0, 0);
+ KUNIT_EXPECT_LT(test, offset0, PAGE_SIZE);
+
+ /* We're only checking the page pointers */
+ unpin_user_pages(pages, DIV_ROUND_UP(offset0 + len, PAGE_SIZE));
+
+ for (i = 0; i < ARRAY_SIZE(pagelist); i++) {
+ struct page *p;
+ ssize_t part = min_t(ssize_t, len, PAGE_SIZE - offset0);
+ int ix;
+
+ KUNIT_ASSERT_GE(test, part, 0);
+ ix = from / PAGE_SIZE;
+ KUNIT_ASSERT_LT(test, ix, npages);
+ p = bpages[ix];
+ KUNIT_EXPECT_PTR_EQ(test, pagelist[i], p);
+ KUNIT_EXPECT_EQ(test, offset0, from % PAGE_SIZE);
+ from += part;
+ len -= part;
+ KUNIT_ASSERT_GE(test, len, 0);
+ if (len == 0)
+ break;
+ offset0 = 0;
+ }
+
+ if (test->status == KUNIT_FAILURE)
+ goto stop;
+ } while (iov_iter_count(&iter) > 0);
+
+ KUNIT_EXPECT_EQ(test, size, 0);
+ KUNIT_EXPECT_EQ(test, iter.count, 0);
+ KUNIT_EXPECT_EQ(test, iter.iov_offset, pr->to - pr->from);
+ }
+
+stop:
+ KUNIT_SUCCEED();
+}
+
+/*
+ * Test the extraction of ITER_IOVEC-type iterators.
+ */
+static void __init iov_kunit_extract_pages_iovec(struct kunit *test)
+{
+ const struct iov_kunit_range *pr;
+ struct iov_iter iter;
+ struct iovec iov[8];
+ struct page **bpages, *pagelist[8], **pages = pagelist;
+ ssize_t len;
+ size_t bufsize, size = 0, npages;
+ int i, from;
+ u8 __user *buffer;
+
+ bufsize = 0x100000;
+ npages = bufsize / PAGE_SIZE;
+
+ buffer = iov_kunit_create_user_buf(test, npages, &bpages);
+
+ iov_kunit_load_iovec(test, &iter, ITER_SOURCE, iov, ARRAY_SIZE(iov),
+ buffer, bufsize, kvec_test_ranges);
+ size = iter.count;
+
+ pr = kvec_test_ranges;
+ from = pr->from;
+ do {
+ size_t offset0 = LONG_MAX;
+
+ for (i = 0; i < ARRAY_SIZE(pagelist); i++)
+ pagelist[i] = (void *)POISON_POINTER_DELTA + 0x5a;
+
+ len = iov_iter_extract_pages(&iter, &pages, 100 * 1024,
+ ARRAY_SIZE(pagelist), 0, &offset0);
+ KUNIT_EXPECT_GE(test, len, 0);
+ if (len < 0)
+ break;
+ KUNIT_EXPECT_LE(test, len, size);
+ KUNIT_EXPECT_EQ(test, iter.count, size - len);
+ if (len == 0)
+ break;
+ size -= len;
+ KUNIT_EXPECT_GE(test, (ssize_t)offset0, 0);
+ KUNIT_EXPECT_LT(test, offset0, PAGE_SIZE);
+
+ /* We're only checking the page pointers */
+ unpin_user_pages(pages, DIV_ROUND_UP(offset0 + len, PAGE_SIZE));
+
+ for (i = 0; i < ARRAY_SIZE(pagelist); i++) {
+ struct page *p;
+ ssize_t part = min_t(ssize_t, len, PAGE_SIZE - offset0);
+ int ix;
+
+ KUNIT_ASSERT_GE(test, part, 0);
+ while (from == pr->to) {
+ pr++;
+ from = pr->from;
+ if (pr->page < 0)
+ goto stop;
+ }
+
+ ix = from / PAGE_SIZE;
+ KUNIT_ASSERT_LT(test, ix, npages);
+ p = bpages[ix];
+ KUNIT_EXPECT_PTR_EQ(test, pagelist[i], p);
+ KUNIT_EXPECT_EQ(test, offset0, from % PAGE_SIZE);
+ from += part;
+ len -= part;
+ KUNIT_ASSERT_GE(test, len, 0);
+ if (len == 0)
+ break;
+ offset0 = 0;
+ }
+
+ if (test->status == KUNIT_FAILURE)
+ break;
+ } while (iov_iter_count(&iter) > 0);
+
+stop:
+ KUNIT_EXPECT_EQ(test, size, 0);
+ KUNIT_EXPECT_EQ(test, iter.count, 0);
+ KUNIT_SUCCEED();
+}
+
/*
* Test the extraction of ITER_KVEC-type iterators.
*/
@@ -1111,6 +1273,8 @@ static struct kunit_case __refdata iov_kunit_cases[] = {
KUNIT_CASE(iov_kunit_copy_from_bvec),
KUNIT_CASE(iov_kunit_copy_to_xarray),
KUNIT_CASE(iov_kunit_copy_from_xarray),
+ KUNIT_CASE(iov_kunit_extract_pages_ubuf),
+ KUNIT_CASE(iov_kunit_extract_pages_iovec),
KUNIT_CASE(iov_kunit_extract_pages_kvec),
KUNIT_CASE(iov_kunit_extract_pages_bvec),
KUNIT_CASE(iov_kunit_extract_pages_xarray),
Add kunit tests to benchmark copying 256MiB through a KVEC iterator, a BVEC
iterator and an XARRAY iterator, plus a variant that repeatedly allocates
256-page BVECs and fills them in (similar to a maximal bio struct being set
up).
Signed-off-by: David Howells <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Christian Brauner <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: John Hubbard <[email protected]>
cc: Brendan Higgins <[email protected]>
cc: David Gow <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
lib/kunit_iov_iter.c | 251 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 251 insertions(+)
diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c
index fdf598e49c0b..1a43e9518a63 100644
--- a/lib/kunit_iov_iter.c
+++ b/lib/kunit_iov_iter.c
@@ -1262,6 +1262,253 @@ static void __init iov_kunit_extract_pages_xarray(struct kunit *test)
KUNIT_SUCCEED();
}
+static void iov_kunit_free_page(void *data)
+{
+ __free_page(data);
+}
+
+#define IOV_KUNIT_NR_SAMPLES 16
+static void __init iov_kunit_benchmark_print_stats(struct kunit *test,
+ unsigned int *samples)
+{
+ unsigned long long sumsq = 0;
+ unsigned long total = 0, mean, stddev;
+ unsigned int n = IOV_KUNIT_NR_SAMPLES;
+ int i;
+
+ //for (i = 0; i < n; i++)
+ // kunit_info(test, "run %x: %u uS\n", i, samples[i]);
+
+ /* Ignore the 0th sample as that may include extra overhead such as
+ * setting up PTEs.
+ */
+ samples++;
+ n--;
+ for (i = 0; i < n; i++)
+ total += samples[i];
+ mean = total / n;
+
+ for (i = 0; i < n; i++) {
+ long s = samples[i] - mean;
+
+ sumsq += s * s;
+ }
+ stddev = int_sqrt64(div_u64(sumsq, n));
+
+ kunit_info(test, "avg %lu uS, stddev %lu uS\n", mean, stddev);
+}
+
+/*
+ * Create a source buffer for benchmarking.
+ */
+static void *__init iov_kunit_create_source(struct kunit *test, size_t npages)
+{
+ struct page *page, **pages;
+ void *scratch;
+ size_t i;
+
+ /* Allocate a page and tile it repeatedly in the buffer. */
+ page = alloc_page(GFP_KERNEL);
+ KUNIT_ASSERT_NOT_NULL(test, page);
+ kunit_add_action_or_reset(test, iov_kunit_free_page, page);
+
+ pages = kunit_kmalloc_array(test, npages, sizeof(pages[0]), GFP_KERNEL);
+ KUNIT_ASSERT_NOT_NULL(test, pages);
+ for (i = 0; i < npages; i++) {
+ pages[i] = page;
+ get_page(page);
+ }
+
+ scratch = vmap(pages, npages, VM_MAP | VM_MAP_PUT_PAGES, PAGE_KERNEL);
+ KUNIT_ASSERT_NOT_NULL(test, scratch);
+ kunit_add_action_or_reset(test, iov_kunit_unmap, scratch);
+ return scratch;
+}
+
+/*
+ * Time copying 256MiB through an ITER_KVEC.
+ */
+static void __init iov_kunit_benchmark_kvec(struct kunit *test)
+{
+ struct iov_iter iter;
+ struct kvec kvec[8];
+ unsigned int samples[IOV_KUNIT_NR_SAMPLES];
+ ktime_t a, b;
+ ssize_t copied;
+ size_t size = 256 * 1024 * 1024, npages = size / PAGE_SIZE, part;
+ void *scratch, *buffer;
+ int i;
+
+ /* Allocate a huge buffer and populate it with pages. */
+ buffer = iov_kunit_create_source(test, npages);
+
+ /* Create a single large buffer to copy to/from. */
+ scratch = iov_kunit_create_source(test, npages);
+
+ /* Split the target over a number of kvecs */
+ copied = 0;
+ for (i = 0; i < ARRAY_SIZE(kvec); i++) {
+ part = size / ARRAY_SIZE(kvec);
+ kvec[i].iov_base = buffer + copied;
+ kvec[i].iov_len = part;
+ copied += part;
+ }
+ kvec[i - 1].iov_len += size - copied;
+
+ /* Perform and time a bunch of copies. */
+ kunit_info(test, "Benchmarking copy_from_iter() over KVEC:\n");
+ for (i = 0; i < IOV_KUNIT_NR_SAMPLES; i++) {
+ iov_iter_kvec(&iter, ITER_SOURCE, kvec, ARRAY_SIZE(kvec), size);
+
+ a = ktime_get_real();
+ copied = copy_from_iter(scratch, size, &iter);
+ b = ktime_get_real();
+ KUNIT_EXPECT_EQ(test, copied, size);
+ samples[i] = ktime_to_us(ktime_sub(b, a));
+ }
+
+ iov_kunit_benchmark_print_stats(test, samples);
+ KUNIT_SUCCEED();
+}
+
+/*
+ * Time copying 256MiB through an ITER_BVEC.
+ */
+static void __init iov_kunit_benchmark_bvec(struct kunit *test)
+{
+ struct iov_iter iter;
+ struct bio_vec *bvec;
+ struct page *page;
+ unsigned int samples[IOV_KUNIT_NR_SAMPLES];
+ ktime_t a, b;
+ ssize_t copied;
+ size_t size = 256 * 1024 * 1024, npages = size / PAGE_SIZE;
+ void *scratch;
+ int i;
+
+ /* Allocate a page and tile it repeatedly in the buffer. */
+ page = alloc_page(GFP_KERNEL);
+ KUNIT_ASSERT_NOT_NULL(test, page);
+ kunit_add_action_or_reset(test, iov_kunit_free_page, page);
+
+ bvec = kunit_kmalloc_array(test, npages, sizeof(bvec[0]), GFP_KERNEL);
+ KUNIT_ASSERT_NOT_NULL(test, bvec);
+ for (i = 0; i < npages; i++)
+ bvec_set_page(&bvec[i], page, PAGE_SIZE, 0);
+
+ /* Create a single large buffer to copy to/from. */
+ scratch = iov_kunit_create_source(test, npages);
+
+ /* Perform and time a bunch of copies. */
+ kunit_info(test, "Benchmarking copy_from_iter() over BVEC:\n");
+ for (i = 0; i < IOV_KUNIT_NR_SAMPLES; i++) {
+ iov_iter_bvec(&iter, ITER_SOURCE, bvec, npages, size);
+ a = ktime_get_real();
+ copied = copy_from_iter(scratch, size, &iter);
+ b = ktime_get_real();
+ KUNIT_EXPECT_EQ(test, copied, size);
+ samples[i] = ktime_to_us(ktime_sub(b, a));
+ }
+
+ iov_kunit_benchmark_print_stats(test, samples);
+ KUNIT_SUCCEED();
+}
+
+/*
+ * Time copying 256MiB through an ITER_BVEC in 256 page chunks.
+ */
+static void __init iov_kunit_benchmark_bvec_split(struct kunit *test)
+{
+ struct iov_iter iter;
+ struct bio_vec *bvec;
+ struct page *page;
+ unsigned int samples[IOV_KUNIT_NR_SAMPLES];
+ ktime_t a, b;
+ ssize_t copied;
+ size_t size, npages = 256;
+ void *scratch;
+ int i, j;
+
+ /* Allocate a page and tile it repeatedly in the buffer. */
+ page = alloc_page(GFP_KERNEL);
+ KUNIT_ASSERT_NOT_NULL(test, page);
+ kunit_add_action_or_reset(test, iov_kunit_free_page, page);
+
+ /* Create a single large buffer to copy to/from. */
+ scratch = iov_kunit_create_source(test, npages);
+
+ /* Perform and time a bunch of copies. */
+ kunit_info(test, "Benchmarking copy_from_iter() over BVEC in 256-page chunks:\n");
+ for (i = 0; i < IOV_KUNIT_NR_SAMPLES; i++) {
+ size = 256 * 1024 * 1024;
+ a = ktime_get_real();
+ do {
+ size_t part = min_t(size_t, size, npages * PAGE_SIZE);
+
+ bvec = kunit_kmalloc_array(test, npages, sizeof(bvec[0]), GFP_KERNEL);
+ KUNIT_ASSERT_NOT_NULL(test, bvec);
+ for (j = 0; j < npages; j++)
+ bvec_set_page(&bvec[j], page, PAGE_SIZE, 0);
+
+ iov_iter_bvec(&iter, ITER_SOURCE, bvec, npages, part);
+ copied = copy_from_iter(scratch, part, &iter);
+ KUNIT_EXPECT_EQ(test, copied, part);
+ size -= part;
+ } while (size > 0);
+ b = ktime_get_real();
+ samples[i] = ktime_to_us(ktime_sub(b, a));
+ }
+
+ iov_kunit_benchmark_print_stats(test, samples);
+ KUNIT_SUCCEED();
+}
+
+/*
+ * Time copying 256MiB through an ITER_XARRAY.
+ */
+static void __init iov_kunit_benchmark_xarray(struct kunit *test)
+{
+ struct iov_iter iter;
+ struct xarray *xarray;
+ struct page *page;
+ unsigned int samples[IOV_KUNIT_NR_SAMPLES];
+ ktime_t a, b;
+ ssize_t copied;
+ size_t size = 256 * 1024 * 1024, npages = size / PAGE_SIZE;
+ void *scratch;
+ int i;
+
+ /* Allocate a page and tile it repeatedly in the buffer. */
+ page = alloc_page(GFP_KERNEL);
+ KUNIT_ASSERT_NOT_NULL(test, page);
+ kunit_add_action_or_reset(test, iov_kunit_free_page, page);
+
+ xarray = iov_kunit_create_xarray(test);
+
+ for (i = 0; i < npages; i++) {
+ void *x = xa_store(xarray, i, page, GFP_KERNEL);
+
+ KUNIT_ASSERT_FALSE(test, xa_is_err(x));
+ }
+
+ /* Create a single large buffer to copy to/from. */
+ scratch = iov_kunit_create_source(test, npages);
+
+ /* Perform and time a bunch of copies. */
+ kunit_info(test, "Benchmarking copy_from_iter() over XARRAY:\n");
+ for (i = 0; i < IOV_KUNIT_NR_SAMPLES; i++) {
+ iov_iter_xarray(&iter, ITER_SOURCE, xarray, 0, size);
+ a = ktime_get_real();
+ copied = copy_from_iter(scratch, size, &iter);
+ b = ktime_get_real();
+ KUNIT_EXPECT_EQ(test, copied, size);
+ samples[i] = ktime_to_us(ktime_sub(b, a));
+ }
+
+ iov_kunit_benchmark_print_stats(test, samples);
+ KUNIT_SUCCEED();
+}
+
static struct kunit_case __refdata iov_kunit_cases[] = {
KUNIT_CASE(iov_kunit_copy_to_ubuf),
KUNIT_CASE(iov_kunit_copy_from_ubuf),
@@ -1278,6 +1525,10 @@ static struct kunit_case __refdata iov_kunit_cases[] = {
KUNIT_CASE(iov_kunit_extract_pages_kvec),
KUNIT_CASE(iov_kunit_extract_pages_bvec),
KUNIT_CASE(iov_kunit_extract_pages_xarray),
+ KUNIT_CASE(iov_kunit_benchmark_kvec),
+ KUNIT_CASE(iov_kunit_benchmark_bvec),
+ KUNIT_CASE(iov_kunit_benchmark_bvec_split),
+ KUNIT_CASE(iov_kunit_benchmark_xarray),
{}
};
Add kunit tests to benchmark copying 256MiB through a UBUF iterator and an
IOVEC iterator. This attaches a userspace VM with a mapped file in it
temporarily to the test thread.
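The "split the target over a number of iovecs" step used by the IOVEC benchmark can be sketched in userspace C; the helper name is illustrative, and the point of the final adjustment is that any rounding remainder is folded into the last entry so the entries cover exactly `size` bytes:

```c
#include <assert.h>
#include <stddef.h>
#include <sys/uio.h>

/*
 * Split a [base, base + size) target evenly over n iovecs, folding any
 * rounding remainder into the final entry so the total is exactly size.
 */
static void split_over_iovecs(char *base, size_t size,
			      struct iovec *iov, int n)
{
	size_t part = size / n, used = 0;

	for (int i = 0; i < n; i++) {
		iov[i].iov_base = base + used;
		iov[i].iov_len = part;
		used += part;
	}
	iov[n - 1].iov_len += size - used;	/* absorb the remainder */
}
```

With 256MiB split 8 ways the remainder happens to be zero, but the adjustment keeps the helper correct for sizes that do not divide evenly.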
Signed-off-by: David Howells <[email protected]>
cc: Andrew Morton <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Christian Brauner <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: Matthew Wilcox <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: John Hubbard <[email protected]>
cc: Brendan Higgins <[email protected]>
cc: David Gow <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
lib/kunit_iov_iter.c | 95 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 95 insertions(+)
diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c
index 2fbe6f2afb26..d5b7517f4f47 100644
--- a/lib/kunit_iov_iter.c
+++ b/lib/kunit_iov_iter.c
@@ -1325,6 +1325,99 @@ static void *__init iov_kunit_create_source(struct kunit *test, size_t npages)
return scratch;
}
+/*
+ * Time copying 256MiB through an ITER_UBUF.
+ */
+static void __init iov_kunit_benchmark_ubuf(struct kunit *test)
+{
+ struct iov_iter iter;
+ unsigned int samples[IOV_KUNIT_NR_SAMPLES];
+ ktime_t a, b;
+ ssize_t copied;
+ size_t size = 256 * 1024 * 1024, npages = size / PAGE_SIZE;
+ void *scratch;
+ int i;
+ u8 __user *buffer;
+
+ /* Allocate a huge buffer and populate it with pages. */
+ buffer = iov_kunit_create_user_buf(test, npages, NULL);
+
+ /* Create a single large buffer to copy to/from. */
+ scratch = iov_kunit_create_source(test, npages);
+
+ /* Perform and time a bunch of copies. */
+ kunit_info(test, "Benchmarking copy_from_iter() over UBUF:\n");
+ for (i = 0; i < IOV_KUNIT_NR_SAMPLES; i++) {
+ size_t remain = size;
+
+ a = ktime_get_real();
+ do {
+ size_t part = min_t(size_t, remain, PAGE_SIZE);
+
+ iov_iter_ubuf(&iter, ITER_SOURCE, buffer, part);
+ copied = copy_from_iter(scratch, part, &iter);
+ KUNIT_EXPECT_EQ(test, copied, part);
+ remain -= part;
+ } while (remain > 0);
+ b = ktime_get_real();
+ samples[i] = ktime_to_us(ktime_sub(b, a));
+ }
+
+ iov_kunit_benchmark_print_stats(test, samples);
+ KUNIT_SUCCEED();
+}
+
+/*
+ * Time copying 256MiB through an ITER_IOVEC.
+ */
+static void __init iov_kunit_benchmark_iovec(struct kunit *test)
+{
+ struct iov_iter iter;
+ struct iovec *iov;
+ unsigned int samples[IOV_KUNIT_NR_SAMPLES];
+ ktime_t a, b;
+ ssize_t copied;
+ size_t size = 256 * 1024 * 1024, npages = size / PAGE_SIZE, part;
+ size_t ioc = size / PAGE_SIZE;
+ void *scratch;
+ int i;
+ u8 __user *buffer;
+
+ iov = kunit_kmalloc_array(test, ioc, sizeof(*iov), GFP_KERNEL);
+ KUNIT_ASSERT_NOT_NULL(test, iov);
+
+ /* Allocate a huge buffer and populate it with pages. */
+ buffer = iov_kunit_create_user_buf(test, npages, NULL);
+
+ /* Create a single large buffer to copy to/from. */
+ scratch = iov_kunit_create_source(test, npages);
+
+ /* Split the target over a number of iovecs */
+ copied = 0;
+ for (i = 0; i < ioc; i++) {
+ part = size / ioc;
+ iov[i].iov_base = buffer + copied;
+ iov[i].iov_len = part;
+ copied += part;
+ }
+ iov[i - 1].iov_len += size - copied;
+
+ /* Perform and time a bunch of copies. */
+ kunit_info(test, "Benchmarking copy_from_iter() over IOVEC:\n");
+ for (i = 0; i < IOV_KUNIT_NR_SAMPLES; i++) {
+ iov_iter_init(&iter, ITER_SOURCE, iov, ioc, size);
+
+ a = ktime_get_real();
+ copied = copy_from_iter(scratch, size, &iter);
+ b = ktime_get_real();
+ KUNIT_EXPECT_EQ(test, copied, size);
+ samples[i] = ktime_to_us(ktime_sub(b, a));
+ }
+
+ iov_kunit_benchmark_print_stats(test, samples);
+ KUNIT_SUCCEED();
+}
+
/*
* Time copying 256MiB through an ITER_KVEC.
*/
@@ -1611,6 +1704,8 @@ static struct kunit_case __refdata iov_kunit_cases[] = {
KUNIT_CASE(iov_kunit_extract_pages_kvec),
KUNIT_CASE(iov_kunit_extract_pages_bvec),
KUNIT_CASE(iov_kunit_extract_pages_xarray),
+ KUNIT_CASE(iov_kunit_benchmark_ubuf),
+ KUNIT_CASE(iov_kunit_benchmark_iovec),
KUNIT_CASE(iov_kunit_benchmark_kvec),
KUNIT_CASE(iov_kunit_benchmark_bvec),
KUNIT_CASE(iov_kunit_benchmark_bvec_split),
Add copy kunit tests for ITER_UBUF- and ITER_IOVEC-type iterators. This
attaches a userspace VM with a mapped file in it temporarily to the test
thread.
Signed-off-by: David Howells <[email protected]>
cc: Andrew Morton <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Christian Brauner <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: Matthew Wilcox <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: John Hubbard <[email protected]>
cc: Brendan Higgins <[email protected]>
cc: David Gow <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
lib/kunit_iov_iter.c | 236 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 236 insertions(+)
diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c
index 63e4dd1e7c1b..34f0d82674ee 100644
--- a/lib/kunit_iov_iter.c
+++ b/lib/kunit_iov_iter.c
@@ -117,6 +117,23 @@ static void *__init iov_kunit_create_buffer(struct kunit *test,
return buffer;
}
+/*
+ * Fill a user buffer with a recognisable pattern.
+ */
+static void iov_kunit_fill_user_buf(struct kunit *test,
+ u8 __user *buffer, size_t bufsize)
+{
+ size_t i;
+ int err;
+
+ for (i = 0; i < bufsize; i++) {
+ err = put_user(pattern(i), &buffer[i]);
+ KUNIT_EXPECT_EQ(test, err, 0);
+ if (test->status == KUNIT_FAILURE)
+ return;
+ }
+}
+
/*
* Build the reference pattern in the scratch buffer that we expect to see in
* the iterator buffer (ie. the result of copy *to*).
@@ -172,6 +189,25 @@ static void iov_kunit_check_pattern(struct kunit *test, const u8 *buffer,
}
}
+/*
+ * Compare a user and a scratch buffer to see that they're the same.
+ */
+static void iov_kunit_check_user_pattern(struct kunit *test, const u8 __user *buffer,
+ const u8 *scratch, size_t bufsize)
+{
+ size_t i;
+ int err;
+ u8 c;
+
+ for (i = 0; i < bufsize; i++) {
+ err = get_user(c, &buffer[i]);
+ KUNIT_EXPECT_EQ(test, err, 0);
+ KUNIT_EXPECT_EQ_MSG(test, c, scratch[i], "at i=%x", i);
+ if (c != scratch[i])
+ return;
+ }
+}
+
static const struct file_operations iov_kunit_user_file_fops = {
.mmap = generic_file_mmap,
};
@@ -294,6 +330,202 @@ static u8 __user *__init iov_kunit_create_user_buf(struct kunit *test,
return buffer;
}
+/*
+ * Test copying to an ITER_UBUF-type iterator.
+ */
+static void __init iov_kunit_copy_to_ubuf(struct kunit *test)
+{
+ const struct iov_kunit_range *pr;
+ struct iov_iter iter;
+ struct page **spages;
+ u8 __user *buffer;
+ u8 *scratch;
+ ssize_t uncleared;
+ size_t bufsize, npages, size, copied;
+ int i;
+
+ bufsize = 0x100000;
+ npages = bufsize / PAGE_SIZE;
+
+ scratch = iov_kunit_create_buffer(test, &spages, npages);
+ for (i = 0; i < bufsize; i++)
+ scratch[i] = pattern(i);
+
+ buffer = iov_kunit_create_user_buf(test, npages, NULL);
+ uncleared = clear_user(buffer, bufsize);
+ KUNIT_EXPECT_EQ(test, uncleared, 0);
+ if (uncleared)
+ return;
+
+ i = 0;
+ for (pr = kvec_test_ranges; pr->page >= 0; pr++) {
+ size = pr->to - pr->from;
+ KUNIT_ASSERT_LE(test, pr->to, bufsize);
+
+ iov_iter_ubuf(&iter, ITER_DEST, buffer + pr->from, size);
+ copied = copy_to_iter(scratch + i, size, &iter);
+
+ KUNIT_EXPECT_EQ(test, copied, size);
+ KUNIT_EXPECT_EQ(test, iter.count, 0);
+ KUNIT_EXPECT_EQ(test, iter.iov_offset, size);
+ if (test->status == KUNIT_FAILURE)
+ break;
+ i += size;
+ }
+
+ iov_kunit_build_to_reference_pattern(test, scratch, bufsize, kvec_test_ranges);
+ iov_kunit_check_user_pattern(test, buffer, scratch, bufsize);
+ KUNIT_SUCCEED();
+}
+
+/*
+ * Test copying from an ITER_UBUF-type iterator.
+ */
+static void __init iov_kunit_copy_from_ubuf(struct kunit *test)
+{
+ const struct iov_kunit_range *pr;
+ struct iov_iter iter;
+ struct page **spages;
+ u8 __user *buffer;
+ u8 *scratch, *reference;
+ size_t bufsize, npages, size, copied;
+ int i;
+
+ bufsize = 0x100000;
+ npages = bufsize / PAGE_SIZE;
+
+ buffer = iov_kunit_create_user_buf(test, npages, NULL);
+ iov_kunit_fill_user_buf(test, buffer, bufsize);
+
+ scratch = iov_kunit_create_buffer(test, &spages, npages);
+ memset(scratch, 0, bufsize);
+
+ reference = iov_kunit_create_buffer(test, &spages, npages);
+
+ i = 0;
+ for (pr = kvec_test_ranges; pr->page >= 0; pr++) {
+ size = pr->to - pr->from;
+ KUNIT_ASSERT_LE(test, pr->to, bufsize);
+
+ iov_iter_ubuf(&iter, ITER_SOURCE, buffer + pr->from, size);
+ copied = copy_from_iter(scratch + i, size, &iter);
+
+ KUNIT_EXPECT_EQ(test, copied, size);
+ KUNIT_EXPECT_EQ(test, iter.count, 0);
+ KUNIT_EXPECT_EQ(test, iter.iov_offset, size);
+ if (test->status == KUNIT_FAILURE)
+ break;
+ i += size;
+ }
+
+ iov_kunit_build_from_reference_pattern(test, reference, bufsize, kvec_test_ranges);
+ iov_kunit_check_pattern(test, scratch, reference, bufsize);
+ KUNIT_SUCCEED();
+}
+
+static void __init iov_kunit_load_iovec(struct kunit *test,
+ struct iov_iter *iter, int dir,
+ struct iovec *iov, unsigned int iovmax,
+ u8 __user *buffer, size_t bufsize,
+ const struct iov_kunit_range *pr)
+{
+ size_t size = 0;
+ int i;
+
+ for (i = 0; i < iovmax; i++, pr++) {
+ if (pr->page < 0)
+ break;
+ KUNIT_ASSERT_GE(test, pr->to, pr->from);
+ KUNIT_ASSERT_LE(test, pr->to, bufsize);
+ iov[i].iov_base = buffer + pr->from;
+ iov[i].iov_len = pr->to - pr->from;
+ size += pr->to - pr->from;
+ }
+ KUNIT_ASSERT_LE(test, size, bufsize);
+
+ iov_iter_init(iter, dir, iov, i, size);
+}
+
+/*
+ * Test copying to an ITER_IOVEC-type iterator.
+ */
+static void __init iov_kunit_copy_to_iovec(struct kunit *test)
+{
+ struct iov_iter iter;
+ struct page **spages;
+ struct iovec iov[8];
+ u8 __user *buffer;
+ u8 *scratch;
+ ssize_t uncleared;
+ size_t bufsize, npages, size, copied;
+ int i;
+
+ bufsize = 0x100000;
+ npages = bufsize / PAGE_SIZE;
+
+ scratch = iov_kunit_create_buffer(test, &spages, npages);
+ for (i = 0; i < bufsize; i++)
+ scratch[i] = pattern(i);
+
+ buffer = iov_kunit_create_user_buf(test, npages, NULL);
+ uncleared = clear_user(buffer, bufsize);
+ KUNIT_EXPECT_EQ(test, uncleared, 0);
+ if (uncleared)
+ return;
+
+ iov_kunit_load_iovec(test, &iter, ITER_DEST, iov, ARRAY_SIZE(iov),
+ buffer, bufsize, kvec_test_ranges);
+ size = iter.count;
+
+ copied = copy_to_iter(scratch, size, &iter);
+
+ KUNIT_EXPECT_EQ(test, copied, size);
+ KUNIT_EXPECT_EQ(test, iter.count, 0);
+ KUNIT_EXPECT_EQ(test, iter.nr_segs, 0);
+
+ iov_kunit_build_to_reference_pattern(test, scratch, bufsize, kvec_test_ranges);
+ iov_kunit_check_user_pattern(test, buffer, scratch, bufsize);
+ KUNIT_SUCCEED();
+}
+
+/*
+ * Test copying from an ITER_IOVEC-type iterator.
+ */
+static void __init iov_kunit_copy_from_iovec(struct kunit *test)
+{
+ struct iov_iter iter;
+ struct page **spages;
+ struct iovec iov[8];
+ u8 __user *buffer;
+ u8 *scratch, *reference;
+ size_t bufsize, npages, size, copied;
+
+ bufsize = 0x100000;
+ npages = bufsize / PAGE_SIZE;
+
+ buffer = iov_kunit_create_user_buf(test, npages, NULL);
+ iov_kunit_fill_user_buf(test, buffer, bufsize);
+
+ scratch = iov_kunit_create_buffer(test, &spages, npages);
+ memset(scratch, 0, bufsize);
+
+ reference = iov_kunit_create_buffer(test, &spages, npages);
+
+ iov_kunit_load_iovec(test, &iter, ITER_SOURCE, iov, ARRAY_SIZE(iov),
+ buffer, bufsize, kvec_test_ranges);
+ size = iter.count;
+
+ copied = copy_from_iter(scratch, size, &iter);
+
+ KUNIT_EXPECT_EQ(test, copied, size);
+ KUNIT_EXPECT_EQ(test, iter.count, 0);
+ KUNIT_EXPECT_EQ(test, iter.nr_segs, 0);
+
+ iov_kunit_build_from_reference_pattern(test, reference, bufsize, kvec_test_ranges);
+ iov_kunit_check_pattern(test, scratch, reference, bufsize);
+ KUNIT_SUCCEED();
+}
+
static void __init iov_kunit_load_kvec(struct kunit *test,
struct iov_iter *iter, int dir,
struct kvec *kvec, unsigned int kvmax,
@@ -869,6 +1101,10 @@ static void __init iov_kunit_extract_pages_xarray(struct kunit *test)
}
static struct kunit_case __refdata iov_kunit_cases[] = {
+ KUNIT_CASE(iov_kunit_copy_to_ubuf),
+ KUNIT_CASE(iov_kunit_copy_from_ubuf),
+ KUNIT_CASE(iov_kunit_copy_to_iovec),
+ KUNIT_CASE(iov_kunit_copy_from_iovec),
KUNIT_CASE(iov_kunit_copy_to_kvec),
KUNIT_CASE(iov_kunit_copy_from_kvec),
KUNIT_CASE(iov_kunit_copy_to_bvec),
On Wed, 15 Nov 2023 at 10:50, David Howells <[email protected]> wrote:
>
> (3) Add a function to set up a userspace VM, attach the VM to the kunit
> testing thread, create an anonymous file, stuff some pages into the
> file and map the file into the VM to act as a buffer that can be used
> with UBUF/IOVEC iterators.
>
> I map an anonymous file with pages attached rather than using MAP_ANON
> so that I can check the pages obtained from iov_iter_extract_pages()
> without worrying about them changing due to swap, migrate, etc..
>
> [?] Is this the best way to do things? Mirroring execve, it requires
> a number of extra core symbols to be exported. Should this be done in
> the core code?
Do you really need to do this as a kunit test in the kernel itself?
Why not just make it a user-space test as part of tools/testing/selftests?
That's what it smells like to me. You're doing user-level tests, but
you're doing them in the wrong place, so you need to jump through all
these hoops that you really shouldn't.
Linus
On Wed, 15 Nov 2023 at 10:50, David Howells <[email protected]> wrote:
>
> This requires access to otherwise unexported core symbols: mm_alloc(),
> vm_area_alloc(), insert_vm_struct(), arch_pick_mmap_layout() and
> anon_inode_getfile_secure(), which I've exported _GPL.
>
> [?] Would it be better if this were done in core and not in a module?
I'm not going to take this, even if it were to be sent to me through Christian.
I think the exports really show that this shouldn't be done. And yes,
doing it in core would avoid the exports, but would be even worse.
Those functions exist for setting up user space. You should be doing
this in user space.
I'm getting really fed up with the problems that the KUnit tests
cause. We have a long history of self-inflicted pain due to "unit
testing", where it has caused stupid problems like just overflowing
the kernel stack etc.
This needs to stop. And this is where I'm putting my foot down. No
more KUnit tests that make up interfaces - or use interfaces - that
they have absolutely no place using.
From a quick look, what you were doing was checking that the patterns
you set up in user space came through ok. Dammit, what's wrong with
just using read()/write() on a pipe, or splice, or whatever. It will
test exactly the same iov_iter thing.
Kernel code should do things that can *only* be done in the kernel.
This is not it.
Linus
On Wed, 15 Nov 2023 at 10:50, David Howells <[email protected]> wrote:
>
> Add kunit tests to benchmark 256MiB copies to a KVEC iterator, a BVEC
> iterator, an XARRAY iterator and to a loop that allocates 256-page BVECs
> and fills them in (similar to a maximal bio struct being set up).
I see *zero* advantage of doing this in the kernel as opposed to doing
this benchmarking in user space.
If you cannot see the performance difference due to some user space
interface costs, then the performance difference doesn't matter.
Yes, some of the cases may be harder to trigger than others.
iov_iter_xarray() isn't as common an op as ubuf/iovec/etc, but that
either means that it doesn't matter enough, or that maybe some more
filesystems could be taught to use it for splice or whatever.
Particularly for something like different versions of memcpy(), this
whole benchmarking would want
(a) profiles
(b) be run on many different machines
(c) be run repeatedly to get some idea of variance
and all of those only get *harder* to do with Kunit tests. In user
space? Just run the damn binary (ok, to get profiles you then have to
make sure you have the proper permission setup to get the kernel
profiles too, but a
echo 1 > /proc/sys/kernel/perf_event_paranoid
as root will do that for you without you having to then do the actual
profiling run as root)
Linus
Linus Torvalds <[email protected]> wrote:
> From a quick look, what you were doing was checking that the patterns
> you set up in user space came through ok. Dammit, what's wrong with
> just using read()/write() on a pipe, or splice, or whatever. It will
> test exactly the same iov_iter thing.
I was trying to make it possible to do these tests before starting userspace
as there's a good chance that if the UBUF/IOVEC iterators don't work right
then your system can't be booted.
Anyway, if I drop patches 5, 6, 7 and 10 (ie. the ones doing stuff with UBUF
and IOVEC-type iterators), would you be okay with the rest?
David
On Wed, 15 Nov 2023 at 11:39, David Howells <[email protected]> wrote:
>
> I was trying to make it possible to do these tests before starting userspace
> as there's a good chance that if the UBUF/IOVEC iterators don't work right
> then your system can't be booted.
Oh, I don't think that any unit test should bother to check for that
kind of catastrophic case.
If something is so broken that the kernel doesn't boot properly even
into some basic test infrastructure, then bisection will trivially
find where that breakage was introduced.
And if it's something as core as the iov iterators, it won't even get
past the initial developer unless it's some odd build system
interaction.
So extreme cases aren't even worth checking for. What's worth testing
is "the system boots and works, but I want to check the edge cases".
IOW, when it comes to things like user copies, it's things like
alignment, and the page fault edge cases with EFAULT in particular.
You can easily get the return value wrong for a user copy that ends up
with an unaligned fault at the end of the last mapped page. Everything
normal will still work fine, because nobody does something that odd.
But those are best handled as user mode tests.
Linus