Date: 2023-09-15 02:38:54
From: David Howells
Subject: [RFC PATCH 0/9] iov_iter: kunit: Cleanup, abstraction and more tests

Hi Al, Linus,

These patches make some changes to the kunit tests previously added for
iov_iter testing, in particular adding support for testing UBUF/IOVEC
iterators:

(1) Clean up a couple of checkpatch style complaints.

(2) Consolidate some repeated bits of code into helper functions and use
the same struct to represent straight offset/address ranges and
partial page lists.

(3) Add a function to set up a userspace VM, attach the VM to the kunit
testing thread, create an anonymous file, stuff some pages into the
file and map the file into the VM to act as a buffer that can be used
with UBUF/IOVEC iterators.

I map an anonymous file with pages attached rather than using MAP_ANON
so that I can check the pages obtained from iov_iter_extract_pages()
without worrying about them changing due to swap, migration, etc.

[?] Is this the best way to do things? Mirroring execve, it requires
a number of extra core symbols to be exported. Should this be done in
the core code?

(4) Add tests for copying into and out of UBUF and IOVEC iterators.

(5) Add tests for extracting pages from UBUF and IOVEC iterators.

(6) Add tests to benchmark copying 256MiB to UBUF, IOVEC, KVEC, BVEC and
XARRAY iterators.

[!] Note that this requires 256MiB of memory for UBUF and IOVEC; the
KVEC, BVEC and XARRAY benchmarking maps a single page multiple times.
I might be able to shrink that if I can add the same page multiple
times to the anon file's pagecache. I'm sure this is not recommended,
but I might be able to get away with it for this particular
application.

(7) Add a test to benchmark copying 256MiB through dynamically allocated
256-page bvecs to simulate bio construction.

Example benchmark output:

iov_kunit_benchmark_ubuf: avg 26899 uS, stddev 142 uS
iov_kunit_benchmark_iovec: avg 26897 uS, stddev 74 uS
iov_kunit_benchmark_kvec: avg 2688 uS, stddev 35 uS
iov_kunit_benchmark_bvec: avg 3139 uS, stddev 21 uS
iov_kunit_benchmark_bvec_split: avg 3379 uS, stddev 15 uS
iov_kunit_benchmark_xarray: avg 3582 uS, stddev 13 uS
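
The timing loop in the benchmark patches is roughly of the following shape
(simplified sketch only, not the actual patch code; iov_kunit_bench_kvec()
and NR_BENCH_RUNS are placeholder names, and buf/kvec are assumed to be set
up by the caller as in the other kvec tests):

#include <linux/kernel.h>
#include <linux/ktime.h>
#include <linux/uio.h>
#include <kunit/test.h>

#define NR_BENCH_RUNS 16

static void __init iov_kunit_bench_kvec(struct kunit *test, void *buf,
					size_t size, struct kvec *kvec,
					unsigned int nr_segs)
{
	u64 t, sum = 0, sumsq = 0, mean, var, stddev;
	struct iov_iter iter;
	int i;

	for (i = 0; i < NR_BENCH_RUNS; i++) {
		ktime_t start = ktime_get();

		/* Refill the iterator and copy the whole buffer through it. */
		iov_iter_kvec(&iter, ITER_SOURCE, kvec, nr_segs, size);
		KUNIT_EXPECT_EQ(test, copy_from_iter(buf, size, &iter), size);

		t = ktime_to_us(ktime_sub(ktime_get(), start));
		sum += t;
		sumsq += t * t;
	}

	/* Integer approximations of the mean and standard deviation. */
	mean = sum / NR_BENCH_RUNS;
	var = sumsq / NR_BENCH_RUNS - mean * mean;
	stddev = int_sqrt64(var);
	kunit_info(test, "avg %llu uS, stddev %llu uS\n", mean, stddev);
}

The actual patches do this for each iterator type over 256MiB; the sketch
only shows the general shape of the measurement.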

I've pushed the patches here also:

https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=iov-kunit

David

David Howells (9):
iov_iter: Fix some checkpatch complaints in kunit tests
iov_iter: Consolidate some of the repeated code into helpers
iov_iter: Consolidate the test vector struct in the kunit tests
iov_iter: Consolidate bvec pattern checking
iov_iter: Create a function to prepare userspace VM for UBUF/IOVEC
tests
iov_iter: Add copy kunit tests for ITER_UBUF and ITER_IOVEC
iov_iter: Add extract kunit tests for ITER_UBUF and ITER_IOVEC
iov_iter: Add benchmarking kunit tests
iov_iter: Add benchmarking kunit tests for UBUF/IOVEC

fs/anon_inodes.c | 1 +
kernel/fork.c | 2 +
lib/kunit_iov_iter.c | 1211 +++++++++++++++++++++++++++++++++++-------
mm/mmap.c | 1 +
mm/util.c | 1 +
5 files changed, 1024 insertions(+), 192 deletions(-)


Date: 2023-09-15 03:25:11
From: David Howells
Subject: [RFC PATCH 6/9] iov_iter: Add copy kunit tests for ITER_UBUF and ITER_IOVEC

Add copy kunit tests for ITER_UBUF- and ITER_IOVEC-type iterators. These
tests temporarily attach a userspace VM, with a file mapped into it, to the
test thread.

Signed-off-by: David Howells <[email protected]>
cc: Andrew Morton <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Christian Brauner <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: Matthew Wilcox <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: John Hubbard <[email protected]>
cc: Brendan Higgins <[email protected]>
cc: David Gow <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
lib/kunit_iov_iter.c | 200 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 200 insertions(+)

diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c
index 3353bca9c40f..78f566ebd4a6 100644
--- a/lib/kunit_iov_iter.c
+++ b/lib/kunit_iov_iter.c
@@ -309,6 +309,202 @@ static u8 __user *__init iov_kunit_create_user_buf(struct kunit *test,
return buffer;
}

+/*
+ * Test copying to an ITER_UBUF-type iterator.
+ */
+static void __init iov_kunit_copy_to_ubuf(struct kunit *test)
+{
+ const struct iov_kunit_range *pr;
+ struct iov_iter iter;
+ struct page **spages;
+ u8 __user *buffer;
+ u8 *scratch;
+ ssize_t uncleared;
+ size_t bufsize, npages, size, copied;
+ int i;
+
+ bufsize = 0x100000;
+ npages = bufsize / PAGE_SIZE;
+
+ scratch = iov_kunit_create_buffer(test, &spages, npages);
+ for (i = 0; i < bufsize; i++)
+ scratch[i] = pattern(i);
+
+ buffer = iov_kunit_create_user_buf(test, npages, NULL);
+ uncleared = clear_user(buffer, bufsize);
+ KUNIT_EXPECT_EQ(test, uncleared, 0);
+ if (uncleared)
+ return;
+
+ i = 0;
+ for (pr = kvec_test_ranges; pr->page >= 0; pr++) {
+ size = pr->to - pr->from;
+ KUNIT_ASSERT_LE(test, pr->to, bufsize);
+
+ iov_iter_ubuf(&iter, ITER_DEST, buffer + pr->from, size);
+ copied = copy_to_iter(scratch + i, size, &iter);
+
+ KUNIT_EXPECT_EQ(test, copied, size);
+ KUNIT_EXPECT_EQ(test, iter.count, 0);
+ KUNIT_EXPECT_EQ(test, iter.iov_offset, size);
+ if (test->status == KUNIT_FAILURE)
+ break;
+ i += size;
+ }
+
+ iov_kunit_build_to_reference_pattern(test, scratch, bufsize, kvec_test_ranges);
+ iov_kunit_check_user_pattern(test, buffer, scratch, bufsize);
+ KUNIT_SUCCEED();
+}
+
+/*
+ * Test copying from an ITER_UBUF-type iterator.
+ */
+static void __init iov_kunit_copy_from_ubuf(struct kunit *test)
+{
+ const struct iov_kunit_range *pr;
+ struct iov_iter iter;
+ struct page **spages;
+ u8 __user *buffer;
+ u8 *scratch, *reference;
+ size_t bufsize, npages, size, copied;
+ int i;
+
+ bufsize = 0x100000;
+ npages = bufsize / PAGE_SIZE;
+
+ buffer = iov_kunit_create_user_buf(test, npages, NULL);
+ iov_kunit_fill_user_buf(test, buffer, bufsize);
+
+ scratch = iov_kunit_create_buffer(test, &spages, npages);
+ memset(scratch, 0, bufsize);
+
+ reference = iov_kunit_create_buffer(test, &spages, npages);
+
+ i = 0;
+ for (pr = kvec_test_ranges; pr->page >= 0; pr++) {
+ size = pr->to - pr->from;
+ KUNIT_ASSERT_LE(test, pr->to, bufsize);
+
+ iov_iter_ubuf(&iter, ITER_SOURCE, buffer + pr->from, size);
+ copied = copy_from_iter(scratch + i, size, &iter);
+
+ KUNIT_EXPECT_EQ(test, copied, size);
+ KUNIT_EXPECT_EQ(test, iter.count, 0);
+ KUNIT_EXPECT_EQ(test, iter.iov_offset, size);
+ if (test->status == KUNIT_FAILURE)
+ break;
+ i += size;
+ }
+
+ iov_kunit_build_from_reference_pattern(test, reference, bufsize, kvec_test_ranges);
+ iov_kunit_check_pattern(test, scratch, reference, bufsize);
+ KUNIT_SUCCEED();
+}
+
+static void __init iov_kunit_load_iovec(struct kunit *test,
+ struct iov_iter *iter, int dir,
+ struct iovec *iov, unsigned int iovmax,
+ u8 __user *buffer, size_t bufsize,
+ const struct iov_kunit_range *pr)
+{
+ size_t size = 0;
+ int i;
+
+ for (i = 0; i < iovmax; i++, pr++) {
+ if (pr->page < 0)
+ break;
+ KUNIT_ASSERT_GE(test, pr->to, pr->from);
+ KUNIT_ASSERT_LE(test, pr->to, bufsize);
+ iov[i].iov_base = buffer + pr->from;
+ iov[i].iov_len = pr->to - pr->from;
+ size += pr->to - pr->from;
+ }
+ KUNIT_ASSERT_LE(test, size, bufsize);
+
+ iov_iter_init(iter, dir, iov, i, size);
+}
+
+/*
+ * Test copying to an ITER_IOVEC-type iterator.
+ */
+static void __init iov_kunit_copy_to_iovec(struct kunit *test)
+{
+ struct iov_iter iter;
+ struct page **spages;
+ struct iovec iov[8];
+ u8 __user *buffer;
+ u8 *scratch;
+ ssize_t uncleared;
+ size_t bufsize, npages, size, copied;
+ int i;
+
+ bufsize = 0x100000;
+ npages = bufsize / PAGE_SIZE;
+
+ scratch = iov_kunit_create_buffer(test, &spages, npages);
+ for (i = 0; i < bufsize; i++)
+ scratch[i] = pattern(i);
+
+ buffer = iov_kunit_create_user_buf(test, npages, NULL);
+ uncleared = clear_user(buffer, bufsize);
+ KUNIT_EXPECT_EQ(test, uncleared, 0);
+ if (uncleared)
+ return;
+
+ iov_kunit_load_iovec(test, &iter, ITER_DEST, iov, ARRAY_SIZE(iov),
+ buffer, bufsize, kvec_test_ranges);
+ size = iter.count;
+
+ copied = copy_to_iter(scratch, size, &iter);
+
+ KUNIT_EXPECT_EQ(test, copied, size);
+ KUNIT_EXPECT_EQ(test, iter.count, 0);
+ KUNIT_EXPECT_EQ(test, iter.nr_segs, 0);
+
+ iov_kunit_build_to_reference_pattern(test, scratch, bufsize, kvec_test_ranges);
+ iov_kunit_check_user_pattern(test, buffer, scratch, bufsize);
+ KUNIT_SUCCEED();
+}
+
+/*
+ * Test copying from an ITER_IOVEC-type iterator.
+ */
+static void __init iov_kunit_copy_from_iovec(struct kunit *test)
+{
+ struct iov_iter iter;
+ struct page **spages;
+ struct iovec iov[8];
+ u8 __user *buffer;
+ u8 *scratch, *reference;
+ size_t bufsize, npages, size, copied;
+
+ bufsize = 0x100000;
+ npages = bufsize / PAGE_SIZE;
+
+ buffer = iov_kunit_create_user_buf(test, npages, NULL);
+ iov_kunit_fill_user_buf(test, buffer, bufsize);
+
+ scratch = iov_kunit_create_buffer(test, &spages, npages);
+ memset(scratch, 0, bufsize);
+
+ reference = iov_kunit_create_buffer(test, &spages, npages);
+
+ iov_kunit_load_iovec(test, &iter, ITER_SOURCE, iov, ARRAY_SIZE(iov),
+ buffer, bufsize, kvec_test_ranges);
+ size = iter.count;
+
+ copied = copy_from_iter(scratch, size, &iter);
+
+ KUNIT_EXPECT_EQ(test, copied, size);
+ KUNIT_EXPECT_EQ(test, iter.count, 0);
+ KUNIT_EXPECT_EQ(test, iter.nr_segs, 0);
+
+ iov_kunit_build_from_reference_pattern(test, reference, bufsize, kvec_test_ranges);
+ iov_kunit_check_pattern(test, reference, scratch, bufsize);
+ KUNIT_SUCCEED();
+}
+
static void __init iov_kunit_load_kvec(struct kunit *test,
struct iov_iter *iter, int dir,
struct kvec *kvec, unsigned int kvmax,
@@ -884,6 +1080,10 @@ static void __init iov_kunit_extract_pages_xarray(struct kunit *test)
}

static struct kunit_case __refdata iov_kunit_cases[] = {
+ KUNIT_CASE(iov_kunit_copy_to_ubuf),
+ KUNIT_CASE(iov_kunit_copy_from_ubuf),
+ KUNIT_CASE(iov_kunit_copy_to_iovec),
+ KUNIT_CASE(iov_kunit_copy_from_iovec),
KUNIT_CASE(iov_kunit_copy_to_kvec),
KUNIT_CASE(iov_kunit_copy_from_kvec),
KUNIT_CASE(iov_kunit_copy_to_bvec),

Date: 2023-09-15 04:37:01
From: David Howells
Subject: [RFC PATCH 2/9] iov_iter: Consolidate some of the repeated code into helpers

Consolidate some of the repeated code snippets into helper functions to
reduce the line count.
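
The pattern() helper used by these new functions already exists in
lib/kunit_iov_iter.c and isn't changed by this patch; it just maps a byte
index to a repeating byte value, roughly like this (sketch, not quoted
verbatim from the file):

static u8 __init pattern(unsigned long x)
{
	return x & 0xff;
}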

Signed-off-by: David Howells <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Christian Brauner <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: John Hubbard <[email protected]>
cc: Brendan Higgins <[email protected]>
cc: David Gow <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
lib/kunit_iov_iter.c | 189 +++++++++++++++++++------------------------
1 file changed, 84 insertions(+), 105 deletions(-)

diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c
index 4a6c0efd33f5..ee586eb652b4 100644
--- a/lib/kunit_iov_iter.c
+++ b/lib/kunit_iov_iter.c
@@ -19,18 +19,18 @@ MODULE_AUTHOR("David Howells <[email protected]>");
MODULE_LICENSE("GPL");

struct kvec_test_range {
- int from, to;
+ int page, from, to;
};

static const struct kvec_test_range kvec_test_ranges[] = {
- { 0x00002, 0x00002 },
- { 0x00027, 0x03000 },
- { 0x05193, 0x18794 },
- { 0x20000, 0x20000 },
- { 0x20000, 0x24000 },
- { 0x24000, 0x27001 },
- { 0x29000, 0xffffb },
- { 0xffffd, 0xffffe },
+ { 0, 0x00002, 0x00002 },
+ { 0, 0x00027, 0x03000 },
+ { 0, 0x05193, 0x18794 },
+ { 0, 0x20000, 0x20000 },
+ { 0, 0x20000, 0x24000 },
+ { 0, 0x24000, 0x27001 },
+ { 0, 0x29000, 0xffffb },
+ { 0, 0xffffd, 0xffffe },
{ -1 }
};

@@ -69,6 +69,57 @@ static void *__init iov_kunit_create_buffer(struct kunit *test,
return buffer;
}

+/*
+ * Build the reference pattern in the scratch buffer that we expect to see in
+ * the iterator buffer (ie. the result of copy *to*).
+ */
+static void iov_kunit_build_to_reference_pattern(struct kunit *test, u8 *scratch,
+ size_t bufsize,
+ const struct kvec_test_range *pr)
+{
+ int i, patt = 0;
+
+ memset(scratch, 0, bufsize);
+ for (; pr->page >= 0; pr++)
+ for (i = pr->from; i < pr->to; i++)
+ scratch[i] = pattern(patt++);
+}
+
+/*
+ * Build the reference pattern in the iterator buffer that we expect to see in
+ * the scratch buffer (ie. the result of copy *from*).
+ */
+static void iov_kunit_build_from_reference_pattern(struct kunit *test, u8 *buffer,
+ size_t bufsize,
+ const struct kvec_test_range *pr)
+{
+ size_t i = 0, j;
+
+ memset(buffer, 0, bufsize);
+ for (; pr->page >= 0; pr++) {
+ for (j = pr->from; j < pr->to; j++) {
+ buffer[i++] = pattern(j);
+ if (i >= bufsize)
+ return;
+ }
+ }
+}
+
+/*
+ * Compare two kernel buffers to see that they're the same.
+ */
+static void iov_kunit_check_pattern(struct kunit *test, const u8 *buffer,
+ const u8 *scratch, size_t bufsize)
+{
+ size_t i;
+
+ for (i = 0; i < bufsize; i++) {
+ KUNIT_EXPECT_EQ_MSG(test, buffer[i], scratch[i], "at i=%x", i);
+ if (buffer[i] != scratch[i])
+ return;
+ }
+}
+
static void __init iov_kunit_load_kvec(struct kunit *test,
struct iov_iter *iter, int dir,
struct kvec *kvec, unsigned int kvmax,
@@ -79,7 +130,7 @@ static void __init iov_kunit_load_kvec(struct kunit *test,
int i;

for (i = 0; i < kvmax; i++, pr++) {
- if (pr->from < 0)
+ if (pr->page < 0)
break;
KUNIT_ASSERT_GE(test, pr->to, pr->from);
KUNIT_ASSERT_LE(test, pr->to, bufsize);
@@ -97,13 +148,12 @@ static void __init iov_kunit_load_kvec(struct kunit *test,
*/
static void __init iov_kunit_copy_to_kvec(struct kunit *test)
{
- const struct kvec_test_range *pr;
struct iov_iter iter;
struct page **spages, **bpages;
struct kvec kvec[8];
u8 *scratch, *buffer;
size_t bufsize, npages, size, copied;
- int i, patt;
+ int i;

bufsize = 0x100000;
npages = bufsize / PAGE_SIZE;
@@ -125,20 +175,8 @@ static void __init iov_kunit_copy_to_kvec(struct kunit *test)
KUNIT_EXPECT_EQ(test, iter.count, 0);
KUNIT_EXPECT_EQ(test, iter.nr_segs, 0);

- /* Build the expected image in the scratch buffer. */
- patt = 0;
- memset(scratch, 0, bufsize);
- for (pr = kvec_test_ranges; pr->from >= 0; pr++)
- for (i = pr->from; i < pr->to; i++)
- scratch[i] = pattern(patt++);
-
- /* Compare the images */
- for (i = 0; i < bufsize; i++) {
- KUNIT_EXPECT_EQ_MSG(test, buffer[i], scratch[i], "at i=%x", i);
- if (buffer[i] != scratch[i])
- return;
- }
-
+ iov_kunit_build_to_reference_pattern(test, scratch, bufsize, kvec_test_ranges);
+ iov_kunit_check_pattern(test, buffer, scratch, bufsize);
KUNIT_SUCCEED();
}

@@ -147,13 +185,12 @@ static void __init iov_kunit_copy_to_kvec(struct kunit *test)
*/
static void __init iov_kunit_copy_from_kvec(struct kunit *test)
{
- const struct kvec_test_range *pr;
struct iov_iter iter;
struct page **spages, **bpages;
struct kvec kvec[8];
u8 *scratch, *buffer;
size_t bufsize, npages, size, copied;
- int i, j;
+ int i;

bufsize = 0x100000;
npages = bufsize / PAGE_SIZE;
@@ -175,25 +212,8 @@ static void __init iov_kunit_copy_from_kvec(struct kunit *test)
KUNIT_EXPECT_EQ(test, iter.count, 0);
KUNIT_EXPECT_EQ(test, iter.nr_segs, 0);

- /* Build the expected image in the main buffer. */
- i = 0;
- memset(buffer, 0, bufsize);
- for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
- for (j = pr->from; j < pr->to; j++) {
- buffer[i++] = pattern(j);
- if (i >= bufsize)
- goto stop;
- }
- }
-stop:
-
- /* Compare the images */
- for (i = 0; i < bufsize; i++) {
- KUNIT_EXPECT_EQ_MSG(test, scratch[i], buffer[i], "at i=%x", i);
- if (scratch[i] != buffer[i])
- return;
- }
-
+ iov_kunit_build_from_reference_pattern(test, buffer, bufsize, kvec_test_ranges);
+ iov_kunit_check_pattern(test, buffer, scratch, bufsize);
KUNIT_SUCCEED();
}

@@ -210,7 +230,7 @@ static const struct bvec_test_range bvec_test_ranges[] = {
{ 5, 0x0000, 0x1000 },
{ 6, 0x0000, 0x0ffb },
{ 6, 0x0ffd, 0x0ffe },
- { -1, -1, -1 }
+ { -1 }
};

static void __init iov_kunit_load_bvec(struct kunit *test,
@@ -225,7 +245,7 @@ static void __init iov_kunit_load_bvec(struct kunit *test,
int i;

for (i = 0; i < bvmax; i++, pr++) {
- if (pr->from < 0)
+ if (pr->page < 0)
break;
KUNIT_ASSERT_LT(test, pr->page, npages);
KUNIT_ASSERT_LT(test, pr->page * PAGE_SIZE, bufsize);
@@ -288,20 +308,14 @@ static void __init iov_kunit_copy_to_bvec(struct kunit *test)
b = 0;
patt = 0;
memset(scratch, 0, bufsize);
- for (pr = bvec_test_ranges; pr->from >= 0; pr++, b++) {
+ for (pr = bvec_test_ranges; pr->page >= 0; pr++, b++) {
u8 *p = scratch + pr->page * PAGE_SIZE;

for (i = pr->from; i < pr->to; i++)
p[i] = pattern(patt++);
}

- /* Compare the images */
- for (i = 0; i < bufsize; i++) {
- KUNIT_EXPECT_EQ_MSG(test, buffer[i], scratch[i], "at i=%x", i);
- if (buffer[i] != scratch[i])
- return;
- }
-
+ iov_kunit_check_pattern(test, buffer, scratch, bufsize);
KUNIT_SUCCEED();
}

@@ -341,7 +355,7 @@ static void __init iov_kunit_copy_from_bvec(struct kunit *test)
/* Build the expected image in the main buffer. */
i = 0;
memset(buffer, 0, bufsize);
- for (pr = bvec_test_ranges; pr->from >= 0; pr++) {
+ for (pr = bvec_test_ranges; pr->page >= 0; pr++) {
size_t patt = pr->page * PAGE_SIZE;

for (j = pr->from; j < pr->to; j++) {
@@ -352,13 +366,7 @@ static void __init iov_kunit_copy_from_bvec(struct kunit *test)
}
stop:

- /* Compare the images */
- for (i = 0; i < bufsize; i++) {
- KUNIT_EXPECT_EQ_MSG(test, scratch[i], buffer[i], "at i=%x", i);
- if (scratch[i] != buffer[i])
- return;
- }
-
+ iov_kunit_check_pattern(test, buffer, scratch, bufsize);
KUNIT_SUCCEED();
}

@@ -409,7 +417,7 @@ static void __init iov_kunit_copy_to_xarray(struct kunit *test)
struct page **spages, **bpages;
u8 *scratch, *buffer;
size_t bufsize, npages, size, copied;
- int i, patt;
+ int i;

bufsize = 0x100000;
npages = bufsize / PAGE_SIZE;
@@ -426,7 +434,7 @@ static void __init iov_kunit_copy_to_xarray(struct kunit *test)
iov_kunit_load_xarray(test, &iter, READ, xarray, bpages, npages);

i = 0;
- for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
+ for (pr = kvec_test_ranges; pr->page >= 0; pr++) {
size = pr->to - pr->from;
KUNIT_ASSERT_LE(test, pr->to, bufsize);

@@ -439,20 +447,8 @@ static void __init iov_kunit_copy_to_xarray(struct kunit *test)
i += size;
}

- /* Build the expected image in the scratch buffer. */
- patt = 0;
- memset(scratch, 0, bufsize);
- for (pr = kvec_test_ranges; pr->from >= 0; pr++)
- for (i = pr->from; i < pr->to; i++)
- scratch[i] = pattern(patt++);
-
- /* Compare the images */
- for (i = 0; i < bufsize; i++) {
- KUNIT_EXPECT_EQ_MSG(test, buffer[i], scratch[i], "at i=%x", i);
- if (buffer[i] != scratch[i])
- return;
- }
-
+ iov_kunit_build_to_reference_pattern(test, scratch, bufsize, kvec_test_ranges);
+ iov_kunit_check_pattern(test, buffer, scratch, bufsize);
KUNIT_SUCCEED();
}

@@ -467,7 +463,7 @@ static void __init iov_kunit_copy_from_xarray(struct kunit *test)
struct page **spages, **bpages;
u8 *scratch, *buffer;
size_t bufsize, npages, size, copied;
- int i, j;
+ int i;

bufsize = 0x100000;
npages = bufsize / PAGE_SIZE;
@@ -484,7 +480,7 @@ static void __init iov_kunit_copy_from_xarray(struct kunit *test)
iov_kunit_load_xarray(test, &iter, READ, xarray, bpages, npages);

i = 0;
- for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
+ for (pr = kvec_test_ranges; pr->page >= 0; pr++) {
size = pr->to - pr->from;
KUNIT_ASSERT_LE(test, pr->to, bufsize);

@@ -497,25 +493,8 @@ static void __init iov_kunit_copy_from_xarray(struct kunit *test)
i += size;
}

- /* Build the expected image in the main buffer. */
- i = 0;
- memset(buffer, 0, bufsize);
- for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
- for (j = pr->from; j < pr->to; j++) {
- buffer[i++] = pattern(j);
- if (i >= bufsize)
- goto stop;
- }
- }
-stop:
-
- /* Compare the images */
- for (i = 0; i < bufsize; i++) {
- KUNIT_EXPECT_EQ_MSG(test, scratch[i], buffer[i], "at i=%x", i);
- if (scratch[i] != buffer[i])
- return;
- }
-
+ iov_kunit_build_from_reference_pattern(test, buffer, bufsize, kvec_test_ranges);
+ iov_kunit_check_pattern(test, buffer, scratch, bufsize);
KUNIT_SUCCEED();
}

@@ -573,7 +552,7 @@ static void __init iov_kunit_extract_pages_kvec(struct kunit *test)
while (from == pr->to) {
pr++;
from = pr->from;
- if (from < 0)
+ if (pr->page < 0)
goto stop;
}
ix = from / PAGE_SIZE;
@@ -651,7 +630,7 @@ static void __init iov_kunit_extract_pages_bvec(struct kunit *test)
while (from == pr->to) {
pr++;
from = pr->from;
- if (from < 0)
+ if (pr->page < 0)
goto stop;
}
ix = pr->page + from / PAGE_SIZE;
@@ -698,7 +677,7 @@ static void __init iov_kunit_extract_pages_xarray(struct kunit *test)
iov_kunit_create_buffer(test, &bpages, npages);
iov_kunit_load_xarray(test, &iter, READ, xarray, bpages, npages);

- for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
+ for (pr = kvec_test_ranges; pr->page >= 0; pr++) {
from = pr->from;
size = pr->to - from;
KUNIT_ASSERT_LE(test, pr->to, bufsize);

Date: 2023-09-15 08:08:05
From: David Howells
Subject: [RFC PATCH 1/9] iov_iter: Fix some checkpatch complaints in kunit tests

Fix some checkpatch complaints in the new iov_iter kunit tests:

(1) Some lines had eight spaces instead of a tab at the start.

(2) Checkpatch doesn't like (void*)(unsigned long)0xnnnnnULL, so switch to
using POISON_POINTER_DELTA plus an offset instead.

Reported-by: Johannes Thumshirn <[email protected]>
Signed-off-by: David Howells <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Christian Brauner <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: John Hubbard <[email protected]>
cc: Brendan Higgins <[email protected]>
cc: David Gow <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
lib/kunit_iov_iter.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c
index 859b67c4d697..4a6c0efd33f5 100644
--- a/lib/kunit_iov_iter.c
+++ b/lib/kunit_iov_iter.c
@@ -53,7 +53,7 @@ static void *__init iov_kunit_create_buffer(struct kunit *test,
void *buffer;

pages = kunit_kcalloc(test, npages, sizeof(struct page *), GFP_KERNEL);
-        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pages);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pages);
*ppages = pages;

got = alloc_pages_bulk_array(GFP_KERNEL, npages, pages);
@@ -63,7 +63,7 @@ static void *__init iov_kunit_create_buffer(struct kunit *test,
}

buffer = vmap(pages, npages, VM_MAP | VM_MAP_PUT_PAGES, PAGE_KERNEL);
-        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buffer);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buffer);

kunit_add_action_or_reset(test, iov_kunit_unmap, buffer);
return buffer;
@@ -548,7 +548,7 @@ static void __init iov_kunit_extract_pages_kvec(struct kunit *test)
size_t offset0 = LONG_MAX;

for (i = 0; i < ARRAY_SIZE(pagelist); i++)
- pagelist[i] = (void *)(unsigned long)0xaa55aa55aa55aa55ULL;
+ pagelist[i] = (void *)POISON_POINTER_DELTA + 0x5a;

len = iov_iter_extract_pages(&iter, &pages, 100 * 1024,
ARRAY_SIZE(pagelist), 0, &offset0);
@@ -626,7 +626,7 @@ static void __init iov_kunit_extract_pages_bvec(struct kunit *test)
size_t offset0 = LONG_MAX;

for (i = 0; i < ARRAY_SIZE(pagelist); i++)
- pagelist[i] = (void *)(unsigned long)0xaa55aa55aa55aa55ULL;
+ pagelist[i] = (void *)POISON_POINTER_DELTA + 0x5a;

len = iov_iter_extract_pages(&iter, &pages, 100 * 1024,
ARRAY_SIZE(pagelist), 0, &offset0);
@@ -709,7 +709,7 @@ static void __init iov_kunit_extract_pages_xarray(struct kunit *test)
size_t offset0 = LONG_MAX;

for (i = 0; i < ARRAY_SIZE(pagelist); i++)
- pagelist[i] = (void *)(unsigned long)0xaa55aa55aa55aa55ULL;
+ pagelist[i] = (void *)POISON_POINTER_DELTA + 0x5a;

len = iov_iter_extract_pages(&iter, &pages, 100 * 1024,
ARRAY_SIZE(pagelist), 0, &offset0);

Date: 2023-09-15 11:29:08
From: David Howells
Subject: [RFC PATCH 4/9] iov_iter: Consolidate bvec pattern checking

Make the BVEC-testing functions use the consolidated pattern checking
functions to reduce the amount of duplicated code.

Signed-off-by: David Howells <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Christian Brauner <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: John Hubbard <[email protected]>
cc: Brendan Higgins <[email protected]>
cc: David Gow <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
lib/kunit_iov_iter.c | 42 +++++++++++-------------------------------
1 file changed, 11 insertions(+), 31 deletions(-)

diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c
index 4925ca37cde6..eb86371b67d0 100644
--- a/lib/kunit_iov_iter.c
+++ b/lib/kunit_iov_iter.c
@@ -107,9 +107,11 @@ static void iov_kunit_build_to_reference_pattern(struct kunit *test, u8 *scratch
int i, patt = 0;

memset(scratch, 0, bufsize);
- for (; pr->page >= 0; pr++)
+ for (; pr->page >= 0; pr++) {
+ u8 *p = scratch + pr->page * PAGE_SIZE;
for (i = pr->from; i < pr->to; i++)
- scratch[i] = pattern(patt++);
+ p[i] = pattern(patt++);
+ }
}

/*
@@ -124,8 +126,10 @@ static void iov_kunit_build_from_reference_pattern(struct kunit *test, u8 *buffe

memset(buffer, 0, bufsize);
for (; pr->page >= 0; pr++) {
+ size_t patt = pr->page * PAGE_SIZE;
+
for (j = pr->from; j < pr->to; j++) {
- buffer[i++] = pattern(j);
+ buffer[i++] = pattern(patt + j);
if (i >= bufsize)
return;
}
@@ -287,13 +291,12 @@ static void __init iov_kunit_load_bvec(struct kunit *test,
*/
static void __init iov_kunit_copy_to_bvec(struct kunit *test)
{
- const struct iov_kunit_range *pr;
struct iov_iter iter;
struct bio_vec bvec[8];
struct page **spages, **bpages;
u8 *scratch, *buffer;
size_t bufsize, npages, size, copied;
- int i, patt;
+ int i;

bufsize = 0x100000;
npages = bufsize / PAGE_SIZE;
@@ -315,16 +318,7 @@ static void __init iov_kunit_copy_to_bvec(struct kunit *test)
KUNIT_EXPECT_EQ(test, iter.count, 0);
KUNIT_EXPECT_EQ(test, iter.nr_segs, 0);

- /* Build the expected image in the scratch buffer. */
- patt = 0;
- memset(scratch, 0, bufsize);
- for (pr = bvec_test_ranges; pr->page >= 0; pr++) {
- u8 *p = scratch + pr->page * PAGE_SIZE;
-
- for (i = pr->from; i < pr->to; i++)
- p[i] = pattern(patt++);
- }
-
+ iov_kunit_build_to_reference_pattern(test, scratch, bufsize, bvec_test_ranges);
iov_kunit_check_pattern(test, buffer, scratch, bufsize);
KUNIT_SUCCEED();
}
@@ -334,13 +328,12 @@ static void __init iov_kunit_copy_to_bvec(struct kunit *test)
*/
static void __init iov_kunit_copy_from_bvec(struct kunit *test)
{
- const struct iov_kunit_range *pr;
struct iov_iter iter;
struct bio_vec bvec[8];
struct page **spages, **bpages;
u8 *scratch, *buffer;
size_t bufsize, npages, size, copied;
- int i, j;
+ int i;

bufsize = 0x100000;
npages = bufsize / PAGE_SIZE;
@@ -362,20 +355,7 @@ static void __init iov_kunit_copy_from_bvec(struct kunit *test)
KUNIT_EXPECT_EQ(test, iter.count, 0);
KUNIT_EXPECT_EQ(test, iter.nr_segs, 0);

- /* Build the expected image in the main buffer. */
- i = 0;
- memset(buffer, 0, bufsize);
- for (pr = bvec_test_ranges; pr->page >= 0; pr++) {
- size_t patt = pr->page * PAGE_SIZE;
-
- for (j = pr->from; j < pr->to; j++) {
- buffer[i++] = pattern(patt + j);
- if (i >= bufsize)
- goto stop;
- }
- }
-stop:
-
+ iov_kunit_build_from_reference_pattern(test, buffer, bufsize, bvec_test_ranges);
iov_kunit_check_pattern(test, buffer, scratch, bufsize);
KUNIT_SUCCEED();
}

Date: 2023-09-15 12:39:39
From: David Howells
Subject: [RFC PATCH 5/9] iov_iter: Create a function to prepare userspace VM for UBUF/IOVEC tests

Create a function to set up a userspace VM for the kunit testing thread and
set up a buffer within it such that ITER_UBUF and ITER_IOVEC tests can be
performed.

Note that this requires current->mm to point to a sufficiently set up
mm_struct. This is done by partially mirroring what execve does.

The following steps are performed:

(1) Allocate an mm_struct and pick an arch layout (required to set
mm->get_unmapped_area).

(2) Create an empty "stack" VMA so that the VMA maple tree is set up and
won't cause a crash in the maple tree code later. We don't actually care
about the stack, as we're not going to execute any userspace code.

(3) Create an anon file and attach a bunch of folios to it so that the
requested number of pages are accessible.

(4) Make the kthread use the mm. This must be done before mmap is called.

(5) Shared-mmap the anon file into the allocated mm_struct.

This requires access to otherwise unexported core symbols: mm_alloc(),
vm_area_alloc(), insert_vm_struct(), arch_pick_mmap_layout() and
anon_inode_getfile_secure(), which I've exported _GPL.

[?] Would it be better if this were done in core and not in a module?

Signed-off-by: David Howells <[email protected]>
cc: Andrew Morton <[email protected]>
cc: Christoph Hellwig <[email protected]>
cc: Christian Brauner <[email protected]>
cc: Jens Axboe <[email protected]>
cc: Al Viro <[email protected]>
cc: Matthew Wilcox <[email protected]>
cc: David Hildenbrand <[email protected]>
cc: John Hubbard <[email protected]>
cc: Brendan Higgins <[email protected]>
cc: David Gow <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
fs/anon_inodes.c | 1 +
kernel/fork.c | 2 +
lib/kunit_iov_iter.c | 158 +++++++++++++++++++++++++++++++++++++++++++
mm/mmap.c | 1 +
mm/util.c | 1 +
5 files changed, 163 insertions(+)

diff --git a/fs/anon_inodes.c b/fs/anon_inodes.c
index 24192a7667ed..4190336180ee 100644
--- a/fs/anon_inodes.c
+++ b/fs/anon_inodes.c
@@ -176,6 +176,7 @@ struct file *anon_inode_getfile_secure(const char *name,
return __anon_inode_getfile(name, fops, priv, flags,
context_inode, true);
}
+EXPORT_SYMBOL_GPL(anon_inode_getfile_secure);

static int __anon_inode_getfd(const char *name,
const struct file_operations *fops,
diff --git a/kernel/fork.c b/kernel/fork.c
index 3b6d20dfb9a8..9ab604574400 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -494,6 +494,7 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)

return vma;
}
+EXPORT_SYMBOL_GPL(vm_area_alloc);

struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
{
@@ -1337,6 +1338,7 @@ struct mm_struct *mm_alloc(void)
memset(mm, 0, sizeof(*mm));
return mm_init(mm, current, current_user_ns());
}
+EXPORT_SYMBOL_GPL(mm_alloc);

static inline void __mmput(struct mm_struct *mm)
{
diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c
index eb86371b67d0..3353bca9c40f 100644
--- a/lib/kunit_iov_iter.c
+++ b/lib/kunit_iov_iter.c
@@ -10,6 +10,12 @@
#include <linux/module.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>
+#include <linux/pagemap.h>
+#include <linux/mman.h>
+#include <linux/file.h>
+#include <linux/kthread.h>
+#include <linux/anon_inodes.h>
+#include <linux/writeback.h>
#include <linux/uio.h>
#include <linux/bvec.h>
#include <kunit/test.h>
@@ -68,6 +74,20 @@ static void iov_kunit_unmap(void *data)
vunmap(data);
}

+static void iov_kunit_mmdrop(void *data)
+{
+ struct mm_struct *mm = data;
+
+ if (current->mm == mm)
+ kthread_unuse_mm(mm);
+ mmdrop(mm);
+}
+
+static void iov_kunit_fput(void *data)
+{
+ fput(data);
+}
+
/*
* Create a buffer out of some pages and return a vmap'd pointer to it.
*/
@@ -96,6 +116,20 @@ static void *__init iov_kunit_create_buffer(struct kunit *test,
return buffer;
}

+static void iov_kunit_fill_user_buf(struct kunit *test,
+ u8 __user *buffer, size_t bufsize)
+{
+ size_t i;
+ int err;
+
+ for (i = 0; i < bufsize; i++) {
+ err = put_user(pattern(i), &buffer[i]);
+ KUNIT_EXPECT_EQ(test, err, 0);
+ if (test->status == KUNIT_FAILURE)
+ return;
+ }
+}
+
/*
* Build the reference pattern in the scratch buffer that we expect to see in
* the iterator buffer (ie. the result of copy *to*).
@@ -151,6 +185,130 @@ static void iov_kunit_check_pattern(struct kunit *test, const u8 *buffer,
}
}

+/*
+ * Compare a user and a scratch buffer to see that they're the same.
+ */
+static void iov_kunit_check_user_pattern(struct kunit *test, const u8 __user *buffer,
+ const u8 *scratch, size_t bufsize)
+{
+ size_t i;
+ int err;
+ u8 c;
+
+ for (i = 0; i < bufsize; i++) {
+ err = get_user(c, &buffer[i]);
+ KUNIT_EXPECT_EQ(test, err, 0);
+ KUNIT_EXPECT_EQ_MSG(test, c, scratch[i], "at i=%x", i);
+ if (c != scratch[i])
+ return;
+ }
+}
+
+static const struct file_operations iov_kunit_user_file_fops = {
+ .mmap = generic_file_mmap,
+};
+
+static int iov_kunit_user_file_read_folio(struct file *file, struct folio *folio)
+{
+ folio_mark_uptodate(folio);
+ folio_unlock(folio);
+ return 0;
+}
+
+static const struct address_space_operations iov_kunit_user_file_aops = {
+ .read_folio = iov_kunit_user_file_read_folio,
+ .dirty_folio = filemap_dirty_folio,
+};
+
+/*
+ * Create an anonymous file and attach a bunch of pages to it. We can then use
+ * this in mmap() and check the pages against it when doing extraction tests.
+ */
+static struct file *iov_kunit_create_file(struct kunit *test, size_t npages,
+ struct page ***ppages)
+{
+ struct folio *folio;
+ struct file *file;
+ struct page **pages = NULL;
+ size_t i;
+
+ if (ppages) {
+ pages = kunit_kcalloc(test, npages, sizeof(struct page *), GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pages);
+ *ppages = pages;
+ }
+
+ file = anon_inode_getfile_secure("kunit-iov-test",
+ &iov_kunit_user_file_fops,
+ NULL, O_RDWR, NULL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, file);
+ kunit_add_action_or_reset(test, iov_kunit_fput, file);
+ file->f_mapping->a_ops = &iov_kunit_user_file_aops;
+
+ i_size_write(file_inode(file), npages * PAGE_SIZE);
+ for (i = 0; i < npages; i++) {
+ folio = filemap_grab_folio(file->f_mapping, i);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, folio);
+ if (pages)
+ *pages++ = folio_page(folio, 0);
+ folio_unlock(folio);
+ folio_put(folio);
+ }
+
+ return file;
+}
+
+static u8 __user *__init iov_kunit_create_user_buf(struct kunit *test,
+ size_t npages,
+ struct page ***ppages)
+{
+ struct rlimit rlim_stack = {
+ .rlim_cur = LONG_MAX,
+ .rlim_max = LONG_MAX,
+ };
+ struct vm_area_struct *vma;
+ struct mm_struct *mm;
+ struct file *file;
+ u8 __user *buffer;
+ int ret;
+
+ KUNIT_ASSERT_NULL(test, current->mm);
+
+ mm = mm_alloc();
+ KUNIT_ASSERT_NOT_NULL(test, mm);
+ kunit_add_action_or_reset(test, iov_kunit_mmdrop, mm);
+ arch_pick_mmap_layout(mm, &rlim_stack);
+
+ vma = vm_area_alloc(mm);
+ KUNIT_ASSERT_NOT_NULL(test, vma);
+ vma_set_anonymous(vma);
+
+ /*
+ * Place the stack at the largest stack address the architecture
+ * supports. Later, we'll move this to an appropriate place. We don't
+ * use STACK_TOP because that can depend on attributes which aren't
+ * configured yet.
+ */
+ vma->vm_end = STACK_TOP_MAX;
+ vma->vm_start = vma->vm_end - PAGE_SIZE;
+ vm_flags_init(vma, VM_SOFTDIRTY | VM_STACK_FLAGS | VM_STACK_INCOMPLETE_SETUP);
+ vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
+
+ ret = insert_vm_struct(mm, vma);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ mm->stack_vm = mm->total_vm = 1;
+
+ file = iov_kunit_create_file(test, npages, ppages);
+
+ kthread_use_mm(mm);
+ buffer = (u8 __user *)vm_mmap(file, 0, PAGE_SIZE * npages,
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED, 0);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buffer);
+ return buffer;
+}
+
static void __init iov_kunit_load_kvec(struct kunit *test,
struct iov_iter *iter, int dir,
struct kvec *kvec, unsigned int kvmax,
diff --git a/mm/mmap.c b/mm/mmap.c
index b56a7f0c9f85..2ea4a98a2cab 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3284,6 +3284,7 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_area_struct *vma)

return 0;
}
+EXPORT_SYMBOL_GPL(insert_vm_struct);

/*
* Copy the vma structure to a new location in the same mm,
diff --git a/mm/util.c b/mm/util.c
index 8cbbfd3a3d59..a393a308607c 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -455,6 +455,7 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
mm->get_unmapped_area = arch_get_unmapped_area;
}
#endif
+EXPORT_SYMBOL_GPL(arch_pick_mmap_layout);

/**
* __account_locked_vm - account locked pages to an mm's locked_vm